Technical Leaders Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/technical-leaders/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

Driving Successful Test Automation at Scale: Key Insights
https://applitools.com/blog/driving-successful-test-automation-at-scale-key-insights/
Mon, 25 Sep 2023 13:30:00 +0000

Scaling your test automation initiatives can be daunting. In a recent webinar, Test Automation at Scale: Lessons from Top Performing Distributed Teams, panelists from Accenture, Bayer, and Eversana shared their insights for overcoming common challenges. Here are their top recommendations.

Establish clear processes for collaboration.
Daily standups, sprint planning, and retrospectives are essential for enabling communication across distributed teams. “The only way that you can build a quality product that actually satisfies the business requirements is [through] that environment where you’ve got the different teams coming together,” said Ariola Qeleposhi, Test Automation Lead at Accenture.

Choose tools that meet current and future needs.
Consider how tools will integrate and the skills required to use them. While a “one-size-fits-all” approach may seem appealing, it may not suit every team’s needs. Think beyond individual products to the overall solution, advised Anand Bagmar, Senior Solution Architect at Applitools. Each product team should have a test pyramid, and tests should run at multiple levels to get real value from your automation.

Start small and build a proof of concept.
Demonstrate how automation reduces manual effort and finds defects faster to gain leadership buy-in. “Proof of concepts will really help to provide a form of evidence in a way to say that, okay, this is our product, this is how we automate or can potentially automate, and what we actually save from that,” said Qeleposhi.

Consider a “quality strategy” not just a “test strategy.”
Involve all roles like business, product, dev, test, and DevOps. “When you think about it as quality, then the role does not matter,” said Bagmar.

Leverage AI and automation as “seatbelts,” not silver bullets.
They enhance human judgment rather than replace it. “Automation is a lot, at least in this instance, it’s like a seatbelt. You don’t think you’ll need it, but when you need it you better have it,” said Kyle Penniston, Senior Software Developer at Bayer.

Build, buy, and reuse.
Don’t reinvent the wheel. Use open-source tools and existing frameworks. “There will be great resources that you can use. Open-source resources, for example, frameworks that might be there that you can use to quickly get started and build on top of that,” said Bagmar.

Provide learning resources for new team members.
For example, Applitools offers Test Automation University with resources for developing automation skills.

Measure and track metrics to ensure value.
Look at reduced manual testing, faster defect finding, test coverage, and other KPIs. “You need to get some metrics really, and then you need to use that from an automation side of things,” said Qeleposhi.

The key to building a solid foundation for scaling test automation is taking an iterative, collaborative approach focused on delivering value and enhancing quality. With the right strategies and tools in place, teams can overcome common challenges and achieve automation success. Watch the full recording.

Playwright vs Selenium: What are the Main Differences and Which is Better?
https://applitools.com/blog/playwright-vs-selenium/
Fri, 12 Aug 2022 17:41:46 +0000

Wondering how to choose between Playwright and Selenium for your test automation? Read on for a comparison of the two popular test automation tools.

When it comes to web test automation, Selenium has been the dominant industry tool for several years. However, there are many other automated testing tools on the market. Playwright is a newer tool that has been gaining popularity. How do their features compare, and which one should you choose?

What is Selenium?

Selenium is a long-running open source tool for browser automation. It was originally conceived in 2004 by Jason Huggins, and has been actively developed ever since. Selenium is a widely-used tool with a huge community of users, and the Selenium WebDriver interface even became an official W3C Recommendation in 2018.

The framework is capable of automating and controlling web browsers and interacting with UI elements, and it’s the most popular framework in the industry today. There are several tools in the Selenium suite, including:

  • Selenium WebDriver: WebDriver provides a flexible collection of open source APIs that can be used to easily test web applications
  • Selenium IDE: This record-and-playback tool enables rapid test development for both engineers and non-technical users
  • Selenium Grid: The Grid lets you distribute and run tests in parallel on multiple machines

The impact of Selenium goes even beyond the core framework, as a number of other popular tools, such as Appium and WebDriverIO, have been built directly on top of Selenium’s API.

Selenium is under active development and recently unveiled a major version update to Selenium 4. It supports just about all major browsers and popular programming languages. Thanks to a wide footprint of use and extensive community support, the Selenium open source project continues to be a formidable presence in the browser automation space.

What is Playwright?

Playwright is a relatively new open source tool for browser automation, with its first version released by Microsoft in 2020. It was built by the team behind Puppeteer, which is a headless testing framework for Chrome/Chromium. Playwright goes beyond Puppeteer and provides support for multiple browsers, among other changes.

Playwright is designed for end-to-end automated testing of web apps. It’s cross-platform, cross-browser and cross-language, and includes helpful features like auto-waiting. It is specifically engineered for the modern web and generally runs very quickly, even for complex testing projects.
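
To make the auto-waiting point concrete, here is a minimal, illustrative Playwright test in JavaScript (the URL and selectors are placeholders, not from a real application):

const { test, expect } = require('@playwright/test');

test('searching shows results', async ({ page }) => {
    // No explicit waits: Playwright waits for elements to be actionable before interacting.
    await page.goto('https://www.example.com');
    await page.fill('#search', 'visual testing');
    await page.press('#search', 'Enter');
    // Web-first assertions retry until the element appears or the timeout is reached.
    await expect(page.locator('#results')).toBeVisible();
});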

While far newer than Selenium, Playwright is picking up steam quickly and has a growing following. Due in part to its young age, it supports fewer browsers/languages than Selenium, but by the same token it also includes newer features and capabilities that are more aligned with the modern web. It is actively developed by Microsoft.

Selenium vs Playwright

Selenium and Playwright are both capable web automation tools, and each has its own strengths and weaknesses. Depending on your needs, either one could serve you best. Do you need a wider array of browser/language support? How much does a long track record of support and active development matter to you? Is test execution speed paramount? 

Each tool is open source, cross-language and developer friendly. Both support CI/CD (via Jenkins, Azure Pipelines, etc.), and advanced features like screenshot testing and automated visual testing. However, there are some key architectural and historical differences between the two that explain some of their biggest differences.

Selenium Architecture and History

  • Architecture: Selenium interacts with browsers through the WebDriver API and browser-specific drivers. Test commands are serialized to JSON and sent over HTTP to the browser driver, which executes them in the browser and returns an HTTP response.
  • History: Selenium has been in continuous operation and development for 18+ years. As a longstanding open source project, it offers broad support for browsers/languages, a wide range of community resources and an ecosystem of support.

Playwright Architecture and History

  • Architecture: Playwright uses a WebSocket connection rather than the WebDriver API and HTTP. This stays open for the duration of the test, so everything is sent on one connection. This is one reason why Playwright’s execution speeds tend to be faster.
  • History: Playwright is fairly new to the automation scene. It is faster than Selenium and has capabilities that Selenium lacks, but it does not yet have as broad a range of support for browsers/languages or community support. It is open source and backed by Microsoft.

Comparing Playwright vs Selenium Features

It’s important to consider your own needs and pain points when choosing your next test automation framework. The table below will help you compare Playwright vs Selenium.

| Criteria | Playwright | Selenium |
| --- | --- | --- |
| Browser Support | Chromium, Firefox, and WebKit (note: Playwright tests browser projects, not stock browsers) | Chrome, Safari, Firefox, Opera, Edge, and IE |
| Language Support | Java, Python, .NET C#, TypeScript and JavaScript | Java, Python, C#, Ruby, Perl, PHP, and JavaScript |
| Test Runner Frameworks Support | Jest/Jasmine, AVA, Mocha, and Vitest | Jest/Jasmine, Mocha, WebDriver IO, Protractor, TestNG, JUnit, and NUnit |
| Operating System Support | Windows, Mac OS and Linux | Windows, Mac OS, Linux and Solaris |
| Architecture | Headless browser with event-driven architecture | 4-layer architecture (Selenium Client Library, JSON Wire Protocol, Browser Drivers and Browsers) |
| Integration with CI | Yes | Yes |
| Prerequisites | NodeJS | Selenium Bindings (for your language), Browser Drivers and Selenium Standalone Server |
| Real Device Support | Native mobile emulation (and experimental real Android support) | Real device clouds and remote servers |
| Community Support | Smaller but growing set of community resources | Large, established collection of documentation and support options |
| Open Source | Free and open source, backed by Microsoft | Free and open source, backed by large community |

Should You Use Selenium or Playwright for Test Automation?

Is Selenium better than Playwright? Or is Playwright better than Selenium? Selenium and Playwright both have a number of things going for them – there’s no easy answer here. When choosing between Selenium vs Playwright, it’s important to understand your own requirements and research your options before deciding on a winner.

Selenium vs Playwright: Let the Code Speak

A helpful way to go beyond lists of features and try to get a feel for the practical advantages of each tool is to go straight to the code and compare real-world examples side by side. At Applitools, our goal is to make test automation easier for you – so that’s what we did! 
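
As a small, simplified taste of that kind of comparison (not taken from the video; the URL and selector are placeholders), here is the same trivial check written for each tool:

selenium-example.js

// Selenium WebDriver: you manage the driver lifecycle and waits yourself.
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
    const driver = await new Builder().forBrowser('chrome').build();
    try {
        await driver.get('https://www.example.com');
        // Explicitly wait for the heading to appear before reading it.
        const heading = await driver.wait(until.elementLocated(By.css('h1')), 10000);
        console.log(await heading.getText());
    } finally {
        await driver.quit();
    }
})();

playwright-example.spec.js

// Playwright Test: the runner manages the browser, and waiting is built in.
const { test, expect } = require('@playwright/test');

test('heading is visible', async ({ page }) => {
    await page.goto('https://www.example.com');
    await expect(page.locator('h1')).toBeVisible();
});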

In the video below, you can see a head-to-head comparison of Playwright vs Selenium. Angie Jones and Andrew Knight take you through ten rounds of a straight-to-the-code battle, with the live audience deciding the winning framework for each round. Check it out for a unique look at the differences between Playwright and Selenium.

If you like these code battles and want more, we’ve also pitted Playwright vs Cypress and Selenium vs Cypress – check out all our versus battles here.

In fact, our original Playwright vs Cypress battle (recap here) was so popular that we’ve even scheduled our first rematch. Who will win this time? Register for the Playwright vs Cypress Rematch now to join in and vote for the winner yourself!

Learn More about Playwright vs Selenium

Want to learn more about Playwright or Selenium? Keep reading below to dig deeper into the two tools.

Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid
https://applitools.com/blog/comparing-cross-browser-testing-tools-selenium-grid-vs-applitools-ultrafast-grid/
Wed, 29 Jun 2022 15:00:00 +0000

How can you choose the best cross-browser testing tool for your needs? We’ll review the challenges of cross-browser testing and consider some leading cross-browser testing solutions.

Nowadays, testing a website or an app on one single browser or device can lead to disastrous consequences, and testing the same website or app on multiple browsers using only the traditional functional testing approach may still let production issues and plenty of visual bugs slip through.

Combinations of browsers, devices, viewports, and screen orientations (portrait or landscape) can reach into the thousands. Manually testing this vast number of possibilities is no longer feasible, and neither is simply running the usual functional testing scripts and hoping they cover the most critical aspects, regions, or functionalities of our sites.

In this article, we are going to focus on the challenges and leading solutions for cross-browser testing. 

The Challenges of Cross Browser Testing 

What is Cross Browser Testing?

Cross-browser testing makes sure that your web apps work across different web browsers and devices. Usually, you want to cover the most popular browser configurations or the ones specified as supported browsers/devices based on your organization’s products and services.

Why Do We Need Cross Browser Testing?

Primarily because rendering differs and modern web apps use responsive design. Each web browser also handles JavaScript differently, and each browser may render things differently depending on the viewport or device screen size. These rendering differences can result in costly bugs and a negative user experience.

Challenges of Cross Browser Testing Today

Cross-browser testing has been around for quite some time now. Traditionally, testers run multiple tests in parallel on different browsers, and this is fine from a functional point of view.

Today, we know for a fact that running only these kinds of traditional functional tests across a set of browsers does not guarantee your website or app’s integrity. But let’s define and understand the difference between Traditional Functional Testing and Visual Testing. Traditional functional testing is a type of software testing where the basic functionalities of an app are tested against a set of specifications. On the other hand, Visual Testing allows you to test for visual bugs, which are extremely difficult to uncover with the traditional functional testing approach.

As mentioned, traditional functional testing on its own will not cover the visual aspect and can leave gaps in coverage. You have to account for the possibility of visual bugs regardless of the number of elements you actually test. Even if you tested all of them, you may still hit visual bugs that lead to false negatives: your testing was done, your tests passed, and yet you did not catch the bug.

Today we have mobile and IoT device proliferation, complex responsive design viewport requirements, and dynamic content. Since rendering the UI is subjective, the majority of cross-browser defects are visual.

To handle all these possibilities or scenarios, you need a tool or framework that not only runs tests but provides reliable feedback – and not just false positives or tests pending to be approved or rejected. 

When it comes to cross-browser testing, you have several options, just as you do for visual testing. In this article, we will explore some of the most popular cross-browser testing tools.

Cross-Browser Testing with Your Own In-House Selenium Grid 

If you have the resources, time, and knowledge, you can spin up your own Selenium Grid and do some cross-browser testing. This may be useful based on your project size and approach.

As mentioned, if you understand the components and steps to accomplish this, go for it! 
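
Whichever route you take, pointing existing tests at a grid is usually just a matter of supplying its URL; a minimal, illustrative sketch with the selenium-webdriver JavaScript bindings (the grid URL is a placeholder for your own hub or a vendor endpoint):

const { Builder } = require('selenium-webdriver');

(async () => {
    const driver = await new Builder()
        .usingServer('http://localhost:4444/wd/hub')   // your hub or vendor URL goes here
        .forBrowser('chrome')
        .build();

    await driver.get('https://www.example.com');
    console.log(await driver.getTitle());
    await driver.quit();
})();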

Now, be aware that maintaining a home-grown Selenium Grid cluster is not an easy task. You may run into difficulties when running and maintaining hundreds of browser nodes. Because of this, most companies end up outsourcing this task to vendors like BrowserStack or LambdaTest in order to save time and energy and bring more stability to their Selenium Grid infrastructure.

Most of these vendors are really expensive, which means you will need a dedicated project budget just for running your UI tests on their cloud, not to mention the packages or plans you'll have to acquire to run a decent number of parallel tests.

Considerations when Choosing Selenium Grid Solutions

When it comes to cross-browser testing and visual testing, you could use any of the available tools or frameworks, for instance LambdaTest or BrowserStack. But how can we choose? Which one is better? Are they all offering the same thing? 

Before choosing any Selenium Grid solution, there are some key inherent issues that we must take into consideration:

  1. With a Selenium Grid solution, you need to run each test multiple times on each and every browser/device that you would like to cover, resulting in much higher maintenance (if your tests fail 5% of the time, and you now need to run each test 10 times on 10 different environments, you are adding much more failure and maintenance overhead).
  2. Cloud-based Selenium Grid solutions require a constant connection between the machine inside your network that is running the test and the browser in the cloud for the entire test execution time. Many grid solutions have reliability issues around this, causing environment or connection failures on some tests; when executing tests at scale, this results in additional failures that the team needs to analyze.
  3. If you try to use a cloud-based Selenium Grid solution to test an internal application, you need to set up a tunnel from the cloud grid to your company's network, which creates a security risk and adds performance and reliability issues.
  4. Another critical factor for traditional "WebDriver-as-a-Service" platforms is speed. Tests can take 2-4x as much time to complete on those platforms compared to running them on local machines.

Cross-Browser Testing with Applitools Ultrafast Grid

Applitools Ultrafast Grid is the next generation of cross-browser testing. With the Ultrafast Grid, you can run functional and visual tests once, and it instantly renders all screens across all combinations of browsers, devices, and viewports. 

Visual AI is a technology that improves snapshot comparisons. It goes deeper than pixel-to-pixel comparisons to identify changes that would be meaningful to the human eye.

Visual snapshots provide a much more robust, comprehensive, and simpler mechanism for automating verifications. Instead of writing hundreds of lines of assertions with locators, you can write a single-line snapshot capture using Applitools Eyes.
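
As a rough sketch of what that looks like with the Applitools Eyes SDK for Selenium in JavaScript and the Ultrafast Grid runner (illustrative only; API key configuration is omitted, and class or method names can vary between SDK versions):

const { Builder } = require('selenium-webdriver');
const { Eyes, Target, VisualGridRunner, Configuration, BrowserType } = require('@applitools/eyes-selenium');

(async () => {
    const driver = await new Builder().forBrowser('chrome').build();
    const runner = new VisualGridRunner();   // renders snapshots on the Ultrafast Grid
    const eyes = new Eyes(runner);

    const config = new Configuration();
    config.addBrowser(1280, 800, BrowserType.CHROME);
    config.addBrowser(1280, 800, BrowserType.FIREFOX);
    config.addBrowser(1280, 800, BrowserType.SAFARI);
    eyes.setConfiguration(config);

    try {
        await eyes.open(driver, 'My App', 'Home page');
        await driver.get('https://www.example.com');
        // One line replaces many element-by-element assertions.
        await eyes.check('Home page', Target.window().fully());
        await eyes.close();
    } finally {
        await driver.quit();
    }
})();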

When you compound that stability with the modern cross-platform testing technology of the Ultrafast Grid, the benefit multiplies. This improved efficiency helps teams deliver high-quality apps on time, without maintaining multiple suites or duplicate test scripts.

Think about the time it currently takes to complete a full testing cycle on your end with traditional cross-browser testing solutions, from installing, writing, and running tests to analyzing, reporting on, and maintaining them. With the Ultrafast Grid and Visual AI, engineers can add this capability to an existing framework easily and test large, modern apps across multiple environments in just minutes.

Traditional cross-browser testing solutions that offer visual testing usually provide it as a separate feature or paid add-on. What this feature does is essentially take screenshots for you to compare against previously captured screenshots. You can imagine the amount of time it takes to accept or reject all of those results, and most of them will not necessarily bring useful intel, since the website or app may not change from one day to the next.

The Ultrafast Grid goes beyond simple screenshots. Applitools SDKs upload DOM snapshots, not screenshots, to the Ultrafast Grid. Snapshots include all the resources needed to render a page (HTML, CSS, and so on) and are much smaller than screenshots, so they upload faster.

To learn more about the Ultrafast Grid functionality and configuration, take a look at this article > https://applitools.com/docs/topics/overview/using-the-ultrafast-grid.html

Benefits and Differences when using the Applitools Ultrafast Grid

Here are some of the benefits and differences you’ll find when using this framework:

  1. The Ultrafast Grid uses containers to render web pages on different browsers in a much faster and more reliable way, maximizing speed.
  2. The Ultrafast Grid does not always upload a snapshot for every page. If a page's resources didn't change, the Ultrafast Grid doesn't upload them again. Since most page resources don't change from one test run to another, there's less to transfer, and upload times are measured in milliseconds.
  3. As mentioned above, with the Applitools Ultrafast Grid you only need to run the test once, and you'll get results from all browsers and devices. Now that most browsers are W3C compliant, the chances of facing functional differences between browsers (e.g. a button that clicks on one browser but not on another) are negligible, so it's sufficient to run the functional tests just once; this will still find the common browser compatibility issues such as rendering and visual differences between browsers.
  4. You can use one algorithm on top of another. Other solutions only offer a single level of comparison based on three modes (Strict, Suggested/Normal, or Relaxed), which is useful to some extent. But what happens if you need a certain region of the page to use a different comparison algorithm? That is possible using the Applitools Region Types feature.
  5. All of the above happens across multiple browser and device combinations at the same time, using the Ultrafast Grid configuration. For more information, check out this article > https://applitools.com/docs/topics/sdk/vg-configuration.html
  6. Applitools offers a free version that gives you access to most of the framework's features. This is really helpful, as you can explore high-level features like Visual AI, cross-browser testing, and visual testing without worrying about the minutes left on a free trial, as with other solutions.
  7. One of the unique features of Applitools is its automated maintenance capability, which prevents the need to approve or reject the same change across different screens and devices. This reduces the overhead of managing baselines for different browser and device configurations.

Final Thoughts

Selenium Grid solutions are everywhere, and the price varies between vendors and features. If you have infinite time, infinite resources, and an infinite budget, it would be ideal to run all the tests on all the browsers and analyze the results on every code change/build. But for a company trying to optimize velocity and run tests on every pull request/build, the Applitools Ultrafast Grid provides a compelling balance between performance, stability, cost, and risk.

How to Build a Successful QA Team in a Rapid Growth Startup
https://applitools.com/blog/how-to-build-qa-team-startup/
Fri, 06 May 2022 20:43:36 +0000

Learn how to build an effective QA team that thrives in an environment where things change quickly.

Within a startup, projects can be the highest priority one minute and sidelined the next. As the QA Lead at a growing startup, recruiting testers was my number one priority in order to keep up with the pace of new projects and the demands on my current testers. I'd been told a new project was top priority, so I'd just offered a role to a tester who I thought would be a great fit to hit the ground running on it.

Luckily, I hired a person who was resilient and took the immediate pressure in stride, but putting so much pressure on them with such a high priority project and requiring them to learn the domain super quickly was not how I would onboard a tester in normal circumstances.

And then, three months of hard work later, the project was scrapped. In a startup, this happens sometimes. It’s never fun, but that doesn’t mean you and your QA team can’t learn from each stage of the experience.

Dealing with Changing Priorities, as a Domain Expert

Testers often gain considerable domain knowledge while navigating the application from the user's perspective multiple times a day. However, this can be difficult to sustain when priorities change so frequently. During this project there were changes to designs, user journeys, and business logic right up until the last minute.

How does a tester keep on top of all these changes while maintaining tests and finding bugs that are relevant and up to date with the latest requirements? In this environment it can be hard to stay motivated and perform your role at the level you aspire to. As a manager in this situation, I made sure the tester knew their work would not be judged against normal circumstances and that I understood the changing requirements would lead to delays. Dealing with these challenges requires a person with certain hardy attributes.

It’s All a Learning Experience

Having tests or code you've written deprioritized can be disheartening, but treating it as a learning opportunity helps. The tester involved saw opportunities to reuse some of the code they had written for future tests and also felt they could use the test data spreadsheet as a template for other pages of the site. I was really proud of how they dealt with changing priorities and saw opportunities to take learnings with them into future testing scenarios.

It’s Okay to Write Poor Quality Automated Tests

In this fast moving environment, where designs are likely to change and user journeys are not set in stone, it’s okay to write poor quality tests.

When writing code, you want it to be easy to understand, maintain, and reuse. In this environment that isn't always possible, so writing code that you know you will need to refactor later is absolutely fine. The automation tester would often request a code review with a long description explaining why they'd duplicated code or why they'd not moved a piece of logic into a separate function. I always approved their pull requests and suggested they leave a TODO comment to revisit once the designs and user journeys were more stable, always reiterating that I wasn't judging them for this approach.

Get Comfortable with Tests Being Red

The tests were red, often. Seeing your tests fail less than 24 hours after you’ve created them can be quite a demoralising feeling. Especially when it’s due to the developer refactoring some part of the UI and forgetting to keep the test-id you needed for your tests. This can be very frustrating and make maintenance difficult.

In a fast moving environment, it’s okay for your tests to be red. The important thing is to keep adding tests as these will be your lifeline when you are getting ready for a release and you have a short amount of time available. These tests will be critical in the lead up to go live.

Dealing with Design Changes, as an Automation Tester

Designs are often not complete before you start development, particularly when working with overly optimistic deadlines. In this project, even the user experience (UX) research wasn't complete at that stage, meaning the foundations of the frontend development process were not finalised. As mentioned previously, designs changed on a regular basis throughout the project. This impacts the automation tester quite significantly and can make them think they should wait until the frontend is stable; in fact, it is often recommended not to automate while the frontend is changing frequently.

So what do you focus on in this scenario without becoming annoyed by all the wasted effort? Building the structure for the tests including visual, accessibility and performance. As the automation tester knew they couldn’t rely on elements or any specific locators, they focused on whole page visual snapshots, accessibility reports and page render performance metrics.

Visual Snapshots before Functional Checks

As the website was going to be changing on a daily basis, functional tests would be so brittle that we sought other alternatives.

Visual testing seemed like a great alternative as we could easily replace the base snapshots when the designs had stabilised. With this approach we weren’t targeting specific parts of the page or components, which is how I would usually use visual snapshots in order to ignore dynamic content. 

To combat this, within the content management system (CMS) we could create test pages with the same layout as the homepage, for example, to run our visual tests against the whole page. This way the content wouldn’t change on the page and we could make comparisons quickly across different resolutions. This saved us a lot of time and effort compared to functional tests.

Whole Page Accessibility Audits

Images were being swapped out, and colour changes and font swaps were happening frequently, so developers forgetting about accessibility was a common occurrence.

The accessibility audits on the page allowed developers to get instant feedback on quick fixes they needed to make to ensure the accessibility wasn’t impacted with their most recent changes.
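
The post doesn't name the audit tooling, but one common way to automate such checks is axe-core; a minimal sketch using the @axe-core/playwright package (this assumes a Playwright-based suite, and the URL is a placeholder):

const { test, expect } = require('@playwright/test');
const { AxeBuilder } = require('@axe-core/playwright');

test('home page has no detectable accessibility violations', async ({ page }) => {
    await page.goto('https://www.example.com');

    // Runs the axe-core audit against the current page and collects violations.
    const results = await new AxeBuilder({ page }).analyze();

    expect(results.violations).toEqual([]);
});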

Page Render Performance as a Smoke Test

Marketing would frequently send over a new image, then a developer would update the image on the site, often skipping the optimization step.

Using Google Lighthouse as a smoke test, it was easy to identify images that hadn't been optimised. Perhaps the image was the wrong size or wasn't suitable for mobile. This meant we could quickly go back to marketing and ask them to provide a new image. Catching these performance issues as early as possible means you don't have hundreds of images to optimise at the last minute before release.
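
As an illustration of this kind of smoke test (not the exact setup from the project), Lighthouse can be run programmatically from Node and the performance score checked on each build; a rough sketch using the lighthouse and chrome-launcher packages (the URL and threshold are placeholders, and newer Lighthouse releases are ESM-only, so the import style may need adjusting):

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const options = { port: chrome.port, onlyCategories: ['performance'], output: 'json' };

    // Audit the page and pull out the overall performance score (0 to 1).
    const runnerResult = await lighthouse('https://www.example.com', options);
    const score = runnerResult.lhr.categories.performance.score;

    console.log(`Performance score: ${score * 100}`);
    if (score < 0.8) {
        // Fail the smoke test if the page falls below the chosen threshold.
        process.exitCode = 1;
    }

    await chrome.kill();
})();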

Dealing with Projects Being Scrapped, as a Person

Days after the website was released to the world, we got the news. Due to time pressures and designs changing right up until days before the site went live, we hadn't delivered the highest quality website. There were bugs, the user journey was a bit clunky, and the search results weren't very accurate. None of this was down to the team that worked on the website; there were some real superstars, and people worked weekends and late nights to deliver the website on time. Nevertheless, business stakeholders had decided to engage an agency, with a view to outsourcing the website.

This came as a real shock to the team and wasn’t quite the news everyone was expecting just days after working hard towards a common goal. All the tech debt and automated test coverage we left for post release was now put on hold. So how would you react when the domain knowledge you’ve acquired, code you’ve written and technology you’ve learnt overnight is not required anymore? It can be very disheartening to hear your project has been scrapped and the hard work you put in can seem like it was for nothing.

Lessons Learned, Delivering Rapidly and for Nothing

It's not all doom and gloom. There are many lessons learned along the way that will help you develop as a resilient member of the team and teach you how to work in a rapidly changing environment, which is quite useful if you are working for a startup.

One of the most important lessons I learned was to focus on what I could control, such as working with the automation tester to come up with solutions to the fast-moving changes. I couldn't control the deadline or the scope changes two days before going live. But, as someone who has been through these situations before, I could give my advice on the risks that late changes would cause.

Another positive to come out of this was the focus on visual, accessibility and performance in a holistic fashion. Usually I would focus on making my tests robust, target specific components and use this at the end of the project for regression purposes. However, now I have another use case for these testing types.

Testing in this setting is not ideal and requires some sacrifices in terms of quality. Leading QA on this project wasn't an enjoyable experience; it required careful management and is one I will not forget anytime soon. But I learned far more on this project than I would have if it had gone smoothly.

Proving a Concept, Automation Style
https://applitools.com/blog/how-to-choose-test-automation-tool/
Thu, 07 Apr 2022 21:15:49 +0000

Learn how to choose a new test automation tool and the top considerations you need to keep in mind as you develop a proof-of-concept.

When people ask me about which tool they should use for their automation, I typically explain my view of the automation ecosystem to them. As I discuss in my Bad Spin blog post, this ecosystem is made up of strategy, audience, and environment, but as Alton Brown says, that's a different show; I have a one-hour talk about the automation ecosystem and choosing a tool; you can contact me if you are interested in hearing it… But I digress.

As part of the aforementioned talk, I recommend doing one or more proofs of concept or prototypes using the tools that you’ve decided are possible candidates. Yes, there is a subtle, or not so subtle, difference between a prototype and proof of concept, but for our purposes in this writing, we’ll call them the same thing. With that assumption in mind, here are some considerations that are usually appropriate for most automation prototypes; these thoughts have served me well over the years.

Prototype Against Your Application or Product

Creating automation prototypes against “test” websites such as The Internet by Dave Haeffner or Restful Booker by Mark Winteringham can be a good way to exercise an automation tool across multiple application constructs; I’m a big fan of these sites and I do use them from time to time. Nothing, however, compares to creating your prototype against your applications. You know where the “icky bits” of your app are, you know where the 3rd party components are used… or you can find out by asking the developers. There is no substitute for prototyping against your own app(s).

When doing this prototype, don’t shy away from the “hard to automate” portions of your app. These portions are very important because, depending on the frequency with which they are used, they might rule in or rule out specific tools.

Use a Free or a Trial License

As attractive as it may be to focus on “free”, i.e., open source software, you should not automatically discount vendor-sold software. If you find that a vendor-sold product might be a viable candidate for your automation tool, you should consider creating a prototype with it; if you don’t, you won’t know whether it is, in fact, an appropriate tool for you. In fact, it might be the most appropriate tool for you. To be clear, I’m not saying that you should buy a license for that tool just to do a prototype. Most vendors have free-with-limited-features versions or temporary trial licenses. Trial licenses usually have a time limit; trial durations of 7 – 30 days are common.

Run Against Your App’s APIs

Does the tool or framework you’re using for your prototype support testing web services? If yes, awesome! Make sure you prototype against your application’s API in addition to any GUI you provide. Note that most API-capable tools can handle your basic APIs, so make sure to automate against “more challenging” APIs.

Also, does the tool work with your squirrelly authentication and authorization scheme? Does it work with your 3rd party authentication provider? Is there some non-standard payload that your APIs deliver? If so, make sure you check the automation tool against that; to insert some concreteness, I’m living through a difficult authentication paradigm at the time of this writing.
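
Setting authentication aside, even a basic API check in a prototype can be short. As one purely illustrative example (the endpoint and response shape are placeholders), a tool such as Playwright exposes a request fixture that lets a prototype hit an API directly:

const { test, expect } = require('@playwright/test');

test('orders endpoint returns data', async ({ request }) => {
    // A prototype-level API check; the endpoint is a placeholder.
    const response = await request.get('https://www.example.com/api/orders');
    expect(response.ok()).toBeTruthy();

    const body = await response.json();
    expect(Array.isArray(body.orders)).toBeTruthy();
});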

Exercise Concurrency or Parallelization

Not all automation tools support concurrent (i.e., parallel) execution. Even the ones that do may have limitations with respect to your specific context. Try running test scripts in parallel to ensure you are getting the behavior you expect in addition to the performance you desire. Of note, are the logs and reports you get when running in parallel less helpful than those you get when running sequentially?

3rd Party Partners

Which 3rd party service providers does the tool support? More specifically, which managed browser grids and device farms does it support? Is this capability open-ended or does the tool only support specific 3rd parties? Be sure to automate against as many 3rd parties as is feasible to make a responsible decision.

Note that if a tool only supports specific 3rd party infrastructure that is not necessarily an issue. If, however, you do choose that tool, you must be willing to also work with the supported 3rd parties or avoid them completely, e.g., managing your own Selenium grid, device farm, etc.

Simulate a Major Change

One of the challenges in any automation endeavor, regardless of tool choice, is keeping maintenance effort to a minimum, so it’s important to understand a tool’s capability to handle a refactor or pervasive change. During your prototyping activities, try to simulate having to change values in, say, 500 or more test scripts. This simulation may not be easy to set up, but the information you’ll gain about your future maintainability with this tool will be invaluable.

Look at the Result, Log, and Report Files

Though we understand that test automation development is, in fact, software development, there is an important difference from general application development. For example, the result of buying a product on a website is not an email with an order number; the result is that the buyer receives the product that they ordered. In contrast, the result of a test automation script is not just a pass or fail, a yes or a no, a red or a green. The most valuable “products” of a test automation script are its log/report/result files. This is where we can determine the pass/fail status but also, we can determine what did and didn’t happen during a script run. When prototyping with a tool, evaluating the generated artifacts is essential to performing a responsible evaluation of the tool itself.

Some considerations when performing this part of an evaluation include:

  • Are the logged steps sufficient for you to understand what did and did not occur during the script’s execution?
  • If the script failed, is the failure reason sufficiently descriptive for you to debug the issue or report it to another team member?
  • Can you add additional log messages or other execution artifacts to the test run to make it easier to debug?

Most assuredly, the considerations above are a subset of what you want to exercise during a prototyping activity. Every team, organization, application, company, etc. has different needs and requirements. In fact, some of the above may not apply to your specific context.

There is one other thing of which to be mindful. When we create code for a prototype or proof of concept, we are creating it to prove that a concept or an implementation is feasible and is a good candidate for our needs. The code we create during this process should be developed as quickly and economically as is responsible. This means taking shortcuts, “making” things work, driving to an “it works” or “it doesn’t work” for us as soon as is reasonable. Further, this means that we need to be prepared to throw away the code we created during these endeavors.

“Wait! No! We just spent weeks creating this and it’s working! We can’t just throw it away!”

Yes, you can, and you should; in some cases, you must. Because this code was created taking shortcuts, “making” things work and driving to an “it works” or “it doesn’t work” for us conclusion, this code is typically not in a supportable and future-thinking state. In many cases, it will be more economical to rewrite the code than to maintain it over the life of that code. For code that is sufficiently close to an appropriate state of supportability, a refactor of the existing code may be more appropriate than a complete rewrite but that decision is situationally dependent.

Like this? Catch me at an upcoming event!

Screenshot Testing with Selenium, Cypress and Playwright: 3 Popular Automation Tools Deliver Amazingly Different Screenshots
https://applitools.com/blog/popular-automation-tools-amazingly-different-screenshots/
Wed, 16 Mar 2022 20:35:52 +0000

Learn the differences between Selenium, Cypress, and Playwright when it comes to automated screenshot testing and screenshot quality.

The Issue with Testing Screenshots

Capturing screens is a fundamental piece of testing web assets and reporting defects. There are other use cases for these screenshots, too. I work as the Director of Quality Control for my company. We service the pharmaceutical industry. I was tasked with leveraging existing automation processes to deliver high quality images of client sites to submit for legal review more efficiently.

My journey to find the best way to take screenshots to fill this niche was filled with ups and downs. The journey started with Selenium WebDriver, which fell short of meeting the requirement. Next I moved to Cypress because it showed great promise. It also had limitations I could not work around. Finally, I landed on Playwright and was able to meet the desired output.

I will err on the side of caution and use a site that is completely unrelated to any of our client work. Snopes seems like a safe choice. Besides, we will not do any advanced or prolonged calls against their site. Be nice to your internet neighbors!

Screenshot Testing in Selenium WebDriver

Automating full page screenshots with high fidelity has been troublesome in the past. Selenium WebDriver, the de facto champion of browser automation, lacks a method to capture a full page. Having said that, Selenium drivers do have methods for screenshots. The sample code for taking a screenshot with Selenium WebDriver is straightforward:

takeSnopesScreenshot.js

let {Builder} = require('selenium-webdriver');
let fs = require('fs');

(async function checkSnopes() {
    let driver = await new Builder()
    .forBrowser('chrome')
    .build();

    await driver.get('https://www.snopes.com');
    // Returns base64 encoded string
    let encodedString = await driver.takeScreenshot();
    await fs.writeFileSync('./homepage.png', encodedString, 'base64');
    await driver.quit();
}())

Here are the steps we took to capture our image:

  1. We start by bringing in our Builder class so we can create new webdriver instances.
  2. We also need access to the file system so we can save our image.
  3. Next, we instantiate a new webdriver for the Chrome browser.
  4. Then we instruct the webdriver to open https://www.snopes.com. This should probably be put into a variable. We’ll leave it here since we are only going to one page on the site.
  5. Next, we take a screenshot of the web page.
  6. Then we save the image to disk by converting the base64 encoded string into a .png file.
  7. Finally, we free the driver from service.

We get a screenshot like this after running the code:

An image of the homepage of Snopes.com.

The resulting image looks nice. A couple of things immediately stood out to me as problems here:

  1. It is not a full-page image.
  2. The scrollbar is displayed.

We attempt to fix these issues by setting the window size after initializing the webdriver:

takeSnopesScreenshot2.js

let {Builder} = require('selenium-webdriver');
let fs = require('fs');

(async function checkSnopes() {
    let driver = await new Builder()
    .forBrowser('chrome')
    .build();

    await driver.manage().window().setRect({ width: 1024, height: 2000 });

    await driver.get('https://www.snopes.com');
    // Returns base64 encoded string
    let encodedString = await driver.takeScreenshot();
    await fs.writeFileSync('./homepage.png', encodedString, 'base64');
    await driver.quit();
}())

Two things come to mind if this works the way I anticipate:

  1. I may not know the dimensions of the longest page.
  2. I do not want to add more code to maintain.

The resulting screenshot after resizing the window:

An image of the homepage of Snopes.com. It's still not a full page image.

The second image is high quality. Obviously, the setRect() method did not force the full page image like I had hoped.

This works great for capturing what is in the current viewport, but not so well for a page where scrolling is needed to see all the content. Instead, we would need to "stitch" the images together to get a full-page screenshot. There is also a third-party plugin named aShot for the Java users of Selenium, which is not much help in a .NET and JavaScript shop. The lack of an easy, consistent method for full page captures is a deal breaker for my use case.
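
For completeness, here is a rough sketch of what the scroll-and-capture half of that stitching approach might look like with selenium-webdriver (the actual stitching would still need an image library such as sharp or jimp, and lazy-loaded content or sticky headers complicate things further):

let {Builder} = require('selenium-webdriver');
let fs = require('fs');

(async function captureSlices() {
    let driver = await new Builder().forBrowser('chrome').build();
    await driver.get('https://www.snopes.com');

    const totalHeight = await driver.executeScript('return document.body.scrollHeight');
    const viewportHeight = await driver.executeScript('return window.innerHeight');

    // Scroll one viewport at a time and save each slice for later stitching.
    let slice = 0;
    for (let offset = 0; offset < totalHeight; offset += viewportHeight) {
        await driver.executeScript(`window.scrollTo(0, ${offset})`);
        let encodedString = await driver.takeScreenshot();
        fs.writeFileSync(`./slice-${slice++}.png`, encodedString, 'base64');
    }
    await driver.quit();
}())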

Screenshot Testing in Cypress

Speedy delivery, simplicity, and consistency are key for a successful submission. The champ (Selenium WebDriver) is too cumbersome. That leads to a scrappy newcomer, Cypress.io. Cypress is an amazing framework. Easy to use – check. Fast as lightning – double check. It even has a built-in Screenshot API that captures full page images – yes, please. Here is a quick example of what that looks like:

snopes.test.js

describe('submission screenshots', () => {
    it('takes screenshot of the home page', () => {     
        cy.visit("https://www.snopes.com");   
        cy.screenshot('/screenshots/cypress-homepage');
    });
});

Here’s the breakdown of what we did:

  1. We set up our test suite using the built-in support from Mocha.
  2. Next, we set up the test.
  3. Then we visit snopes.com.
  4. Finally, we take the screenshot and save it as cypress-homepage.png.

A few things to note here:

  1. The code feels less intimidating and more accessible.
  2. We did get a full-page image!
  3. There is a problem, though. Can you see it?

Another great-ish image. Cypress, however, appears to suffer from some of the same stitching and consistency issues found in Selenium. Images were not consistent. Some were blurry. Others were missing content – usually near the bottom of the web page. This is not likely the end of the world for most use cases, but those are killer obstacles to overcome to create legal submission documents for a federally-regulated company.

Screenshot Testing in Playwright

I was quickly losing faith in being able to find a good solution to help with our automated screenshot efforts. Then, I found Playwright, Microsoft’s entry for automated UI testing. It is a fantastic framework that checks the boxes for being speedy and easy to use. It also has the capability to capture full page images out of the box. It looks promising. The only question left is the quality of images.

Spoiler alert – it works like a charm. Let's take a look at some of the code:

snopes.test.js

const {test, expect} = require('@playwright/test');

// The site I will capture screens from.
const url = "https://www.snopes.com/";
const savePath = "./screenshots/";

// I have a wrapper function to beef up Playwright's screenshot method.
async function takeScreenshot(page, label) {   
    await page.screenshot({
      path: `${savePath}${label}.png`,
      fullPage: true,
    });
}

// Start the testing
test.describe('Submission Screenshots:', async () => {

    test('0.0 - Homepage', async({page}) => {
        await page.goto(url);    
        await takeScreenshot(page, "playwright-homepage");
    });

    test('1.0 - Menu', async({page}) => {
        await page.goto(url);
        await page.click("//html/body/header/div[1]/nav/div/div[3]/div/a[2]");
        await takeScreenshot(page, "playwright-menu");
    });

    test('2.0 - Search Results', async({page}) => {
        await page.goto(url);
        await page.fill('input[name="s"]', 'scam');
        await page.press('body', 'Enter');
        await takeScreenshot(page, "playwright-search-results");
    });
});

This snippet is a bit more polished than the others.

  1. We moved the URL into a proper variable.
  2. Next, we created a variable to hold the saved images path.
  3. Next is a wrapper function to take care of the additional information (label) needed for each image.
  4. Inside we call Playwright’s screenshot method.

From there the test suite and individual tests take on the Mocha format like Cypress.

These are the screenshots we get after running the code:

Playwright Homepage
Playwright Menu
Playwright Search Results

Those images look amazing. Full page. Crisp. Consistent. The time taken to run through the screens is not bad for what we are doing, but there is considerable overhead when doing file I/O. We added 20 seconds by writing those large files to disk.

The chart below lists the times for capturing just the homepage image versus a visit to the page for each framework. Keep in mind that these times reflect visiting an external site on my personal internet connection. Times on a local build will be significantly faster.

| Framework | With Screenshot (seconds) | Without Screenshot (seconds) |
| --- | --- | --- |
| Selenium WebDriver | 21 | 12 |
| Cypress | 18 | 11 |
| Playwright | 21 | 10 |

Conclusion

These are simple examples of how to use the screenshot capabilities of three common automated testing frameworks. The differences in the resulting images were quite shocking. I had presumed that a screenshot was a screenshot.

It is important to note that different here does not mean bad. You have to take your use case into account. Take the extra 20 seconds needed to write three full page screens to disk for example. That is not a typical use case, but it is perfect for the time savings we set out to recoup by automating submission screenshots. Is it perfect? No, but it gets closer with every site we automate.

There is room for enhancement. We can move from capturing and saving screenshots into automated visual testing with Applitools. Visual testing with Applitools is an enhancement worth making, especially for regulatory or compliance testing, because we will be alerted whenever a visual change is detected. This is done by swapping out the screenshot method calls with calls to Applitools Eyes.
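
As a sketch of what that swap could look like (assuming the Applitools Eyes SDK for Playwright, @applitools/eyes-playwright; exact package and method names may differ between SDK versions, and API key setup is omitted):

const { test } = require('@playwright/test');
const { Eyes, Target } = require('@applitools/eyes-playwright');

test('0.0 - Homepage', async ({ page }) => {
    const eyes = new Eyes();
    await eyes.open(page, 'Submission Screenshots', '0.0 - Homepage');

    await page.goto('https://www.snopes.com/');

    // Instead of page.screenshot(), capture a full-page visual checkpoint.
    await eyes.check('playwright-homepage', Target.window().fully());

    await eyes.close();
});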

Funding a Common Test Automation Team
https://applitools.com/blog/funding-common-test-automation-team/
Wed, 16 Feb 2022 22:39:13 +0000

You’ve decided that test automation is working or is going to work for you. That’s good; you’ve done your homework. Furthermore, you’ve decided that having a common or “base” automation team is your preferred organizational approach; you’ve decided this team will build and support the common automation infrastructure that will be shared across the teams in your organization or your company. Awesome!

How are you going to fund that team?

Unfortunately, money does not grow on trees and organizations have pre-defined budgets that are hard or impossible to amend. Additionally, Organization A is not generally interested in helping Organization B, unless helping is in line with meeting Organization A’s goals. This means that if Organization A has governance over the supposedly common automation infrastructure, that organization is only inclined to help Organization B as long as it also helps Organization A or, at least, doesn’t impede Organization A.

This all means that, in most cases, counting on organizations to work together for a common infrastructure “good” is not a tenable business strategy. That being the case, there are a few approaches to funding a common, or “base” automation team that are described below.

The Automation Tax

An Automation Tax is essentially what it sounds like; each year, each team or organization has a flat dollar amount or percentage of their budget allocated to fund the base automation team. In return for this funding, the funding team/organization gets unrestricted use of all code produced by the base team, including code, features, and fixes produced for other teams.

The easy math on this approach is quite attractive to many base team organizations. It’s also easy to “spread the wealth” in that large teams or organizations pay more than small ones do if the funding is a percentage of their budget. The logic here is that larger teams generally require more features and support in shared automation code than do smaller teams. The wealth is spread because all teams, large and small, gain access to features and fixes requested by other teams, regardless of the size of the requestor.

This approach can also make maintenance of the base code easier, at least from a funding standpoint. If the base team is funded a general amount from each user team, then care and feeding of the framework and stack base code is included in that amount. There is no need to scrimp and beg for “additional” funding to perform an upgrade of one of your automation tools or to port to the new version of the language or operating system that you use. It’s all built-in!

Unfortunately, this approach is not all fun and games. The accounting and reporting of work performed needs to be very detailed and will often be challenged by the user base. In addition to challenging the veracity of the reports, some common questions and challenges include:

  • Team A and Team B both want the same feature added to the automation base; which team gets charged for it? It can be the case that each wants the other to pay for it. They will definitely want to see that only one of them paid for it and the base team didn’t charge twice for the work.
  • Team C’s base team tax equates to approximately two people of effort based on the loaded labor rate. Team C wants to know which two people will be working on Team C’s work items. This may not be the best way to staff a base team.
  • I’m the largest funder of the base team; why weren’t all of my requests completed prior to working on items for any other team?

Over time, the tax may need to be reduced because the year-over-year effort may decrease, leaving the base team with idle team members and dissatisfied “customers.”

Direct Funding

For the Direct Funding approach, each year each team or organization decides what features and support it will need for the upcoming year, adds that to its budget, and then funds the base automation team; the funders must also project when they will need each feature or bug fix. The base automation team is obliged to staff appropriately for both the workload and the expected timing of delivery.

From the users’ standpoint, this is an awesome way of funding. A user organization tells the base team what to develop, the base team negotiates the cost and the delivery date, then delivery happens. The priority of the work is pretty clear since each funding organization is paying for specified work by a specified date. If many features or fixes are required in the same timeframe, the staffing and, therefore, the cost to the funding organizations will be higher; this cost, however, is stated upfront during the funding or budgeting period so there are fewer surprises. If a user organization doesn’t like the required funding, they negotiate over the date or the content.

The downsides of this approach typically fall on the base team. The base team has to account for variable staffing based on the expected workload and expectation dates. If many projects are all to be performed in parallel, it can be necessary for the base team to ramp up or ramp down in the middle of a fiscal year; this can be problematic in some companies. Additionally, managing short-notice staffing ramps can necessitate working with an outside partner to provide contractor-based staffing; the base team leadership must work closely with this partner to project the ramp cadence, meaning the base team must have a lot of trust in the partner.

Usage Billing

In the Usage Billing approach, teams or organizations are charged on a person-by-person, period-by-period basis, whether that period is a month, a quarter, or some other interval. The attraction is that it’s pay-as-you-go: if person X doesn’t use the tooling during a period, the organization isn’t charged for that person’s usage for that period.

On the surface, this sounds like an awesome arrangement, as did the Direct Funding approach. As we’ve learned, or are learning, however, nothing’s perfect.

This approach requires high-bandwidth communication between the base automation team and the user teams; user teams need to make projections about their usage for the upcoming year, and the base automation team needs to staff and prepare for that usage, but only up to the amount funded by projected usage. User organizations also need to project any “big” work items they need in the upcoming year; if the projects are sufficiently large, the “base” usage may not be sufficient. Like the Automation Tax approach, the per-user, per-period bill may need to decrease as the base matures; it may also need to increase if there are anticipated “big ticket” base development items on the roadmap.

Another consideration is the definition of usage. Is usage simply starting the tool’s UI? Or does a user need to actually run a test script? Depending on the organization or company, other definitions of usage could be applicable. A definition must be established that is realistic, tolerable to the user teams, and able to generate enough funding to account for the effort needed by the base team.

Once a usage definition is established, how should that usage be tracked? There needs to be a way of recording a “use,” which can be challenging in some situations. Is the tool expected to be usable when a user is offline, e.g. on an airplane or in a non-connected area? If so, how should that usage be “recorded” so that it can then be reported once the user is back on a network? Must the tool work on a permanently disconnected network, such as a testing-only LAN where connections outside that LAN or segment are blocked? If so, how will usage be reported? How should usage by a shared or “service” account, as is often configured for CI/CD pipelines, be recorded? Is that a single user?
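To make the offline question concrete, here is a minimal sketch of client-side usage tracking with local buffering; the endpoint URL, the event schema, and the choice of “one event per test run” as the definition of usage are hypothetical placeholders rather than a prescription.

```python
# Minimal sketch of client-side usage tracking with offline buffering.
# The endpoint URL, the event schema, and the "one event per test run"
# definition of usage are hypothetical placeholders.
import json
import time
from pathlib import Path

import requests

BUFFER_FILE = Path.home() / ".automation_usage_buffer.jsonl"
USAGE_ENDPOINT = "https://base-team.example.com/api/usage"  # hypothetical


def record_usage(user: str, action: str = "test_run") -> None:
    """Append a usage event locally so it survives offline periods."""
    event = {"user": user, "action": action, "timestamp": time.time()}
    with BUFFER_FILE.open("a") as f:
        f.write(json.dumps(event) + "\n")


def flush_usage() -> None:
    """Send buffered events to the billing service once the network is reachable."""
    if not BUFFER_FILE.exists():
        return
    events = [json.loads(line) for line in BUFFER_FILE.read_text().splitlines() if line]
    try:
        requests.post(USAGE_ENDPOINT, json={"events": events}, timeout=5).raise_for_status()
    except requests.RequestException:
        return  # still offline or service unavailable; keep the buffer for later
    BUFFER_FILE.unlink()  # reported successfully; clear the local buffer
```

A wrapper like record_usage() could be called from a test runner hook, with flush_usage() retried periodically or at the end of a run.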

This is the approach most sensitive to actual usage, in that the teams and organizations that use the tool the most pay the most for its upkeep and evolution. The theory is that the more users a team has, the more changes and support that team will request, so that team should contribute funding at a higher level.

There are challenges in addition to the ones previously described. Some teams will view the per-user cost as unfair relative to the value they believe they are receiving. Related to that point, fairness is hard to demonstrate; a lot of transparency and reporting must be in place. Finally, it can be hard for the automation team to project staffing and handle staffing changes.

The Hybrid

It may be a stretch to call this a separate approach, per se. It’s really the combination of two of the above-described approaches. The hybrids I’ve seen most often used are “Automation Tax plus Direct Funding” and “Usage Billing plus Direct Funding”.

In both cases, the “Direct Funding” is used to develop features that teams specifically pay for. Many teams may want the same feature, but whichever team requests it first pays for it and ostensibly helps to influence its behavior. Regardless, all features become available to all user teams whether they paid for them or not.

The funds obtained from Usage Billing or the Automation Tax, which all user teams pay, are used to fund the maintenance and upkeep of the automation base. This maintenance includes bug fixes. Naturally, the hybrid approach will have pros and cons similar to the two approaches selected for the hybrid; this combination, however, may be more palatable for some teams than a single approach on its own.

Other Considerations

There are many considerations to unpack here. A subtle consideration, but possibly the most important one, is “should I even have a shared base automation team?” By creating a base, shared, or core automation team you are committing to having automation as a core competency for your organization and company. I’m not stating that to deter anyone from this approach; in fact, most “sufficiently large” companies and organizations can benefit from considering test automation as a shared service at some level due to the economies of scale. This is a great approach so long as the cost (as defined by an organization or company) is smaller than the value provided by having a base automation team.

Another consideration is the delineation between what is built by the base automation team and what is built by the user teams. Often, any feature or fix that would benefit multiple teams falls to the base team; this also requires good communication with user teams about what they are building, whether it would be interesting to other teams, and how to build it in a more generic manner, i.e. applicable to multiple teams.

We can even blend any of the approaches above with an “internal open-source” concept. For internal open-source, those automation user teams that have the appropriate skill sets can add features to the common automation code. The base automation team would be the stewards of the shared code to help ensure code added by other teams is appropriate for sharing.

Certainly, there are additional pros and cons to any of these approaches based on team experiences or the contexts in which those teams are currently working.  These explanations are meant to serve as a guide so that you can make your own decisions about creating and funding a base team in your specific contexts.

The post Funding a Common Test Automation Team appeared first on Automated Visual Testing | Applitools.

Why I Joined Applitools (again): Dave Haeffner https://applitools.com/blog/why-join-applitools-again-dave-haeffner/ Wed, 02 Feb 2022 19:40:22 +0000 https://applitools.com/?p=34066 Dave Haeffner shares his career journey, which included taking time off to focus on his family before joining Applitools (again).

The post Why I Joined Applitools (again): Dave Haeffner appeared first on Automated Visual Testing | Applitools.


Everyone has their own opinions. And some of them? Preposterous. Others? Downright controversial! Like, for instance, thinking that The Bourne Legacy is the best of all the Bourne movies (change my mind). Or that vim is better than emacs… and VSCode (!). You get the idea, and this is nothing new.

But what never gets old is when you can find a group of people where you can connect regardless of your opinions and feel like you belong. To find that group of people who can take you for who you are (regardless of your poor taste in movies or questionable choice of text editor), riff on it, and (lovingly) rib you a bit for it. To do this in all facets of life is important, but most recently, I managed to find this group of people in my work life – having recently joined Applitools as a Software Developer. And ironically, this isn’t my first time working here.

In the not-too-distant past I took some time away from my career to focus on my family. My wife was looking to head back to work after a few years away to focus on raising our young children. I was looking to take a break from work and embark on a different kind of challenge (family circus conductor). Fast forward to the end of what I affectionately refer to now as my extended sabbatical, and things are different. My kids are older now and in preschool, my wife is back working, and I’m home twiddling my thumbs wondering “What to do?”

So I explored a few options.

Back into entrepreneurship? Sure, why not? So I looked into starting a new business. But man, that’s a lot of work! Last time I did that was in 2011. I did it by myself then and it was very hard. The conclusion? That’s a young man’s game. Nope, next.

Why not partner with someone instead of going it alone? Okay, sure. So I joined a friend’s startup as a technical co-founder. But that didn’t feel like the right fit either. So maybe freelance software developer? Done. That started out okay, but after a few months I realized it was lonely and not challenging me in the ways I was looking to grow. Hmm.

At the end of it all, a question crystallized for me, “What do I want to do and who do I want to do it with?” Me, a vim user with fantastic taste in films. Where I ended up was a headspace eerily similar to where I was in 2018.

Back then I decided that I wanted to make a go of being a full time software developer. To focus on the process of making high quality, high value software. But here was the rub. While I had experience working with a handful of programming languages (at least superficially through my work in software testing), I didn’t have a “proper” background. I don’t have a degree in computer science (I have a degree in network engineering, thank you very much), I’ve never worked as a developer for a company, and hilariously I failed my intro to programming course at university (I did much better the second time though!). But through my work in software testing I was able to parlay that into a position as a software developer working at Applitools, which turned out to be a life-changing experience for me. I got to work with immensely smart and talented people who welcomed me warmly, helped bring me along, and challenged me in ways that supercharged my growth (shoutout to Doron, Tomer, Gil, Amit, and Adam!). And it didn’t hurt that I got to work on fascinating, multi-faceted technical problems that forced me to grow my problem solving skills every day.

I remembered all of this fondly when searching for an answer to my question – “What do I want to do and who do I want to do it with?” Not only did I realize I wanted to continue my journey of software craftsmanship but I also wanted to go back to working alongside great engineers in a collegial environment. With people who both accept me as I am and challenge me to be better. To be in a place where I’m fed an endless supply of technical problems which are fascinating to a software geek like me.

On the tin, this is Applitools. And not for nothing, it also doesn’t hurt that they are building the most innovative stuff in the software testing space. I say this non-hyperbolically with over a decade in this industry (“I’ve seen things you people wouldn’t believe”).

So I was floored when I messaged my old manager to reconnect and tell him what I was thinking. Because very quickly this started a chain reaction of conversations which led to me ultimately answering the question “When can you start?”. Before I knew it, my start date was upon me. And you know what? I was welcomed back just as warmly as when I joined the first time. Now I’m well on the other side of my start date, back in the trenches, working alongside my fellow colleagues. And I gotta say, it’s great to be back!

Interested in joining? Come for the people and technical problems, but stay for the innovation that’s shaking up software testing (and maybe a movie recommendation or two :-)). Take a look at job openings here: http://www.applitools.com/careers

The post Why I Joined Applitools (again): Dave Haeffner appeared first on Automated Visual Testing | Applitools.

Test Automation Framework: Build vs. Use vs. Buy https://applitools.com/blog/test-automation-framework-build-vs-use-vs-buy/ Fri, 10 Dec 2021 20:09:36 +0000 https://applitools.com/?p=33322 A guide to help you to choose a solution for your team to start with test automation. Should you build, use open source, or buy commercial?

The post Test Automation Framework: Build vs. Use vs. Buy appeared first on Automated Visual Testing | Applitools.


A guide to help you get started with test automation and understand how to choose the best test automation solution for your team.

Automated testing is a solution to produce high-quality, robust, and reliable Web, Mobile, or Desktop applications or APIs amid the ever-growing complexity of technology, massive competitive pressures, and the need for cost optimization.

Writing Web, Mobile App, API, or Desktop tests is not just a nice-to-have anymore. In many companies, the responsibility for test automation is shared between developers and test engineers to increase productivity and collaboration, especially if they are working in an Agile environment.

If you are also planning to implement CI/CD pipelines in your team, keep in mind that test automation is now a vital part of that process; without it, you can’t release your products frequently.

Because of that, in this article I’d like to share my thoughts, based on my previous experience, about getting started with a test automation framework. Should we build our own framework? Use an existing open-source project/framework/engine? Or buy a commercial tool or framework?

Before you start thinking about a test automation framework, you need to be ready as a team with the inputs and requirements captured in your test strategy.

These include:

  • What are our needs as a testing team from this framework/tool?
  • What are our goals or targets, maybe including the ROI calculation or how we can measure our success with it?
  • What are the technical skills of our team?
  • Who will write the test scripts (test engineers, developers, or both)?
  • Do we have specific business cases we need to consider while selecting a framework or tool?

If you know the answers to the above questions, you will find it easy to decide which option is the most suitable for your team.

Sounds interesting! Let’s get started.

Let’s assume that we have three teams. Each one will pick a different solution, and we will discuss the advantages and disadvantages of each.

I usually imagine building a test automation framework like a LEGO game. You have different tools, languages, design patterns, layers, configuration, and dependencies like LEGO pieces and you need to decide which pieces you need to build your own framework.

(Image: LEGO pieces – source: https://unsplash.com)

With LEGO you can build any shape or block; you can architect anything you need, and different builds will produce different results.

Team 1: Build Framework

Let’s first clarify: what is the meaning of build?

Build means creating a customized framework in-house on top of open-source testing libraries or SDKs such as Selenium WebDriver, Cypress, Espresso, XCUITest, EarlGrey, REST Assured, K6, etc.

The idea behind building a test framework from scratch is to start from your requirements, needs, and goals so that you can use these pieces correctly.

(Image: LEGO build – source: https://www.pngitem.com)

From my perspective, there is no template or set of must-have features for a test framework (e.g., “you must use BDD”); you should always implement what matches your needs, requirements, and goals.

So the goal here is to keep it simple, reusable, reliable, and scalable:

(Image: an example of custom test automation framework (TAF) features)
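To give a feel for what “simple, reusable, scalable” can mean in practice, here is a minimal sketch of the kind of layering an in-house framework typically adds on top of Selenium WebDriver in Python; the URL, the locators, and the LoginPage itself are hypothetical placeholders rather than a recommended design.

```python
# Minimal sketch of in-house framework layering on top of Selenium WebDriver:
# a driver factory, a reusable base page, and one page object.
# URLs, locators, and LoginPage are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def create_driver(browser: str = "chrome"):
    """Single place to configure browsers, timeouts, grids, etc."""
    if browser == "firefox":
        return webdriver.Firefox()
    return webdriver.Chrome()


class BasePage:
    def __init__(self, driver, timeout: int = 10):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def type_into(self, locator, text):
        self.wait.until(EC.visibility_of_element_located(locator)).send_keys(text)

    def click(self, locator):
        self.wait.until(EC.element_to_be_clickable(locator)).click()


class LoginPage(BasePage):
    URL = "https://example.com/login"           # hypothetical
    USERNAME = (By.ID, "username")              # hypothetical locators
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")

    def login(self, username, password):
        self.driver.get(self.URL)
        self.type_into(self.USERNAME, username)
        self.type_into(self.PASSWORD, password)
        self.click(self.SUBMIT)
```

Tests then talk only to page objects like LoginPage, so locator or flow changes are made in one place rather than across every script.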

The Advantages:

  • A customized framework without extra features, implemented to cover our functional, edge, and critical test cases.
  • Built from scratch, so we know every single piece of the framework.
  • Integrated with our tech stack.
  • If there is a problem, or something in our app needs custom development, test engineers can ask the developers for help.
  • The maintenance responsibility and the ownership credits are owned by the team.
  • If it is a native framework built on top of native testing libraries such as Espresso, XCUITest, or EarlGrey for mobile test automation, the mobile engineers can help and write test scripts as well, because it’s the same context and tech stack.
  • If the team or the company has code standards, we can use them when building the framework to avoid any future risk or audit issues.

The Disadvantages:

  • Requires a good level of experience from the testing team, including mobile and web development skills, to implement a reusable and scalable framework.
  • Needs constant maintenance from the team, which requires time and effort.
  • Requires good, up-to-date documentation of the framework structure, architecture, and roadmap so that any new joiner can onboard easily.
  • It will take time until you have a good version you can rely on, but in the long term it’s a great investment in the team.

Team 2: Use Open-Source Framework/Project

“Don’t reinvent the wheel” – you may hear this sentence when you think about building your own framework. In other words: don’t build it; use an existing open-source framework, project, or engine instead.

(Image: pre-built LEGO kit – source: https://www.reviewgeek.com/)

But the question here is, is this beneficial?

Open-source frameworks or engines are a great idea. They save the team time and effort: you can easily integrate such a framework into your app or your testing project and start writing automated tests immediately.

The Advantages:

  • Free and ready to use immediately.
  • Open source, which means you can fork it (make your own copy) and implement new features or make changes to existing ones.
  • Flexibility in supporting multiple programming languages.
  • Saves the team time and effort, and you can calculate the ROI after a short time.
  • Doesn’t require a lot of resources (test engineers), because it includes all the features your team needs, so other test engineers can work on different tasks or tickets.
  • If the framework/engine is popular and a lot of companies are using it, you will find community support and answers to your questions when you face issues.

The Disadvantages:

One of the biggest disadvantages I can see in open-source projects is maintenance. For example, you can open an open-source project on GitHub today and find notifications like the following:

(Screenshots: maintenance-related notifications on an open-source GitHub repository)

This is a big issue, in my opinion, because you and your team may be relying on this project/framework when suddenly the owner and the maintainers become busy or decide to stop supporting or investing time in the repository.

So if you are using such a project, you should be able to fork it and be ready to fix any issues or implement new features yourself, because there is no one who will do this for you. That requires a high level of technical knowledge and skill from your team.

Also, remember that most open-source tools are created by people for a specific reason and when that reason is fulfilled, then the support for that tool can wane.

And sometimes, if we are working on a banking app or system, selecting an open-source framework may be a difficult decision because of the security or audit processes in the company.

Other questions you should consider as a team:

  • How long does it take the owner or the maintainers to fix open issues?
  • How long does it take when you request new features?
  • Is the framework scalable, reliable, and easy to use?

How to select the right test automation framework that matches your business and technical needs?

Actually, this is always a common question when we start thinking about applying test automation frameworks/tools in our company or team. Which criteria should we consider? Which functions or features? etc.

And the best answer, in this case, is always: “It depends”.

Because of that, I made a list of criteria to help you select the right tool. Based on these key criteria you can decide which tool will help you with test automation (you can, of course, add or remove criteria, because not all of them are required):

  • Ease of developing and maintaining the scripts
  • Ease of test execution
  • Intuitive Test Report generation
  • Cross-browser Testing (web)
  • Emulators, simulators, or physical devices (mobile)
  • Support for data-driven testing (DDT), BDD, and maybe keyword-driven testing
  • Ease of support from the community and the maintainers.
  • Ease of integration with CI/CD tools (Self-hosted like Jenkins or cloud-based such as Bitrise)
  • Support for visual testing tools such as Applitools
  • Support for cloud testing tools
  • Support for Docker and/or Kubernetes

In the end, check which framework/tool fits your business needs and requirements, then start with a POC (proof of concept) project that applies the tool to the most important business test cases to verify your selection and the ROI.
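One lightweight way to turn that “it depends” answer into something comparable is a simple weighted scoring sheet over the criteria above. The sketch below is only an illustration; the criteria names, weights, and scores are made-up placeholders you would replace with your own evaluation.

```python
# Hypothetical weighted scoring of candidate tools against selection criteria.
# Weights (importance, 1-5) and scores (fit, 0-5) are placeholder values.
criteria_weights = {
    "script maintainability": 5,
    "test execution": 4,
    "reporting": 3,
    "cross-browser support": 4,
    "CI/CD integration": 5,
    "visual testing support": 3,
}

tool_scores = {
    "Tool A": {"script maintainability": 4, "test execution": 5, "reporting": 3,
               "cross-browser support": 4, "CI/CD integration": 5, "visual testing support": 2},
    "Tool B": {"script maintainability": 3, "test execution": 4, "reporting": 5,
               "cross-browser support": 3, "CI/CD integration": 4, "visual testing support": 5},
}


def weighted_total(scores: dict) -> int:
    """Sum each criterion's score multiplied by its importance weight."""
    return sum(weight * scores.get(criterion, 0) for criterion, weight in criteria_weights.items())


# Print tools from best to worst overall fit.
for tool, scores in sorted(tool_scores.items(), key=lambda item: weighted_total(item[1]), reverse=True):
    print(f"{tool}: {weighted_total(scores)}")
```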

Team 3: Buy a Commercial Tool

At first impression it feels like the easiest choice, but it’s actually harder than the previous options because it involves the team’s budget, money, and cost.

(Image: boxed LEGO set – source: https://zusammengebaut.com/)

When a team decides to buy a framework or tool for test automation, it is usually for reasons such as:

  • There is a good budget for test automation and they need to use it.
  • The team doesn’t have the required technical skills to build their own framework or use an open-source project, so this tool will help them to write codeless test scripts easily and quickly.
  • They need to implement or write automated test scripts as soon as possible because there is a kick-off for a new project and they should automate the business test cases.
  • There aren’t enough test engineers in the company for the time being, and this tool will not require a big effort.
  • Sometimes these tools/frameworks contain exclusive features that the business needs and that would take a long time to implement in-house.

Advantages:

  • Don’t need an in-house skill set to get up and running.
  • Good documentation, tutorials, webinars, blogs, and training.
  • There is a support team that will help you in case there are issues or questions from the team.
  • Write and run automated test scripts quickly and easily.
  • Unified platform for all types of testing (mobile, web, API, and Desktop).
  • Seamless CI/CD Integration with different platforms.
  • Support parallel execution to speed up as you grow.
  • When we buy it, we have a license, which is a good point if our company goes through audits, helping avoid any future security or data issues.

Disadvantages:

  • The biggest disadvantage of commercial tools has to be the cost.
  • Sometimes it’s expensive and there is no flexibility in the pricing packages.
  • Sometimes not suitable for startups or small companies.
  • Not all of them scale well for large engineering teams.
  • Some are not suitable for specific businesses, for example not supporting Salesforce, blockchain, or ERP applications.
  • Sometimes they rely on record-and-playback, which is hard to maintain in the future.
  • Sometimes, because they don’t require technical skills from the testing team, they may hurt the team’s learning curve in the future.

The Decision

Always remember “there is no silver bullet” or “magic wand” – you always need to find the solution that suits your needs. There is no single best automation tool or framework; each one has advantages and disadvantages, and the right choice always depends on what you expect from the framework. It differs from one business to another and from one team to another.

In the end, you may find that a balanced approach works best, combining in-house, open-source, and commercial options in your automation framework, or you might select just one of them.

Conclusion

Test automation is a journey, not a destination. Continuous learning, improvement, and practice are required, and the solution that suits you today may not fit your future plans.

Thank You!

Good luck and happy testing!

The post Test Automation Framework: Build vs. Use vs. Buy appeared first on Automated Visual Testing | Applitools.

What is Visual Testing? https://applitools.com/blog/visual-testing/ Mon, 22 Nov 2021 15:48:00 +0000 https://applitools.com/blog/?p=5069 Visual testing evaluates the visible output of an application and compares that output against the results expected by design. You can run visual tests at any time on any application with a visual user interface.

The post What is Visual Testing? appeared first on Automated Visual Testing | Applitools.


Learn what visual testing is, why visual testing is important, the differences between visual and functional testing and how you can get started with automated visual testing today.

Editor’s Note: This post was originally published in 2019, and has been recently updated for accuracy and completeness.

What is Meant By Visual Testing?

Visual testing evaluates the visible output of an application and compares that output against the results expected by design. In other words, it helps catch “visual bugs” in the appearance of a page or screen, which are distinct from strictly functional bugs. Automated visual testing tools, like Applitools, can help speed this visual testing up and reduce the errors that occur with manual verification.

You can run visual tests at any time on any application with a visual user interface. Most developers run visual tests on individual components during development, and on a functioning application during end-to-end tests.

In today’s world of HTML, web developers create pages that appear on a mix of browsers and operating systems. Because HTML and CSS are standards, frontend developers want to feel comfortable with a ‘write once, run anywhere’ approach to their software, which also translates to “let QA sort out the implementation issues.” QA is still stuck checking each possible output combination for visual bugs.

This explains why, when I worked in product management, QA engineers would ask me all the time, “Which platforms are most important to test against?” If you’re like most QA team members, your test matrix has probably exploded: multiple browsers, multiple operating systems, multiple screen sizes, multiple fonts — and dynamic responsive content that renders differently on each combination.

If you are with me so far, you’re starting to answer the question: why do visual testing?

Why is Visual Testing Important?

We do visual testing because visual errors happen — more frequently than you might realize. Take a look at this visual bug on Instagram’s app:

The text and ad are crammed together. If this was your ad, do you think there would be a revenue impact? Absolutely.

Visual bugs happen at other companies too: Amazon. Google. Slack. Robinhood. Poshmark. Airbnb. Yelp. Target. Southwest. United. Virgin Atlantic. OpenTable. These aren’t cosmetic issues. In each case, visual bugs are blocking revenue.

If you need to justify spending money on visual testing, share these examples with your boss.

All these companies are able to hire some of the smartest engineers in the world. If it happens to Google, or Instagram, or Amazon, it probably can happen to you, too.

Why do these visual bugs occur? Don’t they do functional testing? They do — but it’s not enough.

Visual bugs are rendering issues. And rendering validation is not what functional testing tools are designed to catch. Functional testing measures functional behavior.

Why can’t functional tests cover visual issues?

Sure, functional test scripts can validate the size, position, and color scheme of visual elements. But if you do this, your test scripts will soon balloon in size due to checkpoint bloat.

To see what I mean, let’s look at an Instagram ad screen that’s properly rendered. There are 21 visual elements by my count — various icons, text. (This ignores iOS elements at the top like WiFi signal and time, since those aren’t controlled by the Instagram app.)


If you used traditional checkpoints in a functional testing tool like Selenium Webdriver, Cypress, WebdriverIO, or Appium, you’d have to check the following for each of those 21 visual elements:

  1. Visible (true/false)
  2. Upper-left x,y coordinates
  3. Height
  4. Width
  5. Background color

That means you’d need the following number of assertions:

21 visual elements x 5 assertions per element = 105 lines of assertion code
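To make that bloat concrete, here is roughly what the five checks look like for just one element using Selenium WebDriver’s Python bindings; the page URL, the locator, and the expected values are hypothetical, and you would repeat a block like this for each of the 21 elements and every OS/browser/screen combination.

```python
# Five assertions for a single visual element (hypothetical locator and values);
# multiply this by 21 elements and by every OS/browser/screen combination.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/ad")  # hypothetical page

heart_icon = driver.find_element(By.CSS_SELECTOR, ".heart-icon")  # hypothetical locator

assert heart_icon.is_displayed()                    # 1. visible
assert heart_icon.location == {"x": 16, "y": 540}   # 2. upper-left x,y coordinates
assert heart_icon.size["height"] == 24              # 3. height
assert heart_icon.size["width"] == 24               # 4. width
background = heart_icon.value_of_css_property("background-color")
assert background == "rgba(0, 0, 0, 0)"             # 5. background color

driver.quit()
```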

Even with all this assertion code, you wouldn’t be able to detect all visual bugs; for example, a visual element that can’t be accessed because it’s covered up, which is exactly what blocked revenue in the examples above from Yelp, Southwest, United, and Virgin Atlantic. And you’d miss subtleties like the brand logo or the red dot under the heart.

But it gets worse: if the OS, browser, screen orientation, screen size, or font size changes, your app’s appearance will change as a result. That means you have to write another 105 lines of functional test assertions. For EACH combination of OS, browser, font size, screen size, and screen orientation.

You could end up with thousands of lines of assertion code — any of which might need to change with a new release. Trying to maintain that would be sheer madness. No one has time for that.

You need visual testing because visual errors occur. And you need visual testing because you cannot rely on functional tests to catch visual errors.

What is Manual Visual Testing?

Because automated functional testing tools are poorly suited for finding visual bugs, companies find visual glitches using manual testers. Lots of them (more on that in a bit).

For these manual testers, visual testing behaves a lot like this spot-the-difference game:

To understand how time-consuming visual testing can be, get out your phone and time how long it takes for you to find all six visual differences. I took a minute to realize that the writing in the panels doesn’t count. It took me about 3 minutes to spot all six. Or, you can cheat and look at the answers.

Why does it take so long? Some differences are difficult to spot. In other cases, our eyes trick us into finding differences that don’t exist.

Manual visual testing means comparing two screenshots, one from your known good baseline image, and another from the latest version of your app. For each pair of images, you have to invest time to ensure you’ve caught all issues. Especially if the page is long, or has a lot of visual elements. Think “Where’s Waldo”…

Challenges of manual testing

If you’re a manual tester or someone who manages them, you probably know how hard it is to visually test.

If you are a test engineer reading this paragraph, you already know this: web page testing only starts with checking the visual elements and their function on a single combination of operating system, browser, browser orientation, and browser dimensions. It then continues on to other combinations. And that’s where a huge amount of test effort lies – not in the functional testing, but in the inspection of visual elements across combinations of operating system, browser, screen orientation, and browser dimensions.

To put it in perspective, imagine you need to test your app on:

  • 5 operating systems: Windows, macOS, Android, iOS, and ChromeOS.
  • 5 popular browsers: Chrome, Firefox, Internet Explorer (Windows only), Microsoft Edge (Windows only), and Safari (Mac only).
  • 2 screen orientations for mobile devices: portrait and landscape.
  • 10 standard mobile device display resolutions and 18 standard desktop/laptop display resolutions from XGA to 4K.

If you’re doing the math, that’s the browsers running on each platform (a total of 21 combinations) multiplied by the sum of the two orientations across the ten mobile resolutions (2 x 10 = 20) and the 18 desktop display resolutions.

21 x (20+18) = 21 x 38 = 798 Unique Screen Configurations to test

That’s a lot of testing — for just one web page or screen in your mobile app.

Except that it’s worse. Let’s say your app has 100 pages or screens to test.

798 Screen Configurations x 100 Screens in-app = 79,800 Screen Configurations to test

Meanwhile, companies are releasing new app versions into production as frequently as once a week, or even once a day.

How many manual testers would you need to test 79,800 screen configurations in a week? Or a day? Could you even hire that many people?

Wouldn’t it be great if there was a way to automate this crazy-tedious process?

Well, yes there is…

What is Automated Visual Testing?

Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover visual defects.

Automated visual testing piggybacks on your existing functional test scripts running in a tool like Selenium Webdriver, Cypress, WebdriverIO, or Appium. As your script drives your app, your app creates web pages with static visual elements. Functional testing changes visual elements, so each step of a functional test creates a new UI state you can visually test.

Automated visual testing evolved from functional testing. Rather than descending into the madness of writing assertions to check the properties of each visual element, automated visual testing tools check the visual appearance of an entire screen with just one assertion statement. This leads to test scripts that are MUCH simpler and easier to maintain.
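As a rough sketch of what that single assertion can look like, here is the general shape of a whole-screen check using the Applitools Eyes SDK for Selenium in Python; treat the setup, method names, and arguments as approximate and see the SDK documentation for the authoritative API.

```python
# Rough sketch of a one-assertion visual check (method names approximate;
# the API key and page URL are placeholders).
from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_API_KEY"  # placeholder

try:
    eyes.open(driver, "Demo App", "Login page looks right", {"width": 1024, "height": 768})
    driver.get("https://example.com/login")    # hypothetical page
    eyes.check("Login page", Target.window())  # one visual checkpoint for the whole screen
    eyes.close()                               # raises if visual differences were found
finally:
    driver.quit()
    eyes.abort_if_not_closed()                 # cleans up if the test failed early
```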

But, if you’re not careful, you can go down an unproductive rat hole. I’m talking about Snapshot Testing.

What is Snapshot Testing?

First generation automated visual testing uses a technology called snapshot testing. With snapshot testing, a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap.

Snapshot testing algorithms are very simplistic: iterate through each pixel pair, then check if the color hex code is the same. If the color codes are different, raise a visual bug.
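A minimal sketch of that pixel-by-pixel idea, using the Pillow imaging library, is shown below; real snapshot tools add thresholds, ignore regions, and diff images, but the core loop is essentially this.

```python
# Naive pixel-diff comparison: flag a "visual bug" on any differing pixel.
from PIL import Image


def snapshot_differs(baseline_path: str, current_path: str) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return True  # different dimensions count as a difference
    width, height = baseline.size
    for x in range(width):
        for y in range(height):
            if baseline.getpixel((x, y)) != current.getpixel((x, y)):
                return True  # any single pixel mismatch is reported as a bug
    return False


# Usage (hypothetical file names):
# if snapshot_differs("baseline.png", "latest.png"):
#     print("Visual difference detected")
```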

Because they can be built relatively easily, there are a number of open-source and commercial snapshot testing tools. Unlike human testers, snapshot testing tools can spot pixel differences quickly and consistently. And that’s a step forward. A computer can highlight the visual differences in the Hocus Focus cartoon easily. A number of these tools market themselves as enabling “pixel perfect testing”.

Sounds like a good idea, right?

What are Problems With Snapshot Testing?

Alas, pixels aren’t visual elements. Font smoothing algorithms, image resizing, graphics cards, and even rendering algorithms generate pixel differences. And that’s just static content; the actual content can vary between any two interfaces. As a result, a comparison that expects exact pixel matches between two images can be flooded with spurious pixel differences.

If you want to see some examples of bitmap differences affecting snapshot testing, take a look at the blog post we wrote on this topic last year.

Unfortunately, while you might think snapshot testing makes intuitive sense, practitioners like you are finding that the conditions for running successful bitmap comparisons require a stationary target, while your company continues to develop dynamic websites across a range of browsers and operating systems. You can try to force your app to behave a certain way – but you may not always succeed.

Can you share some details of Snapshot Testing Problems?

For example, when testing on a single browser and operating system:

  • Identify and isolate (mute) fields that change over time, such as radio signal strength, battery state, and blinking cursors.
  • Ignore user data that might otherwise change over time, such as visitor count.
  • Determine how to support testing content on your site that must change frequently – especially if you are a media company or have an active blog.
  • Consider how different hardware or software affects antialiasing.

When doing cross-browser testing, you must also consider:

  • Text wrapping, because you cannot guarantee the locations of text wrapping between two browsers using the same specifications. The text can break differently between two browsers, even with identical screen size.
  • Image rendering software, which can affect the pixels of font antialiasing as well as images and can vary from browser to browser (and even on a single browser among versions).
  • Image rendering hardware, which may render bitmaps differently.
  • Variations in browser font size and other elements that affect the text.

If you choose to pursue snapshot testing in spite of these issues, don’t be surprised if you end up joining the group of experienced testers who have tried, and then ultimately abandoned, snapshot testing tools.

Can I See Some Snapshot Testing Problems In Real Life?

Here are some quick examples of these real-life bitmap issues.

If you use pixel testing for mobile apps, you’ll need to deal with the very dynamic data at the top of nearly every screen: network strength, time, battery level, and more:

You may also have dynamic content that shifts over time (news, ads, user-submitted content) where you want to check that everything is laid out with proper alignment and no overlaps. Pixel comparison tools can’t test for these cases. Twitter’s user-generated content is even more dynamic, with new tweets and like, retweet, and comment counts changing by the second.

Your app doesn’t even need to change to confuse pixel tools. If your baselines and test screenshots were captured on different machines with different display settings for anti-aliasing, that can turn pretty much the entire page into a false positive, like this:

Source: storybook.js.org

If you’re using pixel tools and you still have to track down false positives and expose false negatives, what does that say about your testing efficiency?

For these reasons, many companies throw out their pixel tools and go back to manual visual testing, with all of its issues.

There’s a better alternative: using AI — specifically computer vision — for visual testing.

How Do I Use AI for Automated Visual Testing?

The current generation of automated visual testing uses a class of artificial intelligence algorithms called computer vision as the core engine for visual comparison. Typically these algorithms are used to identify objects within images, such as in facial recognition. We call them visual AI testing tools.

AI-powered automated visual testing uses a learning algorithm to interpret the relationship between the intended display of visual elements on a rendered page and the visual elements and locations that actually appear. Like pixel tools, AI-powered automated visual testing takes page snapshots as your functional tests run. Unlike pixel-based comparators, AI-powered automated visual test tools use these algorithms, rather than raw pixel comparisons, to determine when errors have occurred.

Unlike snapshot testers, AI-powered automated visual testing tools do not need special environments that remain static to ensure accuracy. Testing and real-world customer data show that AI testing tools have a high degree of accuracy even with dynamic content because the comparisons are based on relationships and not simply pixels.

Here’s a comparison of the kinds of issues that AI-powered visual testing tools can handle compared to snapshot testing tools:

Visual Testing Use Case | Snapshot Testing | Visual AI
Cross-browser testing | No | Yes
Account balances | No | Yes
Mobile device status bars | No | Yes
News content | No | Yes
Ad content | No | Yes
User submitted content | No | Yes
Suggested content | No | Yes
Notification icons | No | Yes
Content shifts | No | Yes
Mouse hovers | No | Yes
Cursors | No | Yes
Anti-aliasing settings | No | Yes
Browser upgrades | No | Yes
Some AI-powered test tools have been tested at a false positive rate of 0.001% (or 1 in every 100,000 errors).

AI-Powered Test Tools In Action

An AI-powered automated visual testing tool can test a wide range of visual elements across a range of OS/browser/orientation/resolution combinations. Just running the first baseline rendering and functional test on a single combination is sufficient to guide an AI-powered tool to test results across the range of potential platforms.

Here are some examples of how AI-powered automated visual testing improves visual test results by awareness of content.

This is a comparison of two different USA Today homepage images. When an AI-powered tool looks at the layout comparison, the layout framework matters, not the content. Layout comparison ignores content differences; instead, it validates the existence of the content and its relative placement. Compare that with a bitmap comparison of the same two pages (also called “exact comparison”):

Literally, every non-white space (and even some of the white space) is called out.

Which do you think would be more useful in your validation of your own content?

When Should I Use Visual Testing?

You can do automated visual testing with each check-in of front-end code, after unit testing and API testing, and before functional testing — ideally as part of your CI/CD pipeline running in Jenkins, Travis, or another continuous integration tool.

How often? On days ending with “y”. 🙂

Because of the accuracy of AI-powered automated visual testing tools, they can be deployed in more than just functional and visual testing pre-production. AI-powered automated visual testing can help developers understand how visual element components will render across various systems. In addition to running in development, test engineers can also validate new code against existing platforms and new platforms against running code.

AI-powered tools like Applitools allow different levels of smart comparison.

AI-powered visual testing tools are a key validation tool for any app or web presence whose content and format change regularly. For example, media companies that change their content as frequently as twice per hour use AI-powered automated testing to isolate the real errors that affect paying customers. AI-powered visual test tools are also a key part of the test arsenal for any app or web presence going through a brand revision or merger, as the low error rate and high accuracy let companies identify and fix problems associated with the major DOM, CSS, and JavaScript changes that are core to those updates.

Talk to Applitools

Applitools is the pioneer and leading vendor in AI-powered automated visual testing. Applitools has a range of options to help you become incredibly productive in application testing. We can help you test components in development. We can help you find the root cause of the visual errors you have encountered. And, we can run your tests on an Ultrafast Grid that allows you to recreate your visual test in one environment across a number of others on various browser and OS configurations. Our goal is to help you realize the vision we share with our customers – you need to create functional tests for only one environment and let Applitools run the validation across all your customer environments after your first test has passed. We’d love to talk testing with you – feel free to reach out to contact us anytime.

More To Read About Visual Testing

If you liked reading this, here are some more Applitools posts and webinars for you.

  1. Visual Testing for Mobile Apps by Angie Jones
  2. Visual Assertions – Hype or Reality? – by Anand Bagmar
  3. The Many Uses of Visual Testing by Angie Jones
  4. Visual UI Testing as an Aid to Functional Testing by Gil Tayar
  5. Visual Testing: A Guide for Front End Developers by Gil Tayar
  6. Visual Testing FAQ

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

The post What is Visual Testing? appeared first on Automated Visual Testing | Applitools.

Refining Your Test Automation Approach in a Microservice Architecture World https://applitools.com/blog/refining-test-automation-microservice-architecture/ Fri, 12 Nov 2021 22:39:36 +0000 https://applitools.com/?p=32529 What does a microservice architecture mean for quality engineering and test automation? Learn the pros, cons and how it all applies to testing.

The post Refining Your Test Automation Approach in a Microservice Architecture World appeared first on Automated Visual Testing | Applitools.


As the software development industry moves forward, one thing that never stops is the ever-evolving introduction of new concepts and buzzwords. Some are literally just buzzwords wrapped around something that previously existed in the industry in some form or another. Other concepts, however, go beyond buzzwords and need to be taken seriously, as they potentially become key factors in how software is and will be built going forward.

Failing to keep up with such a concept or trend could mean that previously tried-and-tested approaches are no longer as effective and, even worse, could put major question marks against an individual’s relevance and marketability in a thriving, fast-paced industry. One such concept is “microservices,” or a “microservice architecture.” This concept raises plenty of questions for development, DevOps, and other areas, but what does it mean for existing quality engineering, testing, and test automation approaches?

If the ground moves under you while you are standing, you naturally change direction or initiate a different response to counter the movement. The same goes for a moving software architecture: your quality engineering approach has to adapt to a gradual or sudden shift. In this post I will share some points to keep in mind when tackling your test automation approach in a microservice architecture environment.


What are Microservices in a Practical Setting?

Before we tackle the test automation approach in a microservices architecture, let’s take a quick look at what microservices actually are and a simple practical example of something that you use often that might simplify your understanding of the concept.

A microservice architecture is usually characterised by focused, small, independent services that together create a complete application. Every instance of a particular microservice usually represents a single responsibility within your application.

A practical example is something like an online banking application that most of us use on a regular basis (the same concept applies to any industry or type of system). In the past, a traditional architecture would typically have all functionality of a banking application bundled as one large system, most often with code referencing different areas of the application in a single deployable package. While these systems worked, they were often plagued with shortcomings like:

  • Maintenance nightmares: Amending one feature would impact many unrelated areas due to code dependencies. Pinpointing and digging in to find where a change should be made took, in itself, quite a long time.
  • Performance issues: Consider a busy highway in peak-hour traffic with no alternative routes or off-ramps. Each car wants to get to a slightly different destination but has to use the single, busy highway to get there. The result is a bottleneck of slow-moving cars that cannot reach their intended destinations in good time. Traditional systems were marred by a similar dilemma: a single path for performing either the same or slightly different actions or transactions (for example, one user wants to make a payment and another wants to open a new account, yet both follow specific paths through a single code base).
  • Lack of control or flexibility: If a new feature (like international payments) needed to be deployed to production, it often meant that the entire application or system had to be taken down to deploy the change. A user who wanted to perform an unrelated action, like checking a balance or simply changing a username, was inconvenienced by the entire application being taken down and was prevented from performing that action until the site/application came back up again.


There are many other shortcomings of traditional ‘monolithic’ systems, the above just highlighted a few.

Enter microservices. To ease some of the above shortcomings and to cater to a fast-paced, high-demand world, microservices brought in a fresh approach to tackling software delivery.

In the online banking example above, a high-level microservice architecture would split key services, features, or even business areas to optimise how software is built, tested, deployed, and maintained. So in our example there would be a split of your key functional areas: a separate “Payment” service that handles all payment-related functionality, then possibly an “Account” service that contains all information about accounts, balances, etc. Further to this you could also have a “Customer” service, a “Reporting” service, and so on.

Most of the services might be able to function in total isolation, but a service often needs a dependent service to be available in order to complete a transaction or task. For example, to complete a payment, the payment service would still need to get information from both the customer and account services in order to initiate the transaction, whereas in this flow the reporting service would not be a dependency for completing the transaction.

That’s a lot of context-setting :-). Now let’s move to what this means for your automation testing approach.

What Did Test Automation Approaches Look Like in a Traditional, Monolithic System?


With traditional systems, approaches to test automation needed to be (in my view) restrictive. Restrictive because access into and around the lower areas of an application was very difficult, and non-exposure of integration points often meant that teams had to create customised test hooks, harnesses, and so on. Due to the complexity of this customisation, most people just chose the “what you see is what you get” approach.

The following points list out some characteristics or results of what typical test automation approaches looked like:

  • Limited automation tools choice: As systems were less flexible, test automation tool options were limited in what they could do and the creativity/innovation of options at hand were quite one dimensional. With the evolution of modern system architecture, it has allowed automation tools vendors and/or individuals to come up with more flexible solutions to cater to test automation needs.
  • Long running tests: As mentioned, tests had to be focused mostly on the UI as this was one area of the application that was easily accessible, or if the test hooks were created to access a lower level, the setup was tedious and often resulted in an increase in test execution times. The lack of flexibility, limited access plus the absence of clearly defined boundaries of an application created this dilemma. This also meant that when tests were executed it was a mission to pinpoint errors in the automation script or where in the application an actual defect existed.
  • Maintenance nightmare: Similar to the drawback of the monolithic system itself, test automation shared a parallel problem. Test automation was seen as a separate, outside activity characterised by siloed test automation repositories, with ownership by a separate test automation team or QA team. Tests had multiple aims/objectives, which made the code overlapping, long-winded, and difficult to maintain.
  • Delays between deploys and testing: The very nature of how large, traditional systems were built and deployed meant that there were clear distinctions between Dev and Testing activities. Code deploys followed by a notification to QAs to execute the test scripts meant that the feedback loop now was longer than what we see in teams/setups where microservices architectures are coupled with proper Agile processes.
  • Performance/Load testing as part of one big bang approach: Performance/Load testing is usually tackled as one large system E2E effort. Once again due to system limitations and lack of flexibility even Non-Functional testing like Load, Performance and Security testing are forced to be done once all software development is complete, integrated (with its dependent systems) and deployed.

How Should You Now Refine Your Test Automation Approach to Cater to Microservices?

Thankfully modern system architecture solves many of the drawbacks of traditional systems, and the same is true for the opportunities presented by modern test automation approaches which breathe new life into how we tackle test automation.


How does your automation approach now change to fit in with modern architecture?

  • Greater test automation tool choice and toolset: With the greater ability to access all areas of an application like APIs, UI components etc., test automation tool options have now grown immensely, which bolsters the ability to focus on specific tests targeting specific areas based on intent.
  • Lower level testing: Testing individual components/services in isolation now becomes very realistic and much needed in order to test as one builds. Incorporating aspects like mocking, containerisation, contract tests, etc. gives one much more power in structuring your test automation efforts as software is built (see the sketch after this list).
  • In-sprint automation, focused building, focused testing: As the point above depicts, this brings the added possibility of not leaving test automation as an after activity, but rather something that gets incorporated in your sprint efforts, as code is being developed.
  • Non-functional testing per service, as you build: While isolated services bring greater flexibility in functional test automation, it also brings added opportunities for focused Performance and Security testing. This gives one added control in pinpointing exact NFTs at individual service level so that optimisations can be made to fix bottlenecks or vulnerabilities. Here you would think about performance testing your APIs, or a specific component of a UI, checking for security vulnerabilities of each API and much more.
  • Automated deployments with Automated tests: Even though some form of CI/CD is still possible when dealing with traditional systems, its use case is heightened when dealing with microservices. Test automation tests no longer operate as a separate, siloed entity, but due to its smaller, focused intent, now becomes part of the build, deploy pipelines to ensure one builds in quality from the outset.
  • Bring it all together with a pinch of Integration and E2E system level tests for reassurance:  Depending on the number of dependent services/applications one has in their given ecosystem, sometimes the need arises to make sure that your tests also provide some extra reassurance that everything works together by incorporating integration tests or system E2E tests focusing on multi-service, multi-system touchpoints. This is where adding in a form of system level test automation makes sense to provide extra confidence in your test automation efforts.
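As a small illustration of testing one service's logic in isolation, the sketch below exercises a hypothetical payment-side check while its dependency on an account service is mocked out with the responses library; the URLs, payloads, and helper functions are made-up placeholders, not the APIs of any real banking system.

```python
# Isolated test of a (hypothetical) payment check with its account-service
# dependency mocked, so no real downstream service is needed.
import requests
import responses

ACCOUNT_SERVICE = "https://accounts.example.internal"  # hypothetical

def get_balance(account_id: str) -> float:
    """Tiny client the payment service would use to check available funds."""
    resp = requests.get(f"{ACCOUNT_SERVICE}/accounts/{account_id}", timeout=2)
    resp.raise_for_status()
    return resp.json()["balance"]

def can_pay(account_id: str, amount: float) -> bool:
    return get_balance(account_id) >= amount

@responses.activate
def test_payment_allowed_when_funds_available():
    responses.add(
        responses.GET,
        f"{ACCOUNT_SERVICE}/accounts/acc-123",
        json={"balance": 150.0},
        status=200,
    )
    assert can_pay("acc-123", 100.0) is True

@responses.activate
def test_payment_rejected_when_funds_insufficient():
    responses.add(
        responses.GET,
        f"{ACCOUNT_SERVICE}/accounts/acc-123",
        json={"balance": 50.0},
        status=200,
    )
    assert can_pay("acc-123", 100.0) is False
```

Keeping such mocks in sync with the real account service contract is exactly the maintenance concern raised in the challenges below.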

Challenges

While microservices and a refined test automation approach bring many benefits, some areas still present significant challenges and pain points. These include:

  1. Greater dependencies on multiple services: Services are never used in isolation by real customers, so when they are tested individually you need to make sure the tests still resemble real-world use cases. In practice this usually means a dependable mocking approach plus a way to keep those mocks in sync with the real providers and maintained regularly (see the schema-check sketch after this list).
  2. Closer control around test coverage: With multiple services usually owned by multiple teams, making sure everything is covered becomes a critical balancing act. You need measures to track coverage per service, identify missing areas, and a plan of action to handle those risks.
  3. Skillset gaps: Newer test automation tools and approaches, such as lower-level testing and service virtualisation, require a specific skill set, so most test automation experts and team members will need to up-skill to get the maximum benefit from them.
  4. Test data considerations: Availability of test data is a major factor in most contexts, and microservices are no exception. A microservice test automation approach gives you more control over creating or mocking test data, but making sure that data is representative of real-world scenarios remains tricky.
  5. Possible need for ownership of cross-system E2E aspects: If any of the points above leave gaps, coordinating cross-system E2E test automation becomes even more important, to make sure the sum of the parts forms a proper whole.
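
One lightweight way to keep consumer-side mocks honest (point 1 above) is to validate the canned payloads against a schema published by the provider team. The schema path and payload shape below are assumptions for illustration only.

```python
# Fail the build if a mocked response has drifted from the provider's contract.
import json
from pathlib import Path

from jsonschema import validate  # pip install jsonschema

MOCKED_INVENTORY_RESPONSE = {"sku": "ABC-1", "in_stock": 3}


def test_inventory_mock_matches_provider_contract():
    schema = json.loads(Path("contracts/inventory_response.schema.json").read_text())
    # Raises jsonschema.ValidationError if the mock no longer matches the schema.
    validate(instance=MOCKED_INVENTORY_RESPONSE, schema=schema)
```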


Conclusion

As the way systems and applications are built evolves, so too do our QA craft and our test automation approaches. While every company operates differently, there are usually some overarching guidelines to keep in mind when deciding how to tackle your test automation. No solution is without its challenges, but as we evolve we are presented with opportunities to keep improving and to never lose sight of why we choose to automate our testing in the first place.

The post Refining Your Test Automation Approach in a Microservice Architecture World appeared first on Automated Visual Testing | Applitools.

How Visual AI Accelerates Release Velocity https://applitools.com/blog/how-visual-ai-accelerates-release-velocity/ Tue, 02 Nov 2021 17:11:54 +0000 https://applitools.com/?p=32262 In the latest chapter of the “State of AI applied to Quality Engineering 2021-22” report, learn how you can use Visual AI today to release software faster and with fewer bugs.

The post How Visual AI Accelerates Release Velocity appeared first on Automated Visual Testing | Applitools.


We’re honored to be co-authors with Sogeti on their “State of AI applied to Quality Engineering 2021-22” report. In the latest chapter, learn how you can use Visual AI today to release software faster and with fewer bugs.

In the world of software development, there is a very clear trend – greater application complexity and a faster release cadence. This presents a massive (and growing) challenge for Quality Engineering teams, who must keep up with the advancing pace of development. We think about this a lot at Applitools, and we were glad to collaborate with Sogeti on the latest chapter of their landmark “State of AI applied to Quality Engineering 2021-22” report, entitled Shorten release cycles with Visual AI. The chapter focuses on this QE challenge and offers a vision for how Visual AI can help organizations that have not yet adopted it – not at some point in the future, but today.

What Is Visual AI?

Visual AI is the ability for machine learning and deep learning algorithms to truly mimic a human’s cognitive understanding of what is seen. This may seem fantastical, but it’s far from science fiction. Our own Visual AI has already been trained on over a billion images, providing 99.9999% accuracy, and leading digital brands are already using it today to accelerate their delivery of innovation.

Leverage Visual AI to Shift Left and Deliver Innovation Faster

Visual AI can be used in a number of ways, and it may be tempting to think of it as a tool that can help you conduct your automated end-to-end tests at the end of development cycles more quickly. Yes, it can do that, but its biggest strength lies elsewhere. Visual AI allows you to shift left and begin to conduct testing “in-sprint” as part of an Agile development cycle.

Testing “in-sprint” means conducting visual validation alongside data validation and gaining complete test coverage of UI changes and visual regressions at every check-in. Bottlenecks are removed, and releases ship faster and contain fewer errors, delivering an uncompromised user experience without jeopardizing your brand.
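
For teams wondering what this looks like in practice, here is a minimal sketch of an in-sprint visual check using the Applitools Eyes SDK for Python with Selenium. The app name, test name, and URL are placeholders, and exact method names can vary slightly between SDK versions.

```python
# Capture a visual checkpoint of a page as part of a functional test.
import os

from applitools.selenium import Eyes, Target
from selenium import webdriver

eyes = Eyes()
eyes.api_key = os.environ["APPLITOOLS_API_KEY"]

driver = webdriver.Chrome()
try:
    eyes.open(driver, "Demo App", "Login page renders correctly")
    driver.get("https://example.com/login")  # placeholder URL
    eyes.check("Login page", Target.window())  # Visual AI compares against the baseline
    eyes.close()  # raises if unresolved visual differences are found
finally:
    driver.quit()
    eyes.abort()  # no-op if the test already closed cleanly
```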

Teams that incorporate automated visual testing throughout their development process simply release higher-quality software, faster.

Source: State of AI applied to Quality Engineering 2021-22 and Applitools “State of Visual Testing” Report, 2019

How Can You Use Visual AI Today?

Wondering how you can move your organization or your team over to the left side of the bar charts above? Fortunately, it’s not hard to get started, and this chapter from Sogeti is an excellent place to begin. Keep reading to learn more about:

  • When you should visually test your UI (and why it’s important)
  • How to automate UI validation with Visual AI (including the three biggest challenges and how to overcome them)
  • Different Visual AI comparison algorithms
  • How Visual AI significantly reduces test creation and maintenance time while increasing coverage
  • Streamlining analysis of test results and root cause analysis
  • Using Visual AI for end-to-end validation
  • Validation at check-in with Visual AI
  • How Visual AI is revolutionizing cross browser testing

Source: State of AI applied to Quality Engineering 2021-22 & Applitools “Impact of Visual AI on Test Automation” Report, 2020

Most users start out by applying Applitools’ Visual AI to their end-to-end tests and quickly discover several things about Applitools. First, it is highly accurate, meaning it finds real differences – not pixel differences. Second, the compare modes give the flexibility needed to handle expected differences no matter what kind of page is being tested. And third, the application of AI goes beyond visual verification and includes capabilities such as auto-maintenance and root cause analysis.

State of AI applied to Quality Engineering 2021-22

Deliver Quality Code Faster with Visual AI

Ultimately, what we’re all looking for is to be able to deliver quality code faster, even as complexity grows. Keeping up with the growing pace of change can feel daunting when you’re relying on traditional test automation that only scales linearly with the resources allocated. AI-powered automation is the only way to scale your team’s productivity at the pace today’s software development demands.

Applitools’ Visual AI integrates into your existing test automation practice and is already being used by the world’s leading companies to greatly accelerate their ability to deliver innovation to their clients, customers, and partners, while protecting their brand and ensuring digital initiatives achieve the right business outcomes. And it’s only getting better. Visual AI continues to progress as it advances the industry toward a future of truly Autonomous Testing, when the collaboration between humans and AI will change. Today, we’re focused on an AI that can handle repetitive, mundane tasks to free up humans for more creative and complex work, but we see a future where Visual AI will be able to handle all testing activities, and the role of humans will shift to training the AI and then reviewing the results.

Check out the full chapter, “Shorten release cycles with Visual AI,” below.

The post How Visual AI Accelerates Release Velocity appeared first on Automated Visual Testing | Applitools.
