Webinar Recap Archives - Automated Visual Testing | Applitools https://applitools.com/blog/tag/webinar-recap/ Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI. Fri, 01 Dec 2023 18:01:07 +0000 Driving Successful Test Automation at Scale: Key Insights https://applitools.com/blog/driving-successful-test-automation-at-scale-key-insights/ Mon, 25 Sep 2023 13:30:00 +0000 https://applitools.com/?p=52139 Scaling your test automation initiatives can be daunting. In a recent webinar, Test Automation at Scale: Lessons from Top Performing Distributed Teams, panelists from Accenture, Bayer, and Eversana shared their...

The post Driving Successful Test Automation at Scale: Key Insights appeared first on Automated Visual Testing | Applitools.


Scaling your test automation initiatives can be daunting. In a recent webinar, Test Automation at Scale: Lessons from Top Performing Distributed Teams, panelists from Accenture, Bayer, and Eversana shared their insights for overcoming common challenges. Here are their top recommendations.

Establish clear processes for collaboration.
Daily standups, sprint planning, and retrospectives are essential for enabling communication across distributed teams. “The only way that you can build a quality product that actually satisfies the business requirements is [through] that environment where you’ve got the different teams coming together,” said Ariola Qeleposhi, Test Automation Lead at Accenture.

Choose tools that meet current and future needs.
Consider how tools will integrate and the skills required to use them. While a “one-size-fits-all” approach may seem appealing, it may not suit every team’s needs. Think beyond individual products to the overall solution, advised Anand Bagmar, Senior Solution Architect at Applitools. Each product team should have a test pyramid, and tests should run at multiple levels to get real value from your automation.

Start small and build a proof of concept.
Demonstrate how automation reduces manual effort and finds defects faster to gain leadership buy-in. “Proof of concepts will really help to provide a form of evidence in a way to say that, okay, this is our product, this is how we automate or can potentially automate, and what we actually save from that,” said Qeleposhi.

Consider a “quality strategy” not just a “test strategy.”
Involve all roles like business, product, dev, test, and DevOps. “When you think about it as quality, then the role does not matter,” said Bagmar.

Leverage AI and automation as “seatbelts,” not silver bullets.
They enhance human judgment rather than replace it. “Automation is a lot, at least in this instance, it’s like a seatbelt. You don’t think you’ll need it, but when you need it you better have it,” said Kyle Penniston, Senior Software Developer at Bayer.

Build, buy, and reuse.
Don’t reinvent the wheel. Use open-source tools and existing frameworks. “There will be great resources that you can use. Open-source resources, for example, frameworks that might be there that you can use to quickly get started and build on top of that,” said Bagmar.

Provide learning resources for new team members.
For example, Applitools offers Test Automation University with resources for developing automation skills.

Measure and track metrics to ensure value.
Look at reduced manual testing, faster defect finding, test coverage, and other KPIs. “You need to get some metrics really, and then you need to use that from an automation side of things,” said Qeleposhi.

The key to building a solid foundation for scaling test automation is taking an iterative, collaborative approach focused on delivering value and enhancing quality. With the right strategies and tools in place, teams can overcome common challenges and achieve automation success. Watch the full recording.

Power Up Your Test Automation with Playwright https://applitools.com/blog/power-up-your-test-automation-with-playwright/ Thu, 31 Aug 2023 12:53:00 +0000 https://applitools.com/?p=52108 As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust...

The post Power Up Your Test Automation with Playwright appeared first on Automated Visual Testing | Applitools.

Locator Strategies with Playwright

As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust set of features to create fast, reliable, and maintainable tests.

In a recent webinar, Playwright Ambassador and TAU instructor Renata Andrade shared several use cases and best practices for using the framework. Here are some of the most valuable takeaways for test automation engineers:

Use Playwright’s built-in locators for resilient tests.
Playwright recommends finding elements through user-facing attributes such as visible text, “aria-label”, “alt”, and “placeholder”. These locators are less prone to breakage, leading to more robust tests.
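As a sketch, here is what those user-facing locators look like in Playwright Test syntax. The URL, field labels, and button name below are hypothetical placeholders, not examples from the webinar:

```javascript
// Resilient, user-facing locators in Playwright Test.
// The page under test and its labels are hypothetical.
const { test, expect } = require('@playwright/test');

test('log in with resilient locators', async ({ page }) => {
  await page.goto('https://example.com/login');            // hypothetical URL
  await page.getByPlaceholder('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('s3cret');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByText('Welcome back')).toBeVisible();
});
```

Because these locators target what the user actually sees rather than internal class names or DOM structure, they tend to survive refactors that would break CSS or XPath selectors.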

Speed up test creation with the code generator.
The Playwright code generator can automatically generate test code for you. This is useful when you’re first creating tests to quickly get started. You can then tweak and build on the generated code.
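The code generator is driven from the command line; roughly like this, with a placeholder URL:

```shell
# Record interactions in a browser window and emit test code as you click.
npx playwright codegen https://example.com/login

# Optionally pick the output flavor:
npx playwright codegen --target javascript https://example.com/login
```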

Debug tests and view runs with UI mode and the trace viewer.
Playwright’s UI mode and VS Code extension provide visibility into your test runs. You can step through tests, pick locators, view failures, and optimize your tests. The trace viewer gives you a detailed trace of all steps in a test run, which is invaluable for troubleshooting.

Add visual testing with Applitools Eyes.
For complete validation, combine Playwright with Applitools for visual and UI testing. Applitools Eyes catches unintended changes in UI that can be missed by traditional test automation.

Handle dynamic elements with the right locators.
Use a combination of strategies, such as visible text, “aria-label”, “alt”, “placeholder”, CSS, and XPath, to locate dynamic elements that frequently change. This enables you to test dynamic web pages.
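A minimal sketch of mixing these strategies in one Playwright test; the page, class names, and text are hypothetical:

```javascript
// Mixing user-facing, CSS, and XPath locators for dynamic content.
// All selectors here are hypothetical examples.
const { test } = require('@playwright/test');

test('interact with a dynamic list', async ({ page }) => {
  await page.goto('https://example.com/cart');                  // hypothetical URL
  // Prefer user-facing locators first; they auto-wait and auto-retry:
  await page.getByText('Order #', { exact: false }).waitFor();
  // Fall back to CSS or XPath for elements without stable text:
  await page.locator('css=.cart-item.active').first().click();
  await page.locator('xpath=//li[contains(@class, "cart-item")][last()]').click();
});
```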

Set cookies to test personalization.
You can set cookies in Playwright to handle scenarios like A/B testing where the web page or flow differs based on cookies. This is important for testing personalization on websites.
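A sketch of forcing an A/B variant via a cookie in Playwright; the cookie name, value, and page content are assumptions about a hypothetical app:

```javascript
// Pinning an A/B-test variant by setting a cookie before navigation.
// Cookie name/value and the asserted text are hypothetical.
const { test, expect } = require('@playwright/test');

test('B variant of the homepage', async ({ context, page }) => {
  await context.addCookies([{
    name: 'ab_variant',       // hypothetical experiment cookie
    value: 'B',
    domain: 'example.com',
    path: '/',
  }]);
  await page.goto('https://example.com/');
  await expect(page.getByText('New layout')).toBeVisible();
});
```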

Playwright provides a robust set of features to build, run, debug, and maintain end-to-end web tests. By leveraging the use cases and best practices shared in the webinar, you can power up your test automation and build a successful testing strategy using Playwright. Watch the full recording and see the session materials.

AI-Powered Test Automation: How GitHub Copilot and Applitools Can Help https://applitools.com/blog/ai-powered-test-automation-how-github-copilot-and-applitools-can-help/ Tue, 22 Aug 2023 21:23:00 +0000 https://applitools.com/?p=51789 Test automation is crucial for any software engineering team to ensure high-quality releases and a smooth software development lifecycle. However, test automation efforts can often be tedious, time-consuming, and require...

The post AI-Powered Test Automation: How GitHub Copilot and Applitools Can Help appeared first on Automated Visual Testing | Applitools.

Can AI Autogenerate and Run Automated Tests?

Test automation is crucial for any software engineering team to ensure high-quality releases and a smooth software development lifecycle. However, test automation efforts can often be tedious, time-consuming, and require specialized skills. New AI tools are emerging that can help accelerate test automation, handle flaky tests, increase test coverage, and improve productivity.

In a recent webinar, Rizel Scarlett and Anand Bagmar discussed how to leverage AI-powered tools like GitHub Copilot and Applitools to boost your test automation strategy.

GitHub Copilot can generate automated tests.

By providing code suggestions based on comments and prompts, Copilot can help quickly write test cases and accelerate test automation development. For example, a comment like “validate phone number” can generate a full regular expression in seconds. Copilot also excels at writing unit tests, which many teams struggle to incorporate efficiently.
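To illustrate, here is the kind of helper such a prompt might yield. This particular function and pattern are illustrative assumptions, not Copilot's actual output; it accepts 10-digit US-style numbers with an optional country code, separators, and parentheses:

```javascript
// Hypothetical result of a "validate phone number" prompt.
// Accepts forms like "(555) 123-4567" or "+1 555 123 4567".
function isValidPhoneNumber(input) {
  const pattern = /^\+?1?[-. ]?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$/;
  return pattern.test(input.trim());
}

console.log(isValidPhoneNumber('(555) 123-4567')); // true
console.log(isValidPhoneNumber('555-123'));        // false
```

In practice you would review and unit-test whatever Copilot suggests, since generated patterns can miss edge cases such as extensions or international formats.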

Applitools Execution Cloud provides self-healing test capabilities.

The Execution Cloud allows you to run tests in the cloud or on your local machine. With self-healing functionality, tests can continue running successfully even when there are changes to web elements or locators. This helps reduce flaky tests and maintenance time. Although skeptical about self-healing at first, the presenters found that Applitools handled these updates properly without clicking incorrect elements.

Together, tools like Copilot and Applitools can transform your test automation.

Copilot generates the initial test cases and Applitools provides a self-healing cloud environment to run them. This combination leads to improved productivity, reduced flaky tests, and increased coverage.

Applitools Eyes and Execution Cloud offer innovative AI solutions for automated visual testing. By leveraging new technologies like these, teams can achieve test automation at scale and ship high-quality software with confidence. To see these AI tools in action and learn how they can benefit your team, watch the full webinar recording.

Ultrafast Cross Browser Testing with Selenium Java https://applitools.com/blog/cross-browser-testing-selenium/ Fri, 09 Sep 2022 15:51:52 +0000 https://applitools.com/?p=42442 Learn why cross-browser testing is so important and an approach you can take to make cross-browser testing with Selenium much faster.

The post Ultrafast Cross Browser Testing with Selenium Java appeared first on Automated Visual Testing | Applitools.



What is Cross Browser Testing?

Cross-browser testing is a form of functional testing in which an application is tested on multiple browsers (Chrome, Firefox, Edge, Safari, IE, etc.) to validate that functionality performs as expected.

In other words, it is designed to answer the question: Does your app work the way it’s supposed to on every browser your customers use?

Why is Cross Browser Testing Important?

While modern browsers generally conform to key web standards today, important problems remain. Differences in interpretations of web standards, varying support for new CSS or other design features, and rendering discrepancies between the different browsers can all yield a user experience that is different from one browser to the next.

A modern application needs to perform as expected across all major browsers. Not only is this a baseline user expectation these days, but it is critical to delivering a positive user experience and a successful app.

At the same time, the number of screen combinations (across screen sizes, devices, and versions) is rising quickly. In recent years the number of screens requiring testing has exploded, reaching an industry average of 81,480 screens and 681,296 for the top 30% of companies.

Ensuring complete coverage of each screen on every browser is a common challenge. Effective and fast cross-browser testing can help alleviate the bottleneck from all these screens that require testing.

Source: 2019 State of Automated Visual Testing

How to Perform Modern Cross Browser Testing in Selenium with Visual Testing

Traditional approaches to cross-browser testing in Selenium have existed for a while, and while they still work, they have not scaled well to handle the challenge of complex modern applications. They can be time-consuming to build, slow to execute and challenging to maintain in the face of apps that change frequently.

Applitools Developer Advocate and Test Automation University Director Andrew Knight (AKA Pandy Knight) recently conducted a hands-on workshop where he explored the history of cross-browser testing, its evolution over time and the pros and cons of different approaches.

Andrew then explores a modern cross-browser testing solution with Selenium and Applitools. He walks you through a live demo (which you can replicate yourself by following his shared GitHub repo) and explains the benefits and how to get started. He also covers how you can accelerate test automation with integration into CI/CD to achieve Continuous Testing.

Check out the workshop below, and follow along with the GitHub repo here.

More on Cross Browser Testing in Cypress, Playwright or Storybook

At Applitools we are dedicated to making software testing faster and easier so that testers can be more effective and apps can be visually perfect. That’s why we use our industry-leading Visual AI and built the Applitools Ultrafast Grid, a key component of the Applitools Test Cloud that enables ultrafast cross-browser testing. If you’re looking to do cross-browser testing better but don’t use Selenium, be sure to check out these links too for more info on how we can help:

Cypress vs Playwright: Let the Code Speak Recap https://applitools.com/blog/cypress-vs-playwright/ Mon, 18 Jul 2022 15:00:00 +0000 https://applitools.com/?p=40437 Wondering how to decide between Cypress and Playwright for test automation? Check out this head to head battle and see who comes out on top.

The post Cypress vs Playwright: Let the Code Speak Recap appeared first on Automated Visual Testing | Applitools.


Wondering how to decide between Cypress and Playwright for your test automation? Check out the results of our head to head battle of Cypress vs Playwright and see who comes out on top.

On the 26th of May, Applitools hosted another “Let the Code Speak!” event. This time it focused on two rising stars in web test automation – Cypress and Playwright. Both of these tools have huge fan bases, with users who have reasons to either love or doubt them. The event had more than 500 attendees who decided on the ultimate winner. To determine the better framework, I (Filip Hric) and Andrew Knight, a.k.a. the Automation Panda, shared short code-snippet solutions to a variety of testing problems in front of an online audience. Right after each example, the audience voted for the better solution in each of the 10 rounds.

Why Compare Playwright vs Cypress?

Cypress and Playwright both brought novelties to the world of web test automation. Both work well with modern web apps that are full of dynamic content, often fetched from the server via REST APIs. For cases like these, automatic waiting is a must. But while these tools share some similarities, they also differ in many aspects, stemming from their basic architecture, choice of supported languages, syntax, and more.

There is no silver bullet in test automation. All these differences can make a tool super useful, or inefficient, based on the project you are working on. It is comparisons like these that aim to help you decide, and have a little fun along the way. 

If you missed the event, there’s no need to worry. The whole recording is available online, and if you want to check out the code snippets that were used, I recommend taking a look at the GitHub repository that we created for this event.

Cypress vs Playwright Head to Head – Top 10 Features Compared

Round 1: How to Interact with Elements

We started off with the basics: interacting with a simple login form. The goal was to compare the simplest flow, the one you usually start your test automation with.

At first sight these two code samples don’t look too different from one another. But the crowd decided that the Cypress syntax was slightly more concise and voted 61% in its favor.

Round 2: How to Test iframes

Although iframes are not as common as they used to be, they can still present a challenge to QA engineers. In fact, testing them in Cypress requires an additional plugin, which is probably why Cypress lost this round. Playwright has a native API to switch to any given iframe, which takes away the extra legwork of installing a plugin.
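For reference, Playwright's native iframe handling looks roughly like this; the frame selector and form fields are hypothetical:

```javascript
// Driving elements inside an iframe with Playwright's frameLocator.
// The page, frame id, and fields are hypothetical.
const { test } = require('@playwright/test');

test('fill a form inside an iframe', async ({ page }) => {
  await page.goto('https://example.com/embedded-form');
  const frame = page.frameLocator('#payment-frame');
  await frame.getByPlaceholder('Card number').fill('4242 4242 4242 4242');
  await frame.getByRole('button', { name: 'Pay' }).click();
});
```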

Round 3: How Cypress and Playwright Handle Waiting and Retrying

With the nature of modern apps, waiting for changes is essential. Single-page applications re-render their content all the time, and testing tools need to account for that nowadays. Built-in waiting and retrying capabilities give the edge to modern testing tools like Cypress and Playwright.

Taking a look at the code, this could have been anyone’s win, but the round went to Cypress with 53% of the audience vote.

Round 4: How to Test Native Browser Alerts in Cypress vs Playwright

Given the different design of each tool, it was interesting to see how each of them deals with native browser events. Playwright communicates with the browser using a websocket server, while Cypress is injected inside the browser and automates the app from there. Handling native browser events can be more complicated with the latter approach, and that proved to be the case in this round. While Playwright showed a consistent solution for alerts, confirms, and prompts, Cypress needed a separate workaround for each of the three cases, which led to a sweeping 91% victory for Playwright.

Round 5: Navigation to New Windows

In the next example, we attempted to automate a page that opens a new window. The design of each of the tools proved to be a deciding factor once again. While Playwright has an API to handle a newly opened tab, Cypress reaches for a hack solution that removes the target attribute from a link and prevents opening of a new window entirely. While I argued that this is actually a good enough solution, testers in the audience did not agree and out-voted Cypress 80:20 in favor of Playwright.

Round 6: Handling API Requests

Being able to handle API requests is an automation superpower. You can use them to set up your application, seed data, or even log in. Or you can decide to create a whole API test suite! Both Cypress and Playwright handle API requests really well. In Playwright, you create a new context and fire API requests from that context. Cypress uses its existing command-chain syntax both to fire a request and to test it. Two thirds of the audience liked the Cypress solution better and gave it their vote.
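A sketch of the Playwright side, using its built-in request fixture; the endpoint and payload are hypothetical:

```javascript
// Firing an API request from Playwright's request context.
// Endpoint and payload are hypothetical.
const { test, expect } = require('@playwright/test');

test('create a user via the API', async ({ request }) => {
  const response = await request.post('https://example.com/api/users', {
    data: { name: 'Test User' },
  });
  expect(response.ok()).toBeTruthy();
});
```

The Cypress equivalent chains off the command queue, along the lines of `cy.request('POST', '/api/users', { name: 'Test User' }).its('status').should('eq', 201)`.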

Round 7: Using the Page Objects Pattern

Although page objects are generally not considered the best option for Cypress, they are still a popular pattern. They provide necessary abstraction and help make the code more readable. There are many different ways to approach page objects. The voting was really close here: during the live event it actually seemed like Playwright won this one, but after the show we found out that this round ended in a tie.

Round 8: Cypress and Playwright Language Support

The variety of languages that testers use nowadays is pretty big. That’s why Playwright’s wider language support made it look like a clear winner in this round. Cypress, however, tries to cater to the developer’s workflow, where support for JavaScript and TypeScript is good enough. That may feel like a pain point to testers who come from different language backgrounds and are not used to writing their code in these languages. The audience agreed that wider language support is better and voted 77% in favor of Playwright.

Round 9: Browser Support in Cypress and Playwright

Although Chrome is the most popular browser and has become dominant in most countries, browser support is still important when testing web applications. Both tools support a variety of browsers, although Cypress currently lacks support for Safari/WebKit. Maybe that helped the audience decide this round in Playwright’s favor.

Round 10: Speed and Overall Performance

The last round of the event was all about speed. Everyone likes their tests to run fast, so they can get feedback about the state of their application as soon as possible. Playwright was the clear winner this time, as its execution time was 4x faster than Cypress. Recent improvements on Cypress’s side have definitely helped, but Playwright is still king in terms of speed.

And the (real) winner of Cypress vs Playwright is…

The whole code battle ended 7:3 in favor of Playwright. After the event, we met for a little aftershow, discussed the examples in more depth, and answered some questions. This was a great way to provide more context for the examples and discuss things that had not been said.

I really liked a take from someone on Twitter who said that the real winners were the testers and QA engineers that get to pick between these awesome tools. I personally hope that more Cypress users have tried Playwright after the event and vice versa.

This event was definitely fun, and while it’s interesting to compare code snippets and different abilities of the tools, we are well aware that these do not tell the whole story. A tester’s daily life is full of debugging, maintenance, complicated test design decisions, and weighing the risks and effectiveness of automation. Merely looking at small pieces of code will not tell us how well these tools perform in real life. We’d love to look into these aspects as well, so we are planning a rematch with a slightly different format. Save the date of September 8th for the battle and stay tuned to this page for more info on the rematch. We’ll see who wins next time!

Our Best Test Automation Videos of 2022 (So Far) https://applitools.com/blog/best-test-automation-videos-2022/ Fri, 20 May 2022 21:07:55 +0000 https://applitools.com/?p=38624 Check out some of our most popular events of the year. All feature test automation experts sharing their knowledge and their stories.

The post Our Best Test Automation Videos of 2022 (So Far) appeared first on Automated Visual Testing | Applitools.


We’re approaching the end of May, which means we’re just a handful of weeks from the midpoint of 2022 already. If you’re like me, you’re wondering where the year has gone. Maybe it has to do with life in the northeastern US where I live, where we’ve really just had our first week of warm weather. Didn’t winter just end?

As always, the year is flying by, and it can be hard to keep up with all the great videos or events you might have wanted to watch or attend. To help you out, we’ve rounded up some of our most popular test automation videos of 2022 so far. These are all top-notch workshops or webinars with test automation experts sharing their knowledge and their stories – you’ll definitely want to check them out.

Cross Browser Test Automation

Cross-browser testing is a well-known challenge to test automation practitioners. Luckily, Andy Knight, AKA the Automation Panda, is here to walk you through a modern approach to getting it done. Whether you use Cypress, Playwright, or are testing Storybook components, we have something for you.

Modern Cross Browser Testing with Cypress

For more, see this blog post: How to Run Cross Browser Tests with Cypress on All Browsers (plus bonus post specifically covering the live Q&A from this workshop).

Modern Cross Browser Testing in JavaScript Using Playwright

For more, see this blog post: Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser.

Modern Cross Browser Testing for Storybook Components

For more, see this blog post: Testing Storybook Components in Any Browser – Without Writing Any New Tests!

Test Automation with GitHub or Chrome DevTools

GitHub and Chrome DevTools are both incredibly popular with the developer and testing communities – odds are if you’re reading this you use one or both on a regular basis. We recently spoke with developer advocates Rizel Scarlett of GitHub and Jecelyn Yeen of Google as they explained how you can leverage these extremely popular tools to become a better tester and improve your own testing experience. Click through for more info about each video and get watching.

Make Testing Easy with GitHub

For more, see this blog post: Using GitHub Copilot to Automate Tests.

Automating Tests with Chrome DevTools Recorder

For more, see this blog post: Creating Your First Test With Google Chrome DevTools Recorder.

Test Automation Stories from Our Customers

When it comes to implementing and innovating around test automation, you’re never alone, even though it doesn’t always feel that way. Countless others are struggling with the same challenges that you are and coming up with solutions. Sometimes all it takes is hearing how someone else solved a similar problem to spark an idea or gain a better understanding of how to solve your own.

Accelerating Visual Testing

Nina Westenbrink, Software Engineer at a leading European telecom, talks about how visual testing of the company’s design system was made faster and simpler, offering helpful tips and tricks along the way. Nina also speaks about her career as a woman in testing and how to empower women and overcome biases in software engineering.

Continuously Testing UX for Enterprise Platforms

Govind Ramachandran, Head of Testing and Quality Assurance for Asia Technology Services at Manulife Asia, discusses challenges around UI/UX testing for enterprise-wide digital programs. Check out his blueprint for continuous testing of the customer experience using Figma and Applitools.

This is just a taste of our favorite videos that we’ve shared with the community from 2022. What were yours? You can check out our full video library here, and let us know your own favorites @Applitools.

Cross Browser Testing with Cypress Workshop Q&A https://applitools.com/blog/cross-browser-testing-cypress-workshop-qa/ Wed, 09 Feb 2022 21:30:32 +0000 https://applitools.com/?p=34261 After our webinar on Cross Browser Testing with Cypress we had so many great questions we couldn’t answer them all at the time, so we're tackling them now.

The post Cross Browser Testing with Cypress Workshop Q&A appeared first on Automated Visual Testing | Applitools.


On February 1, 2022, I gave a webinar entitled Cross Browser Testing with Cypress, in which I explained how to run Cypress tests against any browser using Applitools Visual AI and the Ultrafast Test Cloud. We had many attendees and lots of great questions – so many questions that we couldn’t answer them all during the event. In this article, I do my best to provide answers to as many remaining questions as possible.

Questions about the Webinar

Is there a recording for the webinar?

Yes, indeed! The recording is here.

Where is the repository for the example code?

Here it is: https://github.com/applitools/workshop-cbt-cypress. The repository also contains a full walkthrough in the WORKSHOP.md file.

Questions about Cypress

Can we run Cypress tests through the command line?

Yes! The npx cypress open command opens the Cypress browser window for launching tests, while the npx cypress run command launches tests purely from the command line. Use npx cypress run for Continuous Integration (CI) environments.
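For example (the browser and spec path shown are placeholders):

```shell
# Interactive runner for local development:
npx cypress open

# Headless run for CI, optionally scoped to a browser and spec file:
npx cypress run --browser chrome --spec "cypress/e2e/login.cy.js"
```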

So, in a Cypress test case, we don’t need to create any driver object for opening a browser?

Correct! When you initialize a Cypress project, it sets up all the imports and references you need. Just call the cy object. To navigate to our first page, call cy.visit(…) and provide the URL as a string.

Can Cypress handle testing with iFrames?

Yes! Check out this cookbook recipe, Working with iFrames in Cypress.

How does cy.contains(…) work?

The cy.contains(…) call selects elements based on their text. For example, if a button has the text “Add Account”, then cy.contains(“Account”) would locate it. Check the Cypress docs for more detailed information.
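A few common forms of cy.contains(…) in one sketch; the page and element text are hypothetical:

```javascript
// Variations of cy.contains(…); selectors and text are hypothetical.
describe('contains examples', () => {
  it('finds elements by their text', () => {
    cy.visit('https://example.com/accounts');
    cy.contains('Account').click();        // partial match ("Add Account" qualifies)
    cy.contains('button', 'Add Account');  // scope by selector plus text
    cy.contains(/^Add Account$/);          // exact match via regular expression
  });
});
```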

Can Cypress access backend intranet APIs?

I haven’t done that myself, but it looks like there are ways to set up Windows Authentication and proxies with Cypress.

Questions about Applitools

How do I establish baseline snapshots?

The first time you take a visual snapshot with Applitools Eyes, it saves the snapshot as the baseline. The next time the same snapshot is taken, it is treated as a checkpoint and compared against the baseline.

Does Applitools Eyes fail a test if a piece of content changes on a page?

Every time Applitools Eyes detects a change, it asks the tester to decide if the change is good (“thumbs up”) or bad (“thumbs down”). Applitools enables testers to try different match levels for comparisons. For example, if you want to check for layout changes but ignore differences in text and color, you can use “layout” matching. Alternatively, if the text matters to you but layout changes don’t, you can use “content” matching. Applitools also enables testers to ignore regions of the snapshots. For example, a timestamp field will be different for each snapshot, so those could easily be ignored.
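A sketch of setting a match level and an ignore region with the eyes-cypress SDK, based on its documented checkWindow options; the tag and selector are hypothetical:

```javascript
// Checkpoint that compares layout only and ignores a timestamp region.
// Tag name and selector are hypothetical.
cy.eyesCheckWindow({
  tag: 'Dashboard',
  matchLevel: 'Layout',                  // check layout, ignore text/color changes
  ignore: [{ selector: '.timestamp' }],  // region that changes on every run
});
```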

How do we save a new baseline snapshot if pages change during development?

When a tester marks a change as “good,” the new snapshot is automatically saved as the new baseline.

What happens if we have thousands of tests and developers change the UI? Will I need to modify thousands of baselines?

Hopefully not! Most UI changes are localized to certain pages or areas of an app. In that case, only those baselines would need updates. However, if the UI changes affect every screen, such as a theme change, then you might need to refresh all baselines. That isn’t as bad as it sounds: Applitools has AI-powered maintenance capabilities. When you accept one new snapshot as a baseline, Applitools will automatically scan all other changes in the current batch and accept similar changes. That way, you don’t need to grind through a thousand “thumbs-up” clicks. Alternatively, you could manually delete old baselines through the Applitools dashboard and rerun your tests to establish fresh ones. You could also establish regions to ignore on snapshots for things like headers or sidebars to mitigate the churn caused by cross-cutting changes.

Does Applitools Eyes wait for assets such as images, videos, and animations to load before taking snapshots?

No. The browser automation tool or the automation code you write must handle waiting.

(* Actually, there is a way when not using the Ultrafast Test Cloud. The classic SDKs include a parameter that you can set for controlling Eyes snapshot retries when there is no match.)

Can we accept some visual changes while rejecting others for one checkpoint?

Yes, you can set regions on a snapshot to use different match levels or to be ignored entirely.

Can we download the snapshot images from our tests?

Yes, you can download snapshot images from the Applitools Eyes dashboard.

Does Applitools offer any SDKs for visually testing Windows desktop apps?

Yes! Applitools offers SDKs for Windows CodedUI, Windows UFT, and Windows Apps. Applitools also works with Power Automate Desktop.

Does the Applitools Ultrafast Grid use real or emulated mobile devices?

Presently, it uses emulated mobile devices.

Can I publicly share my Applitools API key?

No, do NOT share your API key publicly! That should be kept secret. Don’t let others run their tests using your account!

Questions about Applitools with Cypress

How do I set up Applitools to work with Cypress?

Follow the Applitools Cypress Tutorial. You’ll need to:

  1. Register an Applitools account.
  2. Install the @applitools/eyes-cypress package.
  3. Run npx eyes-init to set up Applitools Eyes.
  4. Set the APPLITOOLS_API_KEY environment variable to your API key.
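In command form, those steps look roughly like this (a sketch assuming an npm-based project and a bash/zsh shell; adjust for your package manager):

```
# Install Cypress and the Applitools Eyes SDK for Cypress
npm install --save-dev cypress @applitools/eyes-cypress

# Set up Applitools Eyes in the project
npx eyes-init

# Expose your API key to the test runner
export APPLITOOLS_API_KEY="<your-api-key>"
```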

Cypress cannot locally run tests against Safari, Internet Explorer, or mobile browsers. Can Cypress tests check snapshots on these browsers in the Applitools Ultrafast Test Cloud?

Yes! Snapshots capture the whole page – its DOM, CSS, and resources – not just a static image. Applitools can render snapshots using any browser configuration, even ones not natively supported by Cypress.

Can Applitools Eyes focus on specific elements instead of an entire page?

Yes! You can check a specific web element as a “region” of a page like this:

cy.eyesCheckWindow({
  target: 'region',
  selector: {
    type: 'css',
    selector: '.my-element'
  }
});

Can we run visual tests with Cypress using a free Applitools account?

Yes, but you will not be able to access all of Applitools’ features with a free account, and your test concurrency will be limited to 1.

Can we perform traditional assertions together with visual snapshots?

Sure! Visual testing eliminates the need for most traditional assertions, but sometimes, old-school assertions can be helpful for checking things like text formatting. Cypress uses Chai for assertions.
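For instance, a Cypress spec can mix a visual checkpoint with a Chai-style assertion. The page URL, selector, and expected text below are hypothetical:

```
it('shows a correctly formatted total', () => {
  cy.visit('/cart');
  cy.eyesOpen({ appName: 'Demo App', testName: 'Cart page' });
  cy.eyesCheckWindow('Cart');                     // visual checkpoint
  cy.get('.total').should('have.text', '$25.00'); // traditional assertion
  cy.eyesClose();
});
```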

Questions about Testing

If a project has little-to-no test automation in place, should we start writing visual tests right away, or should we start with traditional functional tests and add visual tests later?

Visual tests are arguably easier to automate than traditional functional tests because they simplify assertions. Apply the 80/20 rule: start with a small “smoke” test suite that simply captures snapshots of different pages in your web app. Run that suite for every code change and see the value it delivers. Next, build on it by covering interactions beyond simple navigation. Then, once those are doing well, try to automate more complicated behaviors. At that point, you might need some traditional assertions to complement visual checkpoints.

Can we compare snapshots from a staging environment to a production environment?

Yes, you can compare results across test environments as long as the snapshots have the same app, test, and tag names.

Can we schedule visual tests to run every day?

Certainly! Both Applitools and Cypress can integrate with any Continuous Integration (CI) system.
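As one example, a nightly cron trigger in GitHub Actions could look like this (the workflow below is illustrative; any CI scheduler works the same way):

```yaml
name: nightly-visual-tests
on:
  schedule:
    - cron: "0 6 * * *" # every day at 06:00 UTC
jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx cypress run
        env:
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}
```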

Does visual testing have any disadvantages when compared to traditional functional testing?

Presently, visual testing does not check specific text formatting, such as dates or currencies. You’ll need to use traditional assertions for that type of pattern matching. Nevertheless, you can use visual testing together with traditional techniques to automate the best functional tests possible.

How do we test UX things like heading levels, fonts, and text sizes?

If you take visual snapshots of pages, then Applitools Eyes will detect differences like these. You could also automate traditional assertions to verify specific attributes such as a specific heading level or font name, but those kinds of assertions tend to be brittle.

What IDE should we use for developing Cypress tests with Applitools?

Any JavaScript editor should work. Visual Studio Code and JetBrains WebStorm are popular choices.

What tool or framework should we use for API testing?

Cypress has built-in API support with the cy.request(...) method, making it easy to write end-to-end tests that interact with both the frontend and backend. However, if you want to automate tests purely for APIs, then you should probably use a tool other than Cypress. Postman is one of the most popular tools for API testing. If you want to stick to JavaScript, you could look into SuperTest and Nock.
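A sketch of mixing the two in one Cypress test (the endpoint and page below are hypothetical):

```
it('creates a record via the API, then verifies it in the UI', () => {
  // Backend step: cy.request needs no page to be loaded
  cy.request('POST', '/api/users', { name: 'Jane' })
    .its('status')
    .should('eq', 201);

  // Frontend step: confirm the new record appears in the UI
  cy.visit('/users');
  cy.contains('Jane');
});
```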

Can we do load testing with cross-browser testing?

Load testing is the practice of adding different intensities of “load” to a system while running functional and performance tests. For web apps, “load” is typically a rate of requests (like 100 requests per second). As load increases, performance degrades, and functionality might start failing. You can do load testing with cross-browser testing, but keep in mind that any failures due to load would probably happen the same way for any browser. Load hits the backend, not the frontend. Repeating load tests for a multitude of different browser configurations may not be worthwhile.
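The arithmetic behind a load profile is simple. A minimal Node sketch (the function name is illustrative) that computes the send times a basic load generator would follow for a target request rate:

```javascript
// Compute evenly spaced send offsets (in ms) for `rps` requests per second
// over `seconds` seconds -- the schedule a simple load generator would follow.
function sendSchedule(rps, seconds) {
  const total = rps * seconds;    // total number of requests
  const gapMs = 1000 / rps;       // time between consecutive requests
  return Array.from({ length: total }, (_, i) => i * gapMs);
}

const schedule = sendSchedule(100, 2);  // 100 req/s for 2 seconds
console.log(schedule.length);           // 200 requests in total
console.log(schedule[1] - schedule[0]); // 10 ms between requests
```

Actually issuing those requests hits the backend, which is why repeating the same load run per browser rarely adds information.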

The post Cross Browser Testing with Cypress Workshop Q&A appeared first on Automated Visual Testing | Applitools.

The Best Test Automation Framework Is… https://applitools.com/blog/what-is-the-best-test-automation-framework/ Tue, 19 Oct 2021 10:01:00 +0000 https://applitools.com/?p=31662 Catch a recap of my recent keynote, where I spoke about the context and the criteria required to make any test automation framework the “best.”

The post The Best Test Automation Framework Is… appeared first on Automated Visual Testing | Applitools.


In this social world, it is very easy to be biased into believing that some practice is a “best practice,” or that some automation tool or framework is “the best.” When anyone makes the statement “this <some_practice> is a best practice,” or “this tool <name_of_tool> or framework <name_of_framework> is the best,” two things come to mind:

  1. The person is promoting the practice / tool / framework as a “silver bullet” – something that will solve all problems, magically.
  2. The said practice / tool / framework actually worked best for them, in the context of the team adopting it.

So when I hear anyone say “best …,” I get suspicious and wonder which category they belong to: “silver bullet” promoters, or knowledgeable folks who have done their study and determined what works well for them.

Doing the study is extremely important to determine what is good or bad. In the context of Test Automation, there are a lot of parameters that need to be considered before you reach a decision about what tool or framework is going to become “the best tool / framework” for you. I classify these parameters as negotiables and non-negotiables.

I had the privilege of delivering the opening keynote at the recent Future of Testing event focused on Test Automation Frameworks on September 30th 2021. My topic was “The best test automation framework is …”. I spoke about the context, and the non-negotiable and negotiable criteria required to make any test automation framework the “best.”

How to Choose the Best Test Automation Framework…

Understanding the Context

Here are the questions to answer to determine the context:

Negotiable and Non-Negotiable Criteria

Once you understand the context, then apply that information to determine your non-negotiable and negotiable criteria.

Start Evaluating

Now that you understand all the different parameters, here are the steps to get started.

You can see the full mind map I used in my keynote presentation below.

You can also download the PDF version of this mind map from here.

Catch the Keynote Video

To hear more about how to choose the best test automation framework, you can watch the whole video from my keynote presentation here, or by clicking below.

Cutting-edge Functional UI Testing Techniques – Live Coding Webinar w/ Adam Carmi https://applitools.com/blog/ui-testing-techniques/ Mon, 30 Mar 2020 15:09:00 +0000 https://applitools.com/?p=17383 It is no secret that many teams struggle with automated functional UI testing – some to the point where it is completely abandoned – even though the UI is the...

The post Cutting-edge Functional UI Testing Techniques – Live Coding Webinar w/ Adam Carmi appeared first on Automated Visual Testing | Applitools.


It is no secret that many teams struggle with automated functional UI testing – some to the point where it is completely abandoned – even though the UI is the most significant interface of the system.

In this session, Adam Carmi — Applitools CTO and Co-founder — reviewed the main weaknesses of traditional approaches to UI testing, and how they negatively affect test stability, maintainability, coverage, execution speed, and overall ROI. He also discussed how these weaknesses become even more severe when running tests across multiple devices and browsers.

Adam demonstrated how Visual AI — the innovative technology powering Applitools’ testing engine — can be applied to your existing pipeline to efficiently implement functional UI testing with a fraction of the effort while drastically increasing test coverage and reducing test execution time.

Adam also showed a live coding session, where he converted a traditional UI test into a Visual AI-based test in minutes, and executed it across dozens of devices and browsers in seconds using the Applitools Ultrafast Grid.

Adam’s Functional UI Testing slide deck:

https://www.slideshare.net/slideshow/embed_code/key/pV803UfrsUZXOL

Full webinar recording:

Additional Recommended Resources:

— HAPPY TESTING —

How To Ace High-Performance Test for CI/CD https://applitools.com/blog/how-to-ace-high-performance-test-for-ci-cd/ Thu, 26 Mar 2020 15:14:00 +0000 https://applitools.com/?p=17388 If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters. Priyanka Halder presented her approach...

The post How To Ace High-Performance Test for CI/CD appeared first on Automated Visual Testing | Applitools.


If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters.

Priyanka Halder presented her approach to achieving success in a hyper-growth company through her webinar for Applitools in January 2020. The title of her speech sums up her experience at GoodRx:

“High-Performance Testing: Acing Automation In Hyper-Growth Environments.”

Hyper-growth environments focus on speed and agility. Priyanka focuses on the approach that lets GoodRx not only develop but also test features and releases while growing at an exponential rate.

About Priyanka

Priyanka Halder is head of quality at GoodRx, a startup focused on finding all the providers of a given medication for a patient – including non-brand substitutes – and helping over 10 million Americans find the best prices for those medications.  Priyanka joined in 2018 as head of quality engineering – with a staff of just one quality engineer. She has since grown the team 1200% and grown her team’s capabilities to deliver test speed, test coverage, and product reliability. As she explains, past experience drives current success.

Priyanka’s career includes over a dozen years of test experience at companies ranging from startups to billion-dollar companies. She has extensive QA experience in managing large teams and deploying innovative technologies and processes, such as visual validation, test stabilization pipelines, and CICD. Priyanka also speaks regularly at testing and technology conferences. She accepted invitations to give variations of this particular talk eight times in 2019.

One interesting note: she says she would like to prove to the world that 100% bug-free software does not exist.

Start With The Right Foundation

Three Little Pigs

Priyanka, as a mother, knows the value of stories. She sees the story of the Three Little Pigs as instructive for anyone trying to build a successful test solution in a hyper-growth environment. Everyone knows the story: three pigs each build their own home to protect themselves from a wolf. The first little pig builds a straw house in a couple of hours. The second little pig builds a home from wood in a day. The third little pig builds a solid infrastructure of brick and mortar – and that took a number of days. When the wolf comes to eat the pigs, he can blow down the straw house and the wood house, but the solid house saves the pigs inside.

Priyanka shares from her own experience: she encounters many wolves in a hyper-growth environment. The only safeguard comes from building a strong foundation. Priyanka describes a hyper-growth environment and how high-performance testing works. She describes the technology and team needed for high-performance testing. And, she describes what she delivered (and continues to deliver) at GoodRx.

Define High-Performance Testing

So, what is high-performance testing?

Fundamentally, high-performance testing maximizes quality in a hyper-growth startup. To succeed, she says, you must embrace the ever-changing startup mentality, be one step ahead, and constantly provide high-quality output without being burned out.

Agile startups share many common characteristics:

  • Chaotic – you need to be comfortable with change
  • Less time – all hands on deck all the time for all the issues
  • Fewer resources – you have to build a team where veterans are mentors and not enemies
  • Market pressure – teams need to understand and assess risk
  • Reward – do it right and get some clear benefits and perks

If you do it right, it can lead to satisfaction. If you do it wrong, it leads to burnout. So – how do you do it right?

Why High-Performance Testing?

Leveraging data collected by another company, Priyanka showed how the technology for app businesses has changed drastically over the past decade. These differences include:

  • Scope – instead of running a dedicated app, or on a single browser, today’s apps run on multiple platforms (web app and mobile)
  • Frequency – we release apps on demand (not annually, quarterly, monthly or daily)
  • Process – we have gone from waterfall to continuous delivery
  • Framework – we used to use single-stack, on-premise software; today we are using open-source, best-of-breed, cloud-based solutions for developing and delivering.

The assumptions of “test last” that may have worked a decade back can’t work anymore. So, we need a new paradigm.

How To Achieve High-Performance Testing

Priyanka talked about her own experience. Among other things, teams need to know that they will fail early as they try to meet the demands of a hyper-growth environment. Her approach, based on her own experiences, is to ask questions:

  • Does the team appreciate that failures can happen?
  • Does the team have inconsistencies? Do they have unclear requirements? Set impossible deadlines? Use waterfall while claiming to be agile? Note those down.

Once you know the existing situation, you can start to resolve contradictions and issues. For example, you can use a mind map to visualize the situation. You can divide issues and focus on short term work (feature team for testing) vs. long term work (framework team). Another important goal – figure out how to find bugs early (aka Shift Left). Understand which tools are in place and which you might need. Know where you stand today vis-a-vis industry standards for release throughput and quality. Lastly, know the strength of your team today for building an automation framework, and get AI and ML support to gain efficiencies.

Building a Team

Next, Priyanka spoke about what you need to build a team for high-performance testing.


In the past, we used to have a service team. They were the QA team and had their own identity. Today, we have true agile teams, with integrated pods where quality engineers are the resource for their group and integrate into the entire development and delivery process.

So, in part you need skills. You need engineers who know test approaches that can help their team create high-quality products. Some need to be familiar with behavior-driven design or test-driven design. Some need to know the automation tools you have chosen to use. And, some need to be thinking about design-for-testability.

One huge part of test automation involves framework. You need a skill set familiar with building code that self-identifies element locators, builds hooks for automation controls, and ensures consistency between builds for automation repeatability.

Beyond skills, you need individuals with confidence and flexibility. They need to meld well with the other teams. In a truly agile group, team members distribute themselves through the product teams as test resources. While they may connect to the main quality engineering team, they still must be able to function as part of their own pod.

Test Automation

Priyanka asserts that good automation makes high-performance testing possible.

In days gone by, you might have bought tools from a single vendor. Today, open source solutions provide a rich source for automation solutions. Open source generally has lower maintenance costs, generally lets you ship faster, and expands more easily.


Open source tools come with communities of users who document best practices for using those tools. You might even learn best-practice processes for integrating with other tools. The communities give you valuable lessons so you can learn without having to fail (or learn from the failures of others).

Priyanka describes aspects of software deployment processes that you can automate.  Among the features and capabilities you can automate:

  • Assertions on Action
  • Initialization and Cleanup
  • Data Modeling/Mocking
  • Configuration
  • Safe Modeling Abstractions
  • Wrappers and Helpers
  • API Usage
  • Future-ready Features
  • Local and Cloud Setups
  • Speed
  • Debugging Features
  • Cross Browser
  • Simulators/Emulators/Real Devices
  • Built-in reporting or easy to plug in

Industry Standards

You can measure all sorts of values from testing. Quality, of course. But what else? What are the standards these days? And what are typical run times for test automation?

Priyanka shares data from Sauce Labs about standards. Sauce surveyed a number of companies and discussed benchmark settings for four categories: test quality, test run time, test platform coverage, and test concurrency. The technical leaders at these companies set some benchmarks they thought aligned with best-in-class industry standards.

In detail:

  • Quality – pass at least 90% of all tests run
  • Run Time – average of all tests run two minutes or less
  • Platform Coverage – tests cover five critical platforms on average
  • Concurrency – at peak usage, tests utilize at least 75% of available capacity

Next, Priyanka shared the data Sauce collected from the same companies about how they fared against the average benchmarks discussed.

  • Quality – 18% of the companies achieved 90% pass rate
  • Run time – 36% achieved the 2 minute or less average
  • Platform coverage – 63% reached the five-platform coverage mark
  • Concurrency – 71% achieved the 75% utilization mark
  • However, only 6.2% of the companies achieved the mark on all four.

Test speed became a noticeable issue. While 36% ran on average in two minutes or faster, a large number of companies exceeded five minutes – more than double.

Investigating Benchmarks

These benchmarks are fascinating – especially run time – because test speed is key to faster overall delivery. The longer you have to wait for testing to finish, the slower your dev release cycle times.

Sadly, lots of companies think they’re acing automation, but so few are meeting key benchmarks. Just having automation doesn’t help. It’s important to use automation that helps meet these key benchmarks.

Another area worth investigating involves platform coverage. While Chrome remains everyone’s favorite browser, not everyone is on Chrome.  Perhaps 2/3 of users run Chrome, but Firefox, Safari, Edge and others still command attention. More importantly, lots of companies want to run mobile, but only 8.1% of company tests run on mobile. Almost 92% of companies run desktop tests and then resize their windows for the mobile device.  Of the mobile tests, only 8.9% run iOS native apps and 13.2% run Android native apps. There’s a gap at a lot of companies.

GoodRx Strategies

Priyanka dove into the capabilities that allow GoodRx to solve the high-performance testing issues.

Test In Production

The first capability GoodRx uses is a Shift Right approach that moves testing into the realm of production.


Production testing? Yup – but it’s not spray-and-pray. GoodRx’s approach includes the following:

  • Feature Flag – Test in production. Ship fast, test with real data.
  • Traffic Allocation – gradually introduce new features and empower targeted users with data. Hugely important for finding corner cases without impacting the entire customer base.
  • Dog Fooding – use a CDN like Fastly to deploy, route internal users to new features.

The net result: this approach reduces overhead, lets the app get tested with real data, and identifies issues without impacting the entire customer base. So, the big release becomes a set of small releases on a common code base, tested by different people to ensure that the bulk of your customer base doesn’t get a rude awakening.

AI/ML

Next, Priyanka talked about how GoodRx uses AI/ML tools to augment her team. These tools make her team more productive – allowing her to meet the quality needs of the high-performance environment.

First, Priyanka discussed automated visual regression – using AI/ML to automate the validation of rendered pages. Here, she talked about using Applitools – as she says, the acknowledged leader in the field. Priyanka talked about how GoodRx uses Applitools.

At GoodRx, there may be one page used for a transaction. But, GoodRx supports hundreds of drugs in detail, and a user can dive into those pages that describe the indications and cautions about individual medications.  To ensure that those pages remain consistent, GoodRx validates these pages using Applitools. Trying to validate these pages manually would take six hours. Applitools validates these pages in minutes and allows GoodRx to release multiple times a day.


To show this, Priyanka used an example of visual differences. She showed a kids’ cartoon with visual differences. Then she showed what happens if you do a normal image comparison – a pixel-based comparison.


A bit-wise comparison will fail too frequently.  Using the Applitools AI system, they can set up Applitools to look at the images that have already been approved and quickly validate the pages being tested.


Applitools can complete a full visual regression of 350 test cases – 2,500 checks – in less than 12 minutes. Manually, it takes six hours.


Priyanka showed the kinds of real-world bugs that Applitools uncovered. One – a screenshot from her own site GoodRx. A second from amazon.com, and a third from macys.com. She showed examples with corrupt display – and ones that Selenium alone could not catch.

ReportPortal.io

Next, Priyanka moved on to ReportPortal.io. As she says, when you ace automation, you need to know where you stand. You need to build trust in your automation platform by showing how it is behaving. ReportPortal.io aggregates all your data – test times, bugs discovered, and more – and shows how tests are running at different times of the day. Another display shows the flakiest and longest-running tests to help the team release seamlessly and improve their statistics.

Any failed test case in ReportPortal.io can link its test results log directly into the ReportPortal.io user interface.

GoodRx uses behavior-driven design (BDD), and their BDD approach lets them describe the behavior they want for a given feature – how it should behave in good and bad cases – and ensure that those cases get covered.

High-Performance Testing – The Burn Out

Priyanka made it clear that high-performance environments take a toll on people. Everywhere.

She showed a slide referencing a blog by Atlassian talking about work burnout symptoms – and prevention. From her perspective, the symptoms of workplace stress include:

  • Being cynical or critical at work
  • Dragging yourself to work and having trouble getting started
  • Irritable or impatient, lacking energy, finding it hard to concentrate, headaches
  • Lack of satisfaction from achievement
  • Use food, drugs or alcohol to feel better or simply not to feel

So, what should a good team lead do when she notices signs of burnout? Remind people to take steps to prevent burnout. These include:

  • Avoid unachievable deadlines. Don’t take on too much work. Estimate, add buffer, add resources.
  • Do what gives you energy – avoid what drains you
  • Manage digital distraction – the grass will always be greener on the other side
  • Do something outside your work – Engage in activities that bring you joy
  • Say no to too many projects – gauge your bandwidth and communicate
  • Make self-care a priority – meditation/yoga/massage
  • Have a strong support system – talk to your family and friends, and seek help
  • Unplugging for short periods helps immensely

The point here is that hyper-growth environments can take a toll on everyone – employees, managers. Unrealistic demands can permeate the organization. Use care to make sure that this doesn’t happen to you or your team.

GoodRx Case Study

Why not look at Priyanka’s direct experience at GoodRx? Her employer, GoodRx, provides price transparency for drugs. GoodRx lets individuals search for drugs they might need or use for various conditions. Once an individual selects a drug, GoodRx lets the individual see the prices for that drug in various locations to find the best price for that drug.

The main customers are people who don’t have insurance or have high-deductible insurance. In some cases, GoodRx offers coupons to keep prices low. GoodRx also provides GoodRx Care – a telemedicine consultation system – to help answer patient questions about drugs. Rather than a visit to a doctor, a GoodRx Care consultation costs anywhere between $5 and $20.

Because the GoodRx web application provides high value for its customers, often with high demand, the app must maintain proper function, high performance, and high availability.

Set Goals


The QA goals Priyanka designed needed to meet the demands of this application. Her goals included:

  • Distributed QA team with 24/7 QA support
  • Dedicated SDET Team who specializes in test
  • A robust framework that will make any POC super simple (plug and play)
  • Test stabilization pipeline using Travis
  • 100% automation support to reduce regression time by 90%

Build a Team


As a result, Priyanka needed to hire a team that could address these goals. She showed the profile she developed on LinkedIn to find people that met her criteria – dev-literate, test-literate engineers who could work together as a team and function successfully. More emphasis on test automation and coding abilities rose to the top.

Build a Tech Stack


Next, Priyanka and her team invested in tech stack:

  • Python and Selenium WebDriver
  • Behave for BDD
  • Browserstack for a cloud runner
  • Applitools for visual regression
  • Jenkins/Travis and Google Drone for CI
  • Jira, TestRail for documentation

CICD success criteria requirements came down to four issues:

  • Speed and parallelization
  • BDD for easy debug and read
  • Cross-browser cross-device coverage in CICD
  • Visual validation

Set QA expectations for CI/CD testing

Finally, Priyanka and her team had to set expectations for testing.  How often would they test? How often would they build?

The QA for CI/CD means that test and build become asynchronous. Regardless of the build state,

  • Hourly: QA runs 73 tests against the latest build to sanity-check the site.
  • On build: any new build runs 6 cross-browser tests and makes sure all critical business paths get covered.
  • Nightly: 300 regression tests run on top of the other tests.

Some of these were starting points, but most got refined over time.

Priyanka’s GoodRx Quality Timeline

Next, Priyanka talked about how her team grew from the time she joined until now.

She started in June 2018. At that point, GoodRx had one QA engineer.

  • In her first quarter, she added a QA Manager, QA Analyst, and a Senior SDET. They added offshore reprocessing to support releases.
  • By October 2018 they had fully automated P0/P1 tests. Her team had added Spinnaker pipeline integration. They were running cross-browser testing with real mobile device tests.
  • By December 2018 she added two more QA Analysts and 1 more SDET.  Her team’s tests fully covered regression and edge cases.
  • And, she pressed on. In early 2019, they had built automation-driven releases. They had added Auth0 support – her team was hyper-productive.
  • Then, she discovered her team had started to burn out. Two of her engineers quit. This was an eye-opening time for Priyanka. Her lessons about burnout came from this period. She learned how to manage her team through this difficult period.

By August 2019 she had the team back on an even keel and had hired three QA engineers and one more SDET.

And, in November 2019 they achieved 100% mobile app automation support.

GoodRx Framework for High-Performance Testing

Finally, Priyanka gave a peek into the GoodRx framework, which helps her team build and maintain test automation.

The browser base class provides access for test automation. Using the browser base class eliminates the need to call Selenium’s embedded click directly.

The page class simplifies the web element location. The page class structure assigns a unique XPath to each web element. Automation benefits by having clean XPath elements for automation purposes.


The element wrapper class allows for behaviors like lazy loading.  Instead of having to program exceptions into the test code, the element wrapper class standardizes interaction between the browser under test and the test infrastructure.


Finally, for every third-party application or tool that integrates using an SDK, like Applitools, GoodRx deploys an SDK wrapper. As one of her SDETs noted, the wrapper ensures that an SDK change from a third party cannot break your test behavior. Using a wrapper is a good practice for handling situations when the service you use encounters something unexpected.
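The wrapper idea can be sketched in a few lines. GoodRx’s actual framework is Python/Selenium; this JavaScript version, with a hypothetical vendor SDK, is only to illustrate the pattern:

```javascript
// A thin wrapper that isolates test code from a third-party SDK.
// `fakeSdk` below stands in for any vendor SDK (e.g., a visual-testing SDK).
class SdkWrapper {
  constructor(sdk) {
    this.sdk = sdk;
  }

  // Test code calls checkPage(); only this wrapper knows the vendor's API.
  // If the vendor renames or reshapes its method, only this file changes.
  checkPage(name) {
    try {
      return this.sdk.check({ tag: name }); // vendor-specific call, one place
    } catch (err) {
      // Fail soft: report the SDK problem without crashing the whole run
      return { ok: false, error: err.message };
    }
  }
}

// Usage with a fake SDK standing in for the real one
const fakeSdk = { check: ({ tag }) => ({ ok: true, tag }) };
const eyes = new SdkWrapper(fakeSdk);
console.log(eyes.checkPage('home')); // { ok: true, tag: 'home' }
```

Because the vendor call sits in exactly one place, an unexpected SDK change surfaces as a single soft failure instead of breaking every test that uses it.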

The framework results in a more stable test infrastructure that can rapidly change to meet the growth and change demands of GoodRx.

Conclusions

Hyper-growth companies put demands on their quality engineers to achieve quickly. Test speed matters, but it cannot be achieved consistently without investment. Just as Priyanka started with the story of the Three Little Pigs, she made clear that success requires investment in automation, people, AI/ML, and framework.

To watch the entire webinar:

For More Information

When Quality Engineering Meets Product Management https://applitools.com/blog/quality-product-management/ Fri, 14 Feb 2020 18:48:54 +0000 https://applitools.com/blog/?p=6972 How do you bridge the gap between quality engineering and product management? In October 2019, Evan Wiley from Capital One presented on this topic for an Applitools webinar. Evan introduced...

The post When Quality Engineering Meets Product Management appeared first on Automated Visual Testing | Applitools.

]]>

How do you bridge the gap between quality engineering and product management?

In October 2019, Evan Wiley from Capital One presented on this topic for an Applitools webinar. Evan introduced a number of cool tools that bridge the gap between the product specified by the product managers and the nuts and bolts of testing. Interested? Read on.

Evan’s Background

Evan Wiley spent six years in Quality Engineering at Capital One before moving into product management. During that time, Evan discovered that product managers and quality engineers share complementary views on their products. Product managers envision behaviors that quality engineers execute in their tests.

Evan experienced this relationship directly when he was invited by product managers to attend “empathy interviews.” In these interviews, team members speak with customers to understand the customer’s environment, needs, fears, and expectations. By attending these interviews, Evan heard first-hand from the customers who were using the results of his work. These interviews both informed his work in quality engineering and later fueled his move to product management.

What Is Quality Engineering?


Evan described the work of quality engineering as finding bugs within products early and often. Noting that the job varies from organization to organization and company to company, he listed responsibilities including:

  • Manual testing – ensuring that the product can be exercised and behaves as expected
  • Test automation – creating automated tests that can be run to assure behavioral coverage without manual intervention
  • Production testing – verifying behavior with a production-ready product.
  • Test case design – ensuring that tests cover both positive cases, or normal function, as well as negative cases, or handling problematic inputs, internal inconsistencies, and other errors.
  • Test execution – running tests and reporting results
  • Penetration testing – running tests and reporting results for tests based on known threat vectors
  • Accessibility testing – ensuring that customers with known visual and other disabilities can navigate and use the product successfully.

Evan noted that the nature of this work changes as products, product delivery platforms, and customer environments evolve.  And, things change constantly.

What is Product Management?


Evan next dove into describing the role of product management. Frankly, describing product management can take a two-day course, and even then not cover what everyone thinks product management might do at their company. I know this as I remain a poorly-hidden closet product manager.

Evan does not try to describe product management comprehensively. Instead, he focuses on how empathy interviews drive product managers to make product decisions.  Primarily, Evan says, empathy interviews help product managers become the “voice of the customer.”

One part of empathy interviews guides testing. For example, does your test environment match the customer’s environment? Do your test conditions match the conditions customers actually operate under?

A larger set of questions helps product managers understand problems their customers try to solve, how to prioritize solutions to these problems, which of these problems need to be higher on the near-term backlog versus further back, and how customers might respond to different marketing messages.

And, when product managers take their products to the field, they can compare customers’ actual reactions against the expectations formed in empathy interviews, making future interviews even more effective. The initial empathy interview forms the basis for product management key performance indicators.

Ultimately, the needs of product managers and quality engineers diverge significantly. But involving quality engineering in customer empathy interviews helps the overlap succeed.


Quality Engineering and Product Manager Needs Overlap

Quality over Quantity


Evan spends a bit of time discussing how Capital One prioritizes quality over quantity. Evan points out that the company makes this choice, and that the decision permeates the company culture. That goal extends to all engineering – not just quality engineering.

Evan explains with an example from another company.

“If there’s a quality engineer at Facebook, they might have a lot of test cases, and in some cases, they can stand in for the end-user with the knowledge of being a user. So, if they’re testing, say, logout from Facebook, they might think, ‘I can make that simpler for an end-user because I’d want it to be.’ And this insight empowers the quality engineers to work directly with the developers to tweak a behavior for an end-user.”

To this end, Evan sees the whole of engineering contributing to the quality-over-quantity culture. Hiring processes, skill selection, and testing approaches involve transparency that allows for a breadth of experience and diversity of perspectives on the team.

And, this mix of backgrounds leads to cross-training product managers with quality engineers. Bringing both groups together inside Capital One leads, in Evan’s perspective, to better outcomes for customers.

Evan gave the example of testing a set of features across multiple browsers alongside product managers. He was able to show the product managers where the different browsers handled certain functions differently as a way to build the culture of quality at Capital One.

Gherkin Scenarios


Next, Evan demonstrated the use of Gherkin scenarios for writing stories that describe the behavior of a product. If you don’t know Gherkin, its basic statements are:

  • Scenario
  • Given
  • When
  • Then
  • And

So, for example, Evan talks about the Google home page. He imagines the product manager writes something like this:
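The slide with Evan’s example isn’t reproduced in this recap, but a scenario of that shape might look like this (the wording is a hypothetical reconstruction, not Evan’s slide):

```gherkin
Scenario: Search from the Google home page
  Given a user is on the Google home page
  When the user enters valid text in the search field
  And the user clicks the "Google Search" button
  Then search results for that text are displayed
```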

These scenarios have several useful properties.

First, they help the product manager describe the detailed behavior a user will experience before any software gets written. Product managers can validate these stories with prospective customers and identify any issues that might impact the behavior of the product.

Second, developers get an understanding of the intended user experience and outcome to plan their coding activity.

Third, engineers can use their experience to determine scenarios that might not have been described and what should be the intended behavior in those situations.  And, engineers can ask relevant questions involving both design as well as behavior.

A Gherkin Example

Imagine some questions that come from the scenario listed above. Here are a few:

  • How much time should the user experience between clicking the “Google Search” button and getting a response back?
  • What happens when the user has 600ms latency between the browser and server?
  • Since the scenario specifies that the test uses valid text for a Google search, define valid text.
  • What about the scenario when the user enters invalid text?

These questions lead to more scenarios, which lead to more complete product specifications, which lead to more complete products. The larger number of scenarios also leads to more complete testing.  Quality engineers read these scenarios and begin planning their acceptance tests based on these different scenarios and conditions.

So, much of the engineering-product management conversation ends up as quality engineering talking with product management about scenarios – tightening the connection between product management and quality engineering.

Evan did not talk about it, but a tool like Cucumber reads in Gherkin scenarios to help build test automation.

Visual Validation Baselines

From there, Evan moved on to talk about visual validation. And, for visual validation, Evan talked about Applitools – a tool he uses at Capital One.

First, Evan spoke about the idea of user acceptance testing. When you run through the scenarios with product managers, you will end up with the screens that were described. You want to capture the screen images and go through them with product managers to make sure they meet the needs of the customers as understood by the product management team.

So, part of the testing involves capturing images, and that means following your Gherkin scenarios to make sure you capture all the right steps. Evan showed some examples based on a Google page, describing how test steps in the Gherkin scenarios became captured visual images. Evan pointed out that these visual images begin to define how the application should behave – a baseline for the application.


As you go through the different pages, you can complete the baseline acceptance tests. Once you have a saved good baseline, you know the state of an accepted page going forward.

If you find a behavior you don’t like, you can highlight the behavior and share it with the developers.

You can find problems during visual testing that do not appear in development. For instance, someone realizes you need to test against some smaller viewport sizes. Or, you test a mobile app, but not the mobile browser version.

So, you build your library of baseline images for each test condition. You make sure to include behaviors on target browsers, operating systems, and viewport sizes in your Gherkin scenarios. As it turns out, with Applitools, your collection of completed baselines makes your life much easier going forward.

Visual Validation Checkpoints

Next, Evan dove into using Applitools for subsequent tests.


Once you develop your test cases and run them through Applitools, you have baselines. As you continue to develop your product, you continue to run tests through Applitools. Each test you run and capture in Applitools can be compared with your earlier tests. A run with no differences shows as a pass. A run with differences shows up as “unresolved.”


Evan showed how to deal with an unresolved difference. He inspected one difference, saw it was due to an expected change, and accepted the difference by clicking the “thumbs-up” button in the UI. In this case, the checkpoint automatically becomes the new baseline image. He inspected another difference. In this case, the difference wasn’t one he wanted. He clicked the “thumbs-down” button in the UI. He showed how this becomes a “failed” test, and how that information can get routed back to developers.

Unlike other visual testing tools you may have used, Applitools uses computer vision to identify visual differences. The Visual AI engine can ignore pixel-level rendering changes that do not result in user-visible differences. And, there are other capabilities for maintaining applications, such as handling changes that displace the rest of the content on pages or managing identical changes that occur on multiple pages.

Quality over Velocity

Evan went back to discuss the company culture about prioritizing quality. Capital One developed an engineering culture over time to focus on quality. Any decision to emphasize delivery over quality must be documented and validated. Release decisions at Capital One end up being team decisions, as the whole team is responsible for both the content and quality of a release.  So, the entire decision to deliver quality products brings the product management, product development, and quality engineering teams together with a common purpose.

Evan noted that, in his experience, other companies approach these problems in different ways. The culture at Capital One makes this kind of prioritization possible. Cross-training makes this delivery possible because cross-training makes all team members aware of priorities, approaches, and tools used to deliver quality. The result, says Evan, is a high-performing team and consistency of meeting customer expectations.

Questions about Quality Engineering

A questioner asked Evan if Quality Engineering at Capital One had sole responsibility for quality. Evan said no. Evan pointed out that he spoke from his perspective, and while Quality Engineering came up with the approach to validate product quality, the whole team – product management, development, and quality engineer – participated in the testing. The approach helped the team deliver a higher quality product.

Another questioner asked about the benefit of getting customer knowledge directly to Quality Engineering. That’s valuable, Evan said. For example, during an empathy interview, a customer raises a specific problem they hit when trying to execute something specific. During the interview, the interviewer dives deeper into this issue. The result is a more complete understanding of the customer use case, the expected behavior, and the actual behavior observed. This results in both better test cases and future enhancements.

Questions about Visual Testing and Tools

A questioner asked if Gherkin scenarios made sense in all situations. Not always, said Evan. Gherkin scenarios make great sense when describing behavior for development to create and quality engineering to test. Evan pointed to cases, such as technical debt work, for which the intended behavior may not be user-facing behavior.

Another questioner asked about the value of visual testing to Capital One. Evan talked about finding ways to exercise a behavior, capture the results, and share the results with product people. Test pass/fail results cannot capture the user experience, but visual test results can. One example Evan gave was a web app that broke unexpectedly on a mobile browser, due to different browser behavior on a different operating system. Without visual testing, the error condition would likely not have been caught in-house. If Capital One were only using manual tests, the condition might not have been covered if the specific device version was not included in the test conditions. With the automated visual tests, they found the problem, saved the image, and used it as a new Gherkin scenario in the next release.

Questions about Product Management and Quality Engineering

Next, Evan was asked how to integrate product management and quality engineering more closely. Evan said he wasn’t sure how to do this in the general case. At Capital One, engineers and product managers collaborate on issue grooming, and the ability to capture visual behavior during a test run improved their ability to agree on which issues needed to be addressed, in what priority, and for what purpose.

Finally, Evan was asked how to get Product Management to involve engineering more closely. Evan focused on empathy interviews as ways to align engineering and product management, and Gherkin scenarios as tools to bring a common language for both development and test requirements. Evan also talked about his own transition from Quality Engineer to Product Manager – and how he went from being tool-and-outcome focused to customer-and-strategy focused.

About the Webinar

Evan’s Slides (Slideshare)

Evan’s Presentation (YouTube)

 

For More Information about Applitools

The post When Quality Engineering Meets Product Management appeared first on Automated Visual Testing | Applitools.

]]>
Know The Answers, Get The Job https://applitools.com/blog/answers-for-test-engineers/ Fri, 18 Oct 2019 19:16:13 +0000 https://applitools.com/blog/?p=5978 When you’re looking for a job as a software test engineer, you know you’ll run a gauntlet of questions before you get a handshake and a new job. What do...

The post Know The Answers, Get The Job appeared first on Automated Visual Testing | Applitools.

]]>

When you’re looking for a job as a software test engineer, you know you’ll run a gauntlet of questions before you get a handshake and a new job. What do you need to know?

When I want to know about software testing, I ask Angie Jones. Angie is the senior developer advocate here at Applitools. And, Angie is a test engineering rock star.

Check out Angie’s online presence at http://angiejones.tech. She has held leadership positions at well-known companies in test engineering. She knows functional test and test automation. And she knows how you can become a test engineer who knows the answers.

While Angie first shared her webinar, Ace Your Next Job Interview, with us in 2017, we have seen lots of open jobs in functional testing and Selenium testing. Demand grows for engineers with SDET (Software Development Engineer in Test) qualifications to help build working and maintainable software. Whether you are a test automation engineer looking to interview for an advertised “Selenium testing” position, or you are looking for an SDET position, coding skills will come up in your interview.

We reviewed this article with Angie. She said the content is just as relevant today as it was two years ago. So, here is a summary of her webinar (which you can also watch below).

Test Automation Coding

If you take away nothing else from this column, take away this: automation engineering for functional tests requires coding skills, knowledge of test approaches, and a way to marry the two effectively. Automation engineers need to know how to build appropriate tests into the code being tested. As a result, you need to expect to be quizzed on both your test knowledge as well as your coding knowledge.

Angie’s point?

“The game has definitely changed. The automation engineer interview might be the most difficult one because not only do they ask us automation questions but they’re also asking us testing questions and developer questions. Each one of those may have its own round of interviews.”

Effectively, people are looking for someone with a tester’s mindset and a developer’s skill set. This could be an SDET position, or simply an automation engineer position.

Let’s review the ideas Angie shared, so you can feel more prepared. She broke the interview into two parts: testing questions, and developer questions.

Ace The Testing Questions

If you’re interviewing for a functional testing role, you need to ace the functional testing questions.  Savvy development teams may implement a “shift left” strategy and move a percentage of testing to the developers themselves. You must expect to answer the testing questions from both your developer interviewers as well as your test interviewer.

The classic interview question for functional testing asks you to develop a test approach for an everyday object. Like, say, a chair.

How Would You Test A Chair?

SDET, Automation Engineer or Manual Tester – stop and consider this question.

If you haven’t come across this question – or one like it – remember two concepts: assumptions and limits. Your assumptions will dictate how you approach the problem. Your test cases will be influenced by both your assumptions and what you think about limits.

Let me be more concrete.  When you think about testing anything, do you begin with test strategies? Test cases? What have you missed?

Validate your assumptions by asking questions. If you only assume, you can miss some of the important design considerations that will drive your test strategy.

Here are some questions that might help:

  • Who is the user?
  • What are the purposes for which the chair was designed?
  • Was there a weight/size/height/age assumption?
  • Was the chair designed for a person with specific abilities, or specific disabilities?

You might think your chair tests are fine until you consider a set of chairs like this one:

Yes, each chair supports sitting. But one is clearly mobile, one or two may be too awkward or too heavy for an individual to move, and at least one may have a weight limit.

Each has its own set of specifications and use cases. And, you might think that this set is sufficient to broaden your definition until you come across the chair that doubles as a stepladder. In this last example, if you do not consider the stepladder use case, you will miss a set of important functional tests.

You want to make sure that you develop use cases from the design and intended use – and not simply what you thought would be important. For instance, “How high can a person stand on the chair?” is not a relevant test for the chair on casters.

Once you know the intended use cases, you can consider tests to run and appropriate limits for those tests. So remember – ask questions.

The Automation Round

The most common mistake for engineers pursuing test automation jobs, Angie says, is people who prepare only on UI tests. Those candidates who think the job is Selenium Testing are stumped by any question falling outside UI testing.

Angie suggests that everyone know and understand the test automation pyramid, first introduced by Mike Cohn:

[Diagram: the test automation pyramid – unit tests at the base, service tests in the middle, UI tests at the top]

In this pyramid, the bulk of automation involves unit tests. Services tests, which validate all but the behavior of the UI, are the next largest chunk. UI tests are the smallest volume of automation tests, as they are both complex and costly in terms of engineering time to ensure both behavior validation and test coverage.

If you want to be prepared for discussing automation, you need to know how to handle unit tests as well as service and business logic tests.

If you are unfamiliar with unit testing, you can take the Unit Testing course on Test Automation University.

Let’s start with unit tests. Imagine you had the following method to automate:

public int add (int a, int b);

Language aside, let’s think about what is needed to test this behavior.


You may be thinking, “Oh crap, I don’t ever do unit tests.”

Take a second to understand the inputs and outputs.  Here, this method takes two integers and it returns an integer. Given the name of the method, add, you might assume that it’ll add the numbers and return the sum, but it doesn’t hurt to ask a clarifying question such as:

“I assume this method adds the two parameters and returns the sum, right?”

just to make sure it’s not a trick question. In fact, if there’s anything that’s not perfectly clear or obvious to you, ask about it.

If you find yourself becoming self-conscious and you think you’re asking too many dumb questions, just remind the interviewer (and yourself) that you’re a tester and you’re trained to challenge assumptions. Your interviewer will appreciate that.

Handling Unit Test Questions

So for this question, Angie recommends listing every test you can come up with. You don’t see the body of this method, so don’t assume what’s inside of it. Don’t assume everything works beautifully. In fact, assume this method will break in normal operation – so think of normal operations you might run.

Angie listed the following tests from the top of her head:

  • a and b both positive
  • a and b both negative
  • a positive and b negative
  • a negative and b positive
  • a being zero
  • b being zero
  • both a and b being zero
  • the sum of a and b exceeding the memory allocated for an integer

The more tests you can consider, the better you will do here.
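As a sketch, here is how those cases might look as runnable unit tests, using a trivial Python stand-in for the method (in the interview you only see the signature, so the body below is assumed):

```python
def add(a, b):
    # Stand-in implementation; the real method body is hidden in the interview.
    return a + b

# The cases Angie listed, as concrete assertions:
assert add(2, 3) == 5        # a and b both positive
assert add(-2, -3) == -5     # a and b both negative
assert add(2, -3) == -1      # a positive, b negative
assert add(-2, 3) == 1       # a negative, b positive
assert add(0, 7) == 7        # a being zero
assert add(7, 0) == 7        # b being zero
assert add(0, 0) == 0        # both a and b being zero

# Overflow: in the original Java context, a sum past 2**31 - 1 would wrap.
# Python ints are unbounded, so this assertion only documents the case:
assert add(2**31 - 1, 1) == 2**31
```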

While this question is gauging your testing abilities, it’s also testing how much you understand about code. You need to understand the difference between valid UI tests and valid unit tests.

Avoid making a common mistake UI testers make in suggesting impossible coded tests – something like:

“Oh, I want to pass a String as one of the arguments.”

That’s a very typical UI test, to see how the UI behaves when a user enters a String in a number field. However, from a unit test perspective, your tests are directly calling into these methods in code. If you were to try to pass a String as an argument, your test wouldn’t even compile.

Know the difference between valid and invalid unit tests. If you suggest an invalid unit test, it sends a signal to your interviewer that maybe you don’t understand code.

Service Layer Tests

Automation testers must understand how web services behave. An SDET is expected to understand service calls, service responses, and how to test service behavior. Increasingly, a Selenium tester interviewing for an automation engineering role needs to know this as well.

If you know UI but don’t know web services, dig in and learn (there are various sites to check out, including the course “Exploring Service APIs through Test Automation” on Test Automation University). Otherwise, spend a little time brushing up.  Angie says interviewers consistently ask about web services.


Here is a sample question:

Given a user profile, how would you test CRUD operations of a REST API?


CRUD is an acronym for Create, Read, Update and Delete. These map to the REST API methods: Create = POST, Read = GET, Update = PUT, and Delete = DELETE.

In the question, you aren’t given any information about parameters. Should that stop you? Nope. You’ll want your interviewer to know that you can think abstractly – and then be more specific if you’re given more detailed parameters. So, you can ask:

“Do you have a spec, or would you like me to solve this in general terms?”

If they want you to solve this in general terms, you can lean on what you already know to answer this question.

Answering the Service Layer Question

Given that you have already created a user profile someplace, you can leverage your experience to think about this problem in the abstract. Once you consider that most service calls have both required fields and optional fields, you can think about passing different parameters. Let’s start with the “Create”, or REST POST method.


The four POST tests Angie considered included:

  • POST with all required and optional data
  • POST with required data only
  • POST with required data missing
  • POST with invalid data for parameters

You might be able to come up with more than these four basic scenarios, but as a starting point, these are sufficient. It shows you are thinking about normal and abnormal inputs as ways to validate output. If you knew more about the API, you could be more specific.

You can also point out that successful POST calls will result in a status code of 201 for successful creation. You can mention that if you knew more about the API, you could validate more details within the body and header. And this is typically in JSON, XML or plain text format.

Angie put together the tests she considered as basic for all four of the commands:

[Table: Angie’s basic test cases for POST, GET, PUT, and DELETE]

These include error paths as well as happy paths, multiple calls, etc. You can also mention appropriate response codes and checking the body values if you have them. For instance, 200 is the response code for a successful GET, PUT, and DELETE.

Remember to think broadly and abstractly for tests to consider.
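To make the CRUD checks concrete, here is a hedged sketch that simulates the status-code semantics in memory. The resource, required field, and codes are illustrative only; a real test would call the actual API (e.g. with an HTTP client) and follow its spec.

```python
class FakeUserApi:
    """In-memory stand-in for a hypothetical /users REST resource."""

    def __init__(self):
        self._users = {}
        self._next_id = 1

    def post(self, body):
        if "name" not in body:            # hypothetical required field
            return 400, None              # bad request: required data missing
        uid = self._next_id
        self._next_id += 1
        self._users[uid] = body
        return 201, uid                   # 201 = successful creation

    def get(self, uid):
        if uid not in self._users:
            return 404, None
        return 200, self._users[uid]

    def put(self, uid, body):
        if uid not in self._users:
            return 404, None
        self._users[uid] = body
        return 200, body

    def delete(self, uid):
        if uid not in self._users:
            return 404, None
        return 200, self._users.pop(uid)

api = FakeUserApi()
status, uid = api.post({"name": "Ada"})   # POST with required data only
assert status == 201
assert api.post({})[0] == 400             # POST with required data missing
assert api.get(uid)[0] == 200             # GET after create: happy path
assert api.put(uid, {"name": "Lovelace"})[0] == 200
assert api.delete(uid)[0] == 200
assert api.get(uid)[0] == 404             # GET after delete: error path
```

The same pattern (happy path, missing data, missing resource, call-after-delete) generalizes to each of the four methods.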

If these don’t seem obvious to you, it’s worth studying service calls some more. There are some great courses on Test Automation University worth checking out.

UI Automation

So, it may seem like the most trivial UI question you’ll get is something like: use Selenium API to log into an app. It’s a pretty overused question that most interviewers skip by these days.


Over time, interviewers have concluded this question exposes little about the candidate’s knowledge of practical UI testing techniques. If you hire someone on this type of simple question alone, you may discover your employee can do rote work, but not the analytical thinking needed to write real tests.

Instead, Angie suggests that you expect something like what she uses in interviews – showing a real UI and asking you, the candidate, to use the Page Object Model pattern as a reference for building an approach to UI tests. Something like this:

[Screenshot: a Twitter profile page on mobile]

This is a Twitter profile on mobile. Given this page, and the Page Object Model pattern, how would you build the test framework?

This kind of question can expose whether you understand how to approach testing a real world application.

To answer this question, look at the screen and determine which elements your tests might need to interact with and/or verify.

So, on this page, there’s a

  • banner header
  • a profile photo
  • name
  • handle
  • bio
  • location
  • link
  • number following
  • number of followers
  • an Edit Profile button
  • Tweet button, and a
  • navigation footer

There are also four tabs, and each will have its own content and interactions.

Thinking Through a User Interface

The key is thinking about how to organize this into a Profile Page class. This all seems straightforward to organize, but Angie points out that this example can separate the great candidates from others. For instance, if you realize that the “Edit Profile” button only exists for the logged-in user, and it won’t show up on anyone else’s profile – that’s great. More importantly, how do you design your page object class for this case? Do you include the Edit Profile button in the Profile Page class, knowing that sometimes it’s visible, and sometimes it’s not – or do you create a base profile page class that has common elements and subclass it with a My Profile vs. Other People’s profile? Whichever way you go, you need to justify your answer.

Also, think about the tabs: Tweets, Tweets & Replies, Media, and Likes. How would you handle those? That might be a question for you.

In fact, your interviewer might wait for you to see these tabs and come up with an approach. If not, the interviewer would possibly point them out to you as a hint (and ding you a little on the interview) before asking you how you would deal with these.

Just like with the Edit Profile issue, your approach to these could be to make these part of the Profile Page class or to make each their own class accessible from the Profile Page Class. It’s up to you – you just have to be ready to justify your approach.

Finally, there’s the navigation bar at the bottom. What do you know about it? If it turns out it’s on the bottom of every page, would you duplicate its elements in each page class of your app? Or, would you make a base page class that all pages inherit from which includes the navigation bar? Or, would you do something totally different? Again, you need to justify your answer and be able to discuss the tradeoffs and pros and cons of your choice.

Angie’s Approach to the UI Question

Angie says her design would include:

  • a base page class that includes all the elements that appear on all pages
    • the navigation bar at the bottom with appropriate methods
    • any other elements that show up on every page.

This approach allows code re-use and avoids code duplication (which makes the code easier to maintain if functions and layout change between releases).

For each of the tabs, Angie says she would create separate classes and link to them via “click” methods from the Profile page class. The result is a small Profile page class that is easy to maintain if the tabs ever change.

The tabs themselves are components that have their own web elements and corresponding methods. If this were all one page, it would become a mess to manage, as one would have to keep track of which tab was active and which methods were appropriate at that time.

Finally, with the Edit Profile button, Angie recommends comparing your own profile page with another user’s profile page and asking, “Is the Edit Profile button the only difference?” This might be a question for the interviewer. As it turns out, the Edit Profile button only appears on your own page, while another user’s page has lots of buttons of its own – like “Follow”, “Unfollow”, “Block”, “Mute”, etc. This makes the case for having a base profile page class that contains all the common profile page elements, plus two more profile page classes for “My Profile” and “Other Profiles”. Each would contain what is specific to those pages.
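Sketched as a class hierarchy, Angie’s design might look like this (class names and CSS locators are invented for illustration, not taken from her slides):

```python
class BasePage:
    """Elements common to every page of the app."""
    NAV_BAR = "nav.bottom-navigation"   # hypothetical locator

class MediaTab:
    """Each tab gets its own class with its own elements and methods."""
    GRID = "div.media-grid"

class ProfilePage(BasePage):
    """Elements shared by every profile page, yours or anyone else's."""
    BANNER = "img.banner"
    PHOTO = "img.avatar"
    BIO = "p.bio"

    def open_media_tab(self):
        # Clicking a tab hands back that tab's own page-component object.
        return MediaTab()

class MyProfilePage(ProfilePage):
    EDIT_PROFILE = "button.edit-profile"   # only on the logged-in user's page

class OtherProfilePage(ProfilePage):
    FOLLOW = "button.follow"
    BLOCK = "button.block"
    MUTE = "button.mute"
```

Common elements live once in `BasePage` and `ProfilePage` (avoiding duplication), while page-specific buttons live only on the subclass where they actually appear.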

Comfort With the DOM

Another set of interview questions determines whether someone is comfortable with the DOM. It’s such an important part of testing, and yet so many people rely on the locators recommended by their browser’s Developer Tools. This results in flaky tests.

Let’s look at this example.

[Screenshot: a poll widget with “Yes” and “No” choices and a vote button]

How do you build a test that makes a selection and records a vote? Your test should be generic enough that it can select either choice and record a vote.

As an interviewer, Angie shared, she would start with this UI, and then share the HTML that creates this UI:

[Screenshot: the HTML markup behind the poll widget]

From her observations, many test engineers freak out when they see the HTML, because they aren’t sure how to derive reliable locators from it.

Angie finds it disappointing when people say, “Well, I just use Firebug.” When there are no good locators, Firebug can still generate one, but the resulting test will be extremely brittle.

This is what Firebug gives for the second poll option:

/html/body/div/div/div/div/div/div[2]/label/span[2]

Yes, that’s accurate for this build of the app, and it’s brittle. It’s an absolute path from the HTML root to the second option, with no contextual reference: nothing ties it to the application code. If the app changes, this locator will break, and you may not know why. You’ll have to re-record it manually every time. Yuck.

Angie wrote a whole blog post about the perils of relying on recommended locators from Developer Tools.

Going back to the DOM, there’s a span element for “Yes” and a separate span element for “No”. Should you use these? The answer depends on how comfortable you are that “Yes” and “No” appear only in these locations in the page’s HTML. How comfortable are you with that likelihood? (HINT: Not very!)

Going up one level for each, there is a radio class on the radio button element. It turns out you can write an XPath selector that uniquely identifies one choice or the other:

.//span[contains(@class, 'PollXChoice-choice--radio')]/span[text()='%s']

If you want to know more about how to choose web element locators like the XPath selector here, there’s a whole course on Test Automation University (and you can read a course preview here).
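As a sketch of how that parameterized locator might be used, with the `%s` filled in at runtime, here is a small Java helper. The class and method names are ours, and the commented-out driver call is a hypothetical stand-in for the usual Selenium `driver.findElement(By.xpath(...)).click()`:

```java
// Builds the XPath for either poll choice from the parameterized template.
public class PollLocator {
    static final String CHOICE_XPATH =
        ".//span[contains(@class, 'PollXChoice-choice--radio')]/span[text()='%s']";

    // Substitute the visible label ("Yes" or "No") into the template.
    public static String choiceLocator(String label) {
        return String.format(CHOICE_XPATH, label);
    }

    public static void main(String[] args) {
        System.out.println(choiceLocator("Yes"));
        System.out.println(choiceLocator("No"));
        // With a real driver, roughly:
        // driver.findElement(By.xpath(choiceLocator("No"))).click();
    }
}
```

Because the locator anchors on the radio button’s class and the choice’s visible text rather than an absolute path, it keeps working even when the surrounding div structure changes.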

Development Round

Development questions are the scariest ones for test engineers, and Angie says she finds them scary, too. Engineering managers are requiring more development-level interview rounds because of the frequency of test automation project failures. They want their test engineers to be top-notch coders.

You may get as many as five of these questions during your interview.

Here are three things to brush up on:

  • Big O Notation
  • Data Structures
  • Algorithms

Big O Notation

This is huge in coding interviews. It’s a measurement of the performance of the algorithm (by execution time) based on the algorithm’s design.

[Chart: common Big O complexities, from O(1) through O(2^N)]

The point is to know the relationship between an algorithm’s design and its performance as the input set grows. An algorithm that is O(1) is excellent: performance is the same regardless of the size of the input. An algorithm that is O(2^N) is horrible, typically something that runs recursively with multiple calls per level. If this is familiar, awesome. If not, there is a little more detail in this article.
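As a quick illustration, here are three small Java methods whose designs land at different points on that chart. The examples and names are ours, not from the webinar:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Three ways of answering "is x in this collection?" with different Big O costs.
public class BigODemo {
    // O(1): a hash-based lookup takes constant time regardless of size.
    static boolean containsConstant(Set<Integer> set, int x) {
        return set.contains(x);
    }

    // O(N): a linear scan grows with the size of the input.
    static boolean containsLinear(List<Integer> list, int x) {
        for (int item : list) {
            if (item == x) return true;
        }
        return false;
    }

    // O(N^2): nested loops over the same input, e.g. checking for any duplicate pair.
    static boolean hasDuplicate(List<Integer> list) {
        for (int i = 0; i < list.size(); i++) {
            for (int j = i + 1; j < list.size(); j++) {
                if (list.get(i).equals(list.get(j))) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(3, 1, 4, 1, 5);
        System.out.println(containsConstant(new HashSet<>(data), 4)); // true
        System.out.println(containsLinear(data, 9));                  // false
        System.out.println(hasDuplicate(data));                       // true (two 1s)
    }
}
```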

Data Structures

The next thing to worry about is data structures. There are four common data structures you will encounter:

[Diagram: hash table, stack, queue, and linked list data structures]

  • Hash Table – by far the most common in tech interview questions.
  • Stack – shows up in some questions. A stack is a last-in/first-out data structure.
  • Queue – shows up in some questions. A queue is a first-in/first-out data structure.
  • Linked List – doesn’t show up in many questions. Each element points to the next.

Angie says she encountered questions about hashtables frequently, and stacks and queues as well. Linked lists weren’t ones she encountered often, but they did come up.

You can find a more detailed overview of these and other data structures here.
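As a quick refresher on how each structure behaves, here is a sketch using the standard Java collections (the helper names and sample values are ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// One tiny demonstration per data structure, using java.util classes.
public class DataStructuresDemo {
    // Hash table: constant-time lookup by key.
    static int hashTableLookup() {
        Map<String, Integer> votes = new HashMap<>();
        votes.put("yes", 12);
        votes.put("no", 7);
        return votes.get("yes");
    }

    // Stack: last-in/first-out, so the most recently pushed item pops first.
    static String stackPop() {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        return stack.pop();
    }

    // Queue: first-in/first-out, so the oldest item comes out first.
    static String queuePoll() {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("first");
        queue.add("second");
        return queue.poll();
    }

    // Linked list: each node points to the next; adding at the head is cheap.
    static String linkedListHead() {
        LinkedList<String> list = new LinkedList<>(List.of("b", "c"));
        list.addFirst("a");
        return list.getFirst();
    }

    public static void main(String[] args) {
        System.out.println(hashTableLookup()); // 12
        System.out.println(stackPop());        // second
        System.out.println(queuePoll());       // first
        System.out.println(linkedListHead());  // a
    }
}
```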

Sample Questions

Angie pointed out that the interviews she experienced ran in a few different formats. Sometimes, she was asked to use a coding site like hackerrank.com. Other times, she was asked to enter code in an online text editor like Google Docs. In still other cases, she was asked to code on a whiteboard. Angie suggests you ask your recruiter or the hiring manager what to expect for the coding portion.

During her interviews, she mentioned being in a room with developers and automation engineers who were evaluating her answers.

Here are some tips to make things go smoothly.

  1. Very rarely will you get written questions. Most of the time, the questions are spoken. Listen, write down what you think is being asked so the interviewers can see it, and then clarify.
  2. Write down an example to code against. This will help you structure your answer.

Sample Question 1

Angie was given the following question:

“You’re given two arrays and you’re asked to print out any characters that appear in both arrays”.

Using Tip #1, Angie reminds us to think – what does this question mean? Make sure to ask clarifying questions if you’re not sure. So, she asked:

“The array is an array with each element being a single character, correct?” This validates that the array wasn’t an array of strings of arbitrary length, which might make the coding more challenging.

What other questions might you ask?

Approaching Sample Question 1

Using Tip #2, Angie reminds us to think about coding cases – simple cases to define correct behavior, and exception cases that one would expect to encounter.

For the simplest cases in Angie’s example, she wrote down a simple set of arrays:

a1 = {'a', 'b', 'c'}

a2 = {'b', 'c', 'd'}

She said she normally uses a more complex example to help push her coding approach. As she said,

“If you code for the simplest examples, you may miss the tricky examples. I try to put the tricky one up front to force me to think it through.”

She then updated her array example to have an array with duplicate elements:

a1 = {'c', 'a', 'b', 'c'}

a2 = {'b', 'c', 'd'}

Another suggestion: once you have your example, write out the expected result before you code. It gives you a way to check your work. Here, the answer should be:

Expected Result: {'c', 'b'}

Now you have a way to validate your work before you get started. What you don’t want is code that doesn’t produce your expected output; then you have to work backward to figure out what you did wrong, which will leave you flustered.

Thinking Through Approaches

Take a minute to think through Big O notation, algorithms, and data structures to begin. Take enough time to pick an approach, but don’t take too long to start – your interviewers will think you don’t know what you are doing, and you will feel flustered. Pick a solution that comes to mind first.

Say something like, “Okay, let me get my thoughts out here and I can refine them later.” That seems to work well.

Angie took a moment to write out code for her sample question. Her first approach was a nested loop that advanced through array a1 and compared each element to the elements in a2, printing the character whenever an element in a1 matched an element in a2:

[Screenshot: Angie’s nested-loop solution]
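Since the slide itself isn’t reproduced here, the following is our reconstruction of that nested-loop approach in Java, with the variable and method names being ours:

```java
import java.util.ArrayList;
import java.util.List;

// First attempt: compare every element of a1 against every element of a2.
public class CommonCharsNaive {
    static List<Character> common(char[] a1, char[] a2) {
        List<Character> result = new ArrayList<>();
        for (char c1 : a1) {        // O(N) outer loop...
            for (char c2 : a2) {    // ...times an O(N) inner loop = O(N^2)
                if (c1 == c2) {
                    result.add(c1); // flaw: a duplicate in a1 is reported twice
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        char[] a1 = {'c', 'a', 'b', 'c'};
        char[] a2 = {'b', 'c', 'd'};
        System.out.println(common(a1, a2)); // [c, b, c] -- 'c' appears twice
    }
}
```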

When she walked through this nested loop, she realized there was a problem, as the algorithm would match the ‘c’, then the ‘b’, and then the ‘c’ again. This is not the output she wants.

This is also an O(N^2) approach, which is pretty inefficient. There should be a better way.

Catching Any Mistakes First

The important point is for you to catch these issues before the interviewing team does. They want to see that you recognize the problems in your approach so they can guide you to a better outcome.

In this case, spotting the duplicate-values issue, along with knowing that an O(N^2) algorithm is not the most performant, makes it clear that you know something useful. You know Big O complexity, and you know how to look at your solution and find cases that break it, meaning you understand testing as well. It also means you are not afraid to admit when you mess up.

Angie then thought through the best way to improve the solution. She decided that the best approach was to create a hashtable from array a1, then compare array a2 with the a1 hashtable. The code looks like this:

[Screenshot: Angie’s hashtable-based solution]
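Again as a reconstruction rather than Angie’s exact code, here is what that hash-based approach might look like in Java. A `HashSet`, which is hash-backed, stands in for the hashtable she described:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Improved attempt: load a1 into a hash set, then make one pass over a2.
public class CommonCharsHashed {
    static List<Character> common(char[] a1, char[] a2) {
        Set<Character> seen = new HashSet<>();
        for (char c : a1) {          // O(N): build the hash set from a1
            seen.add(c);             // duplicates in a1 collapse to one entry
        }
        List<Character> result = new ArrayList<>();
        for (char c : a2) {          // O(N): single pass over a2
            if (seen.remove(c)) {    // remove() so each match is reported once
                result.add(c);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        char[] a1 = {'c', 'a', 'b', 'c'};
        char[] a2 = {'b', 'c', 'd'};
        System.out.println(common(a1, a2)); // [b, c] -- no duplicates
    }
}
```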

This is more code than the original solution, and it looks less elegant, but it runs in O(N) time, which is far more efficient, and it fixes the duplicate issue. Therefore, it’s a superior answer.

As you walk through issues, your interviewers may be giving you hints or clues about your code. Pay attention to both their hints and their questions.

Sample Question 2

Angie dealt with the next question:

“Compare a String with a List of String elements and count the number of times the String is found in the list.”

Using Tip #1, she asked the questions that came to her mind to clarify the parameters of the problem. Then, she came up with an example and started thinking about code. She came up with this:

[Screenshot: Angie’s loop-based counting solution]
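As a sketch of that straightforward approach (the original was shown on a slide, so the names here are ours), the counting loop might look like:

```java
import java.util.List;

// Walk the list and count the entries equal to the target string.
public class CountMatches {
    static int count(List<String> list, String target) {
        int count = 0;
        for (String item : list) {
            if (item.equals(target)) {  // note: throws NullPointerException if item is null
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> words = List.of("red", "blue", "red", "green");
        System.out.println(count(words, "red")); // 2
    }
}
```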

The solution here works, but everyone else will figure this approach out. How do you stand out as creative?

Standing Out From the Crowd

What happens if the list is null, or an entry in the list is null? Won’t the comparison throw a NullPointerException? How do you handle that?

What’s more, if you know your language’s standard library, you can take advantage of built-in methods. In this instance, there is a built-in Java library method that does this. The modified code looks like this:

[Screenshot: the solution using Java’s built-in library method]
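The built-in method in question is likely `java.util.Collections.frequency`, which counts matching elements and tolerates null entries in the list. A sketch with the null-list guard added (the class and guard behavior are our assumptions):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Library-based version: Collections.frequency does the counting for us.
public class CountMatchesBuiltIn {
    static int count(List<String> list, String target) {
        if (list == null) {
            return 0;  // guard against a null list before touching it
        }
        // frequency() compares with null-safe equals, so null entries are fine.
        return Collections.frequency(list, target);
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("red", null, "red", "green");
        System.out.println(count(words, "red")); // 2
        System.out.println(count(null, "red"));  // 0
    }
}
```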

Here, the code is more concise because it leverages the language library. It also shows that you have thought about the null-list failure mode (and the null-element case is handled by the built-in method).

Studying and Resources

Slide Deck

https://slides.com/angiejones/technical-interviews#/

Angie’s Webinar Video on YouTube

For more information on Applitools


The post Know The Answers, Get The Job appeared first on Automated Visual Testing | Applitools.
