WebdriverIO Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/webdriverio/

Let the Engineers Speak: Selectors in WebdriverIO
https://applitools.com/blog/using-web-selectors-in-webdriverio/
Thu, 30 Mar 2023

Christian Bromann

Earlier this month, Applitools hosted a webinar, Let the Engineers Speak: Selectors, where testing experts discussed one of the most common pain points felt by pretty much anyone who’s ever done web UI testing. In this two-part blog series, I will recap what our experts had to say about locating web elements in their respective testing frameworks: WebdriverIO and Cypress.

Introducing our experts

I’m Pandy Knight, the Automation Panda. I moderated our two testing experts in our discussion of selectors:

  • Christian Bromann (WebdriverIO)
  • Filip Hric (Cypress)

In each article, we compare and contrast the approaches of these two frameworks. The comparison is meant to be collaborative, not competitive: the series showcases the strengths of the modern frameworks and tools available to us.

This first article in our two-part series will define our terms and challenges, as well as recap Christian’s WebdriverIO selectors tutorial and WebdriverIO Q&A. The upcoming second article will recap Filip’s Cypress selectors tutorial and Cypress Q&A.

Defining our terms

Let’s define the terms used in this series so that we share the same understanding as we discuss selectors, locators, and elements:

  • Selectors: A selector is a text-based query used for locating elements. Selectors could be things like CSS selectors, XPaths, or even things like an ID or a class name. In modern frameworks, you can even use things like text selectors to query by the text inside of a button.
  • Locators: A locator is an object in the tool or framework you use that uses the selector for finding the element on the active page.
  • Elements: An element is the entity that exists on the page. An element could be a button, a label, an input field, or a text area. Pretty much anything in HTML that you put on a page is an element.

In one sentence, locators use selectors to find elements. We may sometimes use the terms “selector” and “locator” interchangeably, especially in more modern frameworks.

Challenges of locating web elements

We’re going to talk about selectors, locators, and the pain of trying to get elements the right way. In the simple cases, there may be a button on a page with an ID, and you can just use the ID to get it. But what happens if that element is stuck in a shadow DOM? What if it’s in an iframe? We have to do special things to be able to find these elements.

Locating web elements with WebdriverIO

Christian Bromann demonstrated effective web element locator strategies with WebdriverIO. In this series, we used Trello as the example web app to compare the frameworks. For the purposes of the demo, the app is wrapped in a component test, allowing Christian to work with WebdriverIO in the browser console to more easily show how to find certain elements on the page.

Note: All the following text in this section is based on Christian’s part of the webinar. Christian’s repository is available on GitHub.

Creating a board

First, we want to create a new board, so we write a test that targets the input element the end user sees. From the dev tools, the input element can be found.

There are various ways to query for that input element:

  • We could fetch the element with WebdriverIO using the $ or $$ commands, depending on whether you want to fetch the first matching element on the page or all matching elements. Using a bare input tag as the selector is not advised, as there could be more inputs on the page and the wrong one could be fetched.
  • We could also use a CSS class, such as px-2. Using CSS classes is also not advised, as these classes are usually meant to style an element, not to locate it.
  • We could use a property, such as the name of the input, via an attribute selector. The suggested way to locate a web element in WebdriverIO is to use the accessibility properties of an element.
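To see why a bare input selector is risky, here is a small illustrative sketch with plain objects (not WebdriverIO code): a tag-only query matches more than one element, while a query on a distinguishing attribute stays unique.

```javascript
// Illustrative sketch: matching by tag alone vs. by a distinguishing attribute.
const elements = [
  { tag: 'input', attrs: { name: 'board-title' } },
  { tag: 'input', attrs: { name: 'search' } },
];

const byTag = elements.filter((el) => el.tag === 'input');
const byName = elements.filter((el) => el.attrs.name === 'board-title');

console.log(byTag.length);  // 2 matches: which one did you mean?
console.log(byName.length); // 1 match: unambiguous
```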

WebdriverIO has a simplified way to use accessibility properties. This approach makes for a much better selector, because it reflects what the user actually sees and interacts with. To make things easier, Chrome shows the accessibility name of every element under the Accessibility tab in its dev tools.

Note: WebdriverIO doesn’t use the accessibility API of the browser and instead uses a complex XPath to compute the accessibility name.

After you’ve located the input element, press enter using the keys API.

  it('can create an initial board', async () => {
    await $('aria/Name of your first board').setValue('Let the Engineers Speak')
    await browser.keys(Key.Enter)
    await expect(browser).toHaveUrlContaining('/board/1')
    
    await browser.eyesCheck('Empty board')
  })

Creating a list

Now that our board is created, we want to start a list. First, we inspect the “Add list” web element for the accessibility name, and then use that to click the element.

Set the title of the list, and then press enter using the keys API.

  it('can add a list on the board', async () => {
    await $('aria/Enter list title...').setValue('Talking Points')
    await $('aria/Add list').click()

    /**
     * Select the element with a JS function, as it is much more performant
     * this way (1 round trip vs. n round trips)
     */
    await expect($(() => (
      [...document.querySelectorAll('input')]
        .find((input) => input.value === 'Talking Points')
    ))).toBePresent()
  })

Adding list items

To add a card to the list we created, the button is a different element that is not accessible, as its accessibility name is empty. Another way to approach this is to use a WebdriverIO feature called custom locator strategies, which lets you register your own function for finding elements.

The example strategy queries the page for all divs that have no children and whose text matches the value you provide, and then you can query that custom element like any other. After the custom elements are located, you can assert that three cards have been created as expected. In general, if you don’t have accessibility selectors to work with, you can inject your own JavaScript to run in the browser.

  it('can add a card to the list', async () => {
    await $('aria/Add another card').click()
    await $('aria/Enter a title for this card...').addValue('Selectors')
    await browser.keys(Key.Enter)
    await expect($$('div[data-cy="card"]')).toBeElementsArrayOfSize(1)
    await $('aria/Enter a title for this card...').addValue('Shadow DOM')
    await browser.keys(Key.Enter)
    await $('aria/Enter a title for this card...').addValue('Visual Testing')
    await browser.keys(Key.Enter)
    await expect($$('div[data-cy="card"]')).toBeElementsArrayOfSize(3)

    await browser.eyesCheck('board with items')
  })
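The filtering idea behind the custom strategy, finding divs with no children whose text matches a given value, can be sketched as a plain function over node-like objects (names here are illustrative; in WebdriverIO the real strategy is registered with browser.addLocatorStrategy and runs against the live DOM):

```javascript
// Sketch of the query logic for a custom locator strategy: keep only
// leaf nodes (no children) whose text matches the provided value.
function findLeafNodesByText(nodes, text) {
  return nodes.filter(
    (node) => node.children.length === 0 && node.textContent.trim() === text
  );
}

const nodes = [
  { children: [], textContent: 'Selectors' },
  { children: [{}], textContent: 'Selectors' },      // has a child: skipped
  { children: [], textContent: 'Visual Testing' },
];
console.log(findLeafNodesByText(nodes, 'Selectors').length); // 1
```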

Starring a board

Next, we’ll “star” our board by traversing the DOM using WebdriverIO commands. We first locate the element that has the name of the board title using an attribute selector, then go to its parent element.

From that parent element, we get to the star button by chaining a call to the next element. Now we can click the star and see it change to an enabled state. So with WebdriverIO, you can chain all these element queries and then add your action at the end; WebdriverIO uses a proxy in the background to transform everything and execute the promises one after another. One last thing you can also do is query elements by finding links with certain text.

To summarize Christian’s suggestions for using selectors in WebdriverIO: always try to use the accessibility name of an element; the dev tools will give you that name. If an accessibility name isn’t available, improve the accessibility of your application if possible. And if that’s not an option, there are other tricks, like finding an element with JavaScript through properties that the DOM has.

  it('can star the board', async () => {
    const starBtn = $('aria/Let the Engineers Speak').parentElement().nextElement()
    await starBtn.click()
    await expect(starBtn).toHaveStyle({ color: 'rgba(253,224,71,1)' })
  })
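The chaining used in the snippet above can be illustrated with a toy model (purely illustrative, not WebdriverIO’s actual implementation): each query step returns a new chainable wrapper, and the traversal only runs when the chain is awaited or then-ed.

```javascript
// Toy model of chainable element queries: each step records a traversal,
// and the chain resolves them in order when awaited. WebdriverIO's real
// mechanism is a Proxy; this just conveys the flavor of it.
function chain(value, steps = []) {
  return {
    parentElement: () => chain(value, [...steps, (el) => el.parent]),
    nextElement: () => chain(value, [...steps, (el) => el.next]),
    then: (resolve) => resolve(steps.reduce((el, step) => step(el), value)),
  };
}

const title = { name: 'board-title', parent: { next: { name: 'star-button' } } };
chain(title).parentElement().nextElement().then((el) => console.log(el.name)); // star-button
```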

Accessing a shadow root

Using a basic web HTML timer component as an example, Christian discussed shadow roots. The example has multiple elements and a nested shadow root. Trying to access a button within the shadow root results in an error, because you don’t have access to the shadow root from the main page. WebdriverIO has two ways to deal with this challenge. The first is the deep shadow selector (>>>), which lets you access all of the shadow root elements or filter by attributes of the elements in the shadow root.

A different way to access elements in the shadow DOM is the shadow$ command, which scopes the search to within an element’s shadow root.

    describe('using deep shadow selector (>>>)', () => {
        beforeEach(async () => {
            await browser.url('https://lit.dev/playground/#sample=docs%2Fwhat-is-lit&view-mode=preview')

            const iframe = await $('>>> iframe[title="Project preview"]')
            await browser.waitUntil(async () => (
                (await iframe.getAttribute('src')) !== ''))
            
            await browser.switchToFrame(iframe)
            await browser.waitUntil(async () => (await $('my-timer').getText()) !== '')
        })

        it('should check the timer components to work', async () => {
            for (const customElem of await $$('my-timer')) {
                const originalValue = await customElem.getText()
                await customElem.$('>>> footer').$('span:first-child').click()
                await browser.pause(100) // brief wait (stands in for the demo's sleep helper)
                await customElem.$('>>> footer').$('span:first-child').click()
                await expect(customElem).not.toHaveTextContaining(originalValue)

                await customElem.$('>>> footer').$('span:last-child').click()
                await expect(customElem).toHaveTextContaining(originalValue)
            }
        })
    })
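Conceptually, a deep selector like >>> has to walk the regular DOM and descend into every shadow root it encounters. A minimal sketch of that traversal over plain node-like objects (property names are illustrative; browsers and WebdriverIO do this differently under the hood):

```javascript
// Walk a tree of node-like objects, descending into shadowRoot subtrees,
// and collect nodes matching a predicate: the idea behind a "deep" selector.
function queryDeep(node, matches, results = []) {
  if (matches(node)) results.push(node);
  const subtrees = [...(node.children ?? [])];
  if (node.shadowRoot) subtrees.push(node.shadowRoot);
  for (const child of subtrees) queryDeep(child, matches, results);
  return results;
}

const tree = {
  tag: 'my-timer',
  children: [],
  shadowRoot: {
    tag: '#shadow-root',
    children: [{ tag: 'footer', children: [{ tag: 'span', children: [] }] }],
  },
};
console.log(queryDeep(tree, (n) => n.tag === 'span').length); // 1
```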

Integrating with Applitools

Lastly, Christian shared using Applitools with WebdriverIO. The Applitools Eyes SDK for WebdriverIO is imported to take snapshots of the app as the test suite runs and upload them to the Applitools Eyes server.

if (process.argv.find((arg) => arg.includes('applitools.e2e.ts'))) {
    config.services?.push([EyesService, {
        viewportSize: {width: 1200, height: 800},
        batch: {name: 'WebdriverIO Test'},
        useVisualGrid: true,
        browsersInfo: [
            {width: 1200, height: 800, name: 'chrome'},
            {width: 1200, height: 800, name: 'firefox'}
        ]
    }])
}

With this imported, our tests can be simplified to remove some of the functional assertions, because Applitools does this for you. From the Applitools Eyes Test Manager, you can see the tests have been compared against Firefox and Chrome at the same time, even though only one test was run.

WebdriverIO selectors Q&A

After Christian shared his WebdriverIO demonstration, we turned it over to our Q&A, where Christian responded to questions from the audience.

Using Chrome DevTools

Audience question: Is there documentation on how to run WebdriverIO in DevTools like this to be able to use the browser and $ and $$ commands? This would be very helpful for us for day-to-day test implementation.
Christian’s response: [The demo] is actually not using DevTools at all. It uses ChromeDriver in the background, which is automatically spun up by the test runner using the ChromeDriver service. So there’s no DevTools involved. You can also use the DevTools protocol to automate the browser; the functionality is the same, and WebdriverIO executes the same XPath. There’s a DevTools implementation compliant with the WebDriver protocol, so the WebdriverIO APIs work on both. But you really don’t need DevTools to use all these accessibility selectors to test your application.

Integrating with Chrome Console

Question: How is WebdriverIO integrated with Chrome Console to type await browser.getHtml() etc.?
Christian’s response: With component tests, it’s similar to what other frameworks do. You actually run a website that is generated by WebdriverIO. WebdriverIO injects a couple of JavaScript scripts, and it loads WebdriverIO within that page as well. It then sends commands back to the Node.js world, where they are executed by ChromeDriver, and the responses are sent back to the browser. So basically, WebdriverIO as a framework is injected into the browser to give you access to all the APIs. The actual commands are, however, run by ChromeDriver.

Testing in other local languages

Question: If we’re also testing in other languages (like English, Spanish, French, etc.), wouldn’t using the accessibility text fail? Would the ID not be faster in finding the element as well?
Christian’s response: If you have a website that has multiple languages and a system to inject or maintain those languages, you can use this tool to fetch the accessibility name for the particular language you test the website in. Otherwise, you can test in only one language, on the assumption that behavior would not differ in the others. Or you can create a library or a JSON file that maps the same labels to the accessibility names for each language; then you import that JSON into your test and reference the label to get the right language. So there are ways to work around it. Multiple languages obviously make it more difficult, but for maintaining end-to-end tests, and tests in general, I would still always recommend accessibility names and accessibility labels.
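Christian’s idea of keeping one file of accessibility names per language can be sketched like this (the labels, the Spanish translation, and the helper name are all illustrative):

```javascript
// Accessibility names keyed by language (would normally live in a JSON file).
const labels = {
  en: { boardName: 'Name of your first board' },
  es: { boardName: 'Nombre de tu primer tablero' },
};

// Build a WebdriverIO-style aria selector for the language under test,
// e.g. $(ariaSelector('es', 'boardName')) inside a test.
function ariaSelector(lang, key) {
  return `aria/${labels[lang][key]}`;
}

console.log(ariaSelector('en', 'boardName')); // aria/Name of your first board
```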

Using div.board or .board

Question: Do you have a stance/opinion on div.board versus .board for CSS selectors?
Christian’s response: I would always prefer the first one; it’s just more specific. Anyone could add another board class name to an input element or whatnot. Being very specific is usually the best way to go, in my recommendation.

Tracking API call execution

Question: How can we find out if an API call has executed when we click on an element?
Christian’s response: WebdriverIO has mocking capabilities, so you can check when a certain URL pattern has been called by the browser. Unfortunately, that’s currently based on DevTools, because the WebDriver protocol doesn’t support it. We are working with the browser vendors on the new WebDriver BiDi protocol to get mocking of URLs, stubbing, and all that to land in the browsers, so it’ll be available not only in Chrome, but also Firefox, Safari, and so on.

Working with iframes

Question: Is it possible to select elements from an iframe and work with an iframe like normal?

Christian’s response: An iframe is almost similar to a shadow DOM, except that an iframe carries much more browser context than a shadow DOM does. WebdriverIO has an open issue to implement the deep selector for iframes as well: you would type the three characters (>>>) followed by any CSS path, and that would make finding elements in an iframe very easy. But you can always switch to an iframe and then query within it. So it’s a little more difficult with iframes, but it’s doable.

Learn more

So that covers our expert’s takeaways on locating web elements using WebdriverIO. In the next article in our two-part series, I’ll recap Filip’s tutorial using Cypress, as well as the Cypress Q&A with the audience.

If you want to learn more about any of these tools and frameworks, like Cypress and WebdriverIO, or specifically about web element locator strategies, be sure to check out Test Automation University. All the courses and content are free. You can also learn more about visual testing with Applitools in the Visual Testing learning path.
Be sure to register for the upcoming Let the Engineers Speak webinar series installment on Test Maintainability coming in May. Engineers Maaret Pyhäjärvi from Selenium and Ed Manlove from Robot Framework will be discussing test maintenance and test maintainability with a live Q&A.

Let the Engineers Speak! Part 5: Audience Q&A
https://applitools.com/blog/let-the-engineers-speak-part-5-audience-qa/
Wed, 11 Jan 2023

Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak! from Applitools

In this final part of our Cypress, Playwright, Selenium, or WebdriverIO? Let The Engineers Speak recap series, we will cover the audience Q&A, sharing the most popular questions from the audience and the answers our experts gave. Be sure to read our previous post.

The experts

I’m Andrew Knight – the Automation Panda – and I moderated this panel. Here were the panelists and the frameworks they represented:

  • Gleb Bahmutov (Cypress) – Senior Director of Engineering at Mercari US
  • Carter Capocaccia (Cypress) – Senior Engineering Manager – Quality Automation at Hilton
  • Tally Barak (Playwright) – Software Architect at YOOBIC
  • Steve Hernandez (Selenium) – Software Engineer in Test at Q2
  • Jose Morales (WebdriverIO) – Automation Architect at Domino’s

The discussion

Andy Knight (moderator):

So our first question comes from Jonathan Nathan.

Can Playwright or Cypress handle multiple browser tabs? What do engineers in these tools do for Azure authentication or target new links?

Gleb Bahmutov (Cypress):

[It’s a] very specific kind of interesting question, right? Multiple tabs and Azure authentication. You must have both. I think Playwright is [a] better tool here, because it supports controlling multiple browser tabs right out of the box. So I would go with that.

Carter Capocaccia (Cypress):

Can I share a hack for this real quick?

Andy Knight (moderator):

Sure.

Carter Capocaccia (Cypress):

So I’m going to share the way that you don’t have to deal with multiple browser tabs in Cypress. You change the DOM from target="_blank" to target="_self", and then the link just opens in the same window. So if you have to use Cypress [and] you need to get around multiple browser tabs, you can do DOM manipulation with Cypress.
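Carter’s workaround can be sketched as plain logic over attribute maps (illustrative only; in Cypress the rewrite would typically be done by invoking jQuery’s attr on the selected links):

```javascript
// Sketch of the "no second tab" workaround: rewrite target="_blank" links
// so they open in the same window. Plain objects stand in for DOM anchors.
function retargetLinks(anchors) {
  for (const a of anchors) {
    if (a.target === '_blank') a.target = '_self';
  }
  return anchors;
}

const links = [{ href: '/a', target: '_blank' }, { href: '/b', target: '' }];
console.log(retargetLinks(links).map((a) => a.target)); // [ '_self', '' ]
```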

Andy Knight (moderator):

Man, it sounds kind of hacky, though.

Carter Capocaccia (Cypress):

No, I mean it’s absolutely right. Yeah, it’s hacky.

Andy Knight (moderator):

So I got a question then. Is that like a feature request anyone has lifted up for Cypress? Or is that just because of the way the Cypress app itself is designed that that’s just a non-starter?

Gleb Bahmutov (Cypress):

It’s an open issue. The Cypress team says it’s not a priority. Because think about two tabs, right? Two tabs usually represent communication between one user and another through a backend. So you always want to stop that and control it, so you don’t actually need two windows: you control one window and communicate with API calls in the back. At least that’s the Cypress team’s opinion. So we might not see it any time soon.

ICYMI: Cypress ambassador Filip Hric shared What’s New in Cypress 12, which includes an update on the feature just discussed.

Andy Knight (moderator):

Yeah. I know in Playwright like, Gleb, like you said, it is really easy and nice because you have a browser instance. Then in that instance, you have multiple browser contexts and then, from each browser context, you’re going to have multiple pages. So yeah, I love it. Love it.

Tally Barak (Playwright):

And the better thing is that you can have one of them in a mobile size, emulating a mobile device, and the other one in web. So if you want to run a test that switches between two different users, each one is incognito and a complete browser context; they don’t share local storage or anything. So basically, you can run any test that you want this way. And it also works really fast, because it’s not launching a whole browser; it’s just launching a browser context, which is much, much faster.

Andy Knight (moderator):

Awesome. Alright, let’s move on to our next question here. This is from Sundakar.

Many of the customers I worked with are preferring open source. And do you think Applitools will have its place in this open source market?

Can I answer this one? Because I work for Applitools.

For this one, I think absolutely yes. I mean, all of the Applitools Eyes SDKs are open source. What Applitools provides is not just the mechanism for visual testing, but also the platform. We work with all the open source tools and frameworks, you name it. So absolutely, I would say there’s a place here. Let me move on to the next question.

Gleb Bahmutov (Cypress):

Andy, before you move on, can I add? My computer science PhD is in computer vision and image processing, so it’s all about comparing images, teaching them, and so on. I would not run my own visual testing service, right? My goal is to make sure I’m testing [the] web application. Keeping images, comparing them, showing the diffs, updating them: it’s such a hassle that it’s not worth my time. Just pay for a service like Applitools and move on with your life.

Andy Knight (moderator):

Awesome. Thank you. Okay. Let me pull the next question here. This is from Daniel.

I heard a lot that Playwright is still a new framework with a small community, even though it was released in January of 2020, but I never heard that about WebdriverIO. As far as I know, Playwright is older.

I don’t think that is true. I’d have to double check.

Tally Barak (Playwright):

No, I don’t think [so].

Andy Knight (moderator):

Is Playwright still considered new?

Tally Barak (Playwright):

It’s newer than the others. But it’s growing really fast. I mean, because I’m the [Playwright] OG, I remember the time when I would mention Playwright and no one had any idea what I’m talking about. It was very, very new. This is not the case anymore. I mean, there’s still, of course, people don’t really hear about it, but the community has grown a lot. I think [it has] over 40,000 stars on GitHub. The Slack channel has almost 5,000 participants or something. So the community is growing, Playwright is growing really, really nicely. And you’re welcome to join.

Andy Knight (moderator):

Okay, here’s another question from Ryan Barnes.

Do y’all integrate automated tests with a test case management tool? If so, which ones?

Gleb Bahmutov (Cypress):

Yes. TestRail.

Andy Knight (moderator):

TestRail. Okay.

Gleb Bahmutov (Cypress):

Because we are not the only testing tool, right? Across the organization there are other teams, other tools, and manual testing. So we need a central testing portal.

Tally Barak (Playwright):

No, we don’t. We are not perfect, and we should. Any good ideas are welcome.

Carter Capocaccia (Cypress):

So we don’t have a formalized test management tool. But if anybody has ever used any kind of Atlassian tooling: Jira has the idea of a test set ticket, with individual tests inside of the test set. You can define user flows inside of there. So I guess you can consider that a test management tool. It’s a little bit less featured than something like TestRail. Actually, it’s a lot less featured than something like TestRail. But that’s how we stay organized. We basically tie our tests to a ticket. That’s how we can manage: what is this ticket testing? What is it supposed to be testing? Where is our source of truth?

Andy Knight (moderator):

I guess I could launch a somewhat controversial question here, but I’ll do it rhetorically not to answer. But if you have a test automation solution, do you really need to have it export results to a test case management tool? Or can you just use the reports it gives you? We’ll leave that for offline. So the next one on the list here is from Sindhuja.

We are trying to convert our test scripts from Protractor.

Okay. Wow, that’s a blast from the past.

We are doing [a] proof of concept in WebdriverIO, and we have issues with running in Firefox with WebdriverIO. Are there any notes on cross-browser runs in WebdriverIO?

Jose, got any insights for us here?

Jose Morales (WebdriverIO):

Yeah, absolutely. One thing that I really love about WebdriverIO is the easy configuration. When you create a project in WebdriverIO, you have a configuration file where you put everything about capabilities and services, and which browser you want to use. And you can easily add your own configuration. For example, if you want to run in Firefox or in Edge, or you want to run on Sauce Labs, you have several options.
So it is really easy to add configuration for Firefox. You only need to specify the browser in the capabilities section, along with the version and special settings like the viewport size. If you want to know how to do that, it’s very easy: you can go to my home page, where there [are] examples showing how to build something from scratch, and you can easily see where to add that particular configuration. And I’m going to share with you some repositories on GitHub where you can see some examples [of] how to do it.
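As a concrete illustration of Jose’s point, a minimal capabilities section for running the same suite in Chrome and Firefox might look like this (a hedged sketch; the exact fields depend on your WebdriverIO version and installed drivers):

```javascript
// wdio.conf.js (sketch): one capability entry per target browser.
exports.config = {
  specs: ['./test/specs/**/*.js'],
  capabilities: [
    { browserName: 'chrome' },
    {
      browserName: 'firefox',
      'moz:firefoxOptions': { args: ['-headless'] }, // optional headless run
    },
  ],
  // services, reporters, etc. omitted for brevity
};
```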

Andy Knight (moderator):

Thank you so much, Jose. Oh, here we go.

Which framework would be the best platform to test both Android and iOS apps?

I know most of us [are] focused on web UI, so here’s a curveball: mobile.

Gleb Bahmutov (Cypress):

I can say that at Mercari US, we used Appium for a long time, but for new React Native projects, we went with Detox.

Carter Capocaccia (Cypress):

Yeah, Detox is the only one that I’ve ever used as well. And it was really, really good. I found no reason to switch. Gleb, can you correct me if I’m wrong on this? I think Detox was originally made by Wix? Is it Wix, the company?

Gleb Bahmutov (Cypress):

That’s correct. Yes.

Carter Capocaccia (Cypress):

Yes, so Wix used to make Detox. I think it’s still maintained by them, but it was an in-house tool they open sourced, and now it’s really good.

Andy Knight (moderator):

Awesome. Cool. I hadn’t heard. I’ll have to look it up. Alrighty, well I think that’s about all the time we have for question and answer today. I want to say thank you to everyone for attending. Thank you to all of our speakers here on the panel.

Conclusion

This article concludes our Cypress, Playwright, Selenium, or WebdriverIO? Let The Engineers Speak recap series. We got to hear from engineers at Mercari, YOOBIC, Hilton, Q2, and Domino’s about how their teams build their test automation projects and why they made their framework decisions. Our panelists also shared insights into advantages and disadvantages they’ve encountered in their test automation frameworks. If you missed any previous part of the series, be sure to check them out.

Test Automation Video Winter Roundup: September – December 2022
https://applitools.com/blog/test-automation-video-winter-roundup-september-december-2022/
Mon, 09 Jan 2023

Applitools minions in winter

Check out the latest test automation videos from Applitools.

We hope you got to take time to rest, unplug, and spend time with your loved ones to finish out 2022 with gratitude. I have been incredibly appreciative of the learning opportunities and personal growth that 2022 offered. Reflecting on our past quarter here at Applitools, we’ve curated our latest videos from some amazing speakers. If you missed any videos while away on holiday or finishing off tasks for the year, we’ve gathered the highlights for you in one spot.

ICYMI: Back in November, Andrew Knight (a.k.a. the Automation Panda) shared the top ten Test Automation University courses.

Cypress vs. Playwright: The Rematch

One of our most popular series is Let the Code Speak, where we compare testing frameworks with real examples. In our rematch, Let the Code Speak: Cypress vs. Playwright, Andrew Knight and Filip Hric dive deeper into how Cypress and Playwright work in practical projects. Quality Engineer Beth Marshall moderates this battle of testing frameworks while Andy and Filip compare their respective frameworks in the areas of developer experience, finding selectors, reporting, and more.

Video preview of Cypress vs Playwright: The Rematch webinar

Automating Testing in a Component Library

Visually testing components allows teams to find bugs earlier, across a variety of browsers and viewports, by testing reused components in isolation. Software Engineering Manager David Lindley and Senior Software Engineer Ben Hudson joined us last year to detail how Vodafone introduced Applitools into its workflow to automate visual component testing. They also share the challenges and improvements they saw when automating their component testing.

Video preview of Automating Testing in a Component Library webinar

When to Shift Left, Move to Centre, and Go Right in Testing

Quality isn’t limited to the end of the development process, so testing should be kept in mind long before your app is built. Quality Advocate Millan Kaul offers actionable strategies and answers to questions about how to approach testing during different development phases and when you should or shouldn’t automate. Millan also shares real examples of how to do performance and security testing.

Video preview of When to Shift Left, Move Centre, and Go Right in Testing webinar

You, Me, and Accessibility: Empathy and Human-Centered Design Thinking

Inclusive design makes it easier for customers with varying needs and devices to use your product. Accessibility Advocate and Crema Test Engineer Erin Hess talks about the principles of accessible design, how empathy empowers teams and end users, and how to make accessibility more approachable to teams that are newer to it. This webinar is helpful for all team members, whether you’re a designer, developer, tester, product owner, or customer advocate.

Video preview of You, Me, and Accessibility webinar

Erin also shared a recap along with the audience poll results in a follow-up blog post.

Future of Testing October 2022

Our October Future of Testing event was full of experts from SenseIT, Studylog, Meta, This Dot, EVERSANA, EVERFI, LAB Group, and our own speakers from Applitools. We covered test automation topics across ROI measurement, accessibility, testing at scale, and more. Andrew Knight, Director of Test Automation University, concludes the event with eight testing convictions inspired by Ukiyo-e Japanese woodblock prints. Check out the full Future of Testing October 2022 event library for all of the sessions.

Video preview of Future of Testing keynote

Skills and Strategies for New Test Managers

Being a good Test Manager is about more than just choosing the right tools for your team. EasyJet Test Manager Laveena Ramchandani shares what she has learned in her experience on how to succeed in QA leadership. Some of Laveena’s strategies include how to create a culture that values feedback and communication. This webinar is great for anyone looking to become a Test Manager or for anyone who has newly started the role.

Video preview of Skills and Strategies for New Test Managers

Ensuring a Reliable Digital Experience This Black Friday

With so much data and so many combinations of state, digital shopping experiences can be challenging to test. Senior Director of Product Marketing Dan Giordano talks about how to test your eCommerce application to prioritize coverage on the most important parts of your application. He also shares some common shopper personas to help you start putting together your own user scenarios. The live demo shows how AI-powered automated visual testing can help retail businesses in the areas of visual regression testing, accessibility testing, and multi-baseline testing for A/B experiments.

Video preview of Ensuring a Reliable Digital Experience webinar

Dan gave a recap and went a little deeper into eCommerce testing in a follow-up blog post.

Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak!

Our popular Let the Code Speak webinar series focused primarily on differences in syntax and features, but it didn’t really cover how these frameworks hold up in the long term. In our new Let the Engineers Speak webinar, we spoke with a panel of engineers from Mercari US, YOOBIC, Hilton, Q2, and Domino’s about how they use Cypress, Playwright, Selenium, and WebdriverIO in their day-to-day operations. Andrew Knight moderated as our panelists discussed what challenges they faced and if they ever switched from one framework to another. The webinar gives a great view into the factors that go into deciding what tool is right for the project.

Video preview of Let the Engineers Speak webinar

More on the way in 2023!

We’ve got even more great test automation content coming this year. Be sure to visit our upcoming events page to see what we have lined up.

Check out our on-demand video library for all of our past videos. If you have any favorite videos from this list or from 2022, you can let us know @Applitools. Happy testing!

The post Test Automation Video Winter Roundup: September – December 2022 appeared first on Automated Visual Testing | Applitools.

Let the Engineers Speak! Part 4: Changing Frameworks https://applitools.com/blog/let-the-engineers-speak-part-4-changing-frameworks/ Thu, 05 Jan 2023 22:51:57 +0000 https://applitools.com/?p=45523 In part 4 of our Cypress, Playwright, Selenium, or WebdriverIO? Let The Engineers Speak recap series, we will recap the stories our panelists have about changing their test frameworks –...

The post Let the Engineers Speak! Part 4: Changing Frameworks appeared first on Automated Visual Testing | Applitools.

Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak! from Applitools

In part 4 of our Cypress, Playwright, Selenium, or WebdriverIO? Let The Engineers Speak recap series, we will recap the stories our panelists have about changing their test frameworks – or not! Be sure to read our previous post where our panelists talked about their favorite test integrations.

The experts

I’m Andrew Knight – the Automation Panda – and I moderated this panel. Here were the panelists and the frameworks they represented:

  • Gleb Bahmutov (Cypress) – Senior Director of Engineering at Mercari US
  • Carter Capocaccia (Cypress) – Senior Engineering Manager – Quality Automation at Hilton
  • Tally Barak (Playwright) – Software Architect at YOOBIC
  • Steve Hernandez (Selenium) – Software Engineer in Test at Q2
  • Jose Morales (WebdriverIO) – Automation Architect at Domino’s

The discussion

Andy Knight (moderator):

So I’m going to ask another popcorn question here. And this one again comes with an audience poll. So if you’re in the audience, hop over to the polls and take a look here. Have you ever considered switching your test tool or framework? And if so, why? If not, why not?

Poll results of frameworks participants wish they had in their projects

Carter Capocaccia (Cypress):

Yeah, we considered it and did it. You know, I think you have to acknowledge it at a certain point that maybe what you’re doing isn’t working or the tool doesn’t fit your application or your architecture. And it’s a really hard decision to make, right? Because typically, by the time you get to that point, you’ve invested tens, hundreds, if not thousands of hours into a framework or an architecture. And making that switch is not an easy task.

Andy Knight (moderator):

What did you switch from and to?

Carter Capocaccia (Cypress):

Yeah, so [Selenium] WebDriver to Cypress. And you can already kind of see that architecturally they’re very, very different. But yeah, it was not an easy task, not an easy undertaking, but I think we had to admit to ourselves that WebDriver did not fit our needs. It didn’t operate in the way we wanted it to operate. At least for us, it was set up in a way that did not match our development cycles, did not match our UI devs’ patterns and processes.

So when we talk about, you know, kind of making a team of test engineers have additional help, well, one way to do that is [to align] your test automation practices with your UI dev practices. And so if we can have a JavaScript test framework that runs in a web app, that acts like a web app, that feels like a web app, well, we’ve now just added an entire team of UI developers that we’ve kind of made them testers, but we haven’t told them that yet. That’s kind of how that works. You adopt these people by just aligning with their tooling and all of a sudden they realize like, hey, I can do this too.

Andy Knight (moderator):

Mm-hmm. So I want to ask another question. Since you’re saying you used Selenium WebDriver before, in what language did you automate your test? Was that in JavaScript, just like first test, or was that in Java or Python or something else?

Carter Capocaccia (Cypress):

Yep, so it’s JavaScript. But when we look [at the] architecture of the app, right? So the architecture of WebDriver is very different than Cypress. [With] Cypress, you write a JavaScript function just like you write any other function. And most of the time, it’s going to operate just like you expect it to. I think that’s where, instead of them having to learn a WebDriver framework and what the execution order of things was and how the configurations were, Cypress is just pretty apparent in how it functions. When you look at like, well, how is the Cypress test executed? It uses Mocha under the hood as the test runner. So that’s just functions that are just kind of inside of describe and context and it blocks. So I think anybody that doesn’t know anything about testing but is a JavaScript developer can hop right into Cypress, look at it, and say, yeah, I think I know what’s going on here. And probably within the first 5 to 10 minutes, write an assertion or write a selector.
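
Carter’s point that a Cypress spec is “just functions” can be illustrated with a toy test runner. The sketch below is not Mocha or Cypress itself, just a few lines showing how `describe` and `it` boil down to registering callbacks that a runner later executes:

```javascript
// Toy illustration of the describe/it pattern: each call just registers
// a named callback, and a runner executes them later.
const suites = [];

function describe(name, fn) {
  suites.push({ name, tests: [] });
  fn(); // the body runs immediately and registers its "it" blocks
}

function it(name, fn) {
  suites[suites.length - 1].tests.push({ name, fn });
}

function run() {
  const results = [];
  for (const suite of suites) {
    for (const test of suite.tests) {
      try {
        test.fn();
        results.push(`${suite.name} > ${test.name}: pass`);
      } catch (e) {
        results.push(`${suite.name} > ${test.name}: fail`);
      }
    }
  }
  return results;
}
```

A real runner adds hooks, retries, and reporting on top, but the mental model a JavaScript developer brings to it is exactly this.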

Andy Knight (moderator):

Awesome. That’s really cool. And so was that something that you and your team or org did recently? Was this like a couple years ago that you made this transition?

Carter Capocaccia (Cypress):

You know, I think I’d be lying to you if I told you the transition ever ends, right? I think it starts and you get to a point where you’ve scaled and you’ve gotten to where you’ve written a ton of tests on it, and then that’s great. But of course, it’s kind of the bell curve of testing. Like it starts really, really slow because you’re moving off the old framework, then you start to scale really quickly. You peak, and then at that point, you have these kind of outliers on the other side of the bell that are just running legacy applications or they just don’t want to move off of it. Or for whatever reason, they can’t move off of it. Let’s say they’re running like Node 10 still. It’s like okay, well, maybe we don’t want to invest the time there to move them off of it.

Yeah. So it’s a long journey to do a task like this. So it’s never done. By the time we’re finished, you know, there’s gonna be a new testing tool or Cypress will have a new version or something else will be new in there. You know, pipelines will [be through] some AI network at this point. Who knows? But I’m sure we’ll have to start the next task.

Andy Knight (moderator):

Sure. So how about others in our group of panelist speakers here? Have y’all ever transitioned from one framework or tool to another? Or in your current project, did you establish it and you were just super happy with what you chose? And tell us the reasons why.

Tally Barak (Playwright):

Yes. So, yeah, we also switched from WebdriverIO as I said at the beginning. We switched to Playwright, obviously. It was not that painful, to be honest, maybe because we kept the Cucumber side. So we had [fewer] scenarios than what we have today. So it was less to move. I think it went over a period [of] a few months. Unlike you, Carter, we actually moved completely away from WebdriverIO. There is no WebdriverIO, everything is with Playwright. But again, probably a smaller organization, so we were able to do that. And are we happy? I mean, [we are] thankful every single day for this switch.

Gleb Bahmutov (Cypress):

I can comment. At Mercari, we did switch from WebDriver in Python and mabl. So both were because the maintenance of tests and debugging of failed tests was [a] very hard problem, and everyone agreed that there is a problem. The transition was, again, faster than we expected, just because the previous tests did not work very well. So at some point we were like, well, we have 50 failing daily tests, right? It’s not like we can keep them. It’s not a sunken cost. It’s a cost of not working. So we have to recreate the test. But it was not that bad. So for us, the cost of debugging and maintaining the previous tool just became overwhelming, and we decided, let’s just do something else.

Jose Morales (WebdriverIO):

In my experience, we were using Selenium with JavaScript as a previous framework, and then we moved to WebdriverIO with JavaScript and Applitools. Because in Domino’s, we have different kinds of products. We have another product that is called Next Generation of the stores. We’re using a reactive application there. This is also a website application. And we’re evaluating other tools for automation such as UiPath. And one thing that I like about UiPath is [it’s] a low-coding solution. That means you don’t need to really be a very proficient developer in order to create the scenarios or automate with UiPath, because it only [requires] drags and drops [to] create the scenarios. And it’s very easy to automate. And on top of that, with UiPath, it’s a very enterprise solution.

We have different kinds of components like an orchestrator, we have a test manager, we have integration with Jenkins. We have robots that execute our test cases. So it’s an enterprise solution that we implemented for that particular project. And we have the same results as WebdriverIO with Applitools. We’re happy with the two solutions – the two frameworks – and we’re considering to remove WebdriverIO and Applitools and use UiPath for the main website for Domino’s. So I think in the future, the low-coding solution for automation is a way to go in the future. So definitely UiPath is a good candidate for us for our automation in the website.

Andy Knight (moderator):

How about you, Steve? I know you’re right now on a Selenium-based test project. Have you and your team thought about either moving to something else, or do you feel pretty good about the whole Selenium-ness?

Steve Hernandez (Selenium):

I think clearly there appears to be major speed-ups that you get from the JavaScript-based solutions. And I know you’ve evangelized a little bit about Playwright and then I’m hearing good things from others. I mean, we have a mature solution. We have 98% pass rates. You know, some of the limitation is more what our application can handle when we blast it with tests. I don’t know if your developers after each local build are tests running locally using some of these JavaScripts solutions. But I think one of the things that is appealing is the speed. And with Boa Constrictor 3, the roadmap is to – well, it has been modularized. So you can bring your own Rusty API client or potentially swap in Selenium and swap in something like a module for Playwright. So we would get that speed up but keep the same API that we’re using. So we’re toying with it, but if we can pull that off in Boa Constrictor, that would take care of that for us.

Andy Knight (moderator):

Yeah, man, definitely. So, I mean, it sounds like a big trend from a lot of folks has been, you know, step away from things like Selenium WebDriver to [a] more modern framework. You know, because Selenium WebDriver is pretty much just a low-level browser automator, which is still great. It’s still fine. It’s still awesome. But I mean, one of the things I personally love about Playwright – as well as things like Cypress – is that it gives you that nice feature richness around it to not only help you run your tests, but also help develop your tests.

Audience Q&A

So we had a fantastic conversation amongst each of our panel speakers about the considerations they had as their test projects grew and changed. We found that most of our panelists have had to make changes to their test frameworks used as their projects’ needs changed. In the next article, we’ll cover the most popular questions raised by our audience during the webinar.

The post Let the Engineers Speak! Part 4: Changing Frameworks appeared first on Automated Visual Testing | Applitools.

Let the Engineers Speak! Part 3: Favorite Test Integrations https://applitools.com/blog/let-the-engineers-speak-part-3-favorite-test-integrations/ Wed, 28 Dec 2022 16:06:28 +0000 https://applitools.com/?p=45276 In part 3 of our Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak recap series, we will recap our panelists’ favorite integrations in their test automation projects, as well...

The post Let the Engineers Speak! Part 3: Favorite Test Integrations appeared first on Automated Visual Testing | Applitools.

Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak! from Applitools

In part 3 of our Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak recap series, we will recap our panelists’ favorite integrations in their test automation projects, as well as share our audience’s favorite integrations. Be sure to read our previous post, where our panelists discussed their biggest challenges in test automation.

The experts

I’m Andrew Knight – the Automation Panda – and I moderated this panel. Here were the panelists and the frameworks they represented:

  • Gleb Bahmutov (Cypress) – Senior Director of Engineering at Mercari US
  • Carter Capocaccia (Cypress) – Senior Engineering Manager – Quality Automation at Hilton
  • Tally Barak (Playwright) – Software Architect at YOOBIC
  • Steve Hernandez (Selenium) – Software Engineer in Test at Q2
  • Jose Morales (WebdriverIO) – Automation Architect at Domino’s

The discussion

Andy Knight (moderator):

Alrighty. So I mean, this, our conversation is now kind of going in so many good directions. This is awesome. We keep talking about integrations and how our frameworks and our tools can connect with other things. So I guess my next question I would like to ask as a big popcorn question would be: what are some of your favorite integrations with your test projects? I’m talking about things like CI hooks, special reports that y’all have, maybe things for visual testing, accessibility testing.

And not only is this a question for the panel speakers, but also for the audience. We have another Slido word cloud, so head over to the Slido tab. We’d love to hear what your favorite integrations are for your test projects.

But I’d like to now direct this question back to our panel and ask them to pick one. Tell me why you love it so much. Tell me how y’all use it and what value y’all get out of it.

Poll results from over 60 participants in the webinar that depict the audience's favorite test integrations

Carter Capocaccia (Cypress):

Okay, I’ll hop in here. I think for me with Cypress, it’s just under the hood, it’s just JavaScript, right? So most anywhere is able to hook into that immediately. So when we talk about integrations, let’s just use GitHub for example here. If you wanted a GitHub Actions pipeline, you can just go ahead and start doing that immediately. And Cypress out of the box will provide you a Docker image to go ahead and use right inside of your GitHub Actions pipeline.

So for me, I guess I’m not going to focus on any one specific tool for integration here. I think the way that Cypress kind of makes itself agnostic to all external tools is what makes it really, really nice. You know, it doesn’t say, hey, you’ve gotta use us with this particular, like an Azure pipeline or any kind of other DevOps tool. It says, hey, under the hood, all we are is a JavaScript framework. If you want to execute our JavaScript inside of some kind of Docker image, create your own Docker image. Gleb knows all about this. He publishes a new Docker image like every day it seems.

And we have, you know, the ability to take all that, take the results, and the base of what comes out of Cypress is just JSON. We’ll shoot that over to an open-source dashboard. So there’s Report Portal out there if you wanted to take those results, and then put them into a portal. I see on here [the poll results] Allure reports. Well, okay, so you take it, you just put it into any kind of integration there.

So I think for me personally – I actually wrote a whole blog post about this where I was able to take Vercel, GitHub Actions, and Cypress and set up a CI/CD pipeline completely for free with automated deployments, preview environments, automated testing, the whole nine yards. And so it made it to where for my personal little website, all I’ve got to do is submit a pull request. It kicks off all my tests. As soon as it’s done, it merges it, and then it’s immediately deployed with Vercel. So that was a really cool thing. So if I have to name a tool, I’m gonna say Vercel and GitHub Actions.
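
For reference, the kind of pipeline Carter describes can be a single workflow file. This is a minimal hedged sketch, not Hilton’s actual setup: it assumes the app starts with `npm start` on port 3000, and it uses the official `cypress-io/github-action` to install dependencies and run the specs.

```yaml
# .github/workflows/e2e.yml – minimal Cypress run on every pull request
name: e2e
on: pull_request

jobs:
  cypress:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Official Cypress action: installs and caches dependencies, then runs specs
      - uses: cypress-io/github-action@v6
        with:
          start: npm start                 # assumption: app boots with "npm start"
          wait-on: "http://localhost:3000" # assumption: local dev server port
```

From there, marking the job as a required status check is what turns the test run into a merge gate.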

Gleb Bahmutov (Cypress):

Can I second that? GitHub Actions are amazing. Honestly, I went through the whole, you know, self-running Jenkins to TeamCity, Buildkite, CircleCI. GitHub Actions are the best. It’s amazing how much you can do with just JavaScript [in] CI. And I was the one writing all integrations for the Cypress team at the time.

I want to bring one more integration. I’ve written it. It’s [an] integration between Cypress and LaunchDarkly feature flags environment, because during the test, sometimes you want to test an experiment behind a feature flag. And for us now, it’s an open-source plugin. It’s like this, here’s my user, here are the experiment values I want to enable. Let’s see this functionality, and it’s working.

Tally Barak (Playwright):

[My] answer is GitHub Actions as well. I don’t have that much experience with different tools. I used to work with Travis and a bit with CircleCI. But GitHub Actions is just answering everything we need – and Playwright obviously – because it just requires Node for the JavaScript version. You just install it with Docker, and you are good to go and do whatever you want with the outputs and so on. So yeah, GitHub Actions, and I’m actually interested in the other one that you mentioned about the feature flag. That’s worth looking into.

And of course, Cucumber. I also have a repository that is a starter [repo] for working with Cucumber and Playwright. I’ll post it on the channel if anyone is interested to try it out. If you want to stick with the Gherkin and the BDD-style tests. Otherwise, go with the Playwright test runner.

Andy Knight (moderator):

Tally, I want to plus one of both things you said there. First of all, everyone knows that I love BDD, I love Cucumber, I love Gherkin, so yes. But also, I want to go back to what everyone is harping about GitHub Actions. Y’all in the audience, if you haven’t tried GitHub Actions or used [them] yet, definitely check it out. It is very straightforward, low-to-no cost CI baked into your repo. I mean, like anytime I’ve done open-source projects or example projects, just putting [in] a GitHub Actions [workflow] to automatically run your tests from a trigger, oh, it’s beautiful. It can be a little intimidating at first because you’ve got that YAML structure you’ve got to learn. But I have found it very easy to pick up. I feel like personally – and speakers back me up on this – I never sat down and read a manual on how to write a GitHub Action. I just kind of looked at GitHub Actions and copied them and tweaked them, and it worked.

Tally Barak:

And then you went to the marketplace and looked for some.

Andy Knight (moderator):

Yes, yes, yes. You know, and so you can find things that are out there already. And I’ve done it not just with Cypress and Playwright, but literally like every single test framework out there that I’ve touched. You know, if it’s like a Selenium thing, if it’s a Playwright thing, a Cypress thing. Any kind of test: unit, integration, end-to-end, you name it. Like, it’s awesome. But anyway, I just wanted to plus one both of those, and now I’ll yield to the rest of the speakers’ favored integrations and why.

Steve Hernandez (Selenium):

Can I give a plug for LivingDoc? We use a lot of the SpecFlow stuff in our stack, as you saw, like the runner. There’s a great SpecFlow+ Runner report that gets generated and can show how each of the threads are performing in your test suite so you can look at performance. But one of the things just really simply is the Azure Log Analytics ingestion API. So right in your test pipeline, you can just throw up a PowerShell script or some console application and send that data that you would have in your CI tool over into something like log analytics. I suppose there’s an equivalent for Amazon. But then you can correlate, you know, your telemetry from your application along with test data and start to see where performance issues are introduced.

But I guess coming back to my plug for LivingDoc is, I feel in a lot of cases our automated tests are actually the only documentation for some parts of the application as far as how they’re supposed to work because you’ve captured the requirements in a test. So they’re kind of like living documentation and LivingDoc literally is that. And it’s something that you can share with the business and the developers to see what you have for coverage and it’s human readable. I think that’s really important.

Carter Capocaccia (Cypress):

Well, now with GPT3, everything is human readable.

Andy Knight (moderator):

So I wanted to ask about visual testing and specifically with Applitools. I saw and heard that some of y’all had used visual testing with Applitools in your projects. In line with talking about integrations that you love, how do y’all use Applitools if you use it, and what value do you find you get out of it? Go ahead.

Carter Capocaccia (Cypress):

Yes, I want to bring up kind of why Applitools is a different breed of product when it comes to visual testing, though. So you really have two flavors of visual testing. One is a bitmap comparison where you take a screenshot of an image and it takes it and basically breaks it down into computer-readable code and it compares a known standard against the new version. If there’s a certain percentage of divergence between the old version and the new version, it flags it. You can kind of already tell, well, if all I know about is just basically ones and zeros and comparing the results here, it’s not really all that intelligent. Because I could have a piece of the page, let’s say it’s a headline. The headline updates every five minutes. Well, I don’t have time to just continually update this baseline screenshot.

Well, what a tool like Applitools does is it actually takes your screenshots and puts them through computer vision. And with computer vision you can now provide a lot more instruction to what do you want to compare? What’s important to you? Do you want to ignore certain parts of the page? When I compare X to Y, do I really consider, let’s say, a carousel where the images are being delivered by a CMS and they’re being changed on a daily basis? Do I really care that yesterday it was a picture of let’s say somebody’s front porch and today it’s a picture of a tree? So there’s a lot more intelligence that’s built into a tool like Applitools than there is with standard bitmap comparison. I think it takes the right kind of customer for Applitools. And when they do have that right kind of customer, it’s a pretty huge game changer, considering that it allows you to make a lot more informed decisions about the health of the visual appearance of your application versus, like I said, a bitmap comparison.

Basically, if you want to compare the two, one is analog, the other is digital. The analog is the bitmap comparison. The digital way is Applitools. If we’re being honest with ourselves, the Applitools way is probably the best and right way to do this. I just know that there are a lot of teams that are making smaller-scale applications that can get away with assuming higher risk in using a tool like a bitmap comparison tool.
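
To make the “analog” flavor concrete, here is a hedged sketch of threshold-based bitmap comparison (the pixel arrays and the 1% threshold are illustrative, not any specific tool’s algorithm). It also shows why a changing headline forces constant baseline updates: any pixel churn counts toward the threshold, intentional or not.

```javascript
// Naive bitmap diff: compare a baseline screenshot to a new one pixel by
// pixel and flag the page if the differing fraction crosses a threshold.
function pixelDiffRatio(baseline, candidate) {
  // Treat a size mismatch as a total difference.
  if (baseline.length !== candidate.length) return 1;
  let diff = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== candidate[i]) diff++;
  }
  return diff / baseline.length;
}

function isVisualRegression(baseline, candidate, threshold = 0.01) {
  return pixelDiffRatio(baseline, candidate) > threshold;
}
```

A rotating carousel image or a live headline pushes the ratio past the threshold every run, which is exactly the false-positive problem the computer-vision approach is designed to avoid.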

Jose Morales (WebdriverIO):

I would like to add that we have been using Applitools for six months or so, and it was a great addition to our framework. It was reducing a lot of code because we were coding a lot of assertions. So for example, if you go to the Domino’s website, you have the store locator, and then the store locator basically, based on your zip code, automatically populates the city, region, and the store closest to you. There is a lot of functionality there. And I remember that we were validating every single text field, every single button, every single drop-down menu for the content. In English and in Spanish and the different languages. So for us it was very helpful to use Applitools because with WebdriverIO, we can simulate the location from the customer and then move the store locator for that particular zip code and tell Applitools, hey, take a screenshot for this one.

And in one single shot, we are validating all the colors, buttons, text fields, content, and we were very careful to see what is worth it for this scenario, right? So in all the framework, what we are doing is okay, let’s test this particular functionality for this scenario. Let’s use WebdriverIO to move the customer until this particular checkpoint and then take a screenshot for that particular state in our application. And in one single shot, we can validate this in all the devices. In Chrome, Firefox, Edge. Pixel, iPhones. So with that particular action, and because we are reducing a lot of assertions, we are writing less code, it’s easy to maintain, it’s easy to extend, and our test cases are more accurate and smaller. And I think for us Applitools was a great addition and I fell in love with Applitools and currently we’re trying to incorporate the mobile validations, meaning iOS and Android as well.

Did anyone ever change their framework?

We heard popular tools mentioned like GitHub Actions and Cucumber, as well as some lesser known tools. A few of our panelists are also using Applitools Eyes in their projects to provide faster test coverage of their frontend. In the next article, we’ll cover the answers these experts gave when we posed a sticky question: “Did you ever consider changing your test framework?”

The post Let the Engineers Speak! Part 3: Favorite Test Integrations appeared first on Automated Visual Testing | Applitools.

Let the Engineers Speak! Part 2: The Biggest Challenges https://applitools.com/blog/let-the-engineers-speak-part-2-the-biggest-challenges/ Thu, 22 Dec 2022 14:20:00 +0000 https://applitools.com/?p=45175 In part 2 of our Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak recap series, we will recap what our panelists shared as the biggest challenges they currently face...

The post Let the Engineers Speak! Part 2: The Biggest Challenges appeared first on Automated Visual Testing | Applitools.

Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak! from Applitools

In part 2 of our Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak recap series, we will recap what our panelists shared as the biggest challenges they currently face in testing and automation. We’ll also cover some of the biggest challenges our webinar audience members have faced. Be sure to read our previous post, where we introduced the experts and their test automation projects.

The experts

I’m Andrew Knight – the Automation Panda – and I moderated this panel. Here were the panelists and the frameworks they represented:

  • Gleb Bahmutov (Cypress) – Senior Director of Engineering at Mercari US
  • Carter Capocaccia (Cypress) – Senior Engineering Manager – Quality Automation at Hilton
  • Tally Barak (Playwright) – Software Architect at YOOBIC
  • Steve Hernandez (Selenium) – Software Engineer in Test at Q2
  • Jose Morales (WebdriverIO) – Automation Architect at Domino’s

The discussion

Carter Capocaccia (Cypress):

So I think one of the biggest challenges that I personally faced was that a lot of the times – I think initially when you’re starting out with automation – the tech problems that you face are where all the focus gets put on. And people lose sight that there’s an entire component to process around this. Meaning that, well, once you’ve written, you know, your hundred tests, you’ve got your architecture stable, you’ve figured out how to seed your database and [connect] your APIs and all the stuff that you need to do to make your tests run. Well, what do you do with those results? How does it actually impact your app? And I think a lot of people lose sight of that.

Of course, with the advent of CI/CD pipelines, it becomes a little bit easier because now you can basically prevent merges or prevent releases based on the results of these tests. But I think that gets forgotten until we’re way down the line. So, you know, you’ve written all your tests and you come back to your dev teams and you say, “Hey, I’m gonna start failing your pipelines now, and you’re not gonna be able to merge, you’re not gonna be able to deploy.” And then you immediately have resentment against this process, right? So I think maybe the marketing of why you’re doing what you’re doing, how it’s gonna impact the process, and what is the positive result that comes out of this is equally as important as tackling those technical hurdles.

Tally Barak (Playwright):

Yeah I think I will go somehow along those lines. Besides some technical difficulties that I already shared about the shadow DOM and trivial actions like dragging, again, some of the things that we encountered. The biggest challenge was creating a culture of testing in the organization, making sure that people take care of the test. Because of some flakiness, we didn’t want to prevent merges. I mean, [for] a long time, we built more and more into that, but at the beginning it was like, “People, this is your helper. This is something to help you write better code.” And it didn’t come very easily, I would say.

Poll results from over 60 participants in the webinar that depict the biggest challenges while testing

Gleb Bahmutov (Cypress):

Can I comment on this slide [of the] poll? I see selectors as one of the challenges and it’s [a] pretty big one, apparently, for a lot of people. I just want to say what we’ve done at Mercari US. So, sometimes [when] you’re writing an end-to-end test, you come to a part of a page and there [are] some dynamic classes, no good selector specific for testing. So what we do in our test [is] we select however we can – by text, by position, you name it. But we do add a to-do comment. So in our test repo, we have lots of to-do [comments] like “needs better test selector”.

And we have [an] automated script that actually counts them and puts it in a report. So we know how many to-dos for selectors we have in total. And one of our objectives as a team is to drive that number down. So we [are] adding selectors, and it’s kind of a good small task to do, but having good selectors is key. So don’t just kind of assume someone else will add it for you. Have it as one of your automated priorities.
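A script like the one Gleb describes can be just a few lines of Node. Here is a minimal sketch – the marker text and the sample specs are assumptions for illustration, not Mercari’s actual code:

```javascript
// Minimal sketch: count "needs better test selector" to-do comments in
// test source. A real version would walk the spec directory with
// fs.readdirSync and report the total in CI.
const MARKER = 'needs better test selector';

function countSelectorTodos(source) {
  // Number of times the marker appears in one file's source text.
  return source.split(MARKER).length - 1;
}

// Example: two specs, one of which still carries a selector to-do.
const specs = [
  "cy.get('[data-testid=\"buy\"]').click();",
  "// TODO: needs better test selector\ncy.contains('Buy now').click();",
];
const total = specs.reduce((sum, s) => sum + countSelectorTodos(s), 0);
console.log(`Selector to-dos remaining: ${total}`); // → 1
```

Running this in CI and tracking the number over time gives the team a concrete metric to drive down, as Gleb suggests.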

Andy Knight (moderator):

That’s really cool to actually have it kind of scan through and kind of pinpoint. So it’s almost like you’re self-linting. Wow, that’s awesome. If I may ask another question off of this topic of the challenges specifically around selectors. What other challenges have y’all faced with selectors or innovative solutions to try to get on top of them? Like, does anybody use smart selectors or visual selectors or anything at all?

Carter Capocaccia (Cypress):

Can I share my really staunch opinion on this? I personally don’t allow merge requests that have XPath selectors in them. I disallow it. It is a forbidden practice at our organization. We actually start with – we have a selector hierarchy that we try and follow. And I believe it’s Testing Library. And I know it’s a really generic name for everybody in this room, but genuinely, it’s called Testing Library. And they have a selector pattern, basically a hierarchical recommendation. I think it says, you know, accessibility selectors first, and then at the very bottom, it even goes down to classes and then IDs. But it starts with accessibility selectors, and then content, and then, you know, some other pieces that you should go through. And nowhere in that list is XPath.

I know that a lot of folks feel like XPaths are necessary, but I think that you’ve gotta find a way around them. I’m sorry if that hurts you there, Pandy. But yeah, in my opinion, I think XPaths are sometimes used as a crutch. And in today’s world where you have components that are conditionally rendered or potentially even, like for situations that I’m working in, it depends on what part of the country you’re in or what part of the world you’re in. Your page may render completely differently than another. So therefore I cannot rely on a DOM structure not to be shifting at a moment’s notice, or there may be a CMS tied into this so they can add a component to the page. It’s gonna completely change the DOM structure. So we have to do better than that, and, you know, tests more and more nowadays are being kept right alongside the application code.

And I think that, speaking back to what I was saying about kind of changing the process around what you do. Well, if I’m a test engineer, if I really can’t get to a DOM element any other way, why can’t I go into that component and add a data test ID attribute? I should have that ability as a test engineer, just like a UI dev should have the ability to come into my test and modify that. So that’s I think a way you can kind of shift culture when you even think about something as simple as DOM selectors. But yeah, the long-winded answer to say, I personally think that if you want to have maintainable selectors, avoid XPaths.

Jose Morales (WebdriverIO):

I would like to share my opinion in two ways. One is the challenges that we have in Domino’s, and another one is the selectors.

The challenge is the maintenance of the test cases. And basically what we are following is a good process. And with a good process, I mean we have a team that is reviewing the executions and we do analysis about the results. We figure out if we need to fix the test cases or change them because the feature is now different – or what the root cause of the problem is, right? We are following good practices in terms of development, like code reviews, pull requests. We’re doing peer programming. If some automation engineer is facing some kind of challenging problem or blocker, they request working sessions, and we work together, right, as a team to figure out what the problem is and what possible solutions we can try. And it’s very active.

And commenting about the selectors, we encourage using IDs. And in Domino’s we have a custom ID – a UUID – that all the web developers add as an identifier for us to use. So there is a close communication between the developers and the automation engineers to work together as a team on how we can help to [better test] our product. That’s another thing that we normally do. And we respect the page-object design pattern as much as possible. That is what we are doing in Domino’s.

Tally Barak (Playwright):

The slide was actually showing three items that were shining there: flakiness, maintainability, and selectors. And I think they’re all related to some extent. Because if your selectors are too accurate, like XPath with all the DOM structure, then you have a maintainability problem and you might have flakiness. And I have to say, if you are talking about the tools, this is probably the one thing that Playwright has solved in just a way that makes writing tests really easy. I was hearing the other people here and they went like, “Oh, we already forgot about this problem after working with it for two and a half years.” We used to work with XPaths because that was the only thing that could pierce the shadow DOM.

Now we are working with plain selectors. We’re keeping them to the minimum and mostly the user-facing selectors. So things like text, usually. Maybe something like a header component and then a text. And we will know how to address the very exact selector. And that reduces the maintenance, because this is the part that is least likely to change. And because of the way Playwright is working with the auto-waiting, it’ll just wait; we don’t need to put in some timeout. It’ll know to wait, up to a certain timeout, for the component to show up.

You can write very small and very concise selectors. And the one other thing is that they have the strict mode, which means that if it finds two elements that correspond to the same selector, it’ll break the test. And one of the problems that we’ve seen previously is that it would go with the first element. So it might click a different button or something else, and the test will continue and it’ll only fail like three steps or five steps ahead. And then it’s really hard to understand what the problem is. Playwright here will just tell you. Of course, you can change [whether] you want it or not. It’s an option. But if you do that, you get a lot less flakiness in your tests, because the first time it encounters the element, it’ll tell you, okay, “I have two here. You’re probably on the wrong path. Go ahead and fix your test.” So all these three things – the maintenance, the flakiness, the selectors – are, I think, among the greatest advantages of Playwright in these terms.
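The strict resolution behavior Tally describes can be illustrated in a few lines of plain JavaScript. This is an illustration of the idea only, not Playwright’s actual implementation:

```javascript
// Illustration of strict locator resolution (not Playwright source code):
// a selector that matches more than one element fails immediately,
// instead of silently acting on the first match and failing steps later.
function resolveStrict(matches, selector) {
  if (matches.length === 0) {
    throw new Error(`No element matches "${selector}"`);
  }
  if (matches.length > 1) {
    throw new Error(
      `Strict mode violation: "${selector}" resolved to ${matches.length} elements`
    );
  }
  return matches[0];
}

// Example: two buttons share the same text, so the test fails right at
// the selector rather than several steps later.
try {
  resolveStrict(['<button>Save</button>', '<button>Save</button>'], 'text=Save');
} catch (err) {
  console.log(err.message);
  // → Strict mode violation: "text=Save" resolved to 2 elements
}
```

The payoff is exactly what Tally points out: the failure surfaces at the ambiguous selector, where it is cheap to diagnose.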

Andy Knight (moderator):

Awesome. So I know we’ve spent a lot of time talking about selectors and DOM and modeling the page. What design patterns do y’all use for or within your test projects? I know, I think it was Jose mentioned like they use page objects and they stick to it very rigidly – or maybe rigidly is not the best word, but they stick with page objects. Are y’all using page objects or are y’all using Screenplay? I know Cypress has those action or command methods, I forget. What are y’all using to help make your interactions using those selectors better?

Steve Hernandez (Selenium):

Screenplay.

Gleb Bahmutov (Cypress):

We don’t try to make the tests super DRY, right? Eventually, if we see a duplication of code, we will write a custom command. For example, we have custom commands for selecting by data test ID. So selecting by data test ID or ARIA label is really easy – we use custom commands. We almost never write page objects. We prefer little reusable functions, right? We need to create a user? Here’s a function, creates a user for you.

There is one advantage, though, because I like writing Cypress plugins: we can abstract something that is really specific and just move it to [a] third-party public open source plugin, but we’ll reuse it in our project. So we’ve been doing that a lot. I just looked – we have more than 20 Cypress plugins in our test repo, and 17 of them are ones I’ve written. So we take that abstraction and we make it and just have it by itself, so it becomes more powerful and reusable. But small functions and custom commands for us.
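A custom command like the one Gleb mentions often boils down to a one-line selector builder. A minimal sketch follows – the command name and attribute are assumptions for illustration, and the Cypress registration is shown as a comment since it needs the Cypress runtime:

```javascript
// Hypothetical helper behind a getByTestId-style custom command.
// The attribute name (data-testid) and command name are assumptions.
function byTestId(id) {
  return `[data-testid="${id}"]`;
}

// In a Cypress support file, this might be registered as (sketch only):
// Cypress.Commands.add('getByTestId', (id) => cy.get(byTestId(id)));

console.log(byTestId('checkout-button')); // → [data-testid="checkout-button"]
```

Keeping the selector-building logic in one place means that if the team later renames the attribute, only this helper changes, not every test.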

Tally Barak (Playwright):

We are working with Cucumber. So this is basically our structure. Each step has the automation behind the scenes, and then obviously you can reuse, because the step is reusable in any of your tests. And we do sometimes try to build them as a bit more, you know, sophisticated kind of code, so you can reuse them in different places.

Jose Morales (WebdriverIO):

[About] using other design patterns – another thing that really helped us. And I saw a very interesting question about how do you integrate a backend with a JavaScript frontend? And that’s a very interesting question because for us, we try to consume APIs as well. So for example, in Domino’s, we have an application that is an interface for configuration for our stores. So every manager in our stores can configure certain parameters for the store, right? Like the store phone number. So they can go to a specific website and then change the store’s number, for example. So if you think about the testing perspective, you can go to the UI and change a particular store number and then hit an endpoint to refresh the backend for that particular store configuration.

And then you go to another backend to hit the store profile and then get the store profile with a request and read the JSON back, right? So you don’t need to interact with the UI all the time, but you can also integrate with the backend as much as possible. So that’s a very clean strategy, because it helps you to test faster and test better and reduce the flakiness.

And another thing that I recommend is to use the plugins, right? So explore the plugins that you can get in the industry. For example, if you are interested in measuring the performance, you can use Lighthouse with WebdriverIO. So you can – with the standards in the industry – measure what is the performance for your web application, right? So we don’t need to reinvent the wheel. And that’s my recommendation.
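A Lighthouse integration like the one Jose describes is typically wired up through WebdriverIO’s devtools service. A rough sketch of what that might look like – the package and method names come from that service’s documentation, so verify them against your WebdriverIO version:

```javascript
// wdio.conf.js – sketch of enabling Lighthouse-style performance audits
// via the @wdio/devtools-service package (an assumption; check the docs
// for your WebdriverIO version before relying on these names).
exports.config = {
  // ...
  services: ['devtools'],
};

// In a test, audits could then be enabled and read back, e.g.:
// await browser.enablePerformanceAudits();
// await browser.url('https://www.example.com/');
// const score = await browser.getPerformanceScore(); // 0..1, like Lighthouse
```

This keeps performance measurement inside the existing test suite instead of requiring a separate tool, which is the “don’t reinvent the wheel” point Jose makes.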

Andy Knight (moderator):

Awesome. Awesome. So I know, Steve, right off the bat, you had mentioned the Screenplay pattern. I wanted to come back to that. What have you found has been good about using Screenplay over something like raw calls or page objects or something else?

Steve Hernandez (Selenium):

Personally, I haven’t used a lot of raw calls. I’ve had the luxury of using Boa Constrictor. It kind of wraps the Selenium stuff in a warm hug, and it handles a lot of the timing issues that you might run into. Boa Constrictor has a lot of built-in – I hear you guys talk about plugins – interactions, they’re called. So they can be questions that you’ll ask of an API endpoint. They can be questions that you’ll ask of the page in the form of, “I’m gonna scrape this column here that I use over and over to compare to something in the future.”

So we’ll have our Gherkin steps, and essentially backing each of those step definitions are these different reusable components or questions or interactions. That works very well with the level of parallelism we’re doing. You know, if you’re firing off 50 threads at a time, or 80 or more, we need stuff to be fast and not step on each other’s toes. So we write the interactions in a way where they’re very atomic and they’re not going to affect something else that’s happening on another thread. As far as page objects, I haven’t used those a whole lot.

What about integrations?

We’ve heard some common themes from both our panelists and our webinar viewers around their test automation challenges – selectors, maintenance, and flakiness. Building a culture of testing that embraces and adheres to the process is another challenge our engineers have faced in their projects. Our panelists each touched a little bit on how they approach these challenges in this recap. In the next article, we’ll cover the panelists’ favorite integrations for their test automation projects.

The post Let the Engineers Speak! Part 2: The Biggest Challenges appeared first on Automated Visual Testing | Applitools.

]]>
Let the Engineers Speak! Part 1: Real-World Projects https://applitools.com/blog/let-the-engineers-speak-part-1-real-world-projects/ Mon, 19 Dec 2022 16:27:39 +0000 https://applitools.com/?p=45037 Over the past two years, Applitools has hosted a series of code battles called Let the Code Speak. We pitted the most popular web testing frameworks against each other through...

The post Let the Engineers Speak! Part 1: Real-World Projects appeared first on Automated Visual Testing | Applitools.

]]>
Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak! from Applitools

Over the past two years, Applitools has hosted a series of code battles called Let the Code Speak. We pitted the most popular web testing frameworks against each other through rounds of code challenges, and we invited the audience to vote for which handled them best. Now, to cap off the series, we hosted a panel of test automation experts to share their real-world experiences working with these web test frameworks. We called it Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak!

Here were our panelists and the frameworks they represented:

  • Gleb Bahmutov (Cypress) – Senior Director of Engineering at Mercari US
  • Carter Capocaccia (Cypress) – Senior Engineering Manager – Quality Automation at Hilton
  • Tally Barak (Playwright) – Software Architect at YOOBIC
  • Steve Hernandez (Selenium) – Software Engineer in Test at Q2
  • Jose Morales (WebdriverIO) – Automation Architect at Domino’s

I’m Andrew Knight – the Automation Panda – and I moderated this panel.

In this blog series, we will recap what these experts had to say about the biggest challenges they face, their favorite integrations, and if they have ever considered changing their test framework. In this article, we’ll kick off the series by introducing each speaker with an overview of their projects in their own words.

Gleb’s Cypress Project at Mercari US

Gleb:

I’m in charge of the web automation group; we test our web application. Mercari US is an online marketplace where you can list an item, buy items, ship [items], and so on. I joined about a year and a half [ago], and my goal was to make it work so it doesn’t break.

There were previous attempts to automate web application testing that were not very successful, and they hired me specifically because I brought Cypress experience. Before that, I worked with Cypress for years, so I’m kind of partial. So you can kind of guess which framework I chose to implement end-to-end tests.

We currently run about 700 end-to-end tests every day multiple times per feature with each pull request. On average, our flake – you know, how many tests kind of pass or fail per test run – is probably half a percent, maybe below that. So we actually invested a lot of time in making sure it’s all green and working correctly. So that’s my story.

Carter’s Cypress Project at Hilton

Carter:

Nice to meet you all today and thank you for the opportunity to speak with everybody. So I need to lead off with a little bit of a disclaimer here. Hilton does not necessarily endorse or use any of the products you may speak of today, and all the opinions that I’m going to talk about are my own.

So Hilton is a large corporation. We operate 7,000 properties in 122 countries. So when we talk about some of the concerns that I face as a quality automation engineer, we’re talking about accessibility, localization, analytics, [and] a variety of application architectures. We might have some that are server-side rendered. We might have some that are CMS-driven pages, and even multiple types of CMS. We also have, you know, things that filter out bot traffic. We may use mock data, live data. We have availability engines, pricing engines. So things get pretty complex, and therefore our strategy has to follow this complexity in a way that makes things simple for us to interpret results and act upon them quickly.

To give you guys an idea of the scale here, we typically are much like Gleb in this regard, where every single one of our changes runs all of our tests. So we are constantly executing tests all of the time. I think this year – I was just running these reports the other day to figure out, you know, review stuff – I think I pulled up that we had so far this year run about 1.2 million tests on our applications. That’s quite a large quantity. So, you know, if anything, that speaks to our architecture – [it] has some stability built into it – the fact that we were even able to execute that quantity of tests.

But yeah, I’m a Cypress user. I’ve been a Cypress user now for about three years. Much like Gleb, I got faced with the challenge of, “Well, this only runs in Chrome. It’s a young company, young team.” And yeah, I think, you know, we stuck with it, and obviously today Cypress has grown with us and we’re really, really happy to still be using the tool. We were a Webdriver shop before, so I’m really interested to see what Jose is doing. And there probably is still some Webdriver code hanging around here at Hilton. But yeah, I’m really excited to be here today, so thank you for the opportunity.

Tally’s Playwright Project at YOOBIC

Tally:

I work at YOOBIC. YOOBIC is a digital platform for frontline employees – that’s a way to say people like sellers in shops. And they use our application in their daily job – for the tasks they need to do, for their training, for their communication needs, and so on.

For this application to be useful, we have a very rich UI that we are using. And for that, we have decided to use web components. We’re using extensive JS and building web components. This is important because when we tried to automate the testing, this was one of the biggest barriers that we had to cross.

We used to work with WebdriverIO, which was good to some extent. But the thing that really was problematic for us was the shadow DOM support – and shadow root. So I got into the habit, once every fortnight or month, of going on the web and just Googling “end-to-end testing shadow DOM support” – and voila! In one of these Googlings, it actually worked out and I stumbled upon Playwright – and it said, “Yes, we have shadow DOM support.” So I went like, “Okay, I have to try that.” And I just looked at the documentation. It was like shadow DOM was no biggie. It was just like built/baked in naturally. I started testing it, and it worked like magic.

That was almost two and a half years ago. Playwright had a lot less than what it has today. It was still on version 1.2 or something. At the time, it was only the browser automation side; it didn’t have the test runner. And our tests were already written in Cucumber, and Cucumber is a BDD-style kind of testing. So this is the way we actually describe our functional test. So user selects mission, and then user starts, and so on – this is the test. We already had that from WebdriverIO, and basically we just changed the automation underneath to work with Playwright instead of Selenium WebdriverIO.

And this is still how we work. We are using Cucumber for running those tests. We use Playwright as the browser automation. We have our tested app and we are testing against server and the database. We have about 200 scenarios. We run them on every PR. It takes about 20 or 25 minutes with sharding, but this is because they are really long tests.

It also solved a lot of other problems that we had previously, like the fact that you can open multiple browsers. Previously what we were doing, if you wanted to test multiple users, we would log in, log out, and log in with the other user. And here, all of a sudden, we could just spawn multiple windows and do that.

And one [additional] thing that I want to mention besides the functional test, this end-to-end test, we also run component tests. This is different; we are using the Playwright Test Runner, in fact. This is because we work with Angular, and Playwright component testing is not supported with Angular yet. We are using Storybook, and we test with Playwright on top of Storybook. So yeah, that’s my story.

Steve’s Selenium Project at Q2

Steve:

I actually started as a full stack developer at Precision Lender. We’re part of Q2, which is a big FinTech company. If you had priced any skyscrapers or went to get a loan for a skyscraper, there’s a very good chance that our software was used recently – I’m sure you’re all doing that.

Working on the front end, it’s a JavaScript-based application using Knockout but increasingly Vue. I worked over there for about two and a half years at the company, and I saw the cool stuff that Andy and our other Andy were doing. And I was excited about it. I wanted to move over [and] see what it was all about. And I believe strongly in the power of testing to not only improve the quality of the application, but also just the software development lifecycle and the whole experience for everyone and the developers.

Q2 test automation architecture diagram from the Let the Engineers Speak webinar

So what you guys have in front of you is a typical CI pipeline that we use. We use Git and we use TeamCity. And on a typical day, we run about 20,000 tests. We have about 6,000 unique tests, and upon one commit to our main development branch, we kick off about 1,500 tests, and it takes about 20 to 25 minutes, depending on errors and test failures. On that check-in to TeamCity, we use a stack with SpecFlow, and SpecFlow+ Runner kicks off our tests.

As I said before, we’re using Boa Constrictor underneath with the Screenplay Pattern. It acts as our actor, which is controlling Selenium WebDriver. And then Selenium WebDriver farms out the test work in parallel to the Selenium Grid and our little nodes – it’s an AKS cluster – to run through all of our tests and spit out a whole bunch of great documentation, one of my favorites being the SpecFlow+ Runner report, and other good logging things like Azure Log Analytics.

Jose’s WebdriverIO Project at Domino’s

Jose:

Thank you from all [of us at] Domino’s. Well, Domino’s is a big company, right? We have 4 billion dollars in sales every single year. And – only in the Super Bowl – we sell 2 million pizzas just in the United States. So we have a big product and our operations are very critical. So I decided to use WebdriverIO for our product. And this is the architecture that we created in Domino’s.

Domino’s test automation architecture diagram from the Let the Engineers Speak webinar

In Domino’s, we have different environments. We have QA, we have pre-production, and we have production. And for selecting different kinds of environments, we have different kinds of configuration. And in that configuration, Jenkins reads the configuration for that particular environment, we get the code from Stash, and then we are using virtual machines – it could be inside of the Domino’s network or it could be in Sauce Labs. Then we have a kind of report. And one thing that I really love about WebdriverIO is the fact that it’s open source, and it has a huge set of libraries.

So in Domino’s, we’re using WebdriverIO in three projects – the Domino’s website, the Domino’s re-architecture, and the Domino’s call center application. [In fact,] we are using WebdriverIO with two different kinds of reports, and we’re using [it] with two different kinds of BDD frameworks. One is, of course, Cucumber; that is the most used in the industry. And another one is with Mocha. And in both, we have very successful results. We have close to 200 test scenarios right now. And we are executing every single week. And we are generating reports for those specific scenarios. We have a 90% pass rate across all scenarios.

And we’re using Applitools as an integration for discovering issues – visual issues and visual AI validations. And basically in all those scenarios, we’re validating the results in all those browsers. We’re normally using Chrome, Safari, Firefox, Edge, and of course, we’re using mobile as well, in the flavors of Android and iOS.

And the cool thing is that it really depends on what our customers are using, right? Most of the customers that we have are using Chrome, and we are seeing what our market is, what our customers are using, and we are validating our products for those particular customers.

So I am really happy with WebdriverIO, because I was able to use it for other problems that we were facing. For example, in Malaysia we have some stores as well, and in the previous year we had a problem with performance in Malaysia. So I was able to integrate a plugin – Lighthouse from Google – in order to measure the performance for that particular market in Malaysia using WebdriverIO. So far for us [at] Domino’s, WebdriverIO is doing great. And I am really happy with the solution and the results and the framework that we selected. And that’s everything from my end. Thank you so much.

What are their biggest challenges?

So we’ve gotten an overview of the projects these test experts are using and a little bit of their histories with their test framework of choice. This will help set a foundation to get to know our engineers and understand where they’re coming from as we continue our Cypress, Playwright, Selenium, or WebdriverIO? Let The Engineers Speak series. In the next article, we’ll cover the biggest challenges these experts face in their current test automation projects.

The post Let the Engineers Speak! Part 1: Real-World Projects appeared first on Automated Visual Testing | Applitools.

]]>
7 Awesome Test Automation University Courses Not To Miss https://applitools.com/blog/7-awesome-test-automation-university-courses-not-to-miss/ Tue, 06 Dec 2022 20:10:19 +0000 https://applitools.com/?p=44471 Almost everyone in software testing knows about Test Automation University (TAU). Powered by Applitools, TAU is one of the best resources for building up your testing and automation skills. It...

The post 7 Awesome Test Automation University Courses Not To Miss appeared first on Automated Visual Testing | Applitools.

]]>
Test Automation University, powered by Applitools

Almost everyone in software testing knows about Test Automation University (TAU). Powered by Applitools, TAU is one of the best resources for building up your testing and automation skills. It offers dozens of courses from the world’s leading instructors all available for free. It’s awesome!

Last month, I wrote an article listing the 10 most popular TAU courses ranked by completions over the past year. However, there are several other excellent courses in our catalog that also deserve attention. In this article, I want to highlight a small selection of those courses that I think are fantastic. These courses are listed in no particular order, and of course, all our other courses are excellent as well. Let’s dive in!

#1: Playwright with JavaScript

Playwright with Javascript course badge
Ixchel Meza

First on this list is Playwright with JavaScript by Ixchel Meza. Playwright is a hot, new web testing framework from Microsoft. It’s a modern framework with automatic waiting, built-in tracing, and cross-browser support. Plus, it’s fast – really, really fast for UI testing! It’s quickly gaining industry traction. In this course, Ixchel teaches you how to get a Playwright project up and running in JavaScript. The distinction of saying “in JavaScript” is important because Playwright also supports Java, C#, and Python.

#2: Intro to Testing Machine Learning Models

Intro to Testing Machine Learning Models course badge
Carlos Kidman

Next up is one of my personal favorites: Intro to Testing Machine Learning Models by Carlos Kidman. Machine learning (ML), artificial intelligence (AI), “algorithms” – whatever you call it – is indispensable to software products today. Just like every business plan in 1999 needed a website, every new app these days seems to have an ML component. But how do we test ML models when they seem so opaque and, at times, questionable? In this course, Carlos breaks down where testers fit in this brave new world of ML and all the techniques for properly testing models.

#3: UI Automation with WebdriverIO v7

UI Automation with WebdriverIO v7 course badge
Julia Pottinger

UI Automation with WebdriverIO v7 by Julia Pottinger is actually a special course: it’s the first course that we “refreshed” for TAU. Julia’s original course covered a now-older version of WebdriverIO, so she developed a new edition to keep up with the project. WebdriverIO is a Node.js-based browser automation tool that can use either the WebDriver Protocol or the Chrome DevTools protocol. Alongside Selenium WebDriver, Cypress, and Playwright, it’s one of the most popular web testing tools available! Julia covers all the ins and outs of WebdriverIO v7 in this course.

#4: The Basics of Visual Testing

Basics of Visual Testing course badge
Matt Jasaitis

Visual testing is indispensable for testing any app with a user interface (UI) these days – which is basically every app. In The Basics of Visual Testing, Matt Jasaitis teaches how to incorporate visual testing techniques as part of a well-balanced test automation strategy. You’ll build a Selenium Java project together with Applitools Eyes chapter by chapter, learning the best practices for visual testing along the way. Visual testing will help you catch all sorts of bugs like overlapping text, missing elements, and skewed layouts that traditional automation struggles to find.

#5: Introduction to Observability for Test Automation

Introduction to Observability for Test Automation course badge
Abby Bangser

“Observability” means the ability to understand what is happening in a software system based on telemetry being reported in real time. Just like any other software system, test automation and build pipelines have telemetry worth analyzing for improvements. In her course Introduction to Observability for Test Automation, Abby Bangser teaches how to expose metrics, logs, traces, and events of an automated test suite written in Java running through GitHub Actions. She provides excellent guidance on what is worth capturing and how to capture it.

#6: Introduction to Blockchain Testing

Introduction to Blockchain Testing course badge
Rafaela Azevedo

A blockchain is a distributed ledger that links records or “blocks” together in sequence using cryptographic signatures. One of the most popular blockchain applications is cryptocurrencies like Bitcoin and Ethereum. Blockchains also serve as a cornerstone for web3 and web5. Although blockchain technologies are controversial due to environmental concerns from high energy consumption and due to perceived scams around cryptocurrencies and non-fungible tokens (NFTs), they are nevertheless ingrained in our tech landscape. In Introduction to Blockchain Testing, Rafaela Azevedo shows you how to get your hands dirty with blockchains by setting up an Ethereum blockchain on your local machine, writing your first contract with Solidity, and finally testing your contract.

#7: Testing from the Inside: Unit Testing Edition

Unit Testing course badge
Tariq King

The final course to round off this list is Testing from the Inside: Unit Testing Edition by Tariq King. Unit testing is such a vital part of software development. It’s one of the first lines of defense in catching bugs in code. In this course, Tariq takes a unique approach to unit testing: thinking “inside the box” of the software product under test to explore the actual code and its data. Tariq covers foundational principles as well as practical techniques like parameterizing, mocking, and spying.

What other courses are available?

Like I said before, Test Automation University offers dozens of courses from the world’s best instructors, and we add new courses all the time! These are just seven out of many excellent courses. Be sure to peruse our catalog and try some courses that interest you!

The post 7 Awesome Test Automation University Courses Not To Miss appeared first on Automated Visual Testing | Applitools.

]]>
Using Headless Mode for Your Tests – When, Why and How (demonstration with WebdriverIO) https://applitools.com/blog/headless-mode-for-tests-webdriverio/ Tue, 19 Apr 2022 13:30:00 +0000 https://applitools.com/?p=36892 Learn how to use headless mode for your tests. Understand when you should use it and see an example of headless testing with WebdriverIO.

The post Using Headless Mode for Your Tests – When, Why and How (demonstration with WebdriverIO) appeared first on Automated Visual Testing | Applitools.

]]>

Learn how to use headless mode for your tests and why it’s so important. Understand when you should use it and see an example of headless testing with WebdriverIO.

Browser automation has been around for a long time. It is an important part of how we develop, test, and deploy web applications. Browser automation can be done with a headed browser or a headless one. A headless browser operates just as a normal browser would, except that it has no Graphical User Interface (GUI). When you run tests headlessly, you will not see a browser window pop up or the actions being carried out; instead, you interact with a headless browser via the Command Line Interface.

Before Chrome shipped headless mode in 2017, running your automated tests headlessly meant using browsers such as PhantomJS (now discontinued). Over time, browser vendors added a headless mode to their own releases, starting with Google Chrome 59 in 2017; Firefox followed, with headless mode available on all platforms as of Firefox 56. Headless Chrome quickly overtook PhantomJS upon its release.

Headless browsers have many uses for a wide range of individuals:

  • Test automation engineers use headless browsers to run their automated scripts faster and in Continuous Integration and Delivery pipelines. Scripts execute faster because a headless browser does not need to load a GUI
  • Servers and tools such as Jenkins, GitLab CI, and Docker typically run without a display, so you will need a headless browser to carry out browser tasks on them
  • SEO tools use headless browsers to analyze a web page and provide recommendations on how it can be improved
  • Monitoring tools use headless mode to measure the performance of web applications
  • Scraping tools also use headless mode to carry out their tasks. Web scraping with a headless browser can be very fast, as most of the time it does not require rendering a Graphical User Interface
  • Headless browsers can be used to extract metadata (e.g., the DOM) and generate bitmaps from page contents. They also let users tap into Chrome DevTools and use features such as network throttling, device emulation, and website performance analysis

As test automation engineers we can gain the following benefits from doing headless testing:

  1. It provides the ability to run your tests on a server with no GUI, which is the case with most cloud-based servers, tools such as Docker, and CI pipelines, where it is not always possible to install a GUI browser.
  2. It reduces the time it takes for your automated scripts to complete compared to a GUI browser, which has to load CSS and images and render HTML, all of which increase the time your script takes to run.
  3. When running tests in parallel, GUI browsers utilize a lot more memory than headless ones.
  4. It frees up your computer to continue doing tasks while your tests run in the background. A GUI browser can occupy your main screen while a script runs and prevent you from doing anything else in that time.

Headless browsers have a lot of benefits; however, there are some things to consider when using them:

  1. It may be difficult to debug failures when running your tests headlessly, as there is no GUI in which to see the failure point. Logs are available to help you debug, but at times the failure is not easily spotted from them, so you may have to add commands that capture screenshots, or use other means, to track down the issue.
  2. Headless browsers do not mimic exact user behavior, and some tests may fail due to the speed at which they are executed.
  3. Regular users do not use your website in headless mode, so it is equally important to run test scenarios, exploratory testing, visual regression, and other forms of testing in a headed browser. You want to ensure that the functionality and user experience of the web application remain consistent.

Different testing frameworks such as Selenium, Cypress, and WebdriverIO have commands that allow you to execute your scripts in headless browsers. In WebdriverIO you can execute your tests in the following way.

Let us quickly set up a demo WebdriverIO project.

Setting up a WebdriverIO Project

In your code editor (I use VSCode) create a new project.

Initialize npm and install WebdriverIO with the following steps:

npm init wdio .

Select the following options from the configuration helper
=========================
WDIO Configuration Helper
=========================

Where is your automation backend located? On my local machine
Which framework do you want to use? mocha
Do you want to use a compiler? No!
Where are your test specs located? ./test/specs/**/*.js
Do you want WebdriverIO to autogenerate some test files? Yes
Do you want to use page objects (https://martinfowler.com/bliki/PageObject.html)? Yes
Where are your page objects located? ./test/pageobjects/**/*.js
Which reporter do you want to use? spec
Do you want to add a service to your test setup? selenium-standalone
What is the base url? http://localhost
Do you want me to run `npm install` Yes

To start the test, run: $ npm run wdio

This will execute tests in a headed Chrome browser.

Let us now set up the project to run headlessly.

Running Headless Chrome with WebdriverIO

  1. In the wdio.conf.js file, go to the capabilities section of the configuration and add:
    'goog:chromeOptions': {
                args: ['headless'],
              },
    
  2. WebdriverIO also allows you to use the DevTools service. If you are using that service instead of the WebDriver protocol, then add the following instead of the options in step 1:
    'wdio:devtoolsOptions': {
                headless: true
            },
    
  3. Rerun your tests with npm run wdio and they should run in a headless browser

Running Headless Firefox with WebdriverIO

  1. In the wdio.conf.js file, go to the capabilities section of the configuration and change browserName to 'firefox'
  2. Add the capability to run Firefox headlessly:
    "moz:firefoxOptions": {
                args: ['-headless']
              },
  3. Rerun your tests with npm run wdio and they should run in a headless Firefox browser
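Since both browsers follow the same pattern, the headless toggle can live in one small helper. This is only a sketch of the idea (the helper name and the HEADLESS environment-variable convention are my own, not part of WebdriverIO), but the vendor-prefixed option keys are the same ones used above:

```javascript
// Sketch: build a WebdriverIO capability object for Chrome or Firefox,
// toggling headless mode with a boolean (e.g. driven by an env var).
// The helper name and env-var convention are illustrative, not WebdriverIO APIs.
function makeCapabilities(browserName, headless) {
  if (browserName === 'chrome') {
    return {
      browserName: 'chrome',
      'goog:chromeOptions': {
        // An empty args array simply runs Chrome headed as usual
        args: headless ? ['headless'] : [],
      },
    };
  }
  if (browserName === 'firefox') {
    return {
      browserName: 'firefox',
      'moz:firefoxOptions': {
        args: headless ? ['-headless'] : [],
      },
    };
  }
  throw new Error(`Unsupported browser: ${browserName}`);
}
```

In wdio.conf.js you could then write capabilities: [makeCapabilities('chrome', process.env.HEADLESS === '1')] and flip headless mode from the command line without editing the config.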

Here’s an example that demonstrates the potential speed difference between executing your tests in a headless browser vs a headed browser. I ran this test suite of 15 tests headed and headless (each having a max instance of 1) and the overall completion time was 18% faster from just that one change. The headed tests took 1 minute and 24 seconds to run and the headless tests ran in 1 minute and 9 seconds:

Headless vs Headed graph
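For the curious, the 18% figure is simple arithmetic over the two run times (1:24 headed, 1:09 headless):

```javascript
// Arithmetic behind the quoted speedup: 1:24 headed vs 1:09 headless.
const headedSeconds = 1 * 60 + 24;   // 84 seconds
const headlessSeconds = 1 * 60 + 9;  // 69 seconds
const savingPercent = ((headedSeconds - headlessSeconds) / headedSeconds) * 100;
console.log(savingPercent.toFixed(1)); // 17.9, i.e. about 18% faster
```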

Headless browsers offer many benefits and can be used to aid your test automation. You can start browsers headlessly from the command line and also use headless mode in your various test automation frameworks. You can check out Cypress, Selenium WebDriver, and Puppeteer, among others.

The post Using Headless Mode for Your Tests – When, Why and How (demonstration with WebdriverIO) appeared first on Automated Visual Testing | Applitools.

]]>
Test Automation University is now 75,000 students strong https://applitools.com/blog/tau-contributors/ Tue, 23 Feb 2021 09:32:28 +0000 https://applitools.com/?p=27058 What does it take to make a difference in the lives of 75,000 people? Applitools has reached 75,000 students enrolled in Test Automation University, a global online platform led by...

The post Test Automation University is now 75,000 students strong appeared first on Automated Visual Testing | Applitools.

]]>

What does it take to make a difference in the lives of 75,000 people?

Applitools has reached 75,000 students enrolled in Test Automation University, a global online platform led by Angie Jones that provides free courses on all things test automation. Today, more engineers understand how to create, manage, and maintain automated tests.

What Engineers Have Learned on TAU

Engineers have learned how to automate UI, mobile, and API tests. They have learned to write tests in specific languages, including Java, JavaScript, Python, Ruby, and C#. They have applied tests through a range of frameworks including Selenium, Cypress, WebdriverIO, TestCafe, Appium, and Jest.

A group of 75,000 engineers would outnumber the populations of some 19,000 cities and towns in the United States. They work at large, established companies and growing startups. They work on every continent, with the possible exception of Antarctica.

What makes Test Automation University possible? Contributors, who create all the coursework.

Thank You, Instructors

As of this writing, Test Automation University consists of 54 courses taught by 39 different instructors. Each instructor has contributed knowledge and expertise. You can find the list of authors on the Test Automation University home page.

Here are the instructors of the most recently added courses to TAU.

Author | Course | Details | Chapters
Corina Pip | JUnit 5 | Learn to execute and verify your automated tests with JUnit 5 | 17
Matt Chiang | WinAppDriver | Learn how to automate Windows desktop testing with WinAppDriver | 10
Marie Drake | Test Automation for Accessibility | Learn the fundamentals of automated accessibility testing | 8
Lewis Prescott | API Testing In JavaScript | Learn how to mock and test APIs in JavaScript | 5
Andrew Knight | Introduction to pytest | Learn how to automate tests using pytest | 10
Moataz Nabil | E2E Web Testing with TestCafe | Learn how to automate end-to-end testing with TestCafe | 15
Aparna Gopalakrishnan | Continuous Integration with Jenkins | Learn how to use Jenkins for Continuous Integration | 5
Moataz Nabil | Android Test Automation with Espresso | Learn how to automate Android tests with Espresso | 11
Mark Thompson | Introduction to JavaScript | Learn how to program in JavaScript | 6
Dmitri Harding | Introduction to NightwatchJS | Learn to automate web UI tests with NightwatchJS | 8
Rafaela Azevedo | Contract Tests with Pact | Learn how to implement contract tests using Pact | 8
Simon Berner | Source Control for Test Automation with Git | Learn the basics of source control using Git | 8
Paul Merrill | Robot Framework | Learn to use Robot Framework for robotic process automation (RPA) | 7
Brendan Connolly | Introduction to NUnit | Learn to execute and verify your automated tests with NUnit | 8
Gaurav Singh | Automated Visual Testing with Python | Learn how to automate visual testing in Python with Applitools | 11

Thank You, Students

As engineers and thinkers, the students continue to expand their knowledge through TAU coursework.

Each course contains quizzes of several questions per chapter. Each student who completes a course gets credit for questions answered correctly. Students who have completed the most courses and answered the most questions successfully make up the TAU 100.

Some of the students who lead on the TAU 100 include:

Student | Credits | Rank
Osanda Nimalarathna (Founder @MaxSoft, Ambalangoda, Sri Lanka) | 44,300 | Griffin
Patrick Döring (Sr. QA Engineer @Pro7, Munich, Germany) | 44,300 | Griffin
Darshit Shah (Sr. QA Engineer @N/A, Ahmedabad, India) | 40,250 | Griffin
Adha Hrustic (QA Engineer @Klika, Bosnia and Herzegovina) | 39,575 | Griffin
Ho Sang (Principal Technical Test Engineer @N/A, Kuala Lumpur, Malaysia) | 38,325 | Griffin
Gopi Srinivasan (Senior SDET Lead @Trimble Inc, Chennai, India) | 38,075 | Griffin
Ivo Dimitrov (Sr. QA Engineer @IPD, Sofia, Bulgaria) | 37,875 | Griffin
Malith Karunaratne (Technical Specialist – QE @Pearson Lanka, Sri Lanka) | 36,400 | Griffin
Stéphane Colson (Freelancer @Testing IT, Lyon, France) | 35,325 | Griffin
Tania Pilichou (Sr. QA Engineer @Workable, Athens, Greece) | 35,025 | Griffin

Join the 75K!

Get inspired by the engineers around the world who are learning new test automation skills through Test Automation University.

Through the courses on TAU, you’ll not only learn how to automate tests, but more importantly, you’ll learn to eliminate redundant tests, add automation into your continuous integration processes, and make your testing an integral part of your build and delivery processes.

Learn a new language. Pick up a new testing framework. Know how to automate tests for each part of your development process – from unit and API tests through user interface, on-device, and end-to-end tests.

No matter what you learn, you will become more valuable to your team and company with your skills on how to improve quality through automation.

The post Test Automation University is now 75,000 students strong appeared first on Automated Visual Testing | Applitools.

]]>
Fast Testing Across Multiple Browsers https://applitools.com/blog/fast-testing-multiple-browsers/ Thu, 28 Jan 2021 08:22:47 +0000 https://applitools.com/?p=26281 Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?

The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.

]]>

If you think like the smartest people in software, you conclude that testing time detracts from software productivity. Investments in parallel test platforms pay off by shortening the time to validate builds and releases. But, you wonder about the limits of parallel testing. If you invest in infrastructure for fast testing across multiple browsers, do you capture failures that justify such an investment?

The Old Problem: Browser Behavior

Back in the day, browsers used different code bases. In the 2000s and early 2010s, most application developers struggled to ensure cross browser behavior. There were known behavior differences among Chrome, Firefox, Safari, and Internet Explorer. 

Annoyingly, each major version of Internet Explorer had its own idiosyncrasies. When do you abandon users who still run IE 6 beyond its end-of-support date? How do you handle the IE 6 through IE 10 behavioral differences?

While Internet Explorer differences could be tied to major versions of operating systems, Firefox and Chrome released updates multiple times per year. Behaviors could change slightly between releases. How do you maintain your product behavior with browsers in the hands of your customers that you might not have developed with or tested against? 

Cross browser testing proved itself a necessary evil to catch potential behavior differences. In the beginning, app developers needed to build their own cross browser infrastructure. Eventually, companies arose to provide cross browser (and then cross device) testing as a service.

The Current Problem: Speed Vs Coverage

In the 2020s, speed can provide a core differentiator for app providers. An app that delivers features more quickly can dominate a market. Quality issues can derail that app, so coverage matters. But, how do app developers ensure that they get a quality product without sacrificing speed of releases?

In this environment, some companies invest in cross browser test infrastructure or test services. They invest in the large parallel infrastructure needed in creating and maintaining cross browser tests. And, the bulk of uncovered errors end up being rendering and visual differences. So, these tests require some kind of visual validation. But, do you really need to repeatedly run each test? 

Applitools concluded that repeating tests required costly infrastructure as well as costly test maintenance. App developers intend that one server response work for all browsers. With its Ultrafast Grid, Applitools can capture the DOM state on one browser and then repeat it across the Applitools Ultrafast Test Cloud. Testers can choose among browsers, devices, viewport sizes and multiple operating systems. How much faster can this be?

Hackathon Goal – Fast Testing With Multiple Browsers

In the Applitools Ultrafast Cross Browser Hackathon, participants used the traditional legacy method of running tests across multiple browsers to compare behavior results. Participants then compared their results with the more modern approach using the Applitools Ultrafast Grid. Read here about one participant’s experiences.

The time that matters is the time that lets a developer know the details about a discovered error after a test run. For the legacy approach, coders wrote tests for each platform of interest, including validating and debugging the function of each app test on each platform. Once the legacy test had been coded, the tests were run, analyzed, and reports were generated. 

For the Ultrafast approach, coders wrote their tests using Applitools to validate the application behavior. These tests used fewer lines of code and fewer locators. Then, the coders called the Applitools Ultrafast Grid and specified the browsers, viewports, and operating systems of interest to match the legacy test infrastructure.

Hackathon Results – Faster Tests Across Multiple Browsers

The report included this graphic showing the total test cycle time for the average Hackathon submission of legacy versus Ultrafast:

Here is a breakdown of the average participant time used for legacy versus Ultrafast across the Hackathon:

Activity | Legacy | Ultrafast
Actual Run Time | 9 minutes | 2 minutes
Analysis Time | 270 minutes | 10 minutes
Report Time | 245 minutes | 15 minutes
Test Coding Time | 1062 minutes | 59 minutes
Code Maintenance Time | 120 minutes | 5 minutes

The first three activities, test run, analysis, and report, make up the time between initiating a test and taking action. Across the three scenarios in the hackathon, the average legacy test required a total of 524 minutes. The average for Ultrafast was 27 minutes. For each scenario, then, the average was 175 minutes – almost three hours – for the legacy result, versus 9 minutes for the Ultrafast approach.

On top of the operations time for testing, the report showed the time taken to write and maintain the test code for the legacy and Ultrafast approaches. Legacy test coding took over 1060 minutes (17 hours, 40 minutes), while Ultrafast only required an hour. And, code maintenance for legacy took 2 hours, while Ultrafast only required 5 minutes.
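The quoted averages follow directly from the table; here is a quick sanity check of the operational numbers (run + analysis + report) across the three hackathon scenarios:

```javascript
// Totals behind the comparison: the first three activities summed,
// then averaged over the three hackathon scenarios.
const legacy = { run: 9, analysis: 270, report: 245 };        // minutes
const ultrafast = { run: 2, analysis: 10, report: 15 };       // minutes

const legacyTotal = legacy.run + legacy.analysis + legacy.report;             // 524
const ultrafastTotal = ultrafast.run + ultrafast.analysis + ultrafast.report; // 27

console.log(legacyTotal, Math.round(legacyTotal / 3));        // 524, 175 per scenario
console.log(ultrafastTotal, Math.round(ultrafastTotal / 3));  // 27, 9 per scenario
```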

Why Fast Testing Across Multiple Browsers Matters

As the Hackathon results showed, Ultrafast testing runs more quickly and gives results more quickly. 

Legacy cross-browser testing imposes a long delay between test start and action. The long run and analysis times make legacy tests unsuitable for any kind of software build validation. Most of these legacy tests get run in final end-to-end acceptance, with the hope that no visual differences get uncovered.

Ultrafast approaches enable app developers to build fast testing across multiple browsers into software build. Ultrafast analysis catches unexpected build differences quickly so they can be resolved during the build cycle.

By running tests across multiple browsers during build, Ultrafast Grid users shorten their find-to-resolve cycle to branch validation even prior to code merge. They catch the rendering differences and resolve them as part of the feature development process instead of the final QA process. 

Ultrafast testers seamlessly resolve unexpected browser behavior as they check in their code. This happens because, in less than 10 minutes on average, they know what differences exist. They could not do this if they had to wait the nearly three hours needed in the legacy approach. Who wants to wait half a day to see if their build worked?

Combine the other speed differences in coding and maintenance, and it becomes clear why Ultrafast testing across multiple browsers makes it possible for developers to run the Ultrafast Grid in development.

What’s Next

Next, we will cover code stability – the reason why Ultrafast tests take, on average, 5 minutes to maintain, instead of two hours. 

The post Fast Testing Across Multiple Browsers appeared first on Automated Visual Testing | Applitools.

]]>
Comparing Cross-Browser Testing Techniques https://applitools.com/blog/cross-browser-testing-techniques/ Fri, 13 Nov 2020 02:21:51 +0000 https://applitools.com/?p=24654 Can you double or triple the speed of your test runs by switching out a few lines of code? That’s what the team at Applitools is promising. They’ve challenged test...

The post Comparing Cross-Browser Testing Techniques appeared first on Automated Visual Testing | Applitools.

]]>

Can you double or triple the speed of your test runs by switching out a few lines of code?

That’s what the team at Applitools is promising.

They’ve challenged test automation teams to speed up their test suites with their new Ultra-Fast Grid (UFG), which provides “massively parallel test automation across all browsers, devices, and viewports.”

To try it out myself, I took part of their Cross Browser Testing Hackathon, seeing if their new grid lived up to its claims.

What’s the Cross Browser Testing Hackathon?

In June of this year (2020), Applitools held an online Cross Browser Testing Hackathon. The challenge was to complete three automation tasks, attempting to catch bugs between two versions of the same app, using two different techniques.

While I really wanted to participate in it, I also needed to focus my efforts on my book, The Web App Testing Guidebook. As much as I wanted to, I just didn't have the time to commit to the hackathon.

So it came and went and I moved on with life. A few months later, Applitools reached out and asked if I’d be interested in covering the content via a series of live streams, plus this blog post. I thought the opportunity was a great fit, since I already had an interest in the topic, plus the work on my book had mostly wrapped up.

At this point, I should mention that Applitools is sponsoring all of this. I do my best to stay objective, but I want to be transparent with that.

“…with the UFG, you run your test setup scripts… once on your local machine, then the Applitools code takes a snapshot of the page HTML & CSS and sends it to the grid for processing. This means that we can run our filtering steps in the desktop view, yet still be able to capture our screenshots in mobile view via Applitools.

–Kevin Lamping

I should also mention what Applitools does. They are a “Next generation test automation platform powered by Visual AI”. I’m a big fan of Visual Regression Testing, seeing that I help maintain visualregressiontesting.com and have covered the approach many, many times since I first came across it back in 2013.

Applitools takes the Visual Regression Testing approach to a new level, which I talk about in my blog post “Visual Regression Testing is Stupid.”

The reason that Applitools put on the hackathon was to get people to try out their new “Ultra-Fast Grid”. Rather than try and describe what that is myself, I’ll use their own words: “With Ultrafast Grid, you run your functional & visual tests once locally and it instantly renders all screens across all combinations of browsers, devices, and viewports. This is all done with unprecedented security, stability, and speed, and with virtually no setup required.”

We’ll get to my thoughts on the UFG (Ultra-Fast Grid) in a bit, but let’s cover what the Hackathon entailed first. Here’s how the project page introduces the challenge:

Imagine you are a test automation engineer for “AppliFashion”, a high profile e-commerce company that sells fancy shoes. The AppliFashion web app is used by millions of people, using various devices and browsers to buy shoes. The 1st version of the app (V1) is already built and is “bug-free”. Your developers are now coming up with a newer version, version (V2) of the app, and assume that it’s full of bugs.

The challenge is to build the automation suite for the first version of the app and use it to find bugs in the second version (V2) of the app. You need to automate three (3) main tasks across seven (7) different combinations of browsers and screen resolutions (viewports). Further, you need to automate the tasks in both the traditional approach and the Modern approach through Visual AI, for both V1 and V2 versions of the app. By “traditional approach”, we mean without using Applitools Visual AI. You can execute the traditional tests either locally or by using other cross-browser cloud solutions that you are already familiar with.

That’s a pretty realistic scenario in my mind. That’s one of the things I really appreciated about this entire challenge; the scenarios were very “real world”. They took the time to create a project that reflected what you may face in a work-related project. I think that’s a testament to how valuable Visual Regression Testing can be.

How I Did It

Over a series of three 2-hour live streams, I worked through the three tasks assigned to the project (completing one task each stream).

As for tooling, I’m a huge fan of WebdriverIO. I used it, plus the Applitools WebdriverIO SDK, for my test framework. I also used Sauce Labs for my traditional cross-browser testing needs.

You can find a copy of the final code here:
https://github.com/klamping/applitools-hackathon-2020

Now let’s talk through each task. For the sake of brevity, I’ll be going through a high-level overview of each task, what I did, and my impressions comparing “traditional” testing vs. the Applitools “modern” approach.

Task 1 – Cross-Device Elements Test

Video Replay of Stream #1

The challenge started off with a relatively simple set of tests. The test consisted of loading the homepage, then validating that the correct elements appear (or don't) on the page.

Most of my time was spent on getting my project initialized and set up. I’m fairly familiar with WebdriverIO, so installation and configuration went pretty smoothly.

One challenge of the project is to test on multiple browsers and multiple viewports. The site being tested is responsive, so we needed to handle that functionality.

To handle this, I took advantage of the WebdriverIO setWindowSize command. Using it, plus a defined set of viewports, I was able to switch viewport size around as needed.
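As an illustration of that setup, a named viewport map keeps the switching explicit. The names and dimensions below are my own illustrative breakpoints, not values from the hackathon project:

```javascript
// Sketch: named viewports to cycle through with WebdriverIO's setWindowSize.
// The names and dimensions are illustrative; use whatever breakpoints your app targets.
const viewports = {
  mobile: { width: 375, height: 812 },
  tablet: { width: 768, height: 1024 },
  desktop: { width: 1280, height: 800 },
};

function getViewport(name) {
  const vp = viewports[name];
  if (!vp) throw new Error(`Unknown viewport: ${name}`);
  return vp;
}

// In a wdio spec you would then call, for example:
//   const { width, height } = getViewport('mobile');
//   await browser.setWindowSize(width, height);
```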

I added a few element display checks, but didn’t go too far down that path. Element display checks are pretty routine in WebdriverIO, so I didn’t feel like spending time repeating the same code again and again. In a real test I’d definitely want to though.

After writing the traditional tests, I switched to setting up Applitools and the Ultra-Fast Grid.

While there is an Applitools service for WebdriverIO, it’s not updated to support the UFG yet, so I stuck with using the SDK they provide.

I had to read through the documentation a bit to get it all set up, but I was able to figure it out without too much pain.

After setting it up, writing my test was an absolute breeze. This is essentially the entirety of my test script:

home.load();

browser.call(() => {
    return eyes.check('Cross-Device Elements Test', Target.window())
})

All we do is load the home page, then take a screenshot of the entire window, then send it off to Applitools.

Already, there’s a large difference between the traditional approach of manually writing element display checks, and just relying on the Applitools AI to handle all of that for us.

This is one of the true powers of Visual Regression Testing (VRT), and it shines in this example. VRT implicitly checks text content, color, display, size, and much more without you having to specifically ask for each and every element. It really is a fantastic technique for testing the UI of sites.

But, did the UFG live up to its promise of fast speeds? Well, at this point, not really.

For a good reason, though. The tests we've written so far are extremely simple: they load a page and check a couple of items. There are no login or other preparation steps needed to get where we need to be in order to test. Because of that, the UFG doesn't really get to flex its muscles and show off. But it'll show off soon enough.

Task 2 – Shopping Experience Test

Video Replay of Stream #2

Next up on the task list is to try filtering the product grid. I enjoyed this task because it gave me the chance to show off how to extend Page Objects through the use of Components. I won't cover it here, but check out the stream if you're interested.

The traditional test was pretty straight-forward again. I did need to spend a little bit of extra time handling the filter functionality across viewports, but the way I set up the viewports made this much easier than it could have been.

So how did the modern approach compare? Well, it matched more evenly with the traditional approach this time, taking about the same level of coding effort. Both approaches caught the main bug on V2 of the site, which is that the shoes aren’t properly filtered.

One big difference between the two approaches is that the extra functionality I needed to write to handle the filter viewport differences isn’t needed in the modern approach. That’s because of the way the UFG works.

The reason is that with the UFG, you run your test setup scripts (e.g., clicking the filter menu and selecting the options) once on your local machine; the Applitools code then takes a snapshot of the page HTML & CSS and sends it to the grid for processing. This means that we can run our filtering steps in the desktop view, yet still be able to capture our screenshots in mobile view via Applitools.
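Because every combination renders from that one snapshot, the target browsers and viewports become plain configuration data rather than extra test runs. The sketch below just enumerates a seven-combination matrix of the kind the hackathon asked for; the specific browsers, sizes, and device are my own guesses, and the real Eyes configuration API is not shown here:

```javascript
// Sketch: enumerate browser/viewport combinations as data. With the
// Ultrafast Grid, the test runs once and each combination is rendered
// from the same DOM snapshot. Names and sizes below are illustrative.
const desktopBrowsers = ['chrome', 'firefox', 'edgechromium'];
const desktopViewports = [
  { width: 1200, height: 700 },
  { width: 768, height: 700 },
];

const combinations = [];
for (const name of desktopBrowsers) {
  for (const viewport of desktopViewports) {
    combinations.push({ name, viewport });
  }
}
// Plus one emulated mobile device, for seven combinations total.
combinations.push({ deviceName: 'iPhone X' });

console.log(combinations.length); // 7
```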

We’ll dive deeper into the benefits of this approach later, but I wanted to mention this specific difference here.

Task 3 – Product Details Test

Video Replay of Stream #3

For our final task, we take a look at the product detail page.

Thankfully, we’re able to re-use a lot of our previously written test code, as the steps to select the product needed are almost the same as our previous task (just adding the step to click the product link).

The main challenge of the test is writing the various assertions needed to check all the product details. There are several details to validate, including the name, price, SKU, description and more.

This puts us in a similar situation to Task 1, where we have multiple minor details on a page that we have to write assertions for. And similar to Task 1, using VRT really helps in this regard.

Not only does it help reduce the monotony of writing these assertions, in this task, it caught several bugs that my traditional tests didn't. This is partly because I took a shortcut and didn't write all my checks, but also because my checks honestly wouldn't have caught some of the bugs.

Here are three bugs that my traditional tests didn’t catch:

  • SKU Number not shown. This is due to the text color matching the background. WebdriverIO’s isDisplayed command absolutely won’t catch this, since it doesn’t check for text color vs. background.
  • The text of the dropdown changed. This is just too minute a detail for me to write a test for. If I had to spend my time on things like that, not only would it take forever to write a full test suite, but keeping that suite up-to-date would be an absolute pain.
  • The price changed from $33.00 to just $33. Again, this would be a very difficult thing to test for, and it would take a lot of time to get that level of detail using the traditional approach. But VRT catches it with minimal effort.

I have a sneaking suspicion that the project was set up to highlight these sorts of bugs, and I don't blame the creators one bit. VRT is a very powerful technique, and there's nothing wrong with showing off specific examples of why.

Overall Feelings/Observations

So that finishes up all three tasks. Now for some overall observations. I’ve talked about the difference between VRT and traditional testing, so I won’t cover that much here. I hope I made my point there.

But there are several other observations I had over the course of the work.

UFG’s Speed

Let’s look at the speed improvements that UFG claims to provide.

Honestly, in these tasks, UFG's speed didn't get to shine. There are a few reasons why that is.

WebdriverIO is Already Fast

The first is that WebdriverIO provides the ability to run tests in parallel by default, so I was able to run all three of my test files at the same time with little effort.

This is important, because if each of my traditional tests takes 5 seconds extra, normally that would cost me 15 total seconds. But since those 5 seconds all occur at the same time (because the tests all run at the same time), the pain is minimized.

But this parallelization only gets you so far. Due to computing limitations, you can only run so many tests at the same time. As you add more tests, you’ll quickly reach the limit of browsers you can run at once.

Let's take an example of 80 tests that need to run on 8 environments (i.e., different devices/browsers/viewports). Each test has to run 8 times, once per environment, which is 640 total runs.

Even if you could run 20 tests in parallel, you’d still have to go through 32 cycles of testing.

With UFG, you would only need to run 80 tests in parallel once (you can run all the tests on one environment in parallel with WebdriverIO). Divide that by 20, and you have only 4 cycles to complete. That’s 8x faster, because you don’t have to run each test 8 separate times for each environment.
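The cycle math above can be sanity-checked with a few lines of arithmetic. The numbers (80 tests, 8 environments, 20 parallel slots) are the example figures from this post, not anything measured.

```javascript
// Back-of-the-envelope check of the cycle math, using the post's example numbers.
function cycleCount(tests, environments, parallelSlots) {
  const totalRuns = tests * environments;
  return Math.ceil(totalRuns / parallelSlots); // cycles = runs / parallelism
}

// Traditional: every test re-runs locally on every environment.
const traditionalCycles = cycleCount(80, 8, 20);
// UFG: each test runs once; the grid renders the 8 environments from one snapshot.
const ufgCycles = cycleCount(80, 1, 20);

console.log(traditionalCycles);             // 32
console.log(ufgCycles);                     // 4
console.log(traditionalCycles / ufgCycles); // 8 (times faster)
```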

So while it’s not too evident on small scales, once you level up your test suite, UFG’s speed really starts to show.

The Speed Pedal Wasn’t Pushed

While these example tests are useful as a demonstration, they aren’t very complex. They simply don’t do enough compared to what a more real-world example would look like.

Looking back, almost all the tests I’ve written over my career have involved some sort of login flow, which we didn’t have in our examples. And, in my experience, the login flow can be painfully slow, as the tests run through their extensive authorization process.

To simulate a more real-world example, what if we add an arbitrary 10 second pause in our tests? Let’s add it right after we load the page for each of our viewports/tasks. This pause is similar to what a login flow would take to run, so imagine it’s that.

When added up, this 10-second delay would add 70 seconds to each browser's test run using the traditional approach.

Why? Well, since that pause happens in our beforeEach hook, it occurs for each of our tests. With 7 total tests (3 each for Tasks 1 and 2, plus 1 for Task 3), that's 70 seconds.

Then, we take that 70 second delay, and multiply it by the number of browsers we’re testing in, which is 3. Now we’re 210 seconds slower due to this unavoidable delay.

The Ultra-Fast Grid avoids most of those slowdowns. Remember, our modern tests only need to run the setup steps once per file, since the UFG handles the multi-browser aspect of everything.

So at most, we're going to see 30 seconds added to our tests (10 seconds for each file). That's three minutes of testing time saved across just 7 tests. And no matter how many browsers we add to our suite, the UFG still only takes 30 seconds, while the traditional method's delay grows and grows.
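Here's the same login-delay math written out, using the assumptions stated above (a 10-second pause, 7 tests across 3 spec files, 3 browsers):

```javascript
// Modeling the hypothetical 10-second login pause from the example above.
const pauseSeconds = 10;
const totalTests = 7;  // 3 for Task 1, 3 for Task 2, 1 for Task 3
const browsers = 3;
const specFiles = 3;   // one spec file per task

// Traditional: the beforeEach pause fires for every test on every browser.
const traditionalDelay = pauseSeconds * totalTests * browsers;
// UFG: the setup (and its pause) runs once per spec file, however many browsers.
const ufgDelay = pauseSeconds * specFiles;

console.log(traditionalDelay);            // 210 seconds
console.log(ufgDelay);                    // 30 seconds
console.log(traditionalDelay - ufgDelay); // 180 seconds (3 minutes) saved
```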

Visual Regression Testing is Powerful

The other part to consider here is that the UFG is running a full visual regression. That’s doing a lot more work than our traditional tests were.

Consider our first task. I ran about two assertions per test, getting the text of a couple elements and checking them.

What did our modern approach check? Essentially everything on the page: the presence and text of every element, plus their visual styling, including color, spacing, font size, font style, on and on.

It checked background colors, images, and overall layout. It ran hundreds, maybe thousands, of assertions in the time it takes a traditional test to run two.

Aside from the test run time, it also saved a huge amount of test-writing time. I can't emphasize enough how much simpler it is to write a visual regression check. I can't imagine having to write all those thousands of assertions myself (or even worse, having to use my own eyes to manually check everything).

So, all those things considered, I really think I barely scratched the surface in the speed improvements that the UFG provides. It’s like taking a race car for a drive through a congested street, when it really deserves to be taken out to the race track.

Reduced Flakiness

Okay, so not only is the UFG faster than traditional testing (while also being far more powerful), it also reduces test flakiness.

By minimizing the number of times we’re going through specific actions, we’re reducing the number of interactions we have with potentially finicky flows.

Take the Login flow for example (which, in my experience, can be flaky on test servers). Say that, through no fault of the tests themselves, there’s an odd bug in the staging environment where every 100th login fails. This isn’t much of an issue for a manual test, since you’d just retry your login. But it’s a real pain for an automated test, since that failure can happen in the middle of a test run.

Using our example of 80 tests on 8 environments from before, the traditional approach would give us at least 6 test failures due to the login bug (we log in 640 times, so we hit that magic 100th login 6 times).

But with the UFG, we’re only running our 80 tests once. We go from a sure six failures, to likely none at all. Even though we have all the same test coverage from before, we’ve minimized our interactions with the app to avoid errant failures.
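The flaky-login arithmetic above is easy to check. The "1 in every 100 logins fails" rate is the hypothetical bug from the example, not a measured figure:

```javascript
// Rough check of the flaky-login math: assume every 100th staging login fails.
const failEvery = 100;
const traditionalLogins = 80 * 8; // every test logs in on every environment
const ufgLogins = 80;             // every test logs in exactly once

console.log(Math.floor(traditionalLogins / failEvery)); // 6 expected failures
console.log(Math.floor(ufgLogins / failEvery));         // 0 expected failures
```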

One Final Wish

I'll end this all with a wish. I'd love to see the official WebdriverIO Applitools service updated to integrate the UFG features.

While Applitools does provide an SDK for WebdriverIO, which includes support for UFG, it’s not quite as integrated into WebdriverIO as the @wdio/applitools-service is.

For example, for the sake of time during this project, I just copied over my UFG initialization code from test to test. But in a real job, I'd definitely want to avoid that. While I could set it up myself using a custom wdio.conf.js configuration, an update to the @wdio/applitools-service could be made to handle this for me in a simple way.
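To make the wish concrete, here's a purely hypothetical sketch of what a UFG-aware service entry in wdio.conf.js could look like. To be clear: none of these option names (`useUltrafastGrid`, `browsers`) exist in the real @wdio/applitools-service today; they're imagined for illustration.

```javascript
// wdio.conf.js -- PURELY HYPOTHETICAL service options, imagined for this post;
// not part of the real @wdio/applitools-service.
exports.config = {
  specs: ['./test/specs/**/*.js'],
  services: [
    ['applitools', {
      key: process.env.APPLITOOLS_API_KEY,
      useUltrafastGrid: true,     // imagined flag to enable UFG
      browsers: [                 // imagined shared render-target list
        { width: 1200, height: 800, name: 'chrome' },
        { width: 768, height: 1024, name: 'firefox' },
        { deviceName: 'iPhone X' },
      ],
    }],
  ],
};
```

With something like this, the runner/eyes setup would live in one place instead of being copied into every spec file.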

It would also help avoid having to wrap all my Applitools commands in browser.call, which can get clunky.

WebdriverIO services are perfect for minimizing the set up code. Seeing as there’s already an Applitools service, I hope it gets updated soon so I can use this functionality with even less work!

Overall, I really enjoyed the chance to implement this hackathon, and was impressed with what Applitools has to offer. I'm definitely going to see where I can use this in my work to make the move to modern testing!

Cover Photo by Alexandria Bates on Unsplash

The post Comparing Cross-Browser Testing Techniques appeared first on Automated Visual Testing | Applitools.
