Event Archives – Automated Visual Testing | Applitools
https://applitools.com/blog/tag/event/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

Front-End Test Fest 2022 Recap
https://applitools.com/blog/front-end-test-fest-2022-recap/ – Fri, 08 Jul 2022

We were excited to host our second annual Front-End Test Fest. We've rounded up all the videos with this recap of the great event.

User experience is becoming more and more critical to the success of applications every year, making the world of front-end testing a fast-moving and exciting place. Along with our partners at Netlify, we were excited to host our second annual Front-End Test Fest last month. The event featured expert speakers from global brands, lively Q&A sessions, and plenty of games and fun. It was hosted by Angie Jones, Head of Developer Relations at Block, and Cassidy Williams, Head of Developer Experience and Education at Remote.

We’ve rounded up all the videos in one place, so feel free to dive right in, or keep reading for a recap of the event.

Front-End Test Fest 2022

The event began with opening remarks from hosts Angie and Cassidy, before Andrew Knight (AKA the Automation Panda) took the stage to present the opening keynote.

OPENING KEYNOTE: The State of the Union for Front End Testing – Andrew Knight

Web development moves fast, and new techniques are constantly emerging with promises of better performance and simpler maintenance. The Jamstack, server-side rendering frameworks, and APIs for all the things are on the rise. How can we most effectively test our web apps when they follow these new patterns? In this talk, Andrew Knight, Developer Advocate at Applitools and the Automation Panda, explores seven major trends in front-end testing to help unlock the best approaches, tools, and frameworks you can use. Andrew considers the value of new techniques like component testing and visual testing, and touches on the future with a glance at autonomous testing. Check out this exciting “state of the union” on current trends in front-end testing!

How to Avoid Common Mistakes When Writing Your Cypress Tests – Filip Hric

Filip Hric, QA Lead at Slido and Cypress expert, knows the pain of a failed, flaky test. Cypress is a great tool, but it’s only as effective as the tests you design with it. In this talk, Filip shows you a few powerful tools that are built into Cypress that can help you navigate the rough terrain of flaky tests so you can start designing tests that are evergreen and stable. He discusses common mistakes, the proper use of commands and plenty more. Check out his great presentation.

Coffee Break!

One of the nice things about the conference was the “coffee breaks” between the sessions, which gave attendees a chance to “mingle” with some of the speakers. Gleb Bahmutov (Senior Director of Engineering, Mercari US), Tiffany Le-Nguyen (Sr. Frontend Engineer, Netlify) and Nick Taylor (Staff Software Engineer, Netlify) were available at the morning break.

Shifting Accessibility Testing to the Left – Marie Drake

Why are there still so many websites that are not accessible? Many companies still view accessibility testing as an afterthought or something that will be difficult to implement. In this talk, Marie Drake, Quality Engineering Manager at Zoopla, busts some of these myths around accessibility testing and explains how you can shift it to the left. She also covers the many benefits of improving accessibility, and how to integrate accessibility testing into development and testing workflows. Watch the talk right here.

Expert Panel: Trending Tools and Frameworks – What’s Hype and What’s Not

Cutting through the marketing noise to understand what’s hype and what’s real can be a challenge. In this panel, you’ll hear from leaders Skyler Brungardt, Sr. Manager of Digital Experience & Commerce at Gap; Dan Giordano, Sr. Director of Product Marketing at Applitools; and, from Netlify, Tiffany Le-Nguyen, Sr. Frontend Engineer, and Nick Taylor, Staff Software Engineer.

Take a look at this panel for an exciting conversation based on hands-on experience with the latest tools and frameworks being used at leading companies. The panel was expertly moderated by Joe Colantonio, the founder of TestGuild.

Lunch Break: Making Music with (Live) Code – Dan Gorelick

This was a fun one. For the lunch break, independent developer and artist Dan Gorelick gave a live musical performance using only code. Using the coding framework TidalCycles, Dan briefly introduces us to the technology and how it works before improvising a great live show. Check this one out for a bit of music theory, conversation around how to combine code with musical ideas, and just for some fun music.

Slice and Dice Your End-to-End Tests – Gleb Bahmutov

In this presentation, Gleb Bahmutov, Senior Director of Engineering at Mercari US, shares some practical tips and tricks for reliable end-to-end testing at serious scale. Using parallelization and a bit of custom tinkering, his CI system has his team running 500+ end-to-end front-end tests – and running them quickly. Want to get developers and other stakeholders quick, high-quality feedback from your E2E tests? Watch this talk.

Increasing Automated Test Flow in a Monorepo – David Lindley, Ben Hudson

David Lindley and Ben Hudson, Software Engineering Manager and Senior Software Engineer at Vodafone UK, respectively, talk about the challenges and benefits involved in one of their global projects – a shared React component library housed in a monorepo. The library has 220+ shared components and developers across the globe contribute to it, so ensuring quality is critical! David and Ben share how they overcame issues that came up as they progressed, including how they used dependency checking to make sure they test only what is needed, speeding up test flow. See it here.

Coffee Break!

Another opportunity to schmooze with the speakers! Andrew Knight (Developer Advocate and Automation Panda, Applitools), Adam Murray (Developer Experience Team Lead, Cypress.io) and Ben Hudson (Sr. Software Engineer, Vodafone UK) were hanging out to chat in this afternoon break.

Getting Started with Component Testing in Cypress – Adam Murray

In the latest Cypress 10 release, the Cypress team has added something new – official component testing (previously, it was an experimental feature). In this hands-on talk, Adam Murray, Developer Experience Team Lead at Cypress, helps you get going with it. He starts at the very beginning, showing you how to install Cypress 10, configure it in a new application, and add your first component tests. Adam goes on to cover more advanced topics like cy.intercept() as well. Take a look.

CLOSING KEYNOTE: Looking Forward – Trends That Will Shape Modern Web Development – Jason Lengstorf

The web ecosystem is changing quickly, and it can feel hard to keep up with all the changes. In this fascinating closing keynote, Jason Lengstorf, VP of Developer Experience at Netlify, delivers some comfort – things are perhaps not as “new and different” as they seem. Jason looks forward and back at the landscape of the web, discussing not only where we’re going but how it all builds on historical trends. Check it out here.

After the closing keynote, Angie and Cassidy delivered the closing remarks as the event concluded.

Thank You!

We want to extend a huge thanks to everyone involved in making this event such a huge success. The list could go on, but we’ll start with Angie and Cassidy for being rock star hosts, our brilliant speakers for sharing their wisdom and experience, and of course to every attendee for adding your voices and presence. Thank you – we could never have delivered an event this great without all of you.

If you liked these videos, you might also like our videos from our previous virtual events too – all free to watch. Happy testing!

Front-End Test Fest 2021 Recap
https://applitools.com/blog/front-end-test-fest-2021-recap/ – Fri, 16 Jul 2021

Last month, Applitools and Cypress hosted the Front-End Test Fest, a free event that brought together leading experts in test automation for a full day of learning and discussion around...

Last month, Applitools and Cypress hosted the Front-End Test Fest, a free event that brought together leading experts in test automation for a full day of learning and discussion around front-end testing. It was a great opportunity to hear about the latest in the industry and to take in some really innovative and interesting stories.

We’ve got all the videos ready for you here, so feel free to jump right in below, or keep reading for a recap of the event.

Opening Keynote: Applitools & Cypress: State of the Union – Angie Jones and Amir Rustamzadeh

This talk opened with Amir Rustamzadeh, Director of Developer Experience at Cypress, getting us all familiar with the latest and greatest in the testing tool. We all already know that Cypress is an excellent tool that is highly interactive and visual, but Amir took us on a tour of two new features that look pretty powerful.

These new features were Test Retries and Component Testing. Test Retries allows you to easily retry a test multiple times, helping you to catch and defeat test flake by highlighting how frequently a test passes with some handy analytics in the Cypress Dashboard. A related feature, Test Burn-In, allows you to do the same thing with brand new tests as they’re introduced. As for Component Testing, Amir noted that while this is typically done in a virtual DOM that you can’t debug, Cypress now has a beta where you can use the real browser DOM to mount a component and test it in isolation. Much better!

Angie Jones, Senior Director of Developer Relations at Applitools, then helped us understand the dangers of all-too-common visual bugs. Angie walked us through how Applitools Eyes can give your code super powers to find visual bugs, thanks to Visual AI. This talk covered visual component testing, visual testing of dynamic content, accessibility and localization testing, as well as cross-browser/viewport testing using the Ultrafast Grid. Check it out for a great overview of how to improve your visual testing.

The Allure of Adding Tests to Azure DevOps – Bushra Alam

Azure DevOps is a powerful tool, and if you’re curious about it, this talk will help you get started. Bushra Alam, Software Quality Analyst, begins by covering the basics of what Azure, DevOps, and of course Azure DevOps mean.

Azure Pipelines, part of Azure DevOps, is a tool to build, test and deploy your apps. By running tests in the pipeline, we can discover bugs early and deliver faster feedback, with quicker time to market and overall better code quality. Bushra takes us through a live demo that shows how to create a pipeline, run a test and check the results – all automated through Azure, and quickly. She goes on to share some advanced tips for running tests in parallel and utilizing release triggers. Check it out for the whole demo.

Answer The Call – A Digital Learning Transformation Using Model-Based Testing – Greg Sypolt

EverFi is a socially-conscious educational platform with a large number of courses and individualized learner paths. Greg Sypolt joined them as their VP of Quality Assurance to solve a tricky testing challenge they had – with so many different courses and paths for learners to take, traditional testing just couldn’t cover it all, and it would only get worse as EverFi grew.

Greg’s solution was to launch a multi-pronged approach centered around model-based testing. In this eye-opening talk you’ll see the step-by-step approach Greg used to build his models. Cypress and Applitools are critical components of the process, but there’s a lot more to it. This one is hard to sum up in a couple of sentences but is definitely worth watching to get the full story.

Expert Panel: Successful Test Strategies

Stacy Kirk, CEO/Founder of QualityWorks Consulting Group, moderated this great panel with a trio of testing experts. Kristin Jackvony, Principal Engineer – Quality at Paylocity, Alfred Lucero, Senior Software Engineer at Twilio, and Jeff Benton, Staff Software Engineer in Test at RxSaver, share their experiences on a range of issues relevant to test engineers everywhere. Learn about the testing tools they used, tips for incorporating testing into the CI/CD process, and how you can secure that crucial teamwide buy-in for testing. I won’t spoil it, but the parting words from these experts make it clear that the first step toward successful testing is to have a conversation with your team about the value of testing, and then just start. It’s OK to start small with a quick win to get that buy-in quickly.

It’s a (Testing) Trap! – Common Testing Pitfalls and How to Solve Them – Ramona Schwering

How can we avoid trapping ourselves underneath tests that are hard to maintain or, worse, don’t even deliver any value? Ramona Schwering, a developer on the core team at shopware AG, shared her own mistakes here (and yes, her love of Star Wars) to try and make sure you don’t have to make them too. Ramona has worked both as a developer and in testing, so she knows how to speak to both experiences, and this was a very easy-to-follow, relatable talk. She shared three main pain points (or traps) that tests can fall into – they can be slow, they can be tough to maintain, and they can be “Heisen tests” that are so flaky they don’t tell you anything. Check this one out to hear more about traps and solutions and how you can keep your tests simple.

Testing Beyond the DOM – Colby Fayock

Colby Fayock, Developer Advocate at Applitools, kicked off his talk with a game of “UI Gone Wrong,” taking us through some cringeworthy examples of UI bugs from major organizations that probably cost them revenue or customers. You all know the kind of bug – it happens to everyone sometimes, but does it need to? With Cypress and Applitools working together, Colby showed us that you can do better. He walked us through a live demonstration of how you can easily add Applitools to an existing Cypress test, enhancing the browser automation provided by Cypress with Visual AI to catch any visual bugs. Take a look and see how you can take your testing to the next level.

Our Journey to CI/CD: The Trials, Tribulations, & Triumphs – Hector Coronado and Joseph King

As projects get increasingly complex, they get harder to maintain and changes become slower to deploy. That was the issue Hector Coronado and Joseph King were running into as frontend and web application engineers, respectively, at Autodesk. They were working on a React app they had built, the “Universal Help Module,” which provides users with several types of support while appearing in multiple locations with varying layouts and UIs. To keep up with the growing complexity, they set out to build a fast and thorough CI/CD pipeline that would include an automated testing strategy.

Hector and Joseph moved away from manual testing and tried many tools for automated functional and visual testing. In the end, Cypress won big as a free all-in-one testing framework that is fast and open source, and they loved Applitools for its blazing speed, simple Cypress SDK, strong cross-browser capabilities and excellent customer support. They put them together to achieve a dream they used to get buy-in – more coverage with less code! Check out their full journey below.

Practically Testing – Kent C. Dodds

You have limited time in your day – should you write that test or fix that bug? That’s the subhead for this talk by Kent C. Dodds, a JavaScript Engineer and Trainer at Kent C. Dodds Tech LLC. Unlike many of the presentations above, which are filled with awesome code examples and demos, Kent’s talk is intended to be a practical one with relatable examples to get you thinking about one key thing: How do you prioritize?

Kent describes his methodology for understanding what’s truly important to your company and its mission and how you can identify your role in pushing that forward. He also reminds all of us that we’re not simply hired as engineers to write code or tests, but as humans to advance a mission. Watch this video for some really humanizing inspiration and to spark some thoughts about how you can get more out of your day.

Looking for More? Learn about the Future of Mobile Testing

We’ve got you covered with another free event. Our next live Future of Testing: Mobile event takes place on August 10th, and registration is officially open. Check it out and reserve your spot today.

Past Future of Testing: Mobile Events

You can also check out all the videos from our Future of Testing: Mobile event in June here or get a full recap of our Future of Testing: Mobile event from April right here.

Happy testing!

Future of Testing: Mobile Recap – All About Mobile Test Automation
https://applitools.com/blog/future-of-testing-mobile-all-about-mobile-test-automation/ – Fri, 30 Apr 2021

Applitools recently hosted a conference on the future of testing for mobile applications. Check out a recap of the event and watch the recordings.

A few weeks ago, Applitools hosted a conference on the future of testing for mobile applications. Almost 4000 people registered for the event, creating a fun and exciting atmosphere in the chat for each session as well as for the live Q&A that followed. There was a lot to learn and it was a great opportunity to engage with the testing community on such an important topic.

The videos are all available now in our on-demand library and can be watched for free. If you want to skip this recap and dive right in to watch them now, go ahead – I won’t blame you. You can check them all out at the link below.

The Path to Autonomous Testing – Gil Sever

The opening remarks were from Applitools CEO and co-founder Gil Sever. In this ten-minute presentation, Gil delivers a strong primer on what autonomous testing really is – and how machine learning can assist humans and make testing much, much more effective. Tune in for a glimpse at the autonomous future.

On the Same Wavelength: Adding Radio to Your Testing Toolbox – Jason Huggins

Jason Huggins was the opening keynote speaker at the event, and he gave a fascinating talk on where testing is headed. As Jason says, “testing is getting weird,” and is increasingly about things you can’t even see. He argues that it’s time to move past an understanding of testing as just simulating what can be seen and tapped. What mobile testers are ultimately interested in today is the triggering of radio activity. That’s the essence of how your app truly performs, isn’t it?

Jason is a co-creator of Selenium and Appium and the founder of Tapster Robotics, so he knows quite a bit about where testing has been and where it’s going. Check out his talk to hear what he has to say.

Appium 2.0: What’s Next – Sai Krishna, Srinivasan Sekar 

Appium is a very popular test automation framework, and the upcoming release of Appium 2.0 is highly anticipated. Sai and Srinivasan are both contributors to the Appium project as well as lead consultants at ThoughtWorks, and in their presentation you’ll find a preview of what’s coming with Appium 2.0. 

For example, today you need to install a large number of drivers when you install the Appium server – even ones you don’t need. With Appium 2.0, you can install just the ones you need. Another example has to do with bug fixes – a lot of fixes are added to betas, but many people don’t install betas and miss out, so with Appium 2.0 the fixes will be attached to individual drivers and rolled out faster. There will be improved docs and it’ll be easier to build your own plugins… the list goes on.

Catch Sai and Srini’s presentation to learn all about it. And if you’re ready to try it out for yourself, read our blog post on Getting Started with Appium 2.0 Beta.

Coffee Break

You might think there’s not much to recap during a coffee break, but during the first coffee break of the conference the brand-new Test Automation Cookbook was introduced to the world. This is a collection of bite-sized recipes you can use to answer a number of specific and common questions you may have about test automation. This “commercial break” was very well received by the audience.

Mobile App Testing & Release Strategy – Anand Bagmar

Your mindset needs to be mobile-first. That’s how Anand, a Quality Evangelist and Solution Architect at Applitools, opened his talk. He followed that up with an overview of the differences between web and mobile testing/releasing, including mobile test automation on a local/cloud device lab. Anand explains that even after all our hard work in continuous testing, sometimes visual tests can still come down to a game of manual “spot the difference.” 

Visual AI is a difference-maker there, as Anand explains. He talks about the difference between Visual AI and pixel comparisons and how you can apply it yourselves. Take a look at this talk for a great overview of mobile testing and releasing.

Next Generation Mobile Testing with Visual AI – Adam Carmi

Adam Carmi, a co-founder and CTO of Applitools, picked up where Anand left off with a deeper dive into Visual AI. Adam walks through a live demo of Applitools Eyes so you can see it for yourself. He talks about the huge code reduction when you use Eyes – up to 80% – which also gives you increased coverage and no validation logic to maintain. He backs this up with hard data from a hackathon, highlighting the fact that many testers were completely new to Applitools and were able to pick it up quickly and get some really strong results.

Adam’s talk was full of examples of how Eyes can work in the kinds of scenarios you may be wondering about, including how Eyes deals with different mobile form factors and how it batches together similar errors that can be approved/rejected together. Check it out.

Expert Panel: State of the Mobile Frameworks

This panel gathered together three mobile development experts for a robust discussion of what life is like for developers using different mobile frameworks. Eran Kinsbruner, DevOps Chief Evangelist and Sr. Director, Product Marketing at Perforce Software, Eugene Berezin, iOS Developer at Nordstrom, and Moataz Nabil, Mobile Developer Advocate at Bitrise, shared a lot of great information about the frameworks they use, which included Flutter, Appium, KIF, EarlGrey, and of course XCUITest and Espresso.

The panel was moderated by Justin Ison, Sr. Software Engineer at Applitools. Justin led the panel through a conversation around framework limitations, how to make apps testable, and what could make mobile testing easier. You can check out the whole discussion below. And if you’re curious for a quick comparison, be sure to take a look at a recent writeup on our blog that tackles Appium vs Espresso vs XCUITest.

The Future of Multi-Platform Integration Testing – Bijoya Chatterjee, Rajnikant Ambalpady

Bijoya and Rajnikant work on testing for the new Sony PlayStation 5, giving them a unique outlook on what it takes to deliver strong integration testing across platforms. In this talk, they describe the challenge of having many standalone apps that require automated testing, when there aren’t any off-the-shelf tools that are built to test a PlayStation! They ended up customizing Appium and making use of many other tools in their stack (this might be a good place to mention that Applitools is part of it, which I did not know until I heard Bijoya tell the audience).

They cover the challenges of testing numerous standalone components within apps that must talk to each other, as well as testing across platforms from console to web to mobile. For a discussion of the pros and cons of end-to-end integration testing and much more, be sure to check out this talk.

Let the Robots Test Your Flutter App – Paulina Grigonis, Jorge Coca

It’s not easy to organize code so that it’s A) maintainable and customizable by development teams, and B) still easily understood and readable by business stakeholders. In this presentation, Paulina and Jorge, who are respectively business and technical experts at Very Good Ventures, walk us through a methodology they call the Robot Pattern. This pattern separates the “What” from the “How” of testing and can result in some pretty spiffy code that is easy to read, even for a non-technical user.

Want to learn how to implement this pattern in your own development? Check out their presentation below.

Your Tests Lack Vision: Adding Eyes to Your Mobile Tests – Angie Jones

As humans, we can only pay attention to so much at one time – and that means we miss things, even in plain sight. The closing keynote from Angie Jones makes this clear from the first moments with a great video clip. I won’t spoil it, but it reminded me a lot of another video, so after you watch Angie’s talk, go ahead and take a look at this one too if you want to laugh at yourself.

After helping us all understand our blind spots, Angie provides a lot of great examples of how visual bugs slip through traditional testing processes. She then walks us through a demo of a new app and shows us how Applitools Eyes can help us make sure it’s visually perfect. In the live Q&A, Angie also answers a number of questions around handling multiple viewport sizes or when you have to scroll, and even testing variations like light and dark mode or dealing with pop-up notifications and alerts.

Angie also shared her inspiration behind launching the automation cookbook (hint: it’s making the life of fellow testers easier). If you haven’t taken a look at it yet, be sure to check out the automation cookbook here.

You can see Angie’s full talk below.

Thank You!

And with that (and a few closing remarks from host Joe Colantonio of TestGuild) the Future of Testing Mobile event ended. We want to extend a huge thanks to everyone involved in making this event such a success, from Joe for his incredible hosting to the amazing speakers for sharing their insights to every attendee for adding your voices and presence. The event could not happen without all of you.

If you liked these videos, you might also like our videos from our previous Future of Testing events too – all free to watch. Happy testing!

From Selenium To Robotics with Jason Huggins
https://applitools.com/blog/jason-huggins/ – Mon, 05 Apr 2021

Jason’s team needed a reliable way to test their application across these browsers. So, Jason and two colleagues at ThoughtWorks did research on the available tools. Finding nothing that met their needs, they began writing what became Selenium.

Jason Huggins has combined wicked brilliance, great experience, serendipity and perseverance. Jason is a luminary of software testing. And, he will be one of the key speakers at this week’s Future of Testing Mobile North America Conference, sponsored by Applitools. 

Jason serves today as founder and CEO at Tapster Robotics, but you may know him better as the co-founder and/or creator of amazing software testing tools. Jason co-created Selenium, and he co-created Appium. And, Jason co-founded Sauce Labs. 

Jason has chronicled his experiences in numerous places online. Here is a short guide to some cool recordings. 

Jason Huggins – Starting Selenium

Joe Colantonio and Jason had this great discussion in 2017 about the origins of the Selenium project. You might already know the story. Jason and his team had been developing a time and expense application at ThoughtWorks in 2003.

Back then, ThoughtWorks had a global presence, but anyone outside of headquarters dealt with huge latency issues just to log their timesheets. The round-trip time to add another row to an expense report clearly slowed the work of someone in San Francisco and seemed positively glacial to someone in India. To overcome this limitation, Jason’s team decided to use JavaScript in the browser to do this work – instead of going back to the server.

However, JavaScript behavior had not yet been standardized across browsers. Code Jason wrote would run on Internet Explorer but break on Mozilla. A fix for the Mozilla code might break IE. And updates to both browsers might break everything.

Jason’s team needed a reliable way to test their application across these browsers. So, Jason and two colleagues at ThoughtWorks did research on the available tools. Finding nothing that met their needs, they began writing what became Selenium. Selenium could enter data and click buttons on a series of web pages to run through different test scenarios. And, Selenium could do this across multiple browsers.

Jason talks a bit more about this in his keynote address from the 2011 Selenium Conference.

Test Project Grows

After building the test software, Jason thought he would go back to the time and expense application. But, as people inside ThoughtWorks found out about his work, they wanted to know more about the web application testing tool instead. Other teams wanted to use the tool for their own projects.

Eventually, ThoughtWorks realized that its clients would want this kind of testing tool, and that the test code would aid their projects if it could get easily into clients’ hands. As a result, ThoughtWorks made the test software project open source.

From there, it took five years for Selenium to become a 1.0 product, and by then Jason had long since left ThoughtWorks. In the intervening years, Selenium has dominated much of web application testing – thanks to Jason’s desire to automate application tests back in 2003.

Robotics and Sauce Labs

You can watch Jason’s interview with Tim O’Brien of O’Reilly Media where he talks about the first robot he built to play Angry Birds, which he showed off at the JavaOne conference in San Francisco in 2011. He also talks about founding Sauce Labs, and his experience at Google that led up to joining the Sauce team.

As he discusses his Sauce experience, Jason talks about leaving ThoughtWorks and joining Google. He helped Google build their Selenium farm. This infrastructure would test web apps developed at Google. 

Jason realized that this test infrastructure could reside anywhere on the internet. He also understood that companies no longer needed dedicated test infrastructure. He took this insight and joined the team founding Sauce Labs, which provides that infrastructure in the cloud.

Jason also talks about his experience with robotics. He built what he calls a “bitbeambot” and describes his idea of building a robot that could play Angry Birds. Then, he demonstrates his home-built robot doing just that.

Appium with Dan Cuellar and Jason Huggins

A third great video comes from the 2018 Appium Conference in London. Jason joins Dan Cuellar, the founder of Appium, to discuss how Appium almost did not come to be – and how Jason contributed to the creation of what became Appium.

First, Jason talks about the creation of Selenium Remote Control, Selenium Grid, and Selenium WebDriver. Then he talks about the need for a standard – and how WebDriver got submitted to the W3C for standardization.

Next, Dan talks about the need to test mobile applications running on iOS. As he goes through the initial iOS specification he talks about running into a command:

host.performTaskWithPathArgumentsTimeout()

This command would take JavaScript from a file, apply it to the iOS application, and then take the response and save it to a file. As Jason says, “Ludicrous.” But it fits with the iOS model. Everything in iOS development had to be done in Xcode, except for this command. And this ugly command made the Appium project possible.

Jason and Dan were working together at this point. Jason came up with the name “Appium.” It wasn’t the original idea – but they couldn’t use what they had wanted. So, Appium – Selenium for Apps. And, eventually, Android as well as iOS.

You will find a lot more fun history in Dan and Jason’s talk.

Tapster Robotics

From the O’Reilly video, Jason makes it clear that he loves robotics and the world of makers. He founded Tapster Robotics to help companies that want to validate their user interface physically. 

From his own hand-built robot playing Angry Birds, he now has a Tapster robot that can do the same. Tapster robots can test smartphones and tablets, as well as other push-button devices. The robots can interact with touch screens as well as side buttons.

Jason continues to develop great tools to help people test. And, he continues to participate in the world of software testing.

Get Ready For A Great Talk

Jason joins the Future of Testing Mobile North America conference with great experience, a lot of stories, and his current passion. We at Applitools thank him for joining our conference. We look forward to his presentation.

How To Ace High-Performance Test for CI/CD
https://applitools.com/blog/how-to-ace-high-performance-test-for-ci-cd/ – Thu, 26 Mar 2020

If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters. Priyanka Halder presented her approach...

If you run continuous deployment today, you need high-performance testing. You know the key takeaway shared by our guest presenter, Priyanka Halder: test speed matters.

Priyanka Halder presented her approach to achieving success in a hyper-growth company through her webinar for Applitools in January 2020. The title of her speech sums up her experience at GoodRx:

“High-Performance Testing: Acing Automation In Hyper-Growth Environments.”

Hyper-growth environments focus on speed and agility. Priyanka focuses on the approach that lets GoodRx not only develop but also test features and releases while growing at an exponential rate.

About Priyanka

Priyanka Halder is head of quality at GoodRx, a startup focused on finding all the providers of a given medication for a patient – including non-brand substitutes – and helping over 10 million Americans find the best prices for those medications.  Priyanka joined in 2018 as head of quality engineering – with a staff of just one quality engineer. She has since grown the team 1200% and grown her team’s capabilities to deliver test speed, test coverage, and product reliability. As she explains, past experience drives current success.

Priyanka’s career includes over a dozen years of test experience at organizations ranging from startups to billion-dollar companies. She has extensive QA experience in managing large teams and deploying innovative technologies and processes, such as visual validation, test stabilization pipelines, and CI/CD. Priyanka also speaks regularly at testing and technology conferences. She accepted invitations to give variations of this particular talk eight times in 2019.

One interesting note: she says she would like to prove to the world that 100% bug-free software does not exist.

Start With The Right Foundation

Three Little Pigs

Priyanka, as a mother, knows the value of stories. She sees the story of the Three Little Pigs as instructive for anyone trying to build a successful test solution in a hyper-growth environment. Everyone knows the story: three pigs each build their own home to protect themselves from a wolf. The first little pig builds a straw house in a couple of hours. The second little pig builds a home from wood in a day. The third little pig builds a solid infrastructure of brick and mortar – and that takes a number of days. When the wolf comes to eat the pigs, he can blow down the straw house and the wood house, but the solid house saves the pigs inside.

Priyanka shares from her own experience: she encounters many wolves in a hyper-growth environment. The only safeguard comes from building a strong foundation. Priyanka describes a hyper-growth environment and how high-performance testing works. She describes the technology and team needed for high-performance testing. And, she describes what she delivered (and continues to deliver) at GoodRx.

Define High-Performance Testing

So, what is high-performance testing?

Fundamentally, high-performance testing maximizes quality in a hyper-growth startup. To succeed, she says, you must embrace the ever-changing startup mentality, be one step ahead, and constantly provide high-quality output without being burned out.

Agile startups share many common characteristics:

  • Chaotic – you need to be comfortable with change
  • Less time – all hands on deck all the time for all the issues
  • Fewer resources – you have to build a team where veterans are mentors and not enemies
  • Market pressure – teams need to understand and assess risk
  • Reward – do it right and get some clear benefits and perks

If you do it right, it can lead to satisfaction. If you do it wrong, it leads to burnout. So – how do you do it right?

Why High-Performance Testing?

Leveraging data collected by another company, Priyanka showed how the technology for app businesses has changed drastically over the past decade. These differences include:

  • Scope – instead of running a dedicated app, or on a single browser, today’s apps run on multiple platforms (web app and mobile)
  • Frequency – we release apps on demand (not annually, quarterly, monthly or daily)
  • Process – we have gone from waterfall to continuous delivery
  • Framework – we used to use single-stack, on-premise software; today we are using open source, best-of-breed, cloud-based solutions for developing and delivering.

The assumptions of “test last” that may have worked a decade back can’t work anymore. So, we need a new paradigm.

How To Achieve High-Performance Testing

Priyanka talked about her own experience. Among other things, teams need to know that they will fail early as they try to meet the demands of a hyper-growth environment. Her approach, based on her own experiences, is to ask questions:

  • Does the team appreciate that failures can happen?
  • Does the team have inconsistencies? Do they have unclear requirements? Set impossible deadlines? Use waterfall while claiming to be agile? Note those down.

Once you know the existing situation, you can start to resolve contradictions and issues. For example, you can use a mind map to visualize the situation. You can divide issues and focus on short term work (feature team for testing) vs. long term work (framework team). Another important goal – figure out how to find bugs early (aka Shift Left). Understand which tools are in place and which you might need. Know where you stand today vis-a-vis industry standards for release throughput and quality. Lastly, know the strength of your team today for building an automation framework, and get AI and ML support to gain efficiencies.

Building a Team

Next, Priyanka spoke about what you need to build a team for high-performance testing.

In the past, we used to have a service team. They were the QA team and had their own identity. Today, we have true agile teams, with integrated pods where quality engineers are the resource for their group and integrate into the entire development and delivery process.

So, in part you need skills. You need engineers who know test approaches that can help their team create high-quality products. Some need to be familiar with behavior-driven development or test-driven development. Some need to know the automation tools you have chosen to use. And, some need to be thinking about design-for-testability.

One huge part of test automation involves framework. You need a skill set familiar with building code that self-identifies element locators, builds hooks for automation controls, and ensures consistency between builds for automation repeatability.

Beyond skills, you need individuals with confidence and flexibility. They need to meld well with the other teams. In a truly agile group, team members distribute themselves through the product teams as test resources. While they may connect to the main quality engineering team, they still must be able to function as part of their own pod.

Test Automation

Priyanka asserts that good automation makes high-performance testing possible.

In days gone by, you might have bought tools from a single vendor. Today, open source provides a rich pool of automation solutions. Open source generally has lower maintenance costs, lets you ship faster, and expands more easily.

Open source tools come with communities of users who document best practices for using those tools. You might even learn best-practice processes for integrating with other tools. The communities give you valuable lessons so you can learn without having to fail (or learn from the failures of others).

Priyanka describes aspects of software deployment processes that you can automate.  Among the features and capabilities you can automate:

  • Assertions on Action
  • Initialization and Cleanup
  • Data Modeling/Mocking
  • Configuration
  • Safe Modeling Abstractions
  • Wrappers and Helpers
  • API Usage
  • Future-ready Features
  • Local and Cloud Setups
  • Speed
  • Debugging Features
  • Cross Browser
  • Simulators/Emulators/Real Devices
  • Built-in reporting or easy to plug in

Industry Standards

You can measure all sorts of values from testing. Quality, of course. But what else? What are the standards these days? And what are typical test times for test automation?

Priyanka shares data from Sauce Labs about standards. Sauce surveyed a number of companies and discussed benchmark settings for four categories: test quality, test run time, test platform coverage, and test concurrency. The technical leaders at these companies set some benchmarks they thought aligned with best-in-class industry standards.

In detail:

  • Quality – pass at least 90% of all tests run
  • Run Time – average of all tests run two minutes or less
  • Platform Coverage – tests cover five critical platforms on average
  • Concurrency – at peak usage, tests utilize at least 75% of available capacity

Next, Priyanka shared the data Sauce collected from the same companies about how they fared against the average benchmarks discussed.

  • Quality – 18% of the companies achieved 90% pass rate
  • Run time – 36% achieved the 2 minute or less average
  • Platform coverage – 63% reached the five-platform coverage mark
  • Concurrency – 71% achieved the 75% utilization mark
  • However, only 6.2% of the companies achieved the mark on all four.

Test speed became a noticeable issue. While 36% ran on average in two minutes or faster, a large number of companies exceeded five minutes – more than double.

Investigating Benchmarks

These benchmarks are fascinating – especially run time – because test speed is key to faster overall delivery. The longer you have to wait for testing to finish, the slower your dev release cycle times.

Sadly, lots of companies think they’re acing automation, but so few are meeting key benchmarks. Just having automation doesn’t help. It’s important to use automation that helps meet these key benchmarks.

Another area worth investigating involves platform coverage. While Chrome remains everyone’s favorite browser, not everyone is on Chrome.  Perhaps 2/3 of users run Chrome, but Firefox, Safari, Edge and others still command attention. More importantly, lots of companies want to run mobile, but only 8.1% of company tests run on mobile. Almost 92% of companies run desktop tests and then resize their windows for the mobile device.  Of the mobile tests, only 8.9% run iOS native apps and 13.2% run Android native apps. There’s a gap at a lot of companies.

GoodRx Strategies

Priyanka dove into the capabilities that allow GoodRx to solve the high-performance testing issues.

Test In Production

The first capability GoodRx uses is a Shift Right approach that moves testing into the realm of production.

Production testing? Yup – but it’s not spray-and-pray. GoodRx’s approach includes the following:

  • Feature Flag – Test in production. Ship fast, test with real data.
  • Traffic Allocation – gradually introduce new features and empower targeted users with data. Hugely important for finding corner cases without impacting the entire customer base.
  • Dog Fooding – use a CDN like Fastly to deploy, route internal users to new features.

The net result: this approach reduces overhead, lets the app get tested with real data sets, and identifies issues without impacting the entire customer base. So, the big release becomes a set of small releases on a common code base, tested by different people to ensure that the bulk of your customer base doesn’t get a rude awakening.
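
To make the gradual-rollout idea concrete, here is a minimal, hypothetical sketch of a percentage-based feature-flag gate in Python. The helper names and flag values are illustrative assumptions; GoodRx’s actual implementation (and its CDN-based routing through Fastly) was not shown in the webinar.

```python
import hashlib

# Hypothetical feature-flag gate: hash each user into a stable bucket (0-99)
# so a new feature can be exposed to a small, consistent slice of traffic.
def bucket_for(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100

def feature_enabled(user_id: str, rollout_percent: int, internal_users: set) -> bool:
    if user_id in internal_users:
        return True  # "dog fooding": internal users always see the new feature
    return bucket_for(user_id) < rollout_percent  # gradual traffic allocation

# Example: expose a new flow to internal testers plus 5% of real traffic.
INTERNAL = {"qa-alice", "dev-bob"}
print(feature_enabled("user-12345", rollout_percent=5, internal_users=INTERNAL))
```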

AI/ML

Next, Priyanka talked about how GoodRx uses AI/ML tools to augment her team. These tools make her team more productive – allowing her to meet the quality needs of the high-performance environment.

First, Priyanka discussed automated visual regression – using AI/ML to automate the validation of rendered pages. Here, she talked about using Applitools – as she says, the acknowledged leader in the field – and how GoodRx puts it to work.

At GoodRx, there may be one page used for a transaction. But GoodRx supports hundreds of drugs in detail, and a user can dive into the pages that describe the indications and cautions for individual medications. To ensure that those pages remain consistent, GoodRx validates them using Applitools. Validating them manually would take six hours; Applitools validates them in minutes and allows GoodRx to release multiple times a day.
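
For readers who want to see what such a check looks like in code, here is a minimal sketch using the Applitools Eyes Selenium SDK for Python (pip install selenium eyes-selenium). The URLs, names, and viewport below are illustrative assumptions, not GoodRx’s actual suite.

```python
from selenium import webdriver
from applitools.selenium import Eyes, Target

# Illustrative list of pages to validate visually.
PAGES = [
    "https://example.com/drug/a",
    "https://example.com/drug/b",
]

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"

try:
    eyes.open(driver, "Drug pages", "Visual regression sweep",
              {"width": 1200, "height": 800})
    for url in PAGES:
        driver.get(url)
        # One Visual AI checkpoint per page, compared against the approved baseline.
        eyes.check(url, Target.window().fully())
    eyes.close()  # raises if unresolved visual differences were found
finally:
    eyes.abort_if_not_closed()  # clean up if the run errored before close()
    driver.quit()
```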

To show this, Priyanka used an example: a kids’ cartoon with visual differences between two versions. Then she showed what happens if you do a normal, pixel-based image comparison.

A bit-wise comparison will fail too frequently.  Using the Applitools AI system, they can set up Applitools to look at the images that have already been approved and quickly validate the pages being tested.

Applitools can complete a full visual regression of 350 test cases – about 2,500 checks – in less than 12 minutes. Manually, it takes six hours.

Priyanka showed the kinds of real-world bugs that Applitools uncovered: one was a screenshot from her own site, GoodRx; a second from amazon.com; and a third from macys.com. She showed examples with corrupt displays – ones that Selenium alone could not catch.

ReportPortal.io

Next, Priyanka moved on to ReportPortal.io. As she says, when you ace automation, you need to know where you stand. You need to build trust around your automation platform by showing how it is behaving. ReportPortal.io collects all your data – test times, bugs discovered, and so on – and shows how tests are running at different times of the day. Another display shows the flakiest tests and longest-running tests to help the team release seamlessly and improve their statistics.

Any failed test case can link its test results log directly into the ReportPortal.io user interface.

GoodRx uses behavior-driven development (BDD), and their BDD approach lets them describe the behavior they want for a given feature – how it should behave in good and bad cases – and ensure that those cases get covered.
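
GoodRx’s actual feature files weren’t shown in the webinar, but a generic, hypothetical Behave example gives the flavor: the desired behavior is written in Gherkin, and Python step definitions bind each line to browser actions. All names and locators below are made up for illustration, and context.browser/context.base_url assume an environment.py that sets them up.

```python
# steps/search_steps.py -- hypothetical Behave step definitions.
# A matching Gherkin scenario (features/search.feature) might read:
#
#   Feature: Drug price search
#     Scenario: A visitor looks up prices for a drug
#       Given the visitor is on the home page
#       When they search for "ibuprofen"
#       Then they should see a list of prices
#
from behave import given, when, then

@given("the visitor is on the home page")
def step_open_home(context):
    context.browser.get(context.base_url)

@when('they search for "{drug}"')
def step_search(context, drug):
    box = context.browser.find_element("name", "query")  # illustrative locator
    box.send_keys(drug)
    box.submit()

@then("they should see a list of prices")
def step_see_prices(context):
    rows = context.browser.find_elements("css selector", ".price-row")
    assert rows, "expected at least one price result"
```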

High-Performance Testing – The Burn Out

Priyanka made it clear that high-performance environments take a toll on people. Everywhere.

She showed a slide referencing a blog by Atlassian talking about work burnout symptoms – and prevention. From her perspective, the symptoms of workplace stress include:

  • Being cynical or critical at work
  • Dragging yourself to work and having trouble getting started
  • Being irritable or impatient, lacking energy, finding it hard to concentrate, headaches
  • Lack of satisfaction from achievement
  • Using food, drugs or alcohol to feel better or simply not to feel

So, what should a good team lead do when she notices signs of burnout? Remind people to take steps to prevent burnout. These include:

  • Avoid unachievable deadlines. Don’t take on too much work. Estimate, add buffer, add resource.
  • Do what gives you energy – avoid what drains you
  • Manage digital distraction – the grass will always be greener on the other side
  • Do something outside your work – Engage in activities that bring you joy
  • Say no to too many projects – gauge your bandwidth and communicate
  • Make self-care a priority – meditation/yoga/massage
  • Have a strong support system – talk to your family and friends, seek help
  • Unplugging for short periods helps immensely

The point here is that hyper-growth environments can take a toll on everyone – employees, managers. Unrealistic demands can permeate the organization. Use care to make sure that this doesn’t happen to you or your team.

GoodRx Case Study

Why not look at Priyanka’s direct experience at GoodRx? Her employer, GoodRx, provides price transparency for drugs. GoodRx lets individuals search for drugs they might need or use for various conditions. Once an individual selects a drug, GoodRx shows the prices for that drug at various locations so they can find the best price.

The main customers are people who don’t have insurance or have high-deductible insurance. In some cases, GoodRx offers coupons to keep the prices low. GoodRx also provides GoodRx Care – a telemedicine consultation system – to help answer patient questions about drugs. Rather than a visit to a doctor, a GoodRx Care consultation costs anywhere between $5 and $20.

Because the GoodRx web application provides high value for its customers, often with high demand, the app must maintain proper function, high performance, and high availability.

Set Goals

The QA goals Priyanka designed needed to meet the demands of this application. Her goals included:

  • A distributed QA team providing 24/7 QA support
  • A dedicated SDET team that specializes in test automation
  • A robust framework that will make any POC super simple (plug and play)
  • A test stabilization pipeline using Travis
  • 100% automation support to reduce regression time by 90%

Build a Team

As a result, Priyanka needed to hire a team that could address these goals. She showed the profile she developed on LinkedIn to find people that met her criteria – dev-literate, test-literate engineers who could work together as a team and function successfully. Test automation and coding abilities rose to the top of the requirements.

Build a Tech Stack

Next, Priyanka and her team invested in tech stack:

  • Python and Selenium WebDriver
  • Behave for BDD
  • Browserstack for a cloud runner
  • Applitools for visual regression
  • Jenkins/Travis and Google Drone for CI
  • Jira, TestRail for documentation

The CI/CD success criteria came down to four requirements:

  • Speed and parallelization
  • BDD for easy debug and read
  • Cross-browser cross-device coverage in CICD
  • Visual validation

Set QA expectations for CI/CD testing

Finally, Priyanka and her team had to set expectations for testing.  How often would they test? How often would they build?

QA for CI/CD means that test and build become asynchronous. Regardless of the build state:

  • Hourly: QA runs 73 tests against the latest build to sanity-check the site.
  • On build: any new build runs 6 cross-browser tests and makes sure all critical business paths get covered.
  • Nightly: 300 regression tests run on top of the other tests.

Some of these were starting points, but most got refined over time.

Priyanka’s GoodRx Quality Timeline

Next, Priyanka talked about how her team grew from the time she joined until now.

She started in June 2018. At that point, GoodRx had one QA engineer.

  • In her first quarter, she added a QA Manager, QA Analyst, and a Senior SDET. They added offshore resourcing to support releases.
  • By October 2018 they had fully automated P0/P1 tests. Her team had added Spinnaker pipeline integration. They were running cross-browser testing with real mobile device tests.
  • By December 2018 she added two more QA Analysts and one more SDET. Her team’s tests fully covered regression and edge cases.
  • And, she pressed on. In early 2019, they had built automation-driven releases. They had added Auth0 support – her team was hyper-productive.
  • Then, she discovered her team had started to burn out. Two of her engineers quit. This was an eye-opening time for Priyanka. Her lessons about burnout came from this period. She learned how to manage her team through that difficult time.

By August 2019 she had the team back on an even keel and had hired three QA engineers and one more SDET.

And, in November 2019 they achieved 100% mobile app automation support.

GoodRx Framework for High-Performance Testing

Finally, Priyanka gave a peek into the GoodRx framework, which helps her team build and maintain test automation.

The browser base class provides browser access for test automation. Using the browser base class eliminates the need to embed raw Selenium calls, such as clicks, in the tests themselves.

The page class simplifies web element location: the page class structure assigns a unique XPath to each web element, giving the automation clean, well-named locators to work with.
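
Priyanka did not share GoodRx’s framework source, but a minimal sketch in Python/Selenium shows the shape of the idea: a browser base class that hides raw Selenium calls, and a page class that pins each element to one clean XPath. The class and locator names are illustrative assumptions.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class BrowserBase:
    """Wraps raw Selenium calls so tests never touch the driver directly."""
    def __init__(self, driver):
        self.driver = driver

    def open(self, url):
        self.driver.get(url)

    def click(self, xpath):
        self.driver.find_element(By.XPATH, xpath).click()

    def type(self, xpath, text):
        field = self.driver.find_element(By.XPATH, xpath)
        field.clear()
        field.send_keys(text)

class SearchPage(BrowserBase):
    """Each element carries one well-named, unique XPath."""
    SEARCH_BOX = "//input[@name='query']"        # illustrative locators
    SEARCH_BUTTON = "//button[@type='submit']"

    def search_for(self, term):
        self.type(self.SEARCH_BOX, term)
        self.click(self.SEARCH_BUTTON)

# Usage: SearchPage(webdriver.Chrome()) gives a test one readable entry point.
```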

The element wrapper class allows for behaviors like lazy loading.  Instead of having to program exceptions into the test code, the element wrapper class standardizes interaction between the browser under test and the test infrastructure.
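
Here is a small sketch of what such an element wrapper might look like using Selenium’s explicit waits; it illustrates the lazy-loading idea and is not GoodRx’s actual class.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class Element:
    """Standardizes interaction so tests don't hand-roll waits or exception handling."""
    def __init__(self, driver, xpath, timeout=10):
        self.driver = driver
        self.xpath = xpath
        self.timeout = timeout

    def _find(self):
        # Lazily locate the element on every interaction, waiting for it to appear.
        return WebDriverWait(self.driver, self.timeout).until(
            EC.presence_of_element_located((By.XPATH, self.xpath))
        )

    def click(self):
        WebDriverWait(self.driver, self.timeout).until(
            EC.element_to_be_clickable((By.XPATH, self.xpath))
        ).click()

    @property
    def text(self):
        return self._find().text
```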

Finally, for every third-party application or tool that integrates using an SDK, like Applitools, GoodRx deploys an SDK wrapper. As one of her SDETs figured out, an SDK change from a third party can mess up your test behavior, and the wrapper guards against that. Using a wrapper is a good practice for handling situations when the service you use encounters something unexpected.
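
As an illustration of that wrapper idea (not GoodRx’s code), the sketch below routes every Applitools Eyes call through one thin class, so an upstream SDK change only has to be absorbed in one place.

```python
from applitools.selenium import Eyes, Target

class VisualChecker:
    """Thin wrapper: tests call visual_check(), never the Eyes SDK directly."""
    def __init__(self, api_key, app_name):
        self._eyes = Eyes()
        self._eyes.api_key = api_key
        self._app_name = app_name

    def start(self, driver, test_name):
        self._eyes.open(driver, self._app_name, test_name)

    def visual_check(self, label):
        # If the SDK's check API changes, only this one method needs updating.
        self._eyes.check(label, Target.window())

    def finish(self):
        try:
            self._eyes.close()
        finally:
            self._eyes.abort_if_not_closed()
```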

The framework results in a more stable test infrastructure that can rapidly change to meet the growth and change demands of GoodRx.

Conclusions

Hyper-growth companies put demands on their quality engineers to achieve quickly. Test speed matters, but it cannot be achieved consistently without investment. Just as Priyanka started with the story of the Three Little Pigs, she made clear that success requires investment in automation, people, AI/ML, and framework.

