Shift Left Archives - Automated Visual Testing | Applitools
https://applitools.com/blog/tag/shift-left/
Applitools delivers the next generation of test automation, powered by the AI-assisted computer vision technology known as Visual AI.
Fri, 28 Apr 2023

Test Automation Video Winter Roundup: September – December 2022
https://applitools.com/blog/test-automation-video-winter-roundup-september-december-2022/
Mon, 09 Jan 2023

Get all the latest test automation videos you need right here. All feature test automation experts sharing their knowledge and their stories.

The post Test Automation Video Winter Roundup: September – December 2022 appeared first on Automated Visual Testing | Applitools.

Applitools minions in winter

Check out the latest test automation videos from Applitools.

We hope you got to take time to rest, unplug, and spend time with your loved ones to finish out 2022 with gratitude. I have been incredibly appreciative of the learning opportunities and personal growth that 2022 offered. Reflecting on the past quarter here at Applitools, we’ve curated our latest videos from some amazing speakers. If you missed any videos while away on holiday or finishing off tasks for the year, we’ve gathered the highlights for you in one spot.

ICYMI: Back in November, Andrew Knight (a.k.a. the Automation Panda) shared the top ten Test Automation University courses.

Cypress vs. Playwright: The Rematch

One of our most popular series is Let the Code Speak, where we compare testing frameworks in real examples. In our rematch of Let the Code Speak: Cypress vs. Playwright, Andrew Knight and Filip Hric dive deeper into how Cypress and Playwright work in practical projects. Quality Engineer Beth Marshall moderates this battle of testing frameworks while Andy and Filip compare the two in the areas of developer experience, finding selectors, reporting, and more.

Video preview of Cypress vs Playwright: The Rematch webinar

Automating Testing in a Component Library

Visual testing components allows teams to find bugs earlier, across a variety of browsers and viewports, by testing reused components in isolation. Software Engineering Manager David Lindley and Senior Software Engineer Ben Hudson joined us last year to detail how Vodafone introduced Applitools into its workflow to automate visual component testing. They also share the challenges and improvements they saw when automating their component testing.

Video preview of Automating Testing in a Component Library webinar

When to Shift Left, Move to Centre, and Go Right in Testing

Quality isn’t limited to the end of the development process, so testing should be kept in mind long before your app is built. Quality Advocate Millan Kaul offers actionable strategies and answers to questions about how to approach testing during different development phases and when you should or shouldn’t automate. Millan also shares real examples of how to do performance and security testing.

Video preview of When to Shift Left, Move Centre, and Go Right in Testing webinar

You, Me, and Accessibility: Empathy and Human-Centered Design Thinking

Inclusive design makes it easier for customers with varying needs and devices to use your product. Accessibility Advocate and Crema Test Engineer Erin Hess talks about the principles of accessible design, how empathy empowers teams and end users, and how to make accessibility more approachable to teams that are newer to it. This webinar is helpful for all team members, whether you’re a designer, developer, tester, product owner, or customer advocate.

Video preview of You, Me, and Accessibility webinar

Erin also shared a recap along with the audience poll results in a follow-up blog post.

Future of Testing October 2022

Our October Future of Testing event was full of experts from SenseIT, Studylog, Meta, This Dot, EVERSANA, EVERFI, LAB Group, and our own speakers from Applitools. We covered test automation topics across ROI measurement, accessibility, testing at scale, and more. Andrew Knight, Director of Test Automation University, concluded the event with eight testing convictions inspired by Ukiyo-e Japanese woodblock prints. Check out the full Future of Testing October 2022 event library for all of the sessions.

Video preview of Future of Testing keynote

Skills and Strategies for New Test Managers

Being a good Test Manager is about more than just choosing the right tools for your team. EasyJet Test Manager Laveena Ramchandani shares what she has learned about succeeding in QA leadership, including how to create a culture that values feedback and communication. This webinar is great for anyone looking to become a Test Manager or anyone who has newly started in the role.

Video preview of Skills and Strategies for New Test Managers

Ensuring a Reliable Digital Experience This Black Friday

With so much data and so many combinations of state, digital shopping experiences can be challenging to test. Senior Director of Product Marketing Dan Giordano talks about how to test your eCommerce application to prioritize coverage on the most important parts of your application. He also shares some common shopper personas to help you start putting together your own user scenarios. The live demo shows how AI-powered automated visual testing can help retail businesses in the areas of visual regression testing, accessibility testing, and multi-baseline testing for A/B experiments.

Video preview of Ensuring a Reliable Digital Experience webinar

Dan gave a recap and went a little deeper into eCommerce testing in a follow-up blog post.

Cypress, Playwright, Selenium, or WebdriverIO? Let the Engineers Speak!

Our popular Let the Code Speak webinar series focuses primarily on differences in syntax and features, but it doesn’t really cover how these frameworks hold up in the long term. In our new Let the Engineers Speak webinar, we spoke with a panel of engineers from Mercari US, YOOBIC, Hilton, Q2, and Domino’s about how they use Cypress, Playwright, Selenium, and WebdriverIO in their day-to-day operations. Andrew Knight moderated as our panelists discussed what challenges they faced and whether they ever switched from one framework to another. The webinar gives a great view into the factors that go into deciding what tool is right for the project.

Video preview of Let the Engineers Speak webinar

More on the way in 2023!

We’ve got even more great test automation content coming this year. Be sure to visit our upcoming events page to see what we have lined up.

Check out our on-demand video library for all of our past videos. If you have any favorite videos from this list or from 2022, you can let us know @Applitools. Happy testing!

Getting Involved with DevOps as a Testing Specialist
https://applitools.com/blog/getting-involved-devops-testing-specialist/
Fri, 03 Dec 2021

As a testing specialist, how can you get involved with DevOps? How can you learn the skills, and how can you add value with the skills you already have? Here are a few tips that have helped me.

The post Getting Involved with DevOps as a Testing Specialist appeared first on Automated Visual Testing | Applitools.

Over the past several years, we’ve heard a lot about “shift-left” and “shift-right” testing. We’ve seen the benefits of having testing specialists involved in testing-related activities on both sides of the “DevOps loop.” Inspired by Dan Ashby’s Continuous Testing graphic, Janet Gregory and I came up with our own visualization of a continuous testing loop (figure 1). 

Testing activities around the continuous DevOps loop
Figure 1, Our visualization of continuous testing. Copyright 2021, Janet Gregory & Lisa Crispin

Testing activities happen throughout the infinite cycle of software development. Many testers now find it natural to test feature ideas during feature planning discussions. More and more teams guide development with business- and technology-facing tests, using practices like test-driven development and behavior-driven development. And, more teams embrace the idea of working together – including testers – on testing activities that happen on the right side of the loop.

The whole team approach to quality is at the heart of DevOps. We don’t throw release candidate artifacts over the wall to an operations team. We work together with SREs (site reliability engineers) and platform engineers to take care of our production environment and help prevent customer pain. Many testing specialists hesitate to get involved on the Ops side of DevOps. They aren’t familiar with the tools used for monitoring, alerting and observability. They may never have worked with operations specialists. And who wants to get paged in the middle of the night?!

I was fortunate to work with system administrators doing operations activities for much of my career. I enjoyed configuring continuous integration, and learning how to automate deployments to test environments. I discovered the value of learning how our product works by examining log data. As platforms progressed to virtual machines and then to the cloud, infrastructure as code became a thing, and new technology enabled gathering and analyzing huge amounts of data about our systems, I got both more interested and intimidated! What really caught my attention was the advent of observability. Capturing enough events so that we can ask questions about our production system that we didn’t anticipate having to ask makes so much sense to me. We testers are great at asking questions!

As a testing specialist, how can you get involved with DevOps? How can you learn the skills, and how can you add value with the skills you already have? Here are a few tips that have helped me.

Build Relationships 

When I first read Katrina Clokie’s excellent book, A Practical Guide to Testing in DevOps, I was struck by how much of the book she devotes to the importance of building bridges not only within your team but to other teams in the company. Collaboration is the secret sauce. I’ve worked to get over my shyness to reach out to people who can help me as well as my team. 

Here’s a recent example. I started a new job this year and found that people on other teams were quite willing to schedule regular 1:1 meetings with me. I took advantage! This paid off right away. We needed to migrate the deployment pipeline for one of our products to a new infrastructure. I had just learned about this new infrastructure from one of the platform team managers in our 1:1. I scheduled another meeting with him to ask about the potential risks and learn where we should focus our testing. This helped me put together an effective test strategy. I worked with a developer teammate and an SRE (aka platform engineer) embedded in our team to do the testing and we confidently completed the migration.

As part of this first project, I joined a weekly “DevOps refinement” meeting with developers from my team, our embedded SRE, and others involved with platform infrastructure. The monitoring and alerting tools used by our organization were all new to me. Getting to know the people who could help me learn about them was a huge help since we are starting to implement some observability in our product. This is a huge area of interest for me, and building these relationships has given me a step up in learning about it. Which leads me to…

Offer to Help

If you hear of a DevOps-related initiative such as a migration to a new infrastructure, or creating dashboards to monitor a new feature, put your hand up to help! On my current team, we now write user stories to create monitoring and observability dashboards as well as alerts so that we can watch new features in production. If the data looks bad, we can turn the feature flag off while we investigate. I make sure to get involved with those stories, so I can learn the tools we use for that work. 

Accepting help seems obvious, but it could escape your radar, so be on the alert for it. A platform engineer specializing in one of our tools used to have a weekly “office hour” to help people learn it but I never seemed to make time for that. Everyone else did the same, so he quit offering it. Recently, he mentioned that he’d be happy to have a weekly session with a couple people on my team to build dashboards that would give us observability into some crucial parts of our product. Well, heck, yes! I scheduled 30 minutes with him, our embedded SRE, and one of my developer teammates, to meet weekly. When we need more time, we schedule it. 

Help Establish a Practice

DevOps and observability are still new to most software organizations. If you are a testing practitioner, you may think, “I don’t know much about that even though I am interested, and I cannot help build those practices across the organization.” If you think that, you may be wrong. Small nudges can have big impacts.

At an earlier job, our quality organization was under the same leadership as monitoring and observability, and I had an opportunity to help build an observability practice. It was a much larger engineering organization than I was used to, with about 40 teams. This was at the start of the pandemic, so it was all remote. I started researching what was happening on teams across the organization related to monitoring and observability. I put my findings on a big mind map, and then I met with individuals on different teams to go over it. Two good things happened: they expressed surprise at learning what other teams were doing, and I learned what their team was doing as they explained it to me.

I wondered what I could do with that information. I started a page on our engineering organization wiki with a table of what all the teams were doing related to observability. I captured who was leading each effort, what tools they were using, and what their results were. Some teams did proofs of concept with OpenTelemetry and Honeycomb. Other teams were working in Elasticsearch and Kibana. Still others were using Snowflake.

I also started a Slack channel dedicated to observability practices, and socialized the wiki page. Individuals on different teams took the initiative to update the wiki page with their observability-related activities. The teams started talking to each other. A teammate and I started holding observability practice sessions where people could share their experiences. When other teams wanted to try to build in observability, they had access to people with experience who could help them get started.

Is it a tester’s job to start a community of practice around DevOps activities? Well, why not? Ask questions, build relationships, bring the right people together – these are our tester super-powers. We need those relationships because we know we cannot test everything prior to release. We know that in these days of complex systems in the cloud, our test environments will not look like production. We know that we cannot know what our customers will do – unless we start looking at the production use data. 

We testers have work to do on both sides of that DevOps loop. By engaging in the entire development cycle, we can help to delight our customers. 

Shifting Accessibility Testing to the Left
https://applitools.com/blog/shifting-accessibility-testing-to-the-left/
Tue, 31 Aug 2021

Rather than reacting to accessibility issues at the end of a project, proactively discuss accessibility as early as possible and get buy-in right from the start.

The post Shifting Accessibility Testing to the Left appeared first on Automated Visual Testing | Applitools.

accessibility key on a keyboard

15% of the world’s population, about one billion people, live with some form of disability. As we now live in a digital era where everything can be accessed online, we need to make sure that the features we provide are accessible to everyone, including those with disabilities. Accessibility testing is definitely not something new; it’s something we’ve been trying to do for many years now. Yet the number of inaccessible websites we encounter is still very high.

The Accessibility Problem

Speaking with different individuals, it seems there’s still a lack of buy-in from various teams to prioritise designing, developing and testing for accessibility. We’ve all heard about the infamous Domino’s accessibility case, and regulations exist that make organisations responsible for ensuring their products are accessible to people with disabilities. However, according to the WebAIM Million, WebAIM’s annual accessibility analysis of the homepages of the top one million websites, “Users with disabilities would expect to encounter detectable errors on 1 in every 17 home page elements with which they engage.” The research found approximately 51 million distinct accessibility errors. While the results for 2021 are slightly better than in previous years, a significant chunk of work still needs to be done to make the web accessible for everyone.

There are a lot of available tools out there that can help us with accessibility testing. Giridhar Rajkumar has previously blogged about some of the accessibility testing tools we can use (Accessibility Testing: Everything You Need To Know). However, the statistics above show that there’s still a lot of work to be done. In this blog post, I’ll explain why designing for inclusivity wins, why we need to talk about accessibility more, what we can do to create a culture of inclusivity, and how to shift accessibility testing to the left.

Designing for Inclusivity

When we design for inclusivity, everyone wins. Let’s use the image below to explain this concept.

Three images showing people watching a baseball game. The first image shows a non see through fence, second image with the same fence but people have additional boxes to stand on and the last image shows a see through fence.
Everyone can enjoy the baseball game once the barriers have been removed and the fence is see through

What we have here is a baseball game, with a fence that separates the spectators from the field and keeps people from running onto it. Not everyone can see over the fence unless they are tall enough; this represents inequality. In the first image, everyone is given an identical box to stand on, whether they need one or not, to emphasise equality. Some people can now watch the game just fine, but others still can’t.

The second image shows people who are not tall enough asking for additional boxes so they can watch the game. This is equity, but it comes at a price: someone needs to be there to physically hand out boxes, which can be seen as extra business overhead.

To avoid that overhead, we can simply make the fence see-through, as in the last image. It still keeps people off the baseball field, and at the same time everyone can watch the game without anyone handing out boxes. Everyone wins when you think about being inclusive. Rather than a cost overhead, accessibility becomes a revenue generator, because everyone, whether tall or small, in a wheelchair or not, can enjoy the game.

Talking About Accessibility

So why is it that so many teams still think accessibility is a nice-to-have feature rather than embedding it right from the beginning? I think the answer is that we don’t talk about accessibility enough. It’s not that our teams can’t code accessible websites, but that there is a gap in education around the importance of accessibility, which leads to accessibility not being considered at all. By educating our teams on the benefits of accessible products, we can get them to understand why we need to include it in the first place.

We need to educate our teams that accessible products are easier to use and increase user satisfaction. The more users a business can reach, the better for the business, since it brings financial benefits. Accessibility is also linked to higher SEO rankings and better quality code. It helps avoid lawsuits and legal complications and encourages independence amongst all our users.

Creating A Culture Of Inclusivity

There are all kinds of areas where we can bring value and make things more accessible, but it starts with creating a culture of inclusivity. The book Inclusive Design for a Digital World by Regine Gilbert, which I highly recommend to everyone, explains how it takes a whole village to make accessible products. It’s not only up to us; it’s up to everyone.

Accessibility tools are here to help us and can provide value, but if there is no change in mindset and culture, then shifting accessibility testing to the left will be difficult, since you’ll get resistance from the business to prioritise it.

Shifting Accessibility Testing to the Left

In order to shift accessibility testing to the left, we need to make sure that accessibility is part of every stage of the development lifecycle, as far left as the planning stages. Rather than reacting to it right at the end, proactively discuss it as early as possible and get buy-in right from the start. Get your project managers involved and your leadership team on board so they know it’s a priority, and educate them that if accessibility is left until the end, the issues can be time-consuming and exponentially more expensive to fix.

Shifting accessibility left is not just about automation. It’s also about having these conversations earlier in the cycle and trying to influence everyone in your team and the business so that everyone is on the same page.

Apart from early conversations and using automation to catch basic accessibility issues, the following are some of the example activities that you can do to shift left:

  • Sit with your UX and Design team and test the design mock-ups. Verify if the general layout is clear and the colours use high contrast.
  • Participate in planning sessions and contribute what scenarios need to be taken into consideration. This is where adopting different personas can help.
  • Plan ahead for how the feature should work with keyboards and screen readers, and share that knowledge with your team.
  • Participate in code reviews if you can. Semantic tags have built-in accessibility features so make sure that these are used appropriately.
  • Try pair testing to catch more accessibility issues earlier on. 
  • Integrate automated tests as part of your continuous integration pipelines so you can provide faster feedback to your team.
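As a sketch of that last point, a CI job that runs automated accessibility checks on every pull request could look like the workflow below. The job name and the `test:a11y` script are placeholders; any axe- or pa11y-based runner wired into your test suite works the same way.

```yaml
name: Accessibility checks

on: [pull_request]

jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm ci
      # "test:a11y" is a placeholder for whatever script runs your
      # accessibility checks, e.g. pa11y-ci or a cypress-axe suite.
      - run: npm run test:a11y
```

Running this on pull requests means accessibility feedback arrives before merge, alongside your other checks, rather than in a late-stage audit.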

When accessibility is embedded right from the start, user experience will improve and be simplified and barriers for people with disabilities will be removed.

More Resources

Accessibility can be overwhelming; there are tons of resources out there with lots of information. The Web Content Accessibility Guidelines contain a lot of valuable information, but it can be daunting to absorb it all, especially if this is your first time.

So, what can you do? Start talking to your teams about accessibility! Find your allies and get that buy-in from the business by explaining the benefits. Remember that it needs a shift in culture and mindset. Once you have that support, if your website is live already, it’s a good idea to first see how accessible it is. Have an accessibility audit performed so you can see where the issues are. Start with a single page and acknowledge that it will take time. You won’t solve all the issues at once but that is ok. Every small accessibility fix that you and your team introduce will make it better for someone out there.

If you want to know more about testing for accessibility, check out this free course from Test Automation University – Test Automation for Accessibility.

How to Setup GitHub Actions with Cypress & Applitools for a Better Automated Testing Workflow
https://applitools.com/blog/github-actions-with-cypress-and-applitools/
Mon, 01 Mar 2021

Applitools provides a number of SDKs that allow you to easily integrate it into your existing workflow. Using tools like Cypress, Espresso, Selenium, Appium, and a wide variety of others,...

The post How to Setup GitHub Actions with Cypress & Applitools for a Better Automated Testing Workflow appeared first on Automated Visual Testing | Applitools.

Applitools provides a number of SDKs that allow you to easily integrate it into your existing workflow. Using tools like Cypress, Espresso, Selenium, Appium, and a wide variety of others, web and native platforms can get automated visual testing coverage with the power of Applitools Eyes.

But what if you’re not looking to integrate it directly to an existing testing workflow because maybe you don’t have one or maybe you don’t have access to it? Or what if you want to provide blanket visual testing coverage on a website without having to maintain which pages get checked?

We’ll walk through how we were able to take advantage of the power of Applitools Eyes and flexibility of GitHub Actions to create a solution that can fit into any GitHub-based workflow.

Note: if you want to skip the “How it Works” and go directly to how to use it, you can check out the Applitools Eyes GitHub Action on github.com. https://github.com/colbyfayock/applitools-eyes-action

What are GitHub Actions?

To start, GitHub Actions are CI/CD-like workflows that you’re able to run right inside of your GitHub repository.

GitHub Actions logs and pull request
Running build, test, and publish on a branch with GitHub Actions

Using a YAML file, we can set up our project to run tests or really any kind of script based on events such as a commit, pull request, or even on a schedule with cron.

name: Tests

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '12'
      - run: npm ci
      - run: npm test
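The example above triggers on pushes and pull requests; the same `on` block can also run a workflow on a schedule. A sketch, with an arbitrary nightly time:

```yaml
on:
  schedule:
    # standard cron syntax: this would run every day at 03:00 UTC
    - cron: '0 3 * * *'
```

Scheduled runs are handy for suites, like full-site visual crawls, that are too slow to run on every commit.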

Setting up simpler workflows like our example above, where we install our dependencies and run our tests, is a great example of how we can automate critical code tasks, but GitHub also gives developers a way to package up complex scripts that reach beyond what a configurable YAML file can do.

Using custom GitHub Actions to simplify complex workflows

When creating a custom GitHub Action, we unlock the ability to use scripting tools like shell and node to a greater extent, as well as the ability to stand up entire environments using Docker, which can allow us to really take advantage of our allotted environment just like we could on any other CI/CD platform.

GitHub Actions Build Logs
Building a container in a GitHub Action

In our case, we want to allow someone to run Applitools Eyes without ever having to think about setting up a test runner.

To achieve this, we can include Cypress (or another test runner) right along with our Action, which would then get installed as a dependency on the workflow environment. This allows us to hook right into the environment to run our tests.

Scaffolding a Cypress environment in a GitHub Action workflow with Docker

Setting up Cypress is typically a somewhat simple process. You can install it locally using npm or yarn where Cypress will manage configuring it for your environment.

How to install cypress
Installing Cypress with npm via cypress.io

This works roughly the same inside of a standard YAML-based GitHub Action workflow. When we use the included environment, GitHub gives us access to a workspace where we can install our packages just like we would locally.

It becomes a bit trickier however when trying to run a custom GitHub Action, where you would want to potentially have access to both the project and the Action’s code to set up the environment and run the tests.

While it might be possible to figure out a solution using only node, Cypress additionally ships a variety of publicly available Docker images, which lets us confidently spin up an environment that Cypress supports. It also gives us a bit more control over how we configure and run our code inside of that environment in a repeatable way.

Because one of our options for creating a custom Action is Docker, we can easily reference one of the Cypress images right from the start:

FROM cypress/browsers:node12.18.3-chrome87-ff82

In this particular instance, we’re spinning up a new Cypress-supported environment with node 12.18.3 installed along with Chrome 87 and Firefox 82.

Installing Cypress and Action dependencies

With our base environment set up, we move on to installing dependencies and starting the script. While we’re using a Cypress-ready image, Cypress itself doesn’t actually come preinstalled.

When installing Cypress, it uses cache directories to store downloaded binaries of Cypress itself. When using Docker and working with different environments, it’s important to have predictable locations for these caches so that we’re able to reference them later.

Cypress cache verification
Cypress verifies that it can correctly identify an installation path from cache

In our Docker file, we additionally configure that environment with:

ENV NPM_CACHE_FOLDER=/root/.cache/npm
ENV CYPRESS_CACHE_FOLDER=/root/.cache/Cypress

When npm and Cypress install, they’ll use those directories for caching.

We also need to set up an entrypoint which tells Docker what script to run to initiate our Action.

Inside of our Dockerfile, we add:

COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

We’ll use a shell script to kick off the installation procedure so we can have a little more control over setting things up.

Finally, inside of our referenced shell script, we include:

#!/bin/sh -l

cd $GITHUB_WORKSPACE

git clone https://github.com/colbyfayock/applitools-eyes-action
cd applitools-eyes-action

npm ci

node ./src/action.js

This first navigates into our GitHub workspace directory, to make sure our session is in the right location.

We then clone down a copy of our custom Action’s code, which is referencing itself at this point, but it allows us to have a fresh copy in our Workspace directory, giving us access to the script and dependencies we ultimately need.

And with our Action cloned within our working environment, we can now install the dependencies of our action and run the script that will coordinate our tests.

Dynamically creating a sitemap in node

Once our node script is kicked off, the first few steps are to find environment variables and gather Action inputs that will allow our script to be configured by the person using it.

But before we can actually run any tests, we need to know what pages we can run the tests on.

To add some flexibility, we added a few options:

  • Base URL: the URL that the tests will run on
  • Sitemap URL: this allows someone to pass in an already sitemap URL, rather than trying to dynamically create one
  • Max Depth: how deep should we dynamically crawl the site? We’ll touch on this a little more in a bit

With these settings, we can have an idea on how the Action should run.
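As a minimal sketch of gathering those inputs: GitHub exposes each action input to the running container as an environment variable named `INPUT_` followed by the uppercased input name. The input names and defaults below are illustrative assumptions, not the Action's exact API.

```javascript
// Hypothetical helper: read an Action input from its INPUT_* environment
// variable, falling back to a default when the workflow didn't set it.
function getInput(name, fallback) {
  const value = process.env[`INPUT_${name.toUpperCase()}`];
  return value === undefined || value === '' ? fallback : value;
}

// Simulate values a workflow file might provide:
process.env.INPUT_BASEURL = 'https://example.com';
process.env.INPUT_MAXDEPTH = '2';

const baseUrl = getInput('baseUrl');
const sitemapUrl = getInput('sitemapUrl');        // not set: crawl instead
const maxDepth = Number(getInput('maxDepth', 1)); // default: top page only

console.log(baseUrl, maxDepth, sitemapUrl);
```

With `baseUrl`, `sitemapUrl`, and `maxDepth` resolved, the script can decide whether to use the provided sitemap or generate one.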

If no sitemap is provided, we have the ability to create one.

RSS xml example
Example sitemap

Specifically, we can use a Sitemap Generator package that’s available right on npm that will handle this for us.

const SitemapGenerator = require('sitemap-generator');

const generator = SitemapGenerator(url, {
  stripQuerystring: false,
  filepath,
  maxDepth
});

generator.start(); // crawl the site and write the sitemap to `filepath`

Once we plug in a URL, Sitemap Generator will find all of the links on our page and crawl the site just like Google would with its search robots.

We need this crawling to be configurable though, which is where Max Depth comes in. We might not want our crawler to drill down link after link indefinitely: not only could that cause performance issues, it could also pull in pages – or entire other websites – that we aren’t interested in including in our sitemap.

Applitools sitemap diagram
Reduced sitemap of applitools.com

With Max Depth, we can tell our Sitemap Generator how deep we want it to crawl. A value of 1 would only scrape the top level page, where a value of 2 would follow the links on the first page, and then follow the links on the second page, to find pages to include in our dynamically generated sitemap.
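
Conceptually, the depth-limited crawl behaves like this simplified sketch (not the package’s actual internals; `getLinks` stands in for fetching a page and extracting its links):

```javascript
// Simplified sketch of a depth-limited crawl; not sitemap-generator's
// real implementation. `getLinks` is a hypothetical helper that returns
// the URLs a given page links to.
function crawl(url, maxDepth, getLinks, depth = 1, seen = new Set()) {
  if (seen.has(url) || depth > maxDepth) return seen;
  seen.add(url); // record this page in the sitemap
  for (const link of getLinks(url)) {
    crawl(link, maxDepth, getLinks, depth + 1, seen);
  }
  return seen;
}
```

With `maxDepth` of 1 only the starting page is recorded; with 2, the pages it links to are recorded as well.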

But at this point, whether dynamically generated or provided to us with the Sitemap URL, we should now have a list of pages that we want to run our tests on.
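
Either way, the sitemap arrives as XML, so turning it into that list of pages can be as simple as pulling out the `<loc>` entries (a rough sketch; a production version would likely use a proper XML parser):

```javascript
// Rough sketch: collect the <loc> URLs from a sitemap XML string
// into the array of pages the tests will run against.
function pagesFromSitemap(xml) {
  return [...xml.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map((m) => m[1]);
}
```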

Running Cypress as a script in node

Most of the time when we’re running Cypress, we use the command line or include it as a script inside of our package.json. Because Cypress is available as a node package, we additionally have the ability to run it right inside of a node script just like we would any other function.

Because we already have our environment configured and we’ve determined the settings we want, we can plug these values directly into Cypress:

const cypress = require('cypress');

const results = await cypress.run({
  browser: cypressBrowser,
  config: {
    baseUrl
  },
  env: {
    APPLITOOLS_APP_NAME: appName,
    APPLITOOLS_BATCH_NAME: batchName,
    APPLITOOLS_CONCURRENCY: concurrency,
    APPLITOOLS_SERVER_URL: serverUrl,
    PAGES_TO_CHECK: pagesToCheck
  },
  headless: true,
  record: false,
});

If you notice in the script though, we’re setting a few environment variables.

The trick is that we can’t directly pass in arguments that we may need inside of Cypress itself, such as settings for Applitools Eyes.

The way we handle this is by creating Cypress environment variables, which end up working roughly the same as passing arguments into the function; we just need to access them slightly differently.

But beyond some Applitools-specific configurations, the important bits here are that we have a basic headless configuration of Cypress, we turn recording off as ultimately we won’t use that, and we pass in PAGES_TO_CHECK which is an array of pages that we’ll ultimately run through with Cypress and Applitools.

Using Cypress with GitHub Actions to dynamically run visual tests

Now that we’re finally to the point where we’re running Cypress, we can take advantage of the Applitools Eyes SDK for Cypress to easily check all of our pages.

describe('Visual Regression Tests', () => {
  const pagesToCheck = Cypress.env('PAGES_TO_CHECK');

  pagesToCheck.forEach((route) => {
    it(`Visual Diff for ${route}`, () => {
      cy.eyesOpen({
        appName: Cypress.env('APPLITOOLS_APP_NAME'),
        batchName: Cypress.env('APPLITOOLS_BATCH_NAME'),
        concurrency: Number(Cypress.env('APPLITOOLS_CONCURRENCY')),
        serverUrl: Cypress.env('APPLITOOLS_SERVER_URL'),
      });

      cy.visit(route);

      cy.eyesCheckWindow({
        tag: route
      });

      cy.eyesClose();
    });
  });
});

Back to the critical part of how we ran Cypress, we first grab the pages that we want to check. We can use Cypress.env to grab our PAGES_TO_CHECK variable which is what we’ll use for Eyes coverage.

With that array, we can simply run a forEach loop, where for every page that we have defined, we’ll create a new assertion for that route.

Applitools Visual Regression Tests in Cypress
Running Applitools Eyes on each page of the sitemap

Inside of that assertion, we open up our Eyes, proceed to visit our active page, perform a check to grab a snapshot of that page, and finally close our Eyes.

With that brief snippet of code, we’re uploading a snapshot of each of our pages up to Applitools, where we’ll now be able to test and monitor our web project for issues!

Configuring Applitools Eyes GitHub Action into a workflow

Now for the fun part, we can see how this Action actually works.

To add the Applitools Eyes GitHub Action to a project, inside of an existing workflow, you can add the following as a new step:

steps:
  - uses: colbyfayock/applitools-eyes-action@main
    with:
      APPLITOOLS_API_KEY: ${{secrets.APPLITOOLS_API_KEY}}
      appName: Applitools
      baseUrl: https://applitools.com

We first specify that we want to use the Action at its current location. Then we pass in a few required input options: an Applitools API Key (defined in a Secret), the name of our app (used to label our tests in Applitools), and finally the base URL that we want our tests to run on (or, as noted before, an optional sitemap URL).

With just these few lines, any time our steps are triggered by our workflow, our Action will create a new environment where it will run Cypress and use Applitools Eyes to add Visual Testing to the pages on our site!
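
For context, a complete workflow file wrapping that step might look like the following – the file name, workflow name, and `push` trigger are all assumptions for illustration:

```yaml
# .github/workflows/visual-tests.yml (hypothetical example)
name: Visual Tests
on: [push]
jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: colbyfayock/applitools-eyes-action@main
        with:
          APPLITOOLS_API_KEY: ${{secrets.APPLITOOLS_API_KEY}}
          appName: Applitools
          baseUrl: https://applitools.com
```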

What’s next for Applitools Eyes GitHub Action?

We have a lot of flexibility with the current iteration of the custom GitHub Action, but it has a few limitations, like requiring an already-deployed environment that the script can access, and generally not having some of the advanced Applitools features customers would expect.

Because we’re using node inside of our own custom environment, we have the ability to provide advanced solutions for the project we want to run the tests on, such as providing an additional input for a static directory of files, which would allow our Action to spin up a local server and perform the tests on it.

As far as adding additional Applitools features, we’re limited only by what the SDK allows, since we can scale our input options and configuration to let customers use whatever features they’d like.

This Action is still in an experimental stage as we try to figure out what direction we ultimately want to take it and what features could prove most useful for getting people up and running with Visual Testing, but even today, this Action can help immediately add broad coverage to a web project with a simple line in a GitHub Action workflow file.
To follow along with feature development, to report issues, or to help contribute, you can check out the Action on GitHub at https://github.com/colbyfayock/applitools-eyes-action.

The post How to Setup GitHub Actions with Cypress & Applitools for a Better Automated Testing Workflow appeared first on Automated Visual Testing | Applitools.

]]>
Whole Team Testing for Continuous Delivery https://applitools.com/blog/whole-team-testing/ Tue, 25 Feb 2020 22:53:51 +0000 https://applitools.com/blog/?p=7011 Lisi makes the key point - success in continuous delivery means shortening feedback loops to learn early.  Every point of development and delivery needs validation.

The post Whole Team Testing for Continuous Delivery appeared first on Automated Visual Testing | Applitools.

]]>

I just completed taking Elisabeth Hocke’s course, The Whole Team Approach to Continuous Testing, on Test Automation University. Wow! Talk about a mind-blowing experience.

Mind-blowing? Well, yes. Earlier in my career, I studied lean manufacturing best practices. In a factory, lean manufacturing focuses on reducing waste and increasing factory productivity.  Elisabeth (who goes by ‘Lisi’) explains how this concept makes sense in a continuous delivery model for software.

The Lean Factory

To learn lean manufacturing, I read a book by Eliyahu Goldratt called The Goal. Written in 1984, the book helped explain the concept of waste as building up an inventory of work-in-process (WIP) that could not be manufactured. Goldratt describes walking into a factory filled with WIP inventory that was just piling up.

As Goldratt describes the problem and wonders how to address it, he goes on a hike with his son’s boy scout troop. One member is always the slowest. When they put that scout at the end, he falls behind everyone else. When they put that scout in the middle, it becomes two groups – the fast group, then a gap, and then this scout and everyone else behind him. Finally, they put this one scout in the front of the troop, and everyone stays together.

Goldratt describes the ‘aha!’ moment of realizing that the slowest process in the factory limits the speed of the factory. Everyone else might be busy building body parts or electronic harnesses, but if some step further in the assembly process can’t consume those parts as quickly as they are made, they build up waste.

Goldratt concluded that the efficient factory designs its process to make products at the pace of the slowest process – that profitable operation creates finished products and not waste.

Continuous Delivery As Lean Factory

Lisi spends the first chapter delving into the definition of continuous testing in the framework of continuous delivery. She quotes industry thought leaders like Bas Dijkstra, Jez Humble, Dave Farley, and Dan Ashby to get to some key ideas – namely, that continuous testing means ensuring that you are testing at every step in the continuous delivery process.  She even shares this image from Dan Ashby:


This diagram shows continuous delivery in a DevOps model with testing everywhere.

Lisi makes the key point – success in continuous delivery means shortening feedback loops to learn early.  Every point of development and delivery needs validation.

  • In planning – how do you test your plans? How do you validate what you’re up to?
  • In branching – how do you make the choice about branch design?
  • During coding – well, is it more than unit tests? And who runs the tests?
  • In the merge – what are you testing?
  • Etc.

She focuses on the key idea that at each step you make choices and want to know if those choices work. Every choice has consequences. Which designs work? Which design choice provides you with optimal results? How do you validate the performance impacts of a certain design choice? All of these questions result in choices you can test.

If you are a test professional and not in this system, you must be wondering what this means for your role. As in – what kind of testing can you do without code to validate? Or, for a developer, how can anyone test code before code completion?

Lisi walks through examples that turn you from coder or tester to scientist. Everything you do becomes a test. You can run large tests or small tests – but test early and learn to get test results early.

Unproductive Work As Waste

I brought up the lean factory because Lisi raises a lean manufacturing concept in her description of work in a continuous integration project. Lisi describes waste as the enemy of continuous deployment.

Any work that fails to deliver value involves waste. In a factory, work that creates unfinished parts faster than the delivery of finished goods creates waste. Usually, you see waste as piled-up inventory. In software development, your processes can also create waste. Usually, though, the waste comes in the form of code in a “dev-complete” state – simply waiting to be tested. But there are other sources of waste – uncoded designs and unmerged code – work that cannot be turned into the next step.

And, with that, we come full circle to the application team. Does your team divide the coding responsibility from the verification skill set? Do you create mini waterfalls? Or, do you build a team that tries to do something different – to reduce waste in processes, to deliver quality in software, and to deliver products more quickly to market at the pace of the team?

Whole Team Testing For Agile Software Delivery

Here, Lisi turns to the Whole Team testing approach, which becomes a way of thinking about making the entire team responsible for delivering a finished product.


Lisi uses this quote from Lisa Crispin and Janet Gregory from their 2009 book, Agile Testing: A Practical Guide for Testers and Agile Teams:

“…everyone involved with delivering software is responsible for delivering high-quality software.”

Lisi gives examples from her own career to back up this idea. Her own team had gotten bogged down in undelivered code, error tickets, and general frustration. They tried the lean approach for software – “stop starting and start finishing.” Everyone pitched in to help – no matter what role they held. People helped code, test, and write automation. People helped the product owner – or another product owner. The team worked to finish what they had started.

As a result, Lisi’s team built up trust and broke down barriers. While expertise might exist in pockets, they realized that delivering software required knowledge sharing and growing as a team. Testers know how to test software – but they cannot be the only people writing tests if the team hopes to deliver effectively. Lisi explained that, once the team had collaborated together once, they were prepared to do it again.

So, Lisi asks us to think about our own teams and how we build and share knowledge. We might have silos. We might only have a single test automation expert. She contrasts that with her teams, which work to share knowledge and focus on getting things done.

Her course offers lots of resources to help you move to a more team-oriented approach. One of those is the set of courses on Test Automation University, which can help you increase your team’s skillset in automated testing.

Organizing Your Team

Lisi spends three chapters talking about organizing your team for success.

Working Solo

First, she looks at organizations where everyone ‘works solo.’ Individuals do their own tasks and try to be as productive as possible. Tremendous amounts of specialization. Plenty of opportunities for waste. Why? Because individuals measure their productivity based on delivery relative to personal productivity goals.


In the world of the individual practitioner, everyone works at different rates. A developer finishes unit of work ‘A’, hands ‘A’ to a tester, and begins working on unit ‘B’. The tester begins testing ‘A’ and finds a bug. When the tester hands it back to the developer, the developer has to context switch back to ‘A’ to revisit the code and remember why the bug occurred in order to fix it. Working solo involves lots of context switching – which introduces waste.

Another common source of waste comes from team imbalance. For example, your team has sufficient numbers of developers to build an app, but you haven’t hired enough testers to ensure that the app behaves as expected on all user platforms. If you constrain individuals in their silos, your team may falsely see itself forced to choose tradeoffs that increase the likelihood that you will release untested code.

If your team runs solo and you reward individuals for their personal productivity, you might be surprised at how the team doesn’t meet all the schedules that matter to you.

Pairing

Next, she looks at ‘pairing.’ Engineers work together as pairs. They can be colocated or work remotely, but they work together. In some cases, they share ideas and divide tasks. In the more explicit cases, no member of the team can both come up with ideas and write code: someone with an idea has to explain it to the other, who actually writes the code. In pairing, the team shares ideas and experience – and learns from each other.


Pairing helps the overall team write code that incorporates more robust and thoughtful design as two minds work together to deliver pieces. And, two people who work together learn from each other. Successful pairing builds trust and team collaboration.

Mobbing

Finally, Lisi looks at ‘mobbing.’ In mobbing, a large group of people comes together to think and create. One person takes the keyboard and acts as the ‘driver.’ The rest of the people suggest ideas and must explain to the driver what they mean, so the driver can build what they are describing.


A mob with a lot of ideas can seem chaotic – but it can also lead to lots of teamwork. If you are deliberate in mobbing and use some thoughtful rules, your mob can be effective. The most important parts:

  • Keep everyone engaged – be in a place where you are either contributing or learning from others
  • Use the “yes, and…” approach from improvisational theater
  • When you have multiple ideas, try both – starting with the person with least experience (remember, it’s about learning)
  • In the mob, everyone is learning – be kind and respectful

When To Use Which Approach

Finally, Lisi goes back to the idea that there is no ‘one size fits all’ to collaboration styles. In some cases, teams can see that certain problems can be divided into solo work. In some, they will gravitate to pairs. And, at times, they will call for a mob.

Her point, though, is that teams must know these approaches exist and have experience with all of them – so the team can find which works best as its default, and know how to use the others when necessary.

The key metrics involve team productivity – not just individual productivity. If you track individual productivity without tracking team waste, you don’t know where you are building inefficiencies. And, once you begin to track team productivity, you start to see which approaches work best for you as a development organization.

One hidden productivity benefit of collaboration comes from interdependence. If only one person understands customer needs, or specific test approaches, workflows depend on that person always being part of the loop. While that may seem to give those people special power, it also constrains them personally: they cannot take a vacation or even a sick day without their absence impacting team productivity.

When individuals share their knowledge, two really good things happen. As teams start tracking the flow of work through the team, they see that the expert can take time off without impacting team productivity. And, for the experts, they can continue to develop their knowledge in different areas so they don’t become a siloed specialist whose quest for new knowledge and expertise becomes blocked by solving the same problem all the time.

Whole Team Testing – Conclusions

Before I get started with my other conclusions, I want to mention the amazing resources. Lisi includes a generous set of links to both paid and free resources you can use to learn more and share with your team. The course serves as a starting point for what could lead to a major change in the way you work.

As I took the course, I realized that ‘Whole Team’ means more than just engineers. Lisi made it clear that everyone in the product delivery cycle plays a role. As team members collaborate and learn, the team can become more productive. To succeed, the team needs a willingness to experiment and tools to measure effectiveness.

Lisi got me to realize that the factory view of waste, which I read about in The Goal, makes sense in a software context as well.  Creating software means creating value. The value for the software maker starts with the value that customers obtain – which means building high-value, defect-free experiences for customers as they use the product.

Similarly, I had not thought about experimentation in ways to get faster team delivery, and that group collaboration styles could impact team productivity. I have worked on projects that tracked team velocity. I could easily see adding this approach to teams in the future – and not just in software development.

Lisi’s point wasn’t simply that collaboration matters during coding. I inferred that we should look for opportunities to use pairing or mobbing – even during design, planning, or release. Her point, as I saw it, was that a rigid approach to team structure limits team productivity. By adding collaboration and work style skills to a team, we increase the opportunities for the team to increase team velocity and productivity.

As always, I include my TAU certificate of completion:

Happy testing!

For More Information

The post Whole Team Testing for Continuous Delivery appeared first on Automated Visual Testing | Applitools.

]]>
Creating a Flawless User Experience, End-to-End, Functional to Visual – Practical Hands-on Session https://applitools.com/blog/cypress-applitools-end-to-end-testing/ Thu, 02 May 2019 08:57:30 +0000 https://applitools.com/blog/?p=4715 Creating and maintaining a flawless and smooth user experience is no small feat. Not only do you need to ensure that the backend and front-end are functioning and appearing as...

The post Creating a Flawless User Experience, End-to-End, Functional to Visual – Practical Hands-on Session appeared first on Automated Visual Testing | Applitools.

]]>
Cypress-Applitools webinar - Gleb Bahmutov and Gil Tayar


Creating and maintaining a flawless and smooth user experience is no small feat.

Not only do you need to ensure that the backend and front-end are functioning and appearing as expected, but you must also verify that this is the case across hundreds (if not thousands) of possible combos of screen size, browser, and operating system.

And if that wasn’t enough – you are deploying and releasing continuously, in a rapidly changing ecosystem of devices, competitors, and technologies.

So how do you keep track of all those moving parts, in real time, in order to prevent functional and UI fails?

In this hands-on session, Gleb Bahmutov (VP Engineering @ Cypress.io) and Gil Tayar (Sr. Architect @ Applitools) demonstrated how you can safeguard your app’s functionality and UI across all digital platforms, with end-to-end tests. They presented — step-by-step — how to write functional tests, which ensure that the application performs user actions correctly, as well as how to write visual tests that guarantee that the application does not suffer embarrassing UI bugs, glitches and regressions.

Watch this practical, hands-on session, and learn how to:

  • Write functional end-to-end tests, while consistently capturing application screenshots for image comparison
  • Add visual regression tests to ensure that the application still appears as expected
  • Analyze visual diffs to determine the root cause of visual bugs
Gil Tayar’s Slide-deck:

Gleb Bahmutov Slide-deck can be found here.

 

Gil’s and Gleb’s GitHub Repo can be found here.

 

Full Webinar Recording:

Additional Resources and Recommended Reading:

— HAPPY TESTING —

 

The post Creating a Flawless User Experience, End-to-End, Functional to Visual – Practical Hands-on Session appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Root Cause Analysis: Found a Bug? We’ll Help You Fix It! https://applitools.com/blog/applitools-root-cause-analysis-found-a-bug-well-help-you-fix-it/ Wed, 05 Dec 2018 09:32:08 +0000 https://applitools.com/blog/?p=3903 I’m pleased to announce that Applitools has released Root Cause Analysis, or RCA for short. This new offering allows you to instantly pinpoint the root cause of visual bugs in...

The post Applitools Root Cause Analysis: Found a Bug? We’ll Help You Fix It! appeared first on Automated Visual Testing | Applitools.

]]>

I’m pleased to announce that Applitools has released Root Cause Analysis, or RCA for short. This new offering allows you to instantly pinpoint the root cause of visual bugs in your front-end code. I’d like to explain why RCA matters to you, and how it’ll help you in your work.


https://dilbert.com/strip/2015-04-24

Well, maybe RCA doesn’t find THE root cause. After all, all software bugs are created by people, as the Dilbert cartoon above points out.

But when you’re fixing visual bugs in your web apps, you need a bit more information than what Dilbert is presenting above.

The myriad challenges of front-end debugging

What we’ve seen in our experience is that, when you find a bug in your front-end UI, you need to answer the question: what has changed?

More specifically: what are the differences in your application’s Document Object Model (or DOM for short) and Cascading Style Sheet (CSS) rules that underpin the visual differences in your app?

This isn’t always easy to determine.

Getting the DOM and CSS rules for the current version of your app is trivial. They’re right there in the app you’re testing.

But getting the baseline DOM and CSS rules can be hard. You need access to your source code management system. Then you need to fire up the baseline version of your app. This might involve running some build process, which might take a while.

Once your app builds, you then need to get it into exactly the right state, which might be challenging.

Only then can you grab your baseline DOM and CSS rules, so you can run your diffs.

But doing a simple diff of DOM and CSS rules will turn up many differences, many of which have nothing to do with your visual bug. So you’ll chase dead-end leads.

That’s a tedious, time-consuming process.

Meanwhile, if you release multiple times per week or per day, you have less time and more pressure to fix the bug before the next release.

This is pretty darn stressful.

And this is where Applitools RCA comes to the rescue!

AI-assisted bug diagnosis

With Applitools RCA, we’ve updated our SDKs to grab not just UI screenshots — as we always have — but also DOM and CSS rules. We send this entire payload to our test cloud, where we now perform an additional step.

First, our AI finds significant visual differences between screenshots, as it always has, while ignoring minor differences that your users won’t care about (also called false positives).

Then — and this is the new step with RCA — we find what DOM and CSS rules underpin those visual differences. Rather than digging through line after line of DOM and CSS rules, you’ll now only be shown the lines responsible for the difference in question.

We display those visual differences to you in Applitools Eyes Test Manager. You click on a visual difference highlighted in pink and instantly see what DOM and CSS rules are related to that change.

This diagram explains this entire process:


Even better, we give you a link to the exact view you’re looking at — sort of like a blog post’s permalink, which you can add to your Jira bug report, Slack, or email. That way your teammates can instantly see what you’re looking at. Everyone gets on the same page, and bugs get fixed faster.

Here’s a summary of life before and after RCA:

Without Applitools Root Cause Analysis:

  • QA finds a bug
  • QA files bug report with ONLY current DOM and CSS rules
  • Dev builds baseline version of app
  • Dev navigates app to replicate state
  • Dev gets baseline DOM and CSS rules
  • Dev compares baseline and current DOM and CSS rules
  • Dev digs through large set of diffs to find the ones that matter
  • Dev updates the code and fixes the bug

With Applitools RCA:

  • QA finds a bug
  • QA files bug report showing exactly the DOM and CSS rule diffs that matter
  • Dev updates the code and fixes the bug

How much would RCA speed up your debugging process?

Making Shift Left Suck Less

If you’re in an organization that is implementing Shift Left, you know that it’s all about giving developers the responsibility of testing their own code. Test earlier, and test more often, on a codebase you’re familiar with and can quickly fix.

And yes, there’s something to be said for that. But let’s face it: if you’re a developer doing Shift Left, what this means is you have a bunch of QA-related tasks added to your already overflowing plate. You need to build tests, run tests, maintain tests.

We can’t make the pain of testing go away. But with Applitools RCA, we can save you a lot of time and help you focus on writing code!

We intentionally designed RCA to look like the developer tools you use every day. Our DOM diffs look like your Google Chrome Dev Tools, and our CSS diffs look like your GitHub diffs.

All this means you have more time to build features, which is probably the part of your job you like to focus on.

ROI, Multiplied for R&D

This section is for the engineering managers, directors, and VPs.

Applitools RCA lets your team spend more time on building new features. It helps your R&D team be more productive, efficient, and happy!

It’s application features that move modern businesses forward. And RCA helps your team get bug fixing out of the way so they can focus on adding value to your company, and get kudos for adding more features to delight your customers.

So, RCA is good for your developers, for your business, but also for your CFO! Here’s a quick back-of-the-envelope you can share:

Let’s say you have 100 developers on your engineering team. How much money would you save if RCA can accelerate your development by 10%? The quick calculation shows: maybe $2m per year? Maybe more? That’s tons of money!
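
Spelled out, the arithmetic behind that estimate looks like this (the fully loaded cost per developer is our illustrative assumption, not a figure from the article):

```javascript
// Back-of-the-envelope ROI estimate; the $200k/year loaded cost
// per developer is an illustrative assumption.
const developers = 100;
const loadedCostPerDev = 200000; // USD per year (assumed)
const speedup = 0.10;            // 10% faster development
const annualSavings = developers * loadedCostPerDev * speedup;
console.log(annualSavings);      // 2000000 – i.e., ~$2M per year
```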


UI Version Control, Evolved

Applitools RCA helps your product managers too!

With RCA, our user interface version control now includes the DOM and CSS associated with each screenshot.

This means that not only can you see how the visual appearance of your web application has evolved over time, but also how its underlying DOM and CSS have changed. This makes it easier for you to roll back new features that turned out to be a bad idea because they hurt the user experience or decreased revenue.

You Win Big

Applitools Root Cause Analysis is a major step in the evolution of test automation because, for the first time, a testing product isn’t just finding bugs; it’s telling you how to fix the bugs.

The evolution of software monitoring tools demonstrates a similar pattern. Early monitoring tools would find an outage, but wouldn’t point you in any direction of fixing the underlying problem behind the outage.

Modern monitoring tools like New Relic or AppDynamics, on the other hand, would point you to the piece of code causing the outage: the root cause. The market spoke, and it chose monitoring tools that pointed users to the root cause.

In test automation, we’re where monitoring was ten years ago. Existing tools like Selenium, StormRunner, Cypress, and SmartBear are good at finding problems, but they don’t help you discover and fix the root cause.

Applitools RCA, like New Relic and AppDynamics, helps you instantly find the root cause of a bug. But unlike those tools, Applitools RCA doesn’t force you to rip-and-replace your existing test automation tools. It integrates with Selenium, Cypress, WebdriverIO, and Storybook, allowing you to make your existing testing much more powerful by adding root cause analysis.

integration logos

See for yourself

To see Applitools RCA in action, please watch this short demo video:

Start Using Applitools Root Cause Analysis Today!

If you’re not yet using Applitools Eyes, sign up for a free account.

If you’re an existing Applitools customer, a Free Trial of Applitools Root Cause Analysis is already provisioned in your account. To learn more about how to use it, see this documentation page.

A free trial of Applitools RCA is available until the end of February 2019. After that, it will be available for an additional fee.

The post Applitools Root Cause Analysis: Found a Bug? We’ll Help You Fix It! appeared first on Automated Visual Testing | Applitools.

]]>
How to Implement Shift Left for your Visual Testing https://applitools.com/blog/shift-left-visual-testing/ Mon, 05 Nov 2018 10:40:20 +0000 https://applitools.com/blog/?p=3731 On September 9, 1947, Grace Hopper recorded the first computer bug ever in the Harvard Mark II computer’s logbook. The bug in question? Believe it or not, an actual bug...

The post How to Implement Shift Left for your Visual Testing appeared first on Automated Visual Testing | Applitools.

]]>
Cypress, Storybook, React, Angular, Vue

On September 9, 1947, Grace Hopper recorded the first computer bug ever in the Harvard Mark II computer’s logbook. The bug in question? Believe it or not, an actual bug – a moth – flew into the relay contacts in the computer and got stuck. Hopper duly taped the moth into the logbook. Then she added the explanation: “First actual case of bug being found.” (This might be the most famous moth in history.)

Grace Hopper's original notebook entry

If only things were this simple today. As software continuously grows in complexity, so does the process of testing and debugging. Nowadays, the lifecycle of a bug in software can be lengthy, costly, and frustrating.

Finding and fixing bugs early on in the application development stage is cheaper and easier than doing so during QA, or worse, in production. This is because, as any developer knows, debugging is often a case of finding a needle in a haystack. The smaller the haystack (the code you’re looking through), the easier it is to find the needle (the bug). That haystack is smaller when you’re looking at code written by one developer, as opposed to several developers or more.

Debugging often feels like this

This dynamic is what’s driving the trend of ‘Shift-Left’ to do more testing – not just unit, but functional and visual – during the development phase of the app lifecycle. Recognizing the importance of this trend, we want to help frontend developers and have developed new visual testing capabilities as part of Applitools Eyes.

Expanding Left to Developers

When you adopt Shift Left, it impacts your software development lifecycle. Developers now need to do more testing as they code, to discover bugs as soon as they create them.

Shift-left testing, paired with the increasing velocity of software releases to meet business demand, all while maintaining high quality, presents huge challenges for R&D teams. If you’re a developer, you have to do more: write more code, do more testing, manage the merge, and so on, all to deliver more features in less time and with better quality. Because of these challenges, we are seeing developers adopt new tools such as the Cypress test framework and the Storybook UI component development framework.

So we expanded our AI-powered visual testing platform to integrate with Cypress and Storybook. Now you have the tools to do visual UI testing as you code, at the speeds you need for Agile development, continuous integration, and continuous delivery (CI/CD).

Ultrafast Cross-browser Visual Testing with Cypress

The Applitools Cypress SDK lets you instantly run visual UI tests on top of your functional tests by running browsers in parallel, in the cloud, to generate screenshots for multiple browsers and viewports. Our visual AI engine then compares these screenshots to find significant differences and instantly detect visual bugs.

If you’re making the move to Cypress, we made this process run ultra-fast so that you won’t experience any slowdown when adding Applitools visual AI testing to your workflow. We do this with a feature we call the Applitools Ultrafast Grid, which uses containers to test every configuration in parallel: it takes a DOM snapshot and renders it across different browsers and viewport sizes corresponding to multiple mobile devices, desktops, and more.

How Applitools implements cross-browser, massively parallel testing with Cypress
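As a concrete sketch, a cross-browser run like the one in the diagram is driven by a browser/viewport matrix in the SDK’s configuration file. The shape below follows the eyes-cypress conventions, but treat the specific values as illustrative assumptions rather than a recommended setup:

```javascript
// applitools.config.js — an illustrative Ultrafast Grid configuration.
// Each entry in `browser` is one render target; a single DOM snapshot from
// your Cypress run is rendered on all of them in parallel.
const config = {
  testConcurrency: 5, // how many Grid renders may run at once
  browser: [
    { width: 1280, height: 800, name: 'chrome' },  // desktop Chrome
    { width: 1280, height: 800, name: 'firefox' }, // desktop Firefox
    { width: 768, height: 1024, name: 'chrome' },  // tablet-width responsive check
    { deviceName: 'iPhone X' },                    // emulated mobile device
  ],
};

module.exports = config;
```

With a matrix like this, a single visual checkpoint in your spec produces four screenshots, one per target, without rerunning the test in each browser.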

For the first time, you can use Cypress to test effectively for cross-browser bugs. And you can now do responsive testing to see whether visual bugs occur at different screen sizes.

This increased speed lets you cover more test cases, increase test coverage, and catch more bugs before they hit production.

Check out Applitools’ new Cypress SDK and sign up for a free Applitools Eyes account at https://applitools.com/cypress.

Mass parallelization in practice: Bailey Yard, the world’s largest railroad yard

Instantly Detect UI Component Bugs with Visual Testing

The Applitools Storybook SDK visually tests UI web components without any test code, and works with React, Angular, or Vue.

It works by scanning the Storybook library to inventory all components, then uploading a DOM snapshot of each component’s visual state to the Applitools Ultrafast Grid.

Once these DOM snapshots are on the Ultrafast Grid, we run visual validations in parallel: every visual state of every component, at all specified viewport sizes, on both Chrome and Firefox, validated at once.

How we make visual testing of Storybook components ultrafast.

All this lets you perform ultra-fast codeless validation of your Storybook component libraries in seconds. You can think of this as visual testing at the speed of unit testing. You can learn more in this blog post on visually testing React components in Storybook and this webinar on our Cypress and Storybook SDKs.

Check out Applitools’ new Storybook SDK and sign up for a free Applitools Eyes account at https://applitools.com/storybook.

The Final Word: Expanding Left, Not Shifting Left!

I want to emphasize that, even though we’ve talked a lot about developers and Cypress in this post, we still remain as committed as ever to QA teams, and the frameworks they use, including Selenium.

Selenium remains an incredibly important part of the test automation world, and we continue to sponsor both the Selenium project itself and SeleniumConf. In the coming weeks and months, you’ll see us release new functionality that backs this point up.

While we are delivering new visual testing tools to developers, we remain committed to serving test automation engineers and QA analysts. That’s why we talk about expanding left, rather than using the more common term “shift left.”

If you have any questions or want to try the application visual testing approach, please reach out or sign up for a free Applitools account today!

How will you start visual UI testing with Cypress or Storybook? 

The post How to Implement Shift Left for your Visual Testing appeared first on Automated Visual Testing | Applitools.

]]>
Visual UI Testing at the Speed of Unit Testing, with new SDKs for Cypress and Storybook https://applitools.com/blog/ui-testing-for-cypress-and-storybook/ Mon, 29 Oct 2018 12:07:04 +0000 https://applitools.com/blog/?p=3712 Listen to Gil Tayar’s webinar on the new Applitools SDKs for Cypress and Storybook, which enable developers to test the visual appearance of their apps across all responsive web platforms, including React, Vue, and...

The post Visual UI Testing at the Speed of Unit Testing, with new SDKs for Cypress and Storybook appeared first on Automated Visual Testing | Applitools.

]]>
Cypress & Storybook

Listen to Gil Tayar’s webinar on the new Applitools SDKs for Cypress and Storybook, which enable developers to test the visual appearance of their apps across all responsive web platforms, including React, Vue, and Angular.  

Developers want their tests to run fast. In an increasingly agile world, waiting ten minutes or more for test results is a huge no-no. A new generation of browser automation tools recognize that need-for-speed and enable frontend developers to quickly automate their browser tests.

But those tests also need to check the application’s visual elements. Does the login page look okay? Does it look good in Firefox? What about mobile browsers, at 768px width? And does it still look good at 455px with a device pixel ratio of 2? These are just a few of the questions developers ask when building responsive web applications.

To check the visual quality of your application, across all browsers and in all those responsive widths, would necessitate a humongous grid of browsers and an unreasonable amount of time: far more than the two to five minutes usually available for a developer’s tests.

Applitools’ new SDKs for Cypress and Storybook enable you to do just that: write a set of visual regression tests that run through your pages and components, and have the pages and components render in Applitools Ultrafast Grid, in parallel, on a large set of browsers and widths, and return the result in less than a minute.

One minute. That’s quicker than most functional test suites, and approaching the time required for unit testing.

Listen to Applitools Architect Gil Tayar as he discusses the new generation of visual UI testing tools: tools that move the burden of visual work to the cloud, and enable you to check what was until now impossible to check locally, the visual appearance of your application across multiple responsive platforms.

Gil Tayar, Sr. Architect and Evangelist @ Applitools

Source code, slides, and video

You can find companion code for Gil’s talk on this GitHub repo.

We also have detailed, step-by-step tutorials on how to use Applitools with Cypress, Storybook + React, Storybook + Vue, and Storybook + Angular.

Here’s Gil’s slide deck:

And here’s the full webinar recording:

Check out these resources, and if you have any questions, please contact us or set up a demo.

How are you going to start visually testing with Cypress or Storybook?

The post Visual UI Testing at the Speed of Unit Testing, with new SDKs for Cypress and Storybook appeared first on Automated Visual Testing | Applitools.

]]>
Test Automation in 2019: Glimpse of the Near Future From the Experts Who Shape It https://applitools.com/blog/test-automation-in-2019-industry-leaders-expert-panel/ Thu, 11 Oct 2018 18:15:06 +0000 https://applitools.com/blog/?p=3621 Test Automation thought leaders gathered for a round-table discussion about the upcoming trends, best practices, tools, and ideas that will shape your Dev/Test environment in 2019. Joe Colantonio hosted this...

The post Test Automation in 2019: Glimpse of the Near Future From the Experts Who Shape It appeared first on Automated Visual Testing | Applitools.

]]>
Angie Jones, Dave Haeffner, Gil Tayar, and Joe Colantonio - Expert Panel, Oct 2018

Test Automation thought leaders gathered for a round-table discussion about the upcoming trends, best practices, tools, and ideas that will shape your Dev/Test environment in 2019.


Joe Colantonio hosted this all-star expert panel – Angie Jones, Dave Haeffner, and Gil Tayar – as they shared their thoughts and insights on the hottest topics in test automation and software quality, including:

  • Shift left: how to get developers involved and invested in the testing process
  • Selenium IDE: ghosts of the past, present, and future
  • Avoiding common pitfalls: how to be successful with test automation
  • Never bet against it: JavaScript’s role in test automation
  • New automation frameworks Cypress and Puppeteer: the good, the bad and the ugly
  • Keep calm and record-and-playback

Listen to the recording:

Slide Deck:

— HAPPY TESTING — 

 

The post Test Automation in 2019: Glimpse of the Near Future From the Experts Who Shape It appeared first on Automated Visual Testing | Applitools.

]]>