Richard Bradshaw, Author at Automated Visual Testing | Applitools https://applitools.com/blog/author/richard/

Should We Fear AI in Test Automation? https://applitools.com/blog/should-we-fear-ai-in-test-automation/ Mon, 04 Dec 2023 13:39:00 +0000 Richard Bradshaw explores fears around the use of AI in test automation shared during his session—The Fear Factor—at Future of Testing.


At the recent Future of Testing: AI in Automation event hosted by Applitools, I ran a session called ‘The Fear Factor’ where we safely and openly discussed some of our fears around the use of AI in test automation. At this event, we heard from many thought leaders and experts in this domain who shared their experiences and visions for the future. AI in test automation is already here, and its presence in test automation tooling will only increase in the very near future, but should we fear it or embrace it?

During my session, I asked the attendees three questions:

  • Do you have any fears about the use of AI in testing?
  • In one word, describe your feelings when you think about AI and testing.
  • If you do have fears about the use of AI in testing, describe them.

Do you have any fears about the use of AI in testing?

Where do you sit?

I’m in the Yes camp, and let me try to explain why.

Fear can mean many things, but one of them is the threat of harm. It’s that which concerns me in the software testing space. But that harm will only happen if teams/companies believe that AI alone can do a good enough job. If we start to see companies blindly trusting AI tools for all their testing efforts, I believe we’ll see many critical issues in production. It’s not that I don’t believe AI is capable of doing great testing—it’s more the fact that many testers struggle to explain their testing, so to have good enough data to train such a model feels distant to me. Of course, not all testing is equal, and I fully expect to see many AI-based tools doing some of the low-hanging fruit testing for us.

In one word, describe your feelings when you think about AI and testing.

It’s hard to disagree with the results from this question—if I were to pick two myself, I would have gone with ‘excited and skeptical.’ I’m excited because we seem to be seeing new developments and tools each week. On top of that, though, we are starting to see developments in tooling using AI outside of the traditional automation space, and that really pleases me. Combine that with the developments we are seeing in the automation space, such as autonomous testing, and the future tooling for testing looks rather exciting.

That said, though, I’m a tester, so I’m skeptical of most things. I’ve seen several testing tools now that are making some big promises around the use of AI, and unfortunately, several that are talking about replacing testers or needing fewer of them. I’m very skeptical of such claims. If we pause and look across the whole of the technology industry, the most impactful use of AI thus far is in assisting people. Various GPTs help generate all sorts of artifacts, such as code, copy, and images. Sometimes the output is good enough on its own, but the majority of the time it’s helping a human be more efficient. This use of AI, and such messaging, excites me.

If you do have fears about the use of AI in testing, describe them here.

We got lots of responses to this question, but I’m going to summarise and elaborate on four of them:

  • Job security
  • Learning curve
  • Reliability & security
  • How it looks

Job Security

Several attendees shared that they were concerned about AI replacing their jobs. Personally, I can’t see this happening. We had the same concern with test automation, and that never really materialized. Those automated tests don’t maintain themselves, or write themselves, or share the results themselves. The direction shared by Angie Jones in her talk Where Is My Flying Car?! Test Automation in the Space Age, and by Tariq King in his talk Automating Quality: A Vision Beyond AI for Testing, is AI that assists the human, giving them superpowers. That’s the future I hope for, and believe we’ll see: one where we are able to do our testing a lot more efficiently by having AI assist us. Hopefully, this means we can release even quicker, with higher quality for our customers.

Another concern shared was about skills that we’ve spent years and a lot of effort learning suddenly being replaced by AI, or made significantly easier with AI. I think this is a valid concern, but also inevitable. We’ve already seen AI bring significant benefits to developers with tools like GitHub Copilot. However, I’ve got a lot of experience with Copilot, and it only really helps when you know what to ask for—this is the same with GPTs. Therefore, I think the core skills of a tester will be crucial, and I can’t see AI replacing those.

Learning Curve

If we are going to be adding all these fantastic AI tools into our tool belts, I feel it’s going to be important we all have a basic understanding of AI. This concern was shared by the attendees. For me, if I’m going to be trusting a tool to do testing for me or generating test artefacts for me, I definitely want that basic understanding. So, that poses the question, where are we going to get this knowledge from?

On the flip side of this, what if we become over-reliant on these new AI tools? A concern shared by attendees was that the next generation of testers might not have some of the core skills we consider important today. Testers are known for being excellent thinkers and practitioners of critical thinking. If the AI tools are doing all this thinking for us, we run the risk of those skills losing their focus and no longer being taught. This could lead to us being over-reliant on such tools, but also to the tools biasing the testing that we do. But given that the community is focusing on this already, I feel it’s something we can plan to mitigate and ensure doesn’t happen.

Reliability & Security

Data, data, data. A lot of fears were shared over the use and collection of data. The majority of us work on applications where data, security, and integrity are critical. I absolutely share this concern. I’m no AI expert, but the best AI tools I’ve used thus far are ones that are contextual to my domain/application, and to do that, we need to train them on our data. That could lead to data bleeding and private data leaking, and that is a huge challenge I think the AI space has yet to solve.

One of the huge benefits of AI tooling is that it’s always learning and, hopefully, improving. But that brings a new challenge to testing. Usually, when we create an automated test, we are codifying knowledge and behavior to create something deterministic: we want it to do the same thing over and over again. This provides consistent feedback. However, an AI-based tool won’t always do the same thing over and over again—it will try to apply its intelligence, and here’s where the reliability issues come in. What it tested last week may not be the same this week, but it may give us the same indicator. This, for me, emphasizes the importance of basic AI knowledge, but also of using these tools as an assistant to our human skills and judgment.

How It Looks

Several attendees shared concerns about how these AI tools are going to look. Are they going to be a completely black box, where we enter a URL or upload an app and just click Go? Then the tool tells us pass or fail, or perhaps it just goes and logs the bugs for us. I don’t think so. As per Angie’s and Tariq’s talks I mentioned before, I think it’s more likely these tools will focus on assistance.

These tools will be incredibly powerful and capable of doing a lot of testing very quickly. However, what they’ll struggle to do is to put all the information they find into context. That’s why I like the idea of assistance, a bunch of AI robots going off and collecting information for me. It’s then up to me to process all that information and put it into the context of the product. The best AI tool is going to be the one that makes it as easy as possible to process the masses of information these tools are going to return.

Imagine you point an AI bot at your website, and within minutes, it’s reporting accessibility issues to you, performance issues, broken links, broken buttons, layout issues, and much more. It’s going to be imperative that we can process that information as quickly as possible to ensure these tools continue to support us and don’t drown us in information.

Visit the Future of Testing: AI in Automation archive

In summary, AI is here, and more is coming. These are very exciting times in the software testing tooling space, and I’m really looking forward to playing with more new tools. I think we need to be curious with these new tools, try them, and see what sticks. The more tools we have in our tool belts, the more options we have to solve our increasingly complex testing challenges.

Testers and Designers: App Design Meets UI & Visual Validation https://applitools.com/blog/testers-and-designers-app-design-meets-ui/ Mon, 20 Jun 2016 17:23:34 +0000 Testers get a hard time sometimes: as soon as something is put in front of us, we tend to find problems with it. A skill some value, others not so...


Testers get a hard time sometimes: as soon as something is put in front of us, we tend to find problems with it. A skill some value, others not so much. However, we aren’t the only people blessed with this ability, hello designers!

 

Have you ever been in this position? You’ve been testing a web page, found multiple problems with it.

 

Those problems get fixed, you test a bit more, and declare yourself done. You then sit down with the designer to review the implementation, and immediately: “Wrong! Wrong, that’s wrong”. “That’s the wrong font”. “Ahhh, that’s not the right blue”. “That margin is too small”.

I’m sure you’ve heard similar things. So why didn’t we find these issues? Of course there are infinite reasons, but I want to explore a common one: Inattentional Blindness. 

 

“Inattentional blindness, also known as perceptual blindness, is a psychological lack of attention that is not associated with any vision defects or deficits. It may be further defined as the event in which an individual fails to recognize an unexpected stimulus that is in plain sight” (Wikipedia)
For the most common example used when talking about inattentional blindness, watch this video and tell me how many times the team in white passes the basketball. Or watch the amazing colour changing card trick by Richard Wiseman.

 

Simply put: we don’t always see everything that is directly in front of us.

We blind ourselves based on lots of factors. In my opinion, one of the most common in testing is what we’re actually testing for, i.e. where our focus is. Some testers simply don’t know; they would just claim to be testing. Others would claim to be testing functionality, others perhaps the visuals, or any of the many other things to choose from. Either way, we will always struggle to spot all the issues.

Designers spend a large percentage of their time, well, designing. They’re immersed in the tool of their choice, analyzing mock-ups and doodling on whiteboards. Tweaking here, tweaking there. A few pixels to the right, a few pixels to the left. A darker colour here, a lighter colour there. They dream about designs, just like some testers dream about the application as a whole… (or is it just me then?). When they finally look at the finished product, they are armed with so much more tacit knowledge than us, so they immediately start targeting areas that perhaps had more discussion around them, or simply areas that were concerning them the most.

Now I’m not saying that we can’t find the same issues that designers find (of course we could), but we would need some of that tacit knowledge that the designers have. We could build up this knowledge for ourselves by talking to the designers and asking them questions about the design, for example: What was tricky to design? Which areas went through more changes than others? We could then use this to structure some testing solely focused on the visuals, taking advantage of available tools.

However, my preferred choice is to work alongside the designers, to ensure we get the best of all the minds. Test as a pair. Time doesn’t always allow such activities to take place, so on a previous project I decided to take advantage of some tools.

We had some basic UI checks in place for the majority of screens on our application, and would tend to create checks for new screens as they were created. I decided to incorporate a screenshot into this process. The check would take a screenshot after several key steps, and save it out to a shared drive. We would then collate those screenshots and review them as a pair. This approach saved us a lot of time, as we didn’t have to recreate the scenarios in real time to do the visual review. This worked well, but we started to wonder if this could be improved.
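In case it helps picture the setup, here is a minimal sketch of that screenshot step with Selenium WebDriver; the shared-drive path, helper name and file naming are illustrative rather than the original project’s code.

```csharp
// A sketch of the screenshot step (illustrative path, helper and naming):
// capture the page after a key step and save it to a shared drive so the
// screenshots can be reviewed with the designer later.
using System;
using System.IO;
using OpenQA.Selenium;

public static class ScreenshotHelper
{
    private const string SharedDrive = @"\\team-share\visual-review"; // assumed location

    public static void CaptureStep(IWebDriver driver, string stepName)
    {
        Screenshot screenshot = ((ITakesScreenshot)driver).GetScreenshot();
        string fileName = $"{DateTime.Now:yyyyMMdd-HHmmss}-{stepName}.png";
        File.WriteAllBytes(Path.Combine(SharedDrive, fileName), screenshot.AsByteArray);
    }
}
```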

Time was the shortcoming of this approach: finding a time where we could both be present was sometimes difficult, so we would fall back to a direct comparison with the previous version, and I would then send any noticeable changes to the designer for her to review. This reduced her workload from reviewing all of them to only the ones I believed required her attention. It was far from ideal, though, as it falls back to the previous problem: it relied on me to spot things, something I’d gotten better at due to all the pairing, but I still didn’t have that designer’s eye.

I started to think that there must be some tools out there that could automatically compare these images for me. I was very much aware of diff tools, so I looked for one with a specific focus on images, and this led me to ImageMagick.
I integrated ImageMagick into the process: I would run its comparison tool on the two images, which in turn would return a mathematical percentage for the difference between them.
This improved the process, as it would draw my attention to the differences. However, I started to get a lot of false positives, as the comparison it produced was pixel-perfect. So when the tool reported a 0.5% difference, I still had to investigate, as that 0.5% could potentially have been a serious issue, for example a problem with the company’s logo.
The majority of the time, though, it was a difference I couldn’t even spot, even though the tool was reporting one, and sometimes it was a difference that even the designer couldn’t spot!
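To give a feel for how that comparison can be wired in, here is a hedged sketch that shells out to ImageMagick’s compare command; the file names are placeholders. The RMSE metric is written to stderr as an absolute value with a normalised score in parentheses, which can be read as a percentage-style difference.

```csharp
// A sketch (assumed file names) of calling ImageMagick's `compare` from a check.
// `compare -metric RMSE` writes the difference score to stderr, e.g. "352.4 (0.0054)";
// the exit code is 0 for identical images and 1 when they differ.
using System;
using System.Diagnostics;

public static class ImageComparer
{
    public static string Compare(string baseline, string current, string diffImage)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "compare",
            Arguments = $"-metric RMSE \"{baseline}\" \"{current}\" \"{diffImage}\"",
            RedirectStandardError = true,
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            string score = process.StandardError.ReadToEnd();
            process.WaitForExit();
            return score; // the normalised part of the score approximates the % difference
        }
    }
}
```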

However, our process was vastly improved with the addition of tools. Simply adding the screenshots and the diff tool into our process did speed things up. Sure, it wasn’t perfect, but it helped bridge the skill gap between myself and the designer when it came to spotting visual issues.
It also reduced the time we had to spend on this activity, only getting together to look closer at some screens when the tool reported changes. I should point out that for new screens we still paired a lot in the first few iterations, as there was no baseline image for us to compare against.

Nowadays? Well, as you could probably guess with me writing on this blog, I do visual checking on my products, and my current context is actually mobile. I still follow the same process; however, I take advantage of the many tools now available to help us tackle this problem. Those tools do the screenshotting, comparing and result reporting seamlessly for us. Beyond seamlessly, though: in most cases, it’s only a few lines of code to get up and running.
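To give a rough idea of what “a few lines of code” means, here is a hedged sketch following the classic Applitools Eyes plus WebDriver pattern; the app and test names and the URL are illustrative, and the exact namespaces and method names vary by SDK and version.

```csharp
// A hedged sketch of the classic Eyes + WebDriver flow (illustrative names/URL).
// Namespaces and method names vary by SDK version; treat this as an outline.
using Applitools.Selenium;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public static class VisualCheck
{
    public static void Run()
    {
        var eyes = new Eyes { ApiKey = "YOUR_API_KEY" };
        IWebDriver driver = new ChromeDriver();

        try
        {
            driver = eyes.Open(driver, "My Mobile Web App", "Home page looks right");
            driver.Navigate().GoToUrl("https://example.com"); // illustrative URL
            eyes.CheckWindow("Home page"); // screenshot, upload and compare to the baseline
            eyes.Close();                  // fails the run if differences were found
        }
        finally
        {
            driver.Quit();
            eyes.AbortIfNotClosed();
        }
    }
}
```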

So, if you are experiencing a similar problem with your visual testing, and are looking at implementing visual checking to keep on top of it, take a look at the tools mentioned in this post and give them a go.

I can tell you that implementing automated visual checking tools has vastly improved our approach to testing, and more importantly: it’s dedicated to checking the visual layer, which is what my designers actually care about, among other things, of course.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


Author: Richard Bradshaw. You can read more about Richard’s view on automation on his blog. He is also an active tweeter, where you can find him as @FriendlyTester 

Checking More Things – That’s A Good Thing, Right? https://applitools.com/blog/checking-more-things-thats-a-good-thing-right/ Wed, 03 Feb 2016 14:54:21 +0000 When I first succumbed to peer pressure from a few colleagues in the community to check out some of the new automated visual checking tools available, I was very skeptical....


When I first succumbed to peer pressure from a few colleagues in the community to check out some of the new automated visual checking tools available, I was very skeptical. I’d been bitten by visual checking promises in the past; back then, the tools let the idea down.

The main tool suggested this time promised much, with the main advertising message being that visual checking checks more than conventional GUI checks, which in turn would find more bugs. That certainly draws your attention; along the lines of more bang for your buck, it immediately makes you think: this is a good thing!

This is a good thing, a great thing, even more so when all that additional checking comes with less code, fewer asserts and subsequently less maintenance.

Let me explain: 

Most GUI checks tend to focus on specific results of the application. For instance, if it was submitting a form, a check would be made for a confirmation message on the screen. As in my last post, if validation failed, we would check for an error message being displayed. I believe this applies for the majority of GUI checks. Meaning that the rest of the page, for that specific check, goes unchecked. The rest of the page could be a complete mess, but as long as that confirmation is present and correct, green all the way. However, in order to do all those individual checks, we have to add methods to our Page Objects, we have to store strings to compare text against, we have to maintain many more assertions.
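To make the contrast concrete, a typical narrowly scoped check might look something like the sketch below; the locators and expected copy are illustrative. One element is located, one string is asserted, and the rest of the page is never looked at.

```csharp
// A typical narrowly scoped GUI check (illustrative locators and copy):
// it passes as long as the confirmation message is right, regardless of
// what the rest of the page looks like.
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class ContactFormChecks
{
    [Test]
    public void SubmittingTheFormShowsTheConfirmationMessage()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/contact"); // illustrative URL
            driver.FindElement(By.Id("name")).SendKeys("Richard");
            driver.FindElement(By.Id("email")).SendKeys("richard@example.com");
            driver.FindElement(By.CssSelector("button[type='submit']")).Click();

            // The one specific result this check cares about:
            var confirmation = driver.FindElement(By.CssSelector(".confirmation-message"));
            Assert.AreEqual("Thanks, we've received your message.", confirmation.Text);
            // Everything else on the page goes unchecked.
        }
    }
}
```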

So what the advertising is telling us is that you could check that specific condition, and the rest of the page, in a single check. One check, that checks all the things.
Pretty impressive.
It is, it genuinely is.
That is what these new tools are offering you: you can take a screenshot of the whole page, compare it to a baseline and have it report the differences to you. It’s also accurate enough to check the text, the colours and also the layout, so you are indeed able to check more. Far beyond just the text in the scenario above.

So for the scenario above, we could have a check whose focus is to verify that the form confirmation page shows the correct message to the user. However, as we are using a visual checking tool, we will also be checking the content of the entire page. So if the confirmation text was correctly displayed, but the text underneath it informing the user of what to do next wasn’t (not just incorrect text, but completely wrong formatting too), these tools would detect this and inform you.

So it is the case, the statement is true. Adopting an automated visual checking tool over a more traditional approach would mean that you are checking more, therefore increasing the possibility of finding more information, and potentially, more bugs.

But…

My thoughts behind this post started with Continuous Integration (CI): if I’m now checking more, and more importantly things that I wouldn’t have explicitly checked, isn’t my build going to fail on things we don’t deem a problem?

Of course, there is a high probability that will happen, but in turn, there’s also a high probability that such a tool will detect something you wouldn’t have ever explicitly checked, but turns out to be a really important problem.

This is where Eyes stands out as being a well designed tool. Some tools try to do many things, and a lot end up doing those things badly. Some tools try and succeed, but few in my experience. Applitools have focused the majority of their early development on the image comparison; this is the core of their product.
They can detect pixel-perfect differences; however, they have worked hard to build tolerance into the image comparison engine, so that the changes detected are ones visible to the human eye. The result is a very robust image comparison offering that gives the user multiple options. They have then gone on to think deeply about how people want to use image comparison, which is why, within the Eyes dashboard, you can choose to ignore parts of screenshots, excluding those areas from the comparison.

An example of this happened to me, whilst using Eyes with Appium. I was checking a page in our app, ran the script, went into Eyes, approved the screenshot and proceeded to run it again. To my disappointment, the script failed! Upon comparing the images, it was down to the time on the phone changing, which of course is included on a mobile screenshot. But I was able to draw a rectangle around the time on the screenshot and instruct Eyes to ignore that part of the image going forward.
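For completeness, more recent SDK versions also let you declare an ignore region in code rather than in the dashboard; below is a hedged sketch using the Fluent API, where the region coordinates are made up and the exact namespaces and signatures depend on the SDK you’re using.

```csharp
// A hedged sketch of declaring an ignore region in code via the Eyes Fluent API.
// Coordinates are made up; exact namespaces and signatures vary by SDK version.
using Applitools;          // Region
using Applitools.Selenium; // Eyes, Target

public static class ClockSafeCheck
{
    public static void CheckHomeScreen(Eyes eyes)
    {
        // Ignore the status bar area where the device clock is drawn, so a
        // changing time no longer causes the comparison to fail.
        eyes.Check("Home screen", Target.Window().Ignore(new Region(0, 0, 1080, 60)));
    }
}
```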

So to summarise: whilst it may be true that automated visual checking tools detect more changes, potentially including changes you would never have thought to check, it’s also true that they may detect changes that you don’t deem a problem. However, Applitools have catered for this, allowing you to control which parts of an image are checked.

So as with any tool, it’s down to the human to decide how best to implement it. It’s down to the human to apply critical thinking, to determine a good strategy. It’s important to try, then change, and repeat.
My advice would be to check the full page to start with, as I believe there is a lot of valuable information to be gained in doing so. Then monitor the strategy over time and tweak it accordingly, taking complete advantage of the flexibility provided by some of these automated visual checking tools such as Eyes.

Read Richard’s previous post: “Automated Form Validation Checking”.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


 

Author: Richard Bradshaw. You can read more about Richard’s view on automation on his blog. He is also an active tweeter, where you can find him as @FriendlyTester.

Automated Form Validation Checking https://applitools.com/blog/automated-form-validation-checking-by-richard/ Mon, 23 Nov 2015 13:48:01 +0000 It’s commonplace to see a form on a website; some are truly awful, some, believe it or not, can actually be a pleasant experience. But most have some method of...


It’s commonplace to see a form on a website; some are truly awful, some, believe it or not, can actually be a pleasant experience. But most have some method of validation on them and in my experience, checking this validation is something teams look to automate. It makes sense: populate a few fields, or leave them empty, click submit, and ensure the correct validation is delivered by the site. These are positive and negative scenarios.

Recently though, I was thinking about the form validation automation I have done in the past, and questioning its effectiveness. In my opinion, validation errors and messages are there to talk to the user, to help and guide them into being able to complete your form. This is most commonly done using the colour red. Fields, labels, error text, popups and messages all tend to be coloured red. It was this use of red that had me question my normal approach to creating automated checks for form validation.

Now, I’m no UX (User Experience) expert, but I happen to know some very good designers. I asked them: why do we use red? It’s fairly obvious really; it’s a colour we all associate with stopping and danger. This is backed up in the Microsoft design guidelines:

https://msdn.microsoft.com/en-us/library/windows/desktop/dn742482.aspx

So clearly the colour is very important, along with the language and the placement of these messages. Which is where I started to question the effectiveness of my automated checks. Do I actually check the colour of the message? Do I actually check the placement of the text? Well, let’s answer that by exploring a normal approach to creating such automated checks.

I am going to use the “Join GitHub” page for this example. If you load that page and leave all the fields blank and click “Create an account” you should see something like this:

Form validation - example

This is a pattern I see across a lot of sites: the error message at the top of the page, the field/label changing to red, and a contextual message for each field that has a validation error, with that message being located near said field. So I see three things I would want to interact with here: the error panel at the top of the page, the labels of the fields, and the contextual messages for the fields.

So let’s start with the error panel. In this instance, it’s a div, but it has a class which indicates its purpose and style: “flash-error”. I tend to treat this as enough. When creating the automated check, I can see that this class controls the style and that it is indeed red. So by finding this element using its class, I infer that it is styled correctly. Then, obviously, I assert the text that is inside the div. So I am not really checking that the box is indeed red, and not checking its location at all. Have you ever done this?
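As a rough sketch (the expected copy is illustrative, and driver is assumed to be a WebDriver instance already sitting on the page), that check tends to boil down to something like this:

```csharp
// A rough sketch of the error panel check described above. Locating by the
// "flash-error" class only infers the styling; the assertion covers the text alone.
IWebElement errorPanel = driver.FindElement(By.CssSelector("div.flash-error"));
Assert.IsTrue(errorPanel.Displayed);
Assert.AreEqual("There were problems creating your account.", errorPanel.Text); // illustrative copy
```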

Then we have the labels. When the page first loads, the label is directly under a dl>dt element that has the class “form”. When the validation fails, a new div element is introduced between the dt element and the label: that div has the class “field-with-errors”. So now I have something to check. I can see that this class, as with the flash-error above, is causing the label to go red. So now I can use that as a locator for the label. I would probably go with something like:

By.CssSelector(“div.field-with-errors input[name='user[login]']”)

This ensures I target the label I require, in this instance the Username label. So if this locator returns an element, I know that the label is red. Or do I?

Then the final thing we can check is the contextual message. This is actually controlled using the CSS ::before and ::after pseudo-elements, which are on a dd element with a class of “error”. So it’s not an easy element to locate. I would probably seek help from the development team to make it easier; if they’re not available, you could still locate the element, but you would probably have to find all the dd elements with that class and loop through them looking for the one you want, which is far from ideal. But again, the same pattern applies, where I’m using the class to confirm the style of this contextual message.

So, in summary, now that I take the time to criticise this approach, it isn’t great. Which is why, in subsequent frameworks I built, I would start asserting on the important CSS values, such as the background colour and the text colour, so that I am able to assert the colour of the text and the messages. Better.
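In WebDriver terms, asserting on those CSS values looks roughly like the snippet below; the rgba values are illustrative, and note that GetCssValue returns the computed style, usually in rgba() form.

```csharp
// A rough sketch of asserting on computed CSS values (rgba values are illustrative).
IWebElement errorPanel = driver.FindElement(By.CssSelector("div.flash-error"));
Assert.AreEqual("rgba(206, 17, 38, 1)", errorPanel.GetCssValue("background-color"));
Assert.AreEqual("rgba(255, 255, 255, 1)", errorPanel.GetCssValue("color"));
```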

There is still the location problem: I am not checking the location of these elements. Of course I could assert on the X & Y co-ordinates, but I avoid that wherever I can; it can get very messy, very fast.

So one could argue that this is good enough. However, with the advancements in image comparison tools and providers, could those be harnessed in this scenario, and what would the benefit be?

Right now, if I implemented the above, I would have three elements to manage and maintain. I have to interact with those elements several times in each check, to read the text and get various css values to assert on.

So in the screenshot above, I would have 5 elements to interact with, and for the ease of math, let’s say 2 assertions on each, text and CSS values, and infer the classes are checked as they are used in the locators. So we have 10 calls and 10 assertions.

It may never become a problem, but the more calls we make, the higher the risk of flakiness and the more code we have to maintain, and if we are going to be critical, we are still not fully checking important details about the page. We are still missing location and, looking even closer at the page, the little triangle on the contextual messages. In the context of visual checking, all those assertions could be replaced by one image comparison. Let’s look at that.

There are many visual checking tools out there, as covered in a previous post. I was very skeptical about these new tools. When Dave Haeffner asked me to look, I ignored him twice, saying they’re not for me. On the third time, I gave in and had a look, and believe me when I say, these tools have come a long way. So let’s explore one automated visual checking tool, of course Eyes by Applitools, to see how we could achieve the same thing using their tool.

The basic concept is very simple. You run the scenario and take screenshots where required. These images are sent to the Eyes backend servers. When you run the scripts for the first time, they will fail; this is because, in order to do comparisons, we need to set a baseline. So I log into Eyes, check the images myself, then approve them as the baseline. The second time the checks are executed, Eyes will automatically compare against the baseline image and pass/fail based on the results. You can listen to more specifics in a recent talk by Adam Carmi here.

Another convenient feature of Eyes is that I don’t have to do a comparison on the whole image; I could draw a section around the form and error panel, and then Eyes will only ever check that section for me.

So what are the advantages of using automated visual checking?

Well, the first advantage I see is that we are checking positioning. As I mentioned at the beginning of this post, it’s something I avoided doing in WebDriver as it can get messy, and certainly something I don’t see many other people doing. When using Eyes, though, the position of everything is compared.

Secondly is colour: with visual checking, all the colours of the form are also being compared, something that is hard to achieve in WebDriver, especially with inherited CSS.

Finally, as mentioned earlier, I can also check the design, so for example that little red triangle above the context field message would also be checked. So I would argue that we are indeed checking a lot more in this approach.

The biggest advantage I see is the reduction in the amount of code required, and subsequently a reduction in the amount of WebDriver calls required, which reduces the risk of flakiness.

Let’s compare the code required for a check where we are trying to achieve the same thing:
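The original post showed the two versions side by side as screenshots, which aren’t reproduced here; as a hedged sketch, the Eyes-based version of the check (illustrative locator, classic SDK method names) comes down to something like this:

```csharp
// A hedged sketch of the Eyes-based version of the check (illustrative locator;
// classic SDK method names). The per-element text and CSS asserts collapse into
// a single visual comparison against the approved baseline.
eyes.Open(driver, "GitHub", "Join form validation");

driver.Navigate().GoToUrl("https://github.com/join");
driver.FindElement(By.CssSelector("button[type='submit']")).Click(); // submit the empty form

eyes.CheckWindow("Validation errors displayed"); // one call replaces all the individual asserts
eyes.Close();
```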

As you can see, all the asserts are replaced with a single call to Eyes. There are a few additional lines of code for setting up Eyes, but those could be abstracted out. You can also see that in the non-visual check, I have to store all the required text for the labels, error panel and contextual panels, something I can avoid in the visual approach.

Another advantage is that Eyes takes full-window screenshots, so in the Eyes example above, we would also be visually checking the rest of the page, increasing the coverage of the check. Now, granted, this could be a good thing or a bad thing, so instead of digging into that now, we will explore it in the next post.

In Conclusion…

Now of course, there is no “best way” to do things; it all depends on the context. However, in exploring Eyes and other visual checking tools, I am starting to see their advantage for specific checks, especially ones that play a big part in the user experience of the application, something hard to check just going off the DOM.

So I encourage you to explore your current approach to checking form validation, and ask yourself, is it good enough? Is it checking the right things? If the answer is no, then explore the links in this post, watch the video, and give visual checking tools a go, see if they could improve your approach.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


Author: Richard Bradshaw. You can read more about Richard’s view on automation on his blog: http://thefriendlytester.co.uk. He is also an active tweeter, where you can find him as @FriendlyTester.
