Business Leaders Archives - Automated Visual Testing | Applitools https://applitools.com/blog/tag/business-leaders/ Applitools delivers the next generation of test automation powered by AI assisted computer vision technology known as Visual AI. Fri, 13 Jan 2023 18:10:39 +0000 en-US hourly 1 Continuous visual regression testing to enable regulatory compliance in the healthcare sector https://applitools.com/blog/visual-regression-regulatory-compliance/ Tue, 07 Jul 2020 05:23:59 +0000 https://applitools.com/?p=20014 The fourth industrial revolution – the digital revolution – has strong requirements for companies operating under strict business regulations. Particularly, in the healthcare sector, companies must spend great efforts to...

The post Continuous visual regression testing to enable regulatory compliance in the healthcare sector appeared first on Automated Visual Testing | Applitools.

The fourth industrial revolution – the digital revolution – has strong requirements for companies operating under strict business regulations.

In the healthcare sector in particular, companies must invest great effort to survive “digital Darwinism”. The healthcare market is highly competitive and strongly regulated at the same time. Healthcare, pharmaceutical, and medical device companies invent new medicines and other products in a highly volatile business landscape. On one hand, they have to act as agilely as possible to meet time-to-market pressure; on the other hand, they face strict compliance regulations such as FDA and HIPAA requirements.

The question is: how can healthcare companies deliver new products and services at high speed while meeting their regulatory compliance obligations?

In this article, you will find the answer, based on the example of the FDA requirement for “Black Box Warnings”.

Picture 1

The FDA (Food and Drug Administration) prescribes warnings and precautions on the package insert for certain prescription drugs. As these warnings are framed in a “Black Box” to catch the eye of the reader, they are also referred to as “Black Box Warnings” and can be found at the beginning of the package insert (see Picture 1) or in the drug description in the online store (see Picture 2).

Picture 2

If the FDA finds serious violations due to missing or unreadable Black Box Warnings, it can take legal action against a company.

Let me present an example showing how automated visual regression tests for websites and PDF documents can be implemented to automatically verify regulatory requirements.

The FDA’s General Principles of Software Validation recommends using visual regression testing for images and documents. The FDA makes this recommendation for companies using a software development lifecycle (SDLC) approach that integrates risk management strategies with validation and verification activities, including defect prevention practices.

What is visual regression testing?

Visual regression testing expands on regression testing, in which a program, or parts of it, is repeatedly tested after each modification. To additionally guard against unintentional changes in design elements, positioning, and colors, QA teams use visual regression testing as part of their testing strategy and general quality assurance.

Visual regression testing can discover visual defects, obvious or not, due to modifications to the software. In practice, a baseline of original or reference images is stored. This “source of truth” can be compared after each program modification against a collection of “new” screenshots of a user interface. Each difference against the baseline will be highlighted and can serve as an alert.
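The mechanics can be sketched in a few lines of plain Python (a conceptual illustration, not any tool's actual implementation): store the baseline as pixel data and flag every location where a new capture differs.

```python
# Minimal illustration of visual regression: the baseline "screenshot" is a
# 2-D list of pixel values, and every pixel that differs in a later capture
# is flagged. Real tools compare rendered images and group diffs by region.

def diff_screenshots(baseline, current):
    """Return the (row, col) coordinates where the capture differs from the baseline."""
    differences = []
    for r, (base_row, cur_row) in enumerate(zip(baseline, current)):
        for c, (base_px, cur_px) in enumerate(zip(base_row, cur_row)):
            if base_px != cur_px:
                differences.append((r, c))
    return differences

baseline = [
    [0, 0, 0],
    [0, 1, 0],  # the "1" stands in for, say, the Black Box Warning
]
current = [
    [0, 0, 0],
    [0, 0, 0],  # the warning is missing after a release
]

print(diff_screenshots(baseline, current))  # → [(1, 1)] — the missing warning is flagged
```

Each flagged coordinate would be highlighted for a reviewer, serving as the alert described above.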

Additionally, visual regression testing doesn’t only look at differences between the baseline and the current state. It can also compare the baseline against any historical state at the UI level, independent of HTML, CSS, and JavaScript differences. It can likewise highlight differences between documents, such as PDFs, in the layout or in the content itself. For example, a missing Black Box Warning in a package insert for a drug would be marked as a difference.

How to best use visual regression testing?

Many visual testing tools built on Selenium simply mark differences between screenshots or PDF documents as passed or failed. With visual regression testing, you can choose which differences, across multiple browsers and devices, to accept. For example, a picture displayed at a different resolution on a web page after a program change may prevent a user from completing an action because it overlaps an FDA-required text (see Picture 3). A competitor could report this to the FDA, and a warning letter would be sent to the legal department of the healthcare company.

Picture 3

Visual regression testing tools and libraries, like Wraith, Gemini, and other Selenium-related frameworks, require deep knowledge from testers and significant installation and setup effort. The Applitools AI platform, which requires no installation, setup, or coding knowledge, can be a great alternative for getting started with automated visual regression testing.

The Applitools Eyes cross-environment testing feature allows you to test your application on multiple platforms using a single, common baseline. The match level (Strict, Layout, Content, Exact) determines the way by which Eyes compares the checkpoint image with the baseline image.

Additionally, the Applitools PDF Tool allows you to easily run visual UI tests on a collection of PDF files by placing them inside a directory (see Picture 4). It runs as a standalone jar file and can be invoked as a process by any programming language and in your continuous delivery pipeline.
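Because the tool ships as a standalone jar, wiring it into a pipeline amounts to building and running one shell command. Here is a hedged sketch in Python; the jar name and flag names below are hypothetical placeholders, so check the tool's documentation for the real ones.

```python
import subprocess

def build_pdf_compare_command(pdf_dir, api_key):
    """Assemble the command line for a standalone comparison jar.

    The jar name and flag names here are hypothetical placeholders.
    """
    return [
        "java", "-jar", "pdf-compare.jar",  # hypothetical jar name
        "-folder", pdf_dir,                 # directory holding the PDF files
        "-apikey", api_key,
    ]

cmd = build_pdf_compare_command("./package-inserts", "MY_API_KEY")
print(cmd)
# In a CI step you would then execute it, e.g.:
# subprocess.run(cmd, check=True)
```

Because it is just a subprocess call, the same pattern works from a Jenkins, GitLab, or GitHub Actions step.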

Summary

If you want to continuously deliver new products and services within a software development lifecycle – at high speed, while meeting regulatory compliance obligations – you should keep an eye on visual regression testing. It can automatically compare hundreds or thousands of artifacts, such as images and PDF documents, at great speed. It therefore provides long-term cost efficiency by avoiding extensive manual testing, especially when dealing with frequent changes to UIs or documents.

What Business and Technology Leaders Should Know about the Quality of their Web and Mobile Apps in this Challenging Time https://applitools.com/blog/business-technology-leaders-app-quality/ Wed, 06 May 2020 19:44:35 +0000 https://applitools.com/?p=18191 We live in a day and age where web traffic and mobile app usage are at an all-time high. Recently, Verizon’s CEO recently reported that “In a week-over-week comparison, streaming...

The post What Business and Technology Leaders Should Know about the Quality of their Web and Mobile Apps in this Challenging Time appeared first on Automated Visual Testing | Applitools.

We live in a day and age where web traffic and mobile app usage are at an all-time high. Verizon’s CEO recently reported that “In a week-over-week comparison, streaming demand increased 12%, web traffic climbed 20%, virtual private networks, or VPN, jumped 30% and gaming skyrocketed 75%.”

According to the 2019 State of Automated Visual Testing Report, 400 leading software companies reported that today’s typical “Digitally Transformed” company now boasts 28 unique web and mobile applications, each with 98 pages or screens per app, each in five different screen sizes and six different languages. This amounts to about 90,000 page and screen variations accessible every day by customers of a typical company. Visual bugs, which are common across so many screen variations, cost a typical company more than $2M a year in R&D-related costs.

One of the most important things to remember is that the visual appearance of a company’s website or mobile app directly reflects on its brand. So how can organizations make sure their brand reputation remains impeccable and keeps them ahead of the competition?

Photo by Jaelynn Castillo on Unsplash

Some say it takes only 50 milliseconds for users to form an opinion about a website or an app. Within this very small amount of time, people determine whether they like your site or not, whether they’ll stay or leave. And, as the saying goes “you get no second chance to make a good first impression.” So you need to make it look perfect on any device, any browser, and any screen size, from the first glance to the last. Any failure to do so can cause your customer to move to your competitor in a heartbeat. For more interesting stats about user experience, see https://www.sweor.com/firstimpressions.

These are the facts that should keep every one of us that has a “Digitally Transformed” business awake at night, looking for a solution as a top priority.

But is there a readily available solution to the above challenge?

The problem is that apps, websites and smart devices have proliferated to the point where any attempt by humans to manage visual and functional quality with the necessary timing and coverage is impossible. The number of screens and page variations is only expected to increase, and the software release cycles are only expected to become faster and faster to support Agile and CI/CD software development life cycles (SDLC). 

The only way to cope with this enormous problem in an automated way is by using Artificial Intelligence (AI).

According to Gartner’s Critical Capabilities for Software Test Automation (December 17, 2019), “61% of [its 2019 Software Quality Tools and Practices Survey] respondents said that AI/ML features would be very valuable in software testing tools. Improved defect detection (48%), reduction in test maintenance cost (42%), and improved test coverage (41%) were seen as the top benefits expected from incorporating AI/ML into test automation (multiple answers were allowed).”

Another Gartner research report, Gartner’s Innovation Insight for AI-Augmented Development (May 31, 2019) published by Mark Driver, Van Baker and Thomas Murphy recommends that “application leaders should embrace AI-augmented development now, or risk falling further behind digital leaders.”

In the specific situation I have described, a specific type of AI is needed: Visual AI.

Visual AI is composed of various AI algorithms that mimic the human eye and brain. It can do in minutes the work that tens of people would do in weeks, while integrating with the entire app development toolchain and responding in real time to the most demanding timing constraints of CI/CD.

As a final note, as the Co-Founder and CEO of the company that invented Visual AI and serves more than 400 enterprise customers, many of them in the Fortune 500, I can say that this kind of technology is quickly becoming top of mind for business and technology leaders. If you’re looking to disrupt your market and beat your competition through innovative software development and delivery practices, you must add Visual AI to your secret sauce in order to lead your business to prosperity. It was true yesterday, and it is many times more important in this challenging time!

Gil Sever is CEO and Co-Founder at Applitools

Cover Photo by Tomas Yates on Unsplash

How To Modernize Your Functional Testing https://applitools.com/blog/modern-functional-testing/ Fri, 25 Oct 2019 00:28:30 +0000 https://applitools.com/blog/?p=6380 The first chapter compares modern functional testing with Visual AI against legacy functional testing with coded assertions of application output. Raja states that Visual AI allows for modern functional testing while using an existing functional test tool that relies on coded assertions results in lost productivity.

The post How To Modernize Your Functional Testing appeared first on Automated Visual Testing | Applitools.

I’m taking Raja Rao’s course, Modern Functional Testing with Visual AI, on Test Automation University. This course challenged my perspective on software testing. I plan to summarize a chapter of Raja’s course each week to share what I have learned.

The first chapter compares modern functional testing with Visual AI against legacy functional testing with coded assertions of application output. Raja states that Visual AI allows for modern functional testing while using an existing functional test tool that relies on coded assertions results in lost productivity.

To back up his claim, Raja shows us a typical web app driven to a specific behavior. In this case, it’s the log-in page of a web app, where the user hasn’t entered a username and password, and yet selected the “Enter” button, and the app responds:

“Please enter username and password”

[Screenshot: the login form displaying the error message]

From here, an enterprising test engineer will look at the page above and say:

“Okay, what can I test?”

There’s the obvious functional response. There are other things to check, like:

  • The existence of the logo at top
  • The page title “Login Form”
  • The title and icons of the “Username” and “Password” text boxes
  • The existence of the “Sign in” button and the “Remember Me” checkbox – with the correct filler text
  • The Twitter and Facebook icons below

You can write code to validate that all these exist. And, if you’re a test automation engineer focused on functional test, you probably do this today.

Raja shows you what this code looks like:

[Screenshot: the legacy functional test code]

There are 18 lines of code in total. A single line of navigation code goes to this page. One line directs the browser to click the “Sign In” button. The remaining 16 lines of code assert that all the elements exist on the page. Those 16 lines include 14 identifiers (IDs, names, XPaths, etc.) that can change, and 7 references to hard-coded text.

You might be aware that the test code can vary from app version to app version, as each release can change some or all of these identifiers.

Why do identifiers change? Let’s describe several reasons:

  1. Identifiers might change, while the underlying content does not.
  2. Some identifiers have different structures, such as additional data that was not previously tested.
  3. Path-based identifiers depend on the relative location on a page, which can change from version to version.

So, in some cases, existing test code misses test cases. In others, existing test code generates an error even when no error exists.
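A toy model makes the brittleness concrete. Below, a page is modeled as a dict of element IDs (all names are illustrative); when a release renames one ID, the locator-based check fails even though the rendered page is unchanged:

```python
# Toy model of locator-based assertions: the "page" maps element IDs to their
# visible text. A release that renames an ID breaks the test even though the
# user sees exactly the same page.

def assert_login_elements(page):
    """Legacy-style checks, keyed on element identifiers; returns the missing IDs."""
    missing = [eid for eid in ("logo", "page-title", "username", "password", "sign-in")
               if eid not in page]
    return missing  # empty list means all assertions pass

v1 = {"logo": "", "page-title": "Login Form", "username": "",
      "password": "", "sign-in": "Sign in"}
v2 = dict(v1)
v2["signin-btn"] = v2.pop("sign-in")  # release 2 renames the ID; pixels are identical

print(assert_login_elements(v1))  # → []
print(assert_login_elements(v2))  # → ['sign-in']  (a false positive)
```

The second run reports a "missing" button the user can plainly see, which is exactly the false-positive failure mode described above.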

If you’re a test engineer, you know that you have to maintain your tests every release. And that means you maintain all these assertions.

Implications For Testing

Let’s walk through all the tests for just this page. We covered the negative condition for hitting “Sign In” without entering a username or password. Next, we must verify the error messages and other elements if one, or the other, field has data (the response messages may differ). Also, we need to handle the error condition where the username or password is incorrect.

We also have to worry about all the cases where a valid login moves to a correct next page.

Okay – lots of tests that need to be manually created and validated. And, then, there are a bunch of different browsers and operating systems. And, the apps can run on mobile browsers and mobile devices. Each of these different instances can introduce unexpected behavior.

What are the implications of all these tests?

[Screenshot: the implications of maintaining all these tests]

Raja points out the key implication: workload. Every team besides QA is focused on making the app better – which means changes. A/B testing, new ideas – applications need to change. And every version of the app means potential changes to identifiers – meaning that tests change and need to be revalidated. As a result, QA ends up with tons of work to validate apps. Or QA becomes the bottleneck for all the change that everyone else wants to add to the app.

In fact, every other team can design and spec their change effectively. Given the different platforms that QA must validate, the QA team really wants to hold off changes. And, that’s a business problem.

Visual Validation with Visual AI – A Better Way

What would happen if QA had a better way of validating the output – not just line by line and assertion by assertion? What if QA could take a snapshot of the entire page after an action and compare that with the previous instance?

QA Engineers have desired visual validation since browser-based applications could run easily on multiple platforms. And, using Applitools, Raja demonstrates why visual validation tools appeal so much to the engineering team.

In this screen, Raja shows that the 18 lines of code are down to five, and the sixteen lines of validation code are down to three. The three lines of validation code read:

https://gist.github.com/batmi02/b5174f538e13e3226dba7fac61fc2afc

So, we have a call to the capture session to start, a capture, and the close of the capture session. None of this code refers to a locator on the page.

Crazy as it may seem, visual validation code requires no identifiers, no Xpath, no names. Two pages with different build structures but identical visual behavior are the same to Applitools.
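A toy sketch of that idea (conceptual only – not the actual Visual AI algorithm): compare a hash of the rendered output instead of the element identifiers, so two builds with different markup but identical pixels compare equal.

```python
import hashlib

def render(page):
    """Pretend-render a page: only the visible text matters, not the element IDs."""
    visible = "|".join(sorted(page.values()))
    return hashlib.sha256(visible.encode()).hexdigest()

v1 = {"sign-in": "Sign in", "page-title": "Login Form"}
v2 = {"signin-btn": "Sign in", "login-title": "Login Form"}  # new IDs, same pixels

print(render(v1) == render(v2))  # → True: renamed locators cause no false positive
```

Any real change to the visible output changes the hash, while pure structural churn does not.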

From a coding perspective, test code becomes simple to write:

  1. Drive behavior (with your favorite test runner)
  2. Take a screenshot

You can open a capture session and capture multiple images. Each will be treated as unique images for validation purposes within that capture session.

Once you have an image of a page, it becomes the baseline.  The Applitools service compares each subsequently captured image to the baseline. Any differences get highlighted, and you, as the tester, identify the differences that matter as either bugs or new features.
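That accept/reject workflow can be sketched as a tiny state machine (illustrative only): the first capture becomes the baseline, later captures are diffed against it, and accepting a difference promotes the new capture to baseline.

```python
class BaselineStore:
    """Minimal accept/reject workflow for visual checkpoints."""

    def __init__(self):
        self.baseline = None

    def check(self, capture):
        """Return 'new-baseline', 'match', or 'diff' for a capture."""
        if self.baseline is None:
            self.baseline = capture   # first capture becomes the source of truth
            return "new-baseline"
        return "match" if capture == self.baseline else "diff"

    def accept(self, capture):
        """Tester accepts the change: the capture becomes the new baseline."""
        self.baseline = capture

store = BaselineStore()
print(store.check("login-v1"))  # → new-baseline
print(store.check("login-v1"))  # → match
print(store.check("login-v2"))  # → diff (tester must decide: bug or feature?)
store.accept("login-v2")        # tester approves the new design
print(store.check("login-v2"))  # → match
```

Rejecting a difference instead of accepting it would leave the baseline untouched and file the diff as a bug.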

Handling Code Changes With Legacy Functional Test

The big benefit of visual validation with Visual AI comes from comparing new code with new features to old code.

When Raja takes us back to his example page, he now shows a new version of the login page which has several differences – real improvements you might look to validate with your test code.

[Screenshot: the new version of the login page]

And, here, there are bugs with the new code. But, does your existing test code capture all the changes?

Let’s go through them all:

  1. Your existing test misses the broken logo (-1). The logo at the top no longer links to the proper icon file. Did you check for the logo? If you checked to see that the reference icon file was identical, your code misses the fact that the response file is a broken image.
  2. Your existing test misses the text overlap of the alert and the “Login Form” text (-1). The error message now overlaps the Login Form page title. You miss this error because the text remains identical – though their relative positions change.
  3. Your existing test catches the new text in the username and password boxes (+2). Your test code correctly identifies that there is new prompt text in the name and password boxes and registers an error. So, your test shows two errors.
  4. Your existing test misses the new feature (-1). The new box with “Terms and Conditions” has no test. It is a new feature, and you need to code a test for the new feature.

So, to summarize, your existing tests catch two bugs (different locators for the username and password field), miss two bugs (the broken logo and the text and alert overlap), and don’t have anything to say about the new feature. You have to modify or write three new tests.

But wait! There’s more!

[Screenshot: the new login page, showing the moved Twitter and Facebook links]

  • Your existing test gives you two false alarms that the Twitter and Facebook links are broken. Those links at the bottom used Xpath locators – which got changed by the new feature. Because the locators changed, these now show up as errors – false positives – that must be fixed to make the test code work again.

Handling Visual Changes with Visual AI

With Visual AI in Applitools, you actually capture all the important changes and correctly identify visual elements that remain unchanged, even if the underlying page structure is different.

[Screenshot: the changes captured by Visual AI]

Visual AI captures:

  • 1 – Change – the broken logo on top
  • 2 – Change – The login form and alert overlap
  • 3 – Change – The fact that the alert text has moved from version to version (the reason for the text overlap)
  • 4 – Change/Change – The changes to the Username and Password box text
  • 5 – Change – the new feature of the Terms and Conditions text
  • 6 – No Change – Twitter and Facebook logos remain unmoved (no false positives)

So, note that Visual AI captures changes in the visual elements. All changes get captured and identified. There is no need to look at the test code afterward and ask, “what did we miss?” There is no need to look at the test code and say, “Hey, that was a false positive, we need to change that test.”

Comparing Visual AI and Legacy Functional Code

With Visual AI, you no longer have to look at the screen and determine which code changes to make. You are asked to either accept the change as part of the new baseline or reject the change as an error.

How powerful is that capability?

Well, Raja makes a comparison of the work an engineer puts in to do validation using legacy functional testing and functional testing with Visual AI.

[Screenshot: effort comparison between legacy functional testing and Visual AI]

With legacy functional testing:

  • The real bug – the broken logo – can only be uncovered by manual testing. Once it is discovered, a tester needs to determine what code will find the broken representation. Typically, this can take 15 minutes (more or less). And you need to inform the developers of the bug.
  • The visual bug – the text overlap – can only be uncovered by manual testing. Once the bug is discovered, the tester needs to determine what code will find the overlap and add the appropriate test (e.g. a CSS check). This could take 15 minutes (more or less) to add the test code. And, you need to inform the developers of the bug.
  • The intentionally changed placeholder text for Username and Password text boxes need to be recoded, as they are flagged as errors. This could take 10 minutes (more or less).
  • The new feature can only be identified by manual validation or by the developer. This test needs to be added (perhaps 5 minutes of coding). You may want to ask the developers about a better way to find out about the new features.
  • The false positive errors around the Twitter and Facebook logos need to be resolved. The Xpath code needs to be inspected and updated. This could take 15 minutes (more or less)

In summary, you could spend 60+ minutes, or 3,600+ seconds, for all this work.

In contrast, automated visual validation with Visual AI does the following:

  • You find the broken logo by running visual validation. No additional code or manual work needed. Incremental time: zero seconds. Alert developers of the error: 2 seconds. Alerting in Applitools can send a bug notification to developers when rejecting a difference.
  • Visual validation uncovers the text overlap and moved alert text. Incremental time: zero seconds. Alert developers of the error: 2 seconds. Alerting in Applitools can send a bug notification to developers when rejecting a difference.
  • Visual validation identifies the new text in the Username and Password text boxes. Visual validation workflow lets you accept the visual change as the new baseline (2 seconds per change – or 4 seconds total)
  • You uncover the new feature with no incremental change or new test code, and you accept the visual change as the new baseline (2 seconds).
  • The Twitter and Facebook logos don’t show up as differences – so you have no work to do (0 seconds)

So, 10 seconds for Visual AI. Versus 3,600 for traditional functional testing. 360X faster.

Let’s Get Real

I would think that a productivity gain of 360X might appear unreasonable. So did Raja. When he went through the real-world examples for writing and maintaining tests, he came up with a more reasonable-looking table.

[Screenshot: real-world test development and maintenance comparison]

For just a single page, in development and just a single update, Raja concluded that the maintenance costs with Visual AI remain about 1000x better, and the overall test development and maintenance would be about 18x faster. Every subsequent update of the page would be that much faster with Visual AI.

In addition, Visual AI catches all the visual differences without test changes and ignores underlying changes to locators that would cause functional tests to generate false positives. So, the accuracy of Visual AI ends up making Visual AI users much more productive.

Finally, because Visual AI does not depend on locators for visual validation, Visual AI ends up depending only on the action locators – which would need to be maintained for any functional test. So, Visual AI becomes much more stable – again leading to Visual AI users being much more productive.

Raja then looks at a more realistic app page to have you imagine the kind of development and test work you might need to ensure the functional and visual behavior of just this page.

[Screenshot: a realistic, feature-dense app page]

For a given page with calculators, data sorting, and user information, this is a large amount of real estate involving both display and action. How would you ensure proper behavior, handle errors, and manage updates?

Extend this across your entire app and think about how much more stable and productive you can be with Visual AI.

Chapter 1 Summary

Raja summarizes the chapter by pointing out that visual validation with Visual AI requires only the following:

  1. Take action
  2. Take a screenshot in Visual AI
  3. Compare the screenshot with the baseline in Visual AI

[Screenshot: the three-step visual validation flow]

That’s it. Visual AI catches all the relevant differences without manual intervention. Using Visual AI avoids the test development tasks for ensuring that app responses match expectations, and it eliminates the more difficult work of maintaining tests as part of app enhancement.

Next Up – Chapter 2.

For More Information

What The World’s Top 12% Testing Teams Are Doing (That You Can Do, Too) https://applitools.com/blog/top-test-teams-sovt/ Wed, 14 Aug 2019 00:14:29 +0000 https://applitools.com/blog/?p=6033 Learn about what top test teams are doing to improve coverage and accelerate time-to-market. This is not some fluff piece with vague ideas of what might (or might not) work. Read along, and we promise it will be worth your time.

The post What The World’s Top 12% Testing Teams Are Doing (That You Can Do, Too) appeared first on Automated Visual Testing | Applitools.

We know that you’re thinking, “Really? Another fluff post about the top people in my profession and what I need to do to be just like them?”

Listen, we hear you, but this is not some fluff piece with vague ideas of what might (or might not) work. Read on, follow along for the next few weeks, and we promise it will be worth your time! If not, write to us directly and tell us what we can do better next time. We will reply!

In case you missed it, on May 27th the 2019 State of Automated Visual Testing was released. Based on independent research sourced from over 350 testing teams around the world, we learned that 12% of you are getting much better results than the other 88%. We’re talking four times more successful as measured by the things you (and your boss, and bosses’ boss, and your bosses’ bosses’ boss) really care about – test coverage, release velocity, application quality, overall R&D teamwork, and cold hard cash! This is not our opinion. It’s not subjective. It’s objective data and information. Data and information that came from you and your peers.

In other words, we’re not asking you to take our word for it; we’re asking you to take your word for it. Take a moment to download the full report. We will be here when you get back!

Click here to download the report.

So what separates the top 12% from the other 88%?

Digital Transformation. Two words that have been baked into our world over the past 20 years. To such an extent that it may be fair to call it fluff? Fluff or not, IDC forecasts that worldwide spending on technologies and services that enable digital transformation will reach $1.97 trillion in 2022, per the (IDC) Worldwide Semiannual Digital Transformation Spending Guide.

(If you think a trillion is an abstract concept, check this out.)

Dang. That’s a lot of fluff! When your bosses’ bosses’ boss is spending that kind of cash, it’s always worth paying attention. It could be good for your career. As it turns out, 12% of the world’s testing teams did pay attention, and it’s their approach to managing the challenges of digital transformation that have set them apart.

The Testing World’s Digital Divide – Digital Transformation Quantified.

You can read up on The Enterprisers Project’s CIO-level take on Digital Transformation (warning: fluff alert) here, or you can quantify it for yourself with some simple math and see how you compare to other R&D and testing teams around the world. Got that calculator ready? Here we go…

  • How many applications do you have in production today? (Don’t forget those native mobile apps, they really add up).
  • How many pages or screens do you have in production on average for each of these applications? (Single Page Applications can be tricky we know, but give it your best guess).
  • How many viewport breaks do you support? (The market average is six if you are not sure.)
  • How many human languages do you support? (We’re talking about localization here – English, German, Spanish, Chinese, Japanese – not coding languages in case you’re wondering).

Now multiply those four numbers together. Congrats! You have just quantified the digital footprint of your business. People can write about digital transformation all they want, but you’ve just transformed all that fluff into something quite real. It’s probably a big number. Over 90,000 for a typical company and over 624,000 for the largest 30% of companies in the world. Here’s the kicker. You and your R&D team are responsible for managing the visual and functional quality of every single one of those pages and screens.
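The multiplication is trivial to codify; plugging in the typical-company figures from the 2019 report (28 apps, 98 screens per app, five screen sizes, six languages) lands close to the 90,000 figure quoted above:

```python
def digital_footprint(apps, screens_per_app, viewports, languages):
    """Unique page/screen variations a team must keep visually and functionally correct."""
    return apps * screens_per_app * viewports * languages

# Typical "Digitally Transformed" company: 28 apps x 98 screens x 5 sizes x 6 languages
print(digital_footprint(28, 98, 5, 6))  # → 82320, i.e. roughly the 90,000 cited
```

Swap in your own four answers from the questions above to size your team's footprint.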

We categorized the number of screens in production by industry, based on your responses. Here’s what we discovered.

When functional test automation first emerged way back in 1989 with the launch of Mercury Interactive XRunner, it was a much different time. Browsers didn’t even exist. Since then, the browser wars have come and gone and standards are now in place. Applications have grown far more complex with native mobile, dynamic content, single page applications (SPA) and responsive design now an everyday reality. Digital transformation for any size company is officially past tense. We’re not transforming, we’re transformed. And now you have to deal with it, but how?

What Defines a Top 12% Global Testing Team?

Top 12% teams have overcome the massive challenge posed by digital transformation. Like any technical challenge we have ever faced, it started with someone on the team who felt the pain and set out to solve for it. And when they did, good things happened.

  • Test coverage increased by 60%.
  • Release velocity became 2.8x faster (even though coverage increased by 60%).
  • Visual and functional quality improved 3x, with far fewer escaped bugs (even though they release 2.8x faster).
  • R&D teams are 4x more satisfied with their visual and functional quality outcomes (yeah, I’d be more satisfied too with those kinds of results!).

All of this despite the fact that these top 12% of testing teams were managing applications 2.2x larger than those of the other 88%. In all likelihood, they felt the pain before most of us, and have now led the way for the rest of us. We just need to follow.

Goodbye, But Only For A Week or So.

Today successful continuous management of application visual quality creates competitive advantage. Business leaders know this and are paying attention. As a result, testers are in a better position than ever to be heroes respected by their R&D teams for driving huge value for the companies they work for.

Over the next 10-12 weeks, Patrick and I will release a series of blogs that explain how these 12% of testing teams reinvented their testing approach to deal successfully with the challenges of Digital Transformation. We will get into the details as promised.

Or, if you simply can’t wait and want to ignore my spoiler alert, you can listen on-demand to the webinar Patrick and I hosted together entitled Wrong Tool, Wrong Time: Re-Thinking Test Automation, read the blog and view the slides here, or just reach out to us for help with your testing approach.

Until next time.

Yours Truly,

James the #NotSoEvilMarketingGuy

Patrick the #GuyWhoActuallyHasDoneTestingFor20Years

Find Out More

Check out Applitools blogs about digital transformation.

Read about how Microsoft incorporates visual testing to their DevOps delivery.

Request a demo of Applitools Eyes.

Sign up for a free Applitools account.

 

The original version of this blog post previously ran on devops.com

 

The post What The World’s Top 12% Testing Teams Are Doing (That You Can Do, Too) appeared first on Automated Visual Testing | Applitools.

]]>
Is AI Required for Successful Digital Transformation? https://applitools.com/blog/ai-digital-transformation/ Thu, 27 Jun 2019 00:33:13 +0000 https://applitools.com/blog/?p=5737 In today’s visual economy, customers are increasingly interacting with companies through a screen. For this reason, digital transformations are all about delivering quality at high velocity along with stability. To do this, leading firms of all sizes and verticals are embracing AI as a critical layer in their tech stack.

The post Is AI Required for Successful Digital Transformation? appeared first on Automated Visual Testing | Applitools.

]]>
Girl talking to robot in store

Gartner recently published a report that said: “application leaders should embrace AI-augmented development now, or risk falling further behind digital leaders.”* In today’s visual economy, customers are increasingly interacting with companies through a screen. For this reason, digital transformations are all about delivering quality at high velocity along with stability. To do this, leading firms of all sizes and verticals are embracing AI as a critical layer in their tech stack.

Why does AI matter to Digital Transformation?

Because it can help automate the error-prone and mundane tasks that often pull humans away from higher-value work for an organization. The Gartner report states, “while early AI-augmented development addresses mundane, ‘plumbing’ tasks, this is exactly where application leaders can benefit from further automation and cost reduction.”*

However, we know that simply using AI for its own sake won’t get you very far. While these types of technological advancements can certainly help increase your productivity and the velocity of your software development pipeline, the areas where you can get the most proverbial “bang for your buck” are processes that are either 1) entirely manual or 2) extremely frequent (or sometimes both).

According to Gartner, “today’s statistical techniques are inadequate for optimizing testing, especially when changes to applications are frequent and where large software assets already exist that use a wide variety of microservices. Development and quality assurance (QA) in organizations cannot keep pace with the rate of innovation needed, due to: a heavy reliance on manual testing, skills gaps, insufficient resources, and an inability to scale technologies and processes. AI and ML are particularly suited to support the complex test automation required for back-end services within a mesh app and service architecture.”* For more information from Gartner regarding microservices, visit 4 Steps to Design Microservices for Agile Architecture.**

__________________________________________________________

Gartner’s ‘Innovation Insight for AI-Augmented Development’ report is available to subscribers: (https://www.gartner.com/doc/3933974).

__________________________________________________________

Why is Digital Transformation difficult?

Consider the task of visually inspecting the user interface of an application or webpage. Oftentimes, this is still a job that is reserved for human eyes only. Someone — or rather a team of people — needs to physically sit down and scan thousands of webpages across multiple browser types and devices, looking for inconsistencies and errors.

According to our 2019 State of Automated Visual Testing Report, today’s typical “Digitally Transformed” company now boasts 28 unique web and mobile applications, each with 98 pages or screens per app, viewable in five different screen sizes, and read in six different human languages. This amounts to around 90,000 screen variations accessible every day by customers. Here’s how that breaks down by industry:

Source: 2019 State of Automated Visual Testing.

90,000 screens. That’s a lot.

In fact, if you laid all those screens end to end, they’d extend over 17 miles (27 kilometers).

Imagine walking that distance, carefully checking a screen every step of the way. How long would it take you? Would you like to do that with every test run?

Source: @7seth on Unsplash.

Probably not.

Because of this, we’ve found that a typical company incurs between $1.8M and $6.9M annually due to visual bugs that have escaped into production. Ouch!

Source: @jpvalery on Unsplash.

What this data tells us is that it is not reasonable to expect humans to find the small inconsistencies in a webpage, especially one they are extremely familiar with. Just as you will “gloss over” a misspelling in a sentence when you know what it is supposed to say, it is very natural to “gloss over” a visual error when you know what color something is supposed to be, where the button is supposed to be positioned, or how the columns are supposed to align. But your customers, who didn’t spend countless hours working on the website or application, will most certainly notice the mistakes that you’ve missed.

In addition to the issue of familiarity, there is also the challenge of scalability. Reviewing 90,000 variations of how a screen looks is simply not sustainable in a time where the velocity of software release cycles is only getting higher.

So, realistically, what can be done?

Visual AI to the Rescue

We created Visual AI technology that emulates the human eye and brain with computer vision algorithms. These algorithms report only the visual differences that are perceptible to users. Also, they ignore insignificant rendering, size, and position differences that users won’t notice.

We’ve tuned these algorithms to instantly validate entire application pages, detect layout issues, and process the most complex and dynamic pages. And the best part? There’s no calibration, training, tweaking or thresholding required. Through years of engineering, we’ve gotten it to work with 99.9999% accuracy. That’s one false positive out of one million screens tested.

Source: @artmatters on Unsplash.

Using this Visual AI technology, our AI-powered visual testing platform, Applitools Eyes, helps increase overall app UI test coverage by 60 percent and improves the overall visual quality of apps by a factor of three. With our Ultrafast Grid, you gain the power of our low-code, highly available, easy-to-use visual testing platform to support your entire digital transformation team: product owners, developers, test automation engineers, manual testers, DevOps, and marketing teams. Reach out or sign up for an Applitools demo today!

 

*Gartner, Innovation Insight for AI-Augmented Development; Analyst(s): Mark Driver, Van Baker, Thomas Murphy, Published: 31 May 2019

**Smarter with Gartner, 4 Steps to Design Microservices for Agile Architecture, 7 August 2018, https://www.gartner.com/smarterwithgartner/4-steps-to-design-microservices-for-agile-architecture/

 

The post Is AI Required for Successful Digital Transformation? appeared first on Automated Visual Testing | Applitools.

]]>
Using Genymotion, Appium & Applitools to visually test Android apps https://applitools.com/blog/genymotion-appium-android/ Thu, 13 Jun 2019 05:29:12 +0000 https://applitools.com/blog/?p=5553 How to use Genymotion, Appium, and Applitools to do visual UI testing of native mobile Android applications.

The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on Automated Visual Testing | Applitools.

]]>

If you want to run mobile applications, you want to run on Android. Android devices dominate the smartphone market. Genymotion allows you to run your Appium tests in parallel on a range of virtual Android devices. Applitools lets you rapidly validate how each device renders each Appium test. Together, Genymotion and Applitools give you coverage with speed for your functional and visual tests.

As a QA automation professional, you know you need to test on Android. Then you look at the market and realize just how fragmented it is.

How fragmented is Android?

Fragmented is an understatement. A study by OpenSignal measured over 24,000 unique models of Android devices in use, running nine different Android OS versions across over three dozen different screen sizes, manufactured by 1,294 distinct device vendors. That is fragmentation. These numbers are mind-boggling, so here’s a chart to explain. Each box represents the usage share of one phone model.

Plenty of other studies confirm this. There are 19 major vendors of Android devices. Leading manufacturers include Samsung, Huawei, OnePlus, Xiaomi, and Google. The market share of the leading Android device is less than 2% of the market, and the top 10 devices combined hold just 11%. The most popular Android version accounts for only 31% of the market.

We would all like to think that Android devices behave exactly the same way.  But, no one knows for sure without testing. If you check through the Google Issue Tracker, you’ll find a range of issues that end up as platform-specific.

Implications for Android Test Coverage

So, if every Android device might behave differently, exactly how should you test your Android apps? One way is to run the test functionally on each platform and measure behavior in code – that’s costly. Another way is to run functionally on one platform and hope the code works on the others. Functionally, this can tell you that the app works – but you are left vulnerable to device-specific behaviors that may not be obvious without testing.

To visualize the challenge of testing against 24,000 unique platforms, imagine your application has just 10 screens. If you placed these ten different screens on 24,000 unique devices end-to-end, they would stretch over 30 miles. That’s longer than the distance of a marathon!
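That distance claim can be sanity-checked with quick arithmetic. The 0.2 m per-screen length below is my own assumption (roughly a large phone laid end to end), not a figure from the post:

```python
# Quick sanity check of the "marathon of screens" claim.
# The 0.2 m per-screen length is an assumed figure, not from the post.
devices = 24_000       # unique Android models measured by OpenSignal
screens_per_app = 10
screen_length_m = 0.2  # assumption: one large phone screen, end to end

total_m = devices * screens_per_app * screen_length_m   # 48,000 m
total_miles = total_m / 1609.344

print(round(total_miles, 1))          # 29.8 miles
print(total_miles > 26.2)             # True -- longer than a marathon
```

With those assumed dimensions, the line of screens comes out to roughly 30 miles, comfortably past the 26.2-mile marathon distance.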

Could you imagine manually checking a marathon’s worth of screens with every release?

I can’t run a marathon, much less do one while examining thousands of screens. Thankfully there’s a better way, which I’ll explain in this post: using Genymotion, Appium, and Applitools.

What is Genymotion?

Genymotion is the industry-leading provider of cloud-based Android emulation and virtual mobile infrastructure solutions. Genymotion frees you from having to build your own Android device farm.

Once you integrate your Appium tests with Genymotion Cloud, you can run them in parallel across many Android devices at once, to detect bugs as soon as possible and spend less time on test runs. That’s powerful.

With Genymotion Cloud, you can choose to test against just the most popular Android device/OS combinations. Or, you can test the combinations for a specific platform vendor in detail. Genymotion gives you the flexibility to run whatever combination of Androids you need.


Why use Genymotion Cloud & Applitools?

Genymotion Cloud can run your Android functional tests across multiple platforms. However, functional tests cover only a subset of the device and OS version issues you might encounter with your application. In addition to functional issues, you can run into visual issues that affect how your app looks as well as how it runs. How do you run visual UI tests with Genymotion Cloud? Applitools.

Applitools provides AI-powered visual testing of applications and allows you to test cross-platform easily to identify visual bugs. At best, visual regressions are merely a distraction to your customers. At worst, they block your customers from completing transactions. Visual errors have real costs – and without visual testing, they often don’t appear until a user encounters them in the field.

Here’s one example of what I’m talking about. This messed-up layout blocked Instagram from making any money on this ad, and probably led to an upset customer and engineering VP. All the elements are present, so this screen probably passed functional testing.


You can find plenty of other examples of visual regressions by following #GUIGoneWrong on Twitter.

Applitools uses an AI-powered visual testing engine to highlight issues that customers would identify. More importantly, Applitools ignores differences that customers would not notice. If you ever used snapshot testing, you may have stopped because you tracked down too many false positives. Applitools finds the issues that matter and ignores the ones that don’t.

How to use Genymotion, Appium & Applitools?

Applitools already works with Appium to provide visual testing for your Android OS applications. Now, you can use Applitools and Genymotion to run your visual tests across numerous Android virtual devices.  To sum up:

  1. Write your tests in Appium using the Applitools SDK to capture visual images.
  2. Launch the Genymotion cloud devices via command line.
  3. Your Appium scripts will run visual tests across the Genymotion virtual devices.
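The steps above can be sketched in Python. This is a minimal illustration, not code from the tutorial: the package choices (Appium-Python-Client and the eyes-selenium SDK), the capability values, the app/test names, and the server URL are all assumptions you would adapt to your own setup, and the Eyes API key is expected in the APPLITOOLS_API_KEY environment variable:

```python
# Minimal sketch of step 1: an Appium test with Applitools visual checkpoints.
# Capability values, names, and the server URL are illustrative assumptions.

def android_caps(device_name, app_path):
    """Desired capabilities for an Android session (plain data, SDK-free)."""
    return {
        "platformName": "Android",
        "deviceName": device_name,      # e.g. a Genymotion Cloud device
        "app": app_path,
        "automationName": "UiAutomator2",
    }

def run_visual_test(device_name, app_path):
    # SDK imports are deferred so android_caps() stays usable on its own.
    from appium import webdriver              # Appium-Python-Client
    from applitools.selenium import Eyes      # eyes-selenium SDK

    driver = webdriver.Remote("http://localhost:4723/wd/hub",
                              android_caps(device_name, app_path))
    eyes = Eyes()   # reads APPLITOOLS_API_KEY from the environment
    try:
        eyes.open(driver, "My Android App", "Login screen test")
        eyes.check_window("Login screen")     # Visual AI checkpoint
        eyes.close()                          # fails the test if diffs found
    finally:
        eyes.abort()    # no-op if close() already succeeded
        driver.quit()
```

Pointing the same script at several Genymotion Cloud devices launched in parallel (step 2) then gives you functional and visual coverage per device (step 3).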

That’s the overview. To dive into the details, check out this step-by-step tutorial on using Genymotion, Appium, and Applitools.

While it’s pretty complete, here’s some additional information you’ll need:

We’ve put together a series of step-by-step tutorial videos using Genymotion, Appium, and Applitools. Here’s the first one:

https://www.youtube.com/watch?v=qXuMglfNEeo

Genymotion, Appium, and Applitools: Better Together

When you run Appium, Applitools, and Genymotion together, you get a huge boost in test productivity. You get to re-use your existing Appium test scripts. Genymotion lets you run all your functional and visual tests in parallel. And, with the accuracy of Applitools AI-powered visual testing, you track down only issues that matter, without the distraction of false positives.

Find Out More

Read more about how to use our products together from this Genymotion blog post.

Visit Applitools at the Appium Conference 2019 in Bengaluru, India.

Sign up for our upcoming webinar on July 9 with Jonathan Lipps: Easy Distributed Visual Testing for Mobile Apps and Sites.

Find out more about Genymotion Cloud, and sign up for a free account to get started.

Find out more about Applitools. You can request a demo, sign up for a free account, and view our tutorials.

 

The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on Automated Visual Testing | Applitools.

]]>
Wrong Tool, Wrong Time: Re-thinking Test Automation https://applitools.com/blog/state-of-visual-testing-research-webinar/ Fri, 07 Jun 2019 14:24:36 +0000 https://applitools.com/blog/?p=5463 Watch this on-demand webinar, and learn how the world’s most innovative testing teams have reinvented their test automation to support a fully automated CI-CD process. This session included live polls taken during the recording -- so you can compare your team's results to those of your colleagues, and see how you rank.

The post Wrong Tool, Wrong Time: Re-thinking Test Automation appeared first on Automated Visual Testing | Applitools.

]]>

Watch this on-demand session to learn: What Are The World’s Most Innovative Test Automation Teams Doing That You Are Not?

Speakers: James Lamberti -- Applitools CMO (left), and Patrick McCartney -- Director of Customer Success engineering @ Applitools (right)

As much as we all hate to admit it, our test automation efforts are struggling. Coverage is dropping. Bugs are escaping to production. Our apps are visually complex, growing rapidly, delivered continuously, and changing constantly – so much so that our functional framework is now bloated, broken, and unable to keep up with Agile and CI-CD release best practices.

No wonder that in our latest State of Visual Testing research, the majority of companies surveyed reported that their CI-CD and automation processes are not helping them to successfully compete in today’s fast-paced ecosystem, and are not effective in ensuring software quality in a scalable and robust way.

But what about those elite testing teams that got it right? What’s their secret? Can we copy what they did, instead of setting ourselves up to fail?

Watch this on-demand session, and learn how the 10% of the world’s most innovative testing teams have reinvented their test automation to support a fully automated CI-CD process, and guaranteed their company’s digital transformation was a success.

Watch this webinar to learn:

  • Why the majority of test automation efforts are falling behind
  • How your QA and testing efforts compare to these elite teams — via live polling results
  • 4 modern techniques that the top 10% of testing teams globally are doing every day, and that you can do too

Slide deck:

Full webinar recording:

Additional Materials and Recommended reading:

  1. State of Visual Testing Research Report — Click here to download your copy of the Executive Summary Whitepaper
  2. Webinar: DevOps & Quality in The Era Of CI-CD: An Inside Look At How Microsoft Does It — with Abel Wong of Microsoft Azure DevOps
  3. How to Run 372 Cross Browser Tests In Under 3 Minutes — post by Jonah Stiennon
  4. How Visual Regression Testing Can Help You Deliver Better Apps — post by Jay Phelps
  5. Want to see Visual AI in action? Contact us, and we’ll get one of our solution architects to show you around!
  6. Release Apps with Flawless UI: Open your Free Applitools Account, and start visual testing today.
  7. Improve your test automation skills, and build your resume for success — with Test Automation University! The most-talked-about test automation initiative of 2019: online education platform led by Angie Jones, offering free test automation courses by industry leaders. Enroll, and start showing off your test automation certificates and badges!

— HAPPY TESTING —

 

The post Wrong Tool, Wrong Time: Re-thinking Test Automation appeared first on Automated Visual Testing | Applitools.

]]>
What Types of Software UI Bugs Are We Seeing in 2019? Here Are 13 Examples https://applitools.com/blog/examples-software-ui-bugs/ Fri, 11 Jan 2019 16:46:48 +0000 https://applitools.com/blog/?p=3394 Take a guess: how long have we been dealing with software bugs? It’s not 30 years, around the time Windows was first released. It’s not 48 years, the start of...

The post What Types of Software UI Bugs Are We Seeing in 2019? Here Are 13 Examples appeared first on Automated Visual Testing | Applitools.

]]>
Apple iOS home screen visual bug

Take a guess: how long have we been dealing with software bugs?

It’s not 30 years, around the time Windows was first released.

It’s not 48 years, the start of the Unix epoch.

It’s actually much longer. 71 years and 2 days, to be exact. Here’s why.

Back on September 9, 1947, Grace Hopper, a Harvard computer scientist, was running tests on a calculator and found calculation errors. She did some investigation and found a moth that had landed between two solenoid contacts, shorting out an electromechanical relay. Apparently, the bug had been attracted by the warmth of the machine.

We now commemorate this occasion every September 9, Tester’s Day.

As you can see in her logbook entry below, dated September 9, the actual offending moth was taped to the page. So not only is this the first known example of a software bug, it’s probably the most tangible example of one as well.

https://upload.wikimedia.org/wikipedia/commons/8/8a/H96566k.jpg
The first known bug, via Wikipedia

71 years after Grace Hopper’s discovery, software continues to be infested with bugs of a more modern variety. Some of these have been pretty spectacular.

Like that time in the 80s when the entire world could have been destroyed due to a software bug.

Really.

The Bug to end all Bugs

Here’s what happened: a Soviet early warning system showed that five American nuclear missiles were flying to Russia.

You have to understand that this was during a particularly tense time during the Cold War, since the Soviets had shot down a Korean passenger jet three weeks earlier. And the United States and USSR both had over 50,000 nuclear weapons, each of which could destroy a city.

Thankfully, the Russian commander who saw this ignored the warnings, believing (correctly) that if the US were to attack the Soviet Union, it wouldn’t launch just five missiles. When the early warning system was analyzed afterward, it was found to be riddled with bugs.

This guy deserves the Nobel Peace Prize

Thankfully, the bugs we’re seeing today are a bit less alarming.

But that said, they’re still pretty annoying in our day-to-day life, given how dependent we are these days on software. Let’s dive into some of them.

Trippy Text Layout

Visual Bug on TripAdvisor App
Overlapping text on TripAdvisor App

On the TripAdvisor mobile app, ratings are overlaid with the hotel name, making it so that, in some cases, you can’t read either. This doesn’t exactly encourage potential guests to make a booking on their app. And that’s a problem given how many travel booking apps there are out there.

Wanna Get A Way (to Book)

Visual Bug on Southwest Airlines App
Text blocking buy button on Southwest Airlines website

On the Southwest Airlines website, a visual bug prevented customers from clicking the Continue button and actually buying a ticket. The visual bug was that their Terms and Conditions text was overlaid on top of the Continue button. Southwest drives about $2.5M through their website every hour. So even if this bug was up for a short time, it would have cost them a lot.

The airline industry is very competitive. Not wanting to be left behind, United Airlines has done their part to cut off their revenue by hiding their purchase button behind text.

Text blocking buy button on United Airlines website
Text blocking buy button on the United Airlines website

SerchDown

Visual Bug on ThredUp Website
Search box blocking shopping cart on ThredUp Website

The ThredUp website prominently provides a convenient search field on its homepage. But it’s not so convenient to block access to buttons to view your shopping cart, or sign in to view your account.

No Order for You

Visual Bug on Amazon App
Off-screen quantity popup on Amazon App stopping the buy process

On Amazon’s mobile app, there was a visual bug that prevented users from continuing the purchase process if they tried to switch their order quantity to something other than one. It’s like the software version of everyone’s favorite restaurant worker.

I’m Feeling Unlucky

No search on the Google website
No search on the Google website

For years, Google’s homepage has been minimalist in design so it loads quickly and they can help users find what they need and get on their way. However, this rendering of their website, using Chrome on macOS, seems to be taking minimalism a bit far.

Maybe privacy really is dead

Public and Private options mashed together on LinkedIn Sales Navigator
Public and Private options mashed together on LinkedIn Sales Navigator

Privacy permissions on social media are a big deal. Some things you might be okay with sharing publicly, and others you’ll want to share with just your network. With LinkedIn‘s overlapping privacy choices, whether a post is public or private can be a roll of the dice.

Repetitive Redundancy

Repeated company listing on LinkedIn
Repeated company listing on LinkedIn

Speaking of LinkedIn, they’re a bit repetitive here…

Repetitively Repetitive Redundancy

Repeated discount notice on Banana Republic website
Repeated discount notice on Banana Republic website

With another repetition-based visual bug, we’re treading into dead-horse-beating territory here, but Banana Republic really, really wants to let you know that all denim and pants are 40% off.

Alexa, are you done yet?

Text stays in "move mode" on Amazon Alexa app
Text stays in “move mode” on Amazon Alexa app

On the Amazon Alexa mobile app, if you rearrange the order of the podcasts in your flash briefing, the app can keep showing a podcast as still “settling in” to its new location, leaving the app looking hung.

Home Screen Blues

Overlapping text on Apple iOS home screen
Overlapping text on Apple iOS home screen

Apple’s iPhone home screen can sometimes improperly position the message that no older notifications exist.

Craigslist not looking so bad

Super narrow column in Facebook Marketplace website
Very narrow column in Facebook Marketplace website

There’s probably some really useful text in that leftmost column of the Facebook Marketplace. We’re just not sure what it is.

What language is that?

Improperly rendered special character on Air Canada website
Improperly rendered special character on Air Canada website

You never know what languages you’ll encounter on an airplane. This is why, on the Air Canada site, they explain their commitment to speaking to customers in their preferred official language, whether it be English, Chinese, or whatever that third choice is. (I’m surprised French isn’t listed.)

So why does software have visual bugs?

None of these examples are intended to throw developers under the bus. Writing code is hard.

Anticipating every possible scenario is near impossible. Your users continuously interact with all kinds of devices with a dizzying variety of operating system versions, browser versions, screen sizes, font sizes, and languages.

When you multiply all these together, the number of different combinations can easily be in the tens of thousands.

That’s a heck of a lot more than this pile of phones: 

Keep Calm and Pile Your Phones

So yeah, life’s not easy for developers.

At the same time, your web or mobile app is now the front door of your business for an increasing number of users. And you have to ensure that storefront doesn’t have any visual glitches.

Unlike these guys: 

Die Thru?
Is that a new Bruce Willis movie?

Or these guys

Conclusion

But back to the software world. There, visual perfection can mean the difference between a customer loving or hating your product. That’s why, at Applitools, we want to help developers and testers come together to find one class of bugs — visual bugs — as quickly as possible through visual UI testing.

We might not save the world, but hopefully, we’ll save you a bit of time in getting a visually perfect app shipped into production.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

What bugs have you seen in web or mobile apps? Tweet them out with hashtag #GUIGoneWrong. If we like your entry, we’ll ship you one of our “Visually Perfect” shirts.

 

The post What Types of Software UI Bugs Are We Seeing in 2019? Here Are 13 Examples appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Introduces UI Version Control https://applitools.com/blog/applitools-introduces-the-worlds-first-ui-version-control/ Sun, 22 Apr 2018 10:21:45 +0000 https://applitools.com/blog/?p=2335 We just released version 10.3 of Applitools Eyes on our Application Visual Management (AVM) platform and would like to tell you about three great new capabilities we added: UI Version...

The post Applitools Introduces UI Version Control appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Baseline Comparison

We just released version 10.3 of Applitools Eyes on our Application Visual Management (AVM) platform and would like to tell you about three great new capabilities we added:

  1. UI Version Control
  2. New View: Apps & Tests
  3. New View: List View of Test Baselines

1. User Interface Version Control

UI Version Control lets you view the history of an application UI over time, and see how it has changed, what has been changed, and by whom.

Up until now, Applitools has only shown baselines in the context of test results from the test results manager, and you’ve never been able to view the history of a test baseline.

Now in Applitools 10.3, you can see the history of all your test baselines in each branch, compare them to prior versions of your baselines, and revert to an earlier version if necessary. You can do this by selecting ‘Save as latest’ from the baseline options menu. So, if you accidentally accepted a baseline that you shouldn’t have, you can undo your mistake.

Applitools Baseline History
Applitools Baseline History

You can also merge baseline branches just like you merge your code changes. This gives you a lot of flexibility around baseline management.

This capability is essentially version control applied to the visual aspects of your application’s entire user interface. We call this User Interface Version Control, or UIVC. And we are the first company to build it.

Applitools Baseline Comparison
Applitools Baseline Comparison

Another benefit of Applitools’ UIVC is that it gives you a system-of-record to understand how your product’s UI has evolved. We found it amazing that, in 2018, with so many businesses driving the bulk of their revenue through their app or website, there wasn’t a place to go to see the visual evolution of your product. So, we went out and built one!

Here’s a demonstration from our CTO, Adam Carmi, of our new UI Version Control:

2. New View: Apps & Tests

The second new feature in Applitools 10.3 is Apps & Tests View. In Applitools, visual tests have always been associated with their respective baselines and been part of an application group. However, there hasn’t been an easy way to visualize this taxonomy of apps, tests, and baselines from the test results manager.

In 10.3, a new user interface screen labeled Apps & Tests now lets you:

  • View a list of all applications and their corresponding tests with the ability to filter, rename, or delete in the application list or the test list.
    • Note: any time you rename an application or test, you should also update your test script so that Applitools doesn’t create a new test during subsequent runs.
  • View details on each test such as the last test execution date and baseline update by user.
  • View all baselines for a test by selecting Show Baselines from the options panel for a test.
  • View last executed or saved result for a test in the Test Results Manager from the options panel.

Applitools Applications & Test UI

3. New View: List View of Test Baselines

The third new update in 10.3 is a list of all baselines for a given test. You can get to this screen from the Apps & Tests screen by clicking on a test, as well as by clicking the options icon for a given test.

Applitools Test Baselines UI

The screen lets you:

  • Group baselines by various parameters
  • Filter baselines by branches, name, and status parameters
  • View related details such as properties, last saved date and total tests run
  • Drill into the history of any baseline from the options panel

To learn more about Applitools’ visual UI testing and application visual management (AVM) solutions, check out the tutorials on the Applitools website. To get started with Applitools, request a demo, or sign up for a free Applitools account.

The post Applitools Introduces UI Version Control appeared first on Automated Visual Testing | Applitools.

Applitools Raises $31M to Advance Application Visual Management (AVM) Category https://applitools.com/blog/press-release-applitools-raises-31m/ Tue, 17 Apr 2018 14:13:23 +0000 https://applitools.com/blog/?p=2317 Digital Transformation drives rapid market expansion of Visual AI from automated UI testing and monitoring to holistic support of all visual aspects of software applications SAN MATEO, Calif., April 17,...

The post Applitools Raises $31M to Advance Application Visual Management (AVM) Category appeared first on Automated Visual Testing | Applitools.


Digital Transformation drives rapid market expansion of Visual AI from automated UI testing and monitoring to holistic support of all visual aspects of software applications

SAN MATEO, Calif., April 17, 2018 – Applitools (https://applitools.com), the leader in Application Visual Management, today announced it has raised $31 million in Series C funding led by OpenView, the expansion stage venture firm, with participation from its existing investors Sierra Ventures, Magma Venture Partners, iAngels, and La Maison. Applitools will use this investment to fuel market expansion of its Artificial Intelligence (AI) Powered Visual Testing and Monitoring solution by scaling its R&D, Operations, and Sales. With tens of thousands of users across more than 300 companies, Applitools Eyes recently crossed a total of 100 million visual comparisons and one billion component level validations. Since it was founded in 2013, Applitools has raised more than $46 million. 

Read more about the next phase of Applitools’ company journey at: (https://applitools.com/blog/the-next-phase-of-the-applitools-journey).

“Our mission is to help customers automate all visual aspects of application delivery, and address the growing importance of providing exceptional digital experiences across any device, browser, operating system, and language,” said Gil Sever, CEO of Applitools. “We are excited to partner with OpenView to scale our enterprise-grade platform to support digital transformation with the velocity and quality that businesses demand.”

Applitools developed the first and only Visual AI Engine that mimics the human eye and brain in a reliable and scalable fashion. Applitools Eyes (https://applitools.com/features), the company’s Automated Visual AI Testing and Monitoring Platform, leverages the largest data set of UI validations in the world and achieves 99.999 percent accuracy, i.e. less than 10 false detections in a million comparisons. The company’s AI engine continues to evolve through machine learning by analyzing millions of new images on a daily basis.

Applitools’ Automated Visual AI Testing and Monitoring Platform v10 is available now. To learn more and open a free trial account, visit: (https://applitools.com/users/register).

“If there’s a single commonality of the fastest growing companies it’s that they understand their brand is the sum of every experience a customer has with you,” said Bill Macaitis, former CMO at Slack, Zendesk and SVP Marketing at Salesforce. “The need to deliver delightful, incredible digital experiences to millions of customers, 24/7, on a breadth of devices, browsers, and operating systems can be daunting. Applitools makes it easy via their Application Visual Management (AVM) approach, which enables automated validation of every aspect of the visual user experience. Applitools is a game changer.”

“OpenView is thrilled to partner with the Applitools team as they solve for one of the few remaining bottlenecks in the continuous delivery process – visual testing,” said Jim Baum, Venture Partner at OpenView who joins the Applitools board. “For modern enterprises, a web or mobile application is the face of their brand and Applitools provides a crucial protective layer.” Jim will bring world class operational experience to Applitools’ Board of Directors, as he was CEO of Netezza and took the company public and later drove its acquisition by IBM in 2010 for nearly $2B.

To help guide Test Automation Engineers, DevOps Teams, Front End Developers, Manual QA experts, and Digital Transformation executives, Applitools created Application Visual Management – a new category framework that simplifies and automates all visual aspects of application creation, testing, delivery and monitoring. The goal is to help shorten application delivery cycles and improve software quality because what the customer sees is what matters most. By helping prevent visual flaws from occurring in the application delivery process, teams can avoid the issues that frequently result from events such as browser and operating system updates, new devices penetrating the marketplace, and the effects of dynamic content on the web.

Today’s DevOps toolchain only supports the functional aspects of modern application delivery in areas like testing, monitoring, Continuous Integration (CI), Continuous Delivery (CD), accessibility, security, bug tracking, collaboration, source control, and more. AVM applies Visual AI technology to add automated visual validation of all the visual aspects of application delivery to the DevOps toolchain, allowing acceleration and full automation of the entire delivery process. Fortune 100 companies have already realized significant benefits through the use of Visual AI technology and those benefits can now be offered to any Enterprise or SMB. This round of funding will be used to expand the offering with new capabilities and to target users across the entire software development, delivery, and monitoring toolchain.

To learn more about AVM, download the Application Visual Management whitepaper for free at: (http://go.applitools.com/AVM-Category-Whitepaper.html).

 

About Applitools

Applitools is on a mission to help Test Automation Engineers, DevOps Teams, Frontend Developers, and Digital Transformation Executives release, test and monitor flawless mobile, web, and native apps in a fully automated way that enables Continuous Integration and Continuous Delivery (CI-CD). Founded in 2013, Applitools uses sophisticated AI powered image processing technology to ensure that an application appears correctly and functions properly on all mobile devices, browsers, operating systems and screen sizes. Applitools has more than 300 customers from a range of verticals, including Fortune 100 companies in software, banking, online retail, insurance, pharmaceuticals, and more. Applitools is headquartered in San Mateo, California, with an R&D center in Tel Aviv, Israel. For more information, please visit applitools.com.

 

About OpenView

OpenView, the expansion stage venture firm, helps build software companies into market leaders. Through its Expansion Platform, OpenView helps companies hire the best talent, acquire and retain the right customers and partner with industry leaders so they can dominate their markets. Its focus on the expansion stage makes OpenView uniquely suited to provide truly tailored operational support to its portfolio companies. Learn more about OpenView at openviewpartners.com.

 

Applitools Media Contact:

Jeremy Douglas
Catapult PR-IR
303-581-7760, ext. 16
jdouglas@catapultpr-ir.com

The 2017 Surprises and 2018 Predictions for Software Delivery, Testing and More! https://applitools.com/blog/the-2017-surprises-and-2018-predictions-for/ Thu, 04 Jan 2018 14:06:18 +0000 http://blog.applitools.com/the-2017-surprises-and-2018-predictions-for/ With 2017 now behind us, we thought this would be a great time to reflect on what the year brought us, and to prepare for what 2018 may bring! From Cloud...

The post The 2017 Surprises and 2018 Predictions for Software Delivery, Testing and More! appeared first on Automated Visual Testing | Applitools.

2018 Predictions

With 2017 now behind us, we thought this would be a great time to reflect on what the year brought us, and to prepare for what 2018 may bring! From Cloud to DevOps to IoT, there was certainly a lot to learn and still plenty of room for growth.

Some of our most shocking revelations were the rise of serverless architecture and seeing companies finally take security head-on, especially with regard to IoT. In 2018, we will see more data collection and analysis as a means to help improve security, along with the proliferation of mobile automation.

There’s so much more that we learned and are looking forward to regarding Cloud, DevOps, IoT, Java and mobile.

Continue reading for more insights from Applitools contributors Daniel Puterman – R&D Director; Gil Tayar – Senior Architect and Evangelist; and Ram Nathaniel – Head of Algorithms and AI. 

Cloud

  • 2017 Surprise: The biggest surprise is that after being declared dead in previous years, PaaS has risen from the grave as “serverless architecture” — Gil Tayar
  • 2018 Prediction: Serverless will continue to grow as a paradigm whereby your application doesn’t care where and how it runs — Gil Tayar

DevOps

  • 2017 Surprise: Docker surprised everyone by declaring support for Kubernetes in Docker EE (alongside their own Docker Swarm), thus ceding victory and confirming that Kubernetes will be, for 2018, the industry standard orchestrator for Microservices. — Gil Tayar
  • 2018 Prediction: Kubernetes will continue to grow and secure its place as the leading orchestrator for Microservices. The three big cloud vendors will support managed Kubernetes — Gil Tayar

IoT

  • 2017 Surprise: With the widespread adoption of IoT development, IoT security evolved from a theoretical problem into a practical issue to deal with — Daniel Puterman
  • 2018 Predictions: With the appearance of cheap deep learning acceleration in hardware, in 2018 we will start seeing smarter IoT devices coming to the market. From computer vision based sensors to voice activated window shades — the world around us will become smarter, and more adaptive to our needs — Ram Nathaniel

    In addition to security, IoT data collection and analysis will become the next target of companies and entrepreneurs in the data science field — Daniel Puterman

Java

  • 2017 Surprises: Kotlin comes from seemingly nowhere to dethrone the Java emperor, with the backing of two heavyweights of the field: Google and Jetbrains — Gil Tayar

    It’s a surprise that with the rise of JVM based alternatives (such as Kotlin for Android development & Scala for data science), Java usage hadn’t actually declined. I expect that 2018 will show the same trend — Java development will continue to take a large portion of software development — Daniel Puterman

  • 2018 Prediction: Kotlin will continue eating away at Java’s dominance of the non-MS enterprise market. It will be a long while till Java will be dethroned, but it will happen — Gil Tayar

Mobile

  • 2017 Surprise: 2017 was the year mobile automation frameworks entered the mainstream mobile development lifecycle, with the appearance of Espresso for Android and XCUITest for iOS — Daniel Puterman
  • 2018 Predictions: React Native and Progressive Web Apps will start being a credible alternative to native mobile apps, thus continuing the increasing dominance of Web technologies on mobile development — Gil Tayar

    2018 will continue to make mobile automation more prevalent, specifically with mobile cloud environments becoming more reliable — Daniel Puterman

What do you think? We’d love to hear some of your predictions and reflections on the world of software delivery and testing. 2018 should be another exciting, fast-paced year — we can’t wait to see what it has in store!

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


Why Digital Executives Should Care about Automated Visual Testing https://applitools.com/blog/why-digital-executives-should-care-about-automated/ Wed, 15 Jun 2016 17:28:32 +0000 http://162.243.59.116/?p=164 These days, to make an impact on your customers and reach new audiences, it’s not enough to have a great product. A company without a strong digital presence – and...

The post Why Digital Executives Should Care about Automated Visual Testing appeared first on Automated Visual Testing | Applitools.


These days, to make an impact on your customers and reach new audiences, it’s not enough to have a great product. A company without a strong digital presence – and by extension, a great digital experience – can fall behind.

What do we mean by the digital experience? It means delivering an app that works functionally and looks beautiful across every digital platform your customers care about.

But creating this experience isn’t always easy. It’s hard to keep up with an ever-growing number of different devices, browsers and screen resolutions.

Adjusting your site or app to the huge variety of platforms out there is a time-consuming task, and you might find yourself asking how to maintain a great digital experience without wasting time on QA, physical device testing and other maintenance procedures.

However, maintenance and testing are key to delivering a good digital experience. Data collected from users’ online behavior shows that in many cases, a user who has closed your app after encountering a bug won’t open it again.
Moreover, a visually faulty UI gives your site or app an unprofessional feel and earns your brand a negative reputation. To keep your customers engaged, loyal, and excited about your site or app, you must provide them with a friendly and flawless experience.

With today’s growing number of operating systems, devices and browsers, a flawless digital experience can be achieved only by automatically detecting UI bugs and monitoring your site or app, ensuring continuous quality throughout your product’s lifecycle. Achieving a visually perfect experience across multiple platforms requires automating your testing – not only the functional aspects of your site or app, but the visual ones as well.

Hello, Automated Visual Testing

This is why the practice of Automated Visual Testing is gaining huge momentum right now. Whether you’re a Digital Experience Executive or QA Manager, Automated Visual Testing can offer you many benefits.

Visual testing allows you to handle changes in your site or app that would otherwise demand a large amount of work from your developers and QA teams using conventional testing methods.
Both functional and visual bugs in your UI that would have taken manual QA personnel hours to detect can now be found automatically. This allows your development team to work better and faster, while widening your feature scope and shortening your release cycle. Automating your visual tests lets your developers and QA engineers push your product forward, instead of chasing and fixing UI bugs.
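To make the idea concrete, here is a toy pixel-level comparison in Python. It is only a sketch of the simplest possible visual check (real tools such as Applitools use AI-assisted comparison rather than raw pixel diffs), and the image representation and `tolerance` parameter are illustrative assumptions:

```python
def visual_diff(baseline, screenshot, tolerance=0):
    """Return the fraction of pixels that differ between two equal-sized
    images, given as rows of (R, G, B) tuples."""
    total = mismatched = 0
    for row_base, row_new in zip(baseline, screenshot):
        for px_base, px_new in zip(row_base, row_new):
            total += 1
            if any(abs(a - b) > tolerance for a, b in zip(px_base, px_new)):
                mismatched += 1
    return mismatched / total if total else 0.0


# A 4x4 all-white "page" as the approved baseline ...
baseline = [[(255, 255, 255)] * 4 for _ in range(4)]
# ... and a new build whose top row accidentally turned red.
regression = [row[:] for row in baseline]
regression[0] = [(255, 0, 0)] * 4

assert visual_diff(baseline, baseline) == 0.0     # unchanged UI passes
assert visual_diff(baseline, regression) == 0.25  # 4 of 16 pixels changed
```

In practice, a naive diff like this flags every anti-aliasing or rendering difference between platforms, which is exactly why AI-assisted comparison is needed at scale.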

Adding automated visual testing to your existing testing infrastructure doesn’t take much effort. Implementing it into your workflow will take some time, but automating your visual tests will quickly boost your team’s productivity and enable you to test and monitor your digital experience across all platforms.

Keep your customers happy – give them a flawless experience by automating your visual tests today!

Contact us to learn more about how Automated Visual Testing can help you ensure a flawless digital user experience.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.
