User Stories Archives – Automated Visual Testing | Applitools
https://applitools.com/blog/tag/user-stories/

How Do I Build A Culture of Quality Excellence?
https://applitools.com/blog/quality-excellence/ – Tue, 01 Sep 2020


How Do You Build A Culture of Quality Excellence?

Greg Sypolt, VP of Quality Assurance at EverFi, led an hour-long webinar for Applitools titled “Building A Culture of Quality Excellence – Understanding the Quality Maturity Rubric.” Let’s review his key points and his core slides.

About Greg Sypolt

Greg has been writing blogs about his experiences as a quality advocate inside his company. He wants to learn how to make testing more efficient and share what he learns with others. He loves to geek out about testing and technology, but he’s also a sports fan, he loves doing DIY projects, and he’s a husband and father of three.

Overview

Greg’s webinar focuses on you – someone who is looking to improve the quality culture at your organization. No matter where you are on the journey, from just starting out to making significant progress, Greg has useful thoughts and ideas for you.  His webinar breaks into several key pieces:

  • Vision
  • Journey
  • Three Pillars of App Quality Excellence
  • How To Measure The Quality Maturity Rubric
  • Building A Quality Platform
  • Reaping The Savings

Vision

Greg’s key message starts with a key idea: Quality is a journey, not a destination. Your tools and software constantly evolve, and your processes and procedures evolve as well.

How do you build your culture of quality excellence? To start, Greg says, envision an organization where each member has the mindset that he, she, or they own quality. This vision differs drastically from the legacy idea of a “QA team”. By getting on that path towards broad quality ownership, Greg says, each team member can focus on becoming the voice of quality for the organization.

At the same time, Greg says, understand your long-term goals. You want to build an organization where quality aligns with your organization’s other goals. You want to provide both speed and efficiency, in addition to reliability. Your organization likely needs a higher level of collaboration. And, by speed, he means focusing on the speed of testing – making sure that your actual tests run quickly. Quality should not be your bottleneck.

Quality Journey Objectives

Next, Greg presents a slide listing the steps of the quality journey and describes each in turn.

  • Everyone understands how we make and test things. In this step, you have a good understanding that design requirements involve validation requirements. You know that you want unit tests, API tests, integration tests, and UI tests. You want to handle functional cases and failure cases. These cases need to be part of the design.
  • Building a quality mindset. Here, you recognize the importance of quality in every aspect of your process to produce a quality product.
  • Understanding our current testing coverage. Provide transparency to your team about what you cover and what you do not. Prioritize areas for test development.
  • Reduction of manual testing, advocating for automating the right things. Slow, inconsistent and inaccurate manual tests impede your testing speed. But prioritize. Know which tests benefit from automation.
  • Build a balanced testing portfolio. Understand how well you cover behavior across unit, API, integration, UI, performance, and security tests. Know your deficiencies and address them.
  • Shift-left mindset by embracing quality in every stage of the CICD pipeline. As you build, test. Ensure that your pipeline gates work well to check each code build against expected behavior.
  • Transitioning to exploratory charters and fully automated testing. This step gets you to automating the right things and beginning to expand your thinking to uncovered cases.
  • Building in quality velocity for developers. Now you deliver ways to help your development team create with a quality mindset by designing with test, testability, and validation results in mind.
  • Quality visibility and accountability. Create the tools and the feedback loop to show discovered defects and their origin – not as a path to punishment but to aid improvement.
  • Remove the traditional QA silo mentality. As part of all the other steps, quality becomes a team metric, rather than the responsibility of a subgroup tasked with assurance.
  • Not a destination. The journey continues – improvements continue along your path.

Greg’s Journey As An Example

Greg shows some data from his experience at his previous company. Greg had worked there for five years.  When he started, the company had no test automation. Each release took weeks to accomplish.


At the end of his journey, the company:

  • Ran 4000 or more builds per week
  • Regularly generated 600 Selenium, 55 Visual, and 75 Integration automated tests
  • Ran an average of 5,000,000 or more tests per month with an average 99.2% pass rate
  • Authored tests in less than 15 minutes and ran all their tests in less than 5 minutes.

By creating all this test infrastructure to go along with the software development builds, Greg’s team created the infrastructure to validate application behavior through test automation. They also built the tools to automate test generation. Finally, they created a discipline to build tests that would run quickly with comprehensive coverage, so testing would allow those 4000+ weekly builds.

Clearly, getting from that starting point to this outcome required an evolution at the company. Greg is now working with his team at EverFi with the goal of achieving a comparable outcome.

The Three Pillars of Application Quality Excellence

Next, Greg speaks about the core pillars of building a culture of quality.

First, Greg describes “Quality over quantity.” The first step here is to test the right things, but not everything. Every line of test code you write becomes test code you need to maintain. As new features get added, your test code will likely have to change. You need your developers to learn to think from an end user’s perspective – how will a user actually use the application? Analytics tools can help your team understand the paths users take through your application and learn what’s important. That data can help you build a robust test infrastructure that targets how your users use your products.

Quality over quantity also focuses your team on transparency for quality. If a developer makes a change and kicks off a build, the build result provides immediate feedback to the developer and the team if the added feature causes test failures. Hold everyone accountable for the quality of their code.


Second, Greg describes “Work smarter not harder.” At Greg’s previous company, they had a dedicated developer solutions team providing self-service solutions for testing. That team made it easy for developers to build the app and tests at the same time. They also built a central repository for metrics, so testing results across behavior, visual, and integration tests could be viewed and analyzed in one place. They worked on tools to author tests more quickly and more effectively. The result made it easier to create the right tests.

They also looked at ways to include more than just developers in helping to both understand the tests and to help create tests.

Third, Greg explains what he means by “Set the right expectations.” He makes it clear that you need to set the right expectations with the development team. When developers begin to develop tests, they need to understand what you expect from them – both from a coverage as well as velocity perspective. You need to make sure that everyone understands who can ensure quality and performance of the product – and it’s the people who write the code who have the greatest influence on the quality of that code.

Greg also makes it clear that you need a clear plan to move to a more mature quality structure. Everyone needs to know what is expected, what will be measured, and what outcomes the team wants to achieve.

Measuring Quality Maturity Rubric

At this point, Greg shares his rubric for analyzing quality maturity.

He mentions that the metrics involve the people on your teams, the technology you use, and the processes you put in place. Your people need to be ready and organized appropriately. You require the right technology to increase your automation. And, you need to put processes in place that allow you to succeed at improving your quality maturity level.

As a preface, Greg describes these metrics and this rubric as based on his experience. He expresses an openness to update these based on the experience of others.

The Quality Maturity Rubric

His full rubric involves 23 different maturity attributes that each have four range values:

  • Beginner – getting started on maturity
  • Intermediate – taking steps to organize people, use technology, or apply processes
  • Advanced – organizing effectively to achieve the maturity measure outcome
  • Expert – having automated the process so that people are used where they are most effective.

Greg goes through several attributes in detail, giving examples of beginner through expert for each. For example, consider the “Culture” attribute:

  • Beginner – No shared quality ownership, siloed development and QA, losing sight of the larger quality picture.
  • Intermediate – Identify and define specific levels of quality involvement for individuals on teams, enable shared responsibilities
  • Advanced – Teams are finding problems together
  • Expert – Machines are identifying problems

To use this table, find the description that most closely matches your organization. Give yourself a “1” if you’re Beginner, a “2” if you’re Intermediate, a “3” if you’re Advanced, and a “4” if you’ve reached Expert.

Greg has created tables for each of the attributes. For example:


For the Environment attribute, the Advanced level involves automated builds and staging, with push-button deployment.

Using the rubric, you go through each attribute and assign a numeric value based on how close you are to one of the levels.

As you go through the analysis, you can also use the table to set your goals for improvement over time, as you look to increase the quality maturity of your organization.


Greg posted his grading rubric online – you can find it at:

http://bit.ly/qmc-grading-rubric

Once you use the grading rubric, you can start to figure out the next two steps:

  • Create an improvement map. You can’t move everything all at once. Focus on the key attributes that matter. Greg points out that culture allows you to move everything else. Figure out where you can go with culture maturity.
  • Move to implementation. Once you know what you want to do, move forward with appropriate steps. Do you have the right people? Do you have process changes you need to deploy? New technology? Move forward in steps appropriate to your organization.

Quality Roles

Now that you have people, processes and technology thought through, along with your approach to maturity, it’s time to think about your people and their skills.

Greg presents a map of QA role clarity that helps you think about your existing technical skills among your team. Greg created this table based on his experiences in quality engineering.

  1. Level 1 is a QA specialist, who focuses on manual and exploratory testing.
  2. The second level is an automation engineer, who has the Level 1 skills plus the ability to develop automation test scripts.
  3. At Level 3 you find an Industry SDET (software development engineer in test) who does the work of the first two levels, plus can write code and develop test algorithms.
  4. Level 4 defines what EverFi needs – an EverFi SDET. In addition to the first three levels, an EverFi SDET must be proficient with DevOps. Greg hopes to get his team to this level in the next 12 to 18 months. A DevOps SDET can help make the development team proficient at integrating testing with development.
  5. The top level, Level 5 – the Next Gen SDET, incorporates the prior four levels. On top of that, a Next Gen SDET brings proficiency in security, data science, and machine learning. This level is more aspirational. Greg expects that, over time, more quality engineers will obtain these skills.

Greg sees this table as another rubric you can use to evaluate your people. You can evaluate where you are today. And, you can start to think about skills you want to add to your team. The people on your team will help you execute your vision, so you need to know where you are and where you want to go.

You can look to hire people with these skills. You can likely find engineers with Level 3 SDET skills. More than likely, though, you, like Greg, will be building these skills among your team over time.

Building A Quality Platform

Once you have thought through your quality maturity, reviewed your existing processes and technologies and begun to evaluate your team, Greg wants you to think about your current and future states as a “quality platform”.

He reminds you that your quality platform serves key outcomes:

  • Building a culture that embraces quality at every stage from intake, discovery, execution and release
  • Enabling and driving continuous improvement and adoption of quality practices
  • Giving teams the ability to lead with a sense of purpose, openness and trust.

The combination of people, processes, and technologies that make up your quality platform can help you deliver major quality improvements.


At the core of your quality platform is the Developer Solutions team – working to create quality solutions among the software development team.

Next, you need data insights that help create visibility across the organization.

Third, you develop turnkey solutions that simplify the deployment of key functions or processes. For example, you can deploy Selenium or Cypress test automation through a set of well-defined code structures and then easily build new tests and structures in your code. At EverFi, for example, Greg has deployed Applitools to add visual validation to tests easily – simplifying overall test development.

Fourth – your platform team serves as quality ambassadors. They represent quality practices and endorse effective change within the organization.

Fifth – you focus on functional values of the platform, making testing better and easier for everyone.

Lastly, you focus on non-functional behaviors, like monitoring the critical paths inside your application. You make sure to understand the critical paths and ensure they work correctly.

Reaping The Savings

Finally, Greg gets to the bottom line.

How does the quality platform deliver savings to your organization? Greg shows this handy table.


Each element of the quality platform contributes savings.

Greg gives the example of Data Insights providing cost savings because they help you know where to add tests and provide data the team can consume. Developer Solutions helps you reduce the time to deploy pre-existing or new solutions. Functional tests can help improve your quality by running on every pull request and giving instant feedback rather than delayed results. Or, a turnkey session with an external vendor can help you get up to speed quickly with the vendor’s technology.

Building Your Own Quality Platform

As Greg points out, moving to your own quality platform results in a journey of constant improvement. You begin with your current state, envision a future state, and move to deliver that future state through people, technology, and process improvements. Each step along the way yields measurable savings.

Here’s to your journey!

For More Information

How Do You Test A Design System?
https://applitools.com/blog/how-to-test-design-system/ – Fri, 07 Aug 2020


How do you test a design system? You got here because you either have a design system or know you need one. But, the key is knowing how to test its behavior.


Marie Drake, Principal Test Automation Engineer at News UK, presented her webinar, “Roadmap To Testing A Design System”, where she discussed this topic in some detail.

Marie is many things. In addition to her work at News UK, she is a Cypress Ambassador and organizer of the Cypress UK community group. If you want to know more about using Cypress, she’s a great speaker. In addition, she blogs about testing and tech at her own blog, mariedrake.com.

This post summarizes her webinar and highlights some of the key points.

Who Is News UK?


News UK is the UK subsidiary of News Corp, the large global publishing and media company. Marie’s team supports the sites that develop and deliver online versions of The Sun, The Times, and The Sunday Times. They also run the Wireless media site. Marie supports the various development teams that deliver news and information that change regularly.

Why A Design System At News UK?


Think of a design system as building blocks. A design system provides a repository for design components used to construct your web application. Or, more precisely, applications. By using a design system, you can eliminate redundant work across different parts of your web application.


Marie gave the example of “share bars” at News UK. Share bars let you share content to social networks like Twitter, Facebook, Instagram, and WhatsApp. You likely have seen share bars on blogs or media pages. Inside News UK, design groups had each coded their own share bars. They found 19 different share bars in use across different parts of the News UK business.

The implication of lots of redundant code written by different people involves the cost of maintenance. Sure, having 19 different teams write 19 different parts of the app sounds like great division of labor. But, when you end up with 19 different share bars – how do you maintain them? How do you choose one for the next part of your web business? What happens when you decide to resize the share bars across your site?

Fast Coding Does Not Equal Agility


Marie showed an even more problematic example of a web business, Fabulous, that wanted to change their brand color from #E665BF to #EA6596. When engineers looked at the potentially impacted code and the areas requiring post-change validation, they estimated the change would take six months. Half a year for a color change?

The coding effort at Fabulous involved two code bases. First, they had the website that needed to be updated. Second, they had the tests that needed to be updated to match the new site. A large part of the test change – even with no functional or other visual change – just required code inspection to ensure that the desired branding color change had been applied as expected across the entire site.

Marie’s seven-plus years in software quality led her to understand that raw coding speed rarely correlates with agility. Here, “agility” means something different from “agile.” In software, agility comes from the ability to make quick changes and have confidence both in their impact on intended behavior and in the absence of unexpected changes. While many software developers can write code quickly, few write thoughtfully in ways that make code maintainable – especially across an entire site.

Benefits of the News UK Design System


In describing the design system deployed by News UK, Marie quickly pointed out its benefits.

  • Cost efficient. Once you set up a design system, you have standardized building blocks for building your site. If you can use the design system to customize, your teams can consume the building blocks instead of rewriting from scratch. And, you reduce software maintenance costs.
  • Reusable. A good design system allows you to re-use code.
  • Speed To Market. As mentioned in the section on agility, the design system reduces the amount of code you need to write from scratch. It also reduces the amount of code you will manually change as you make updates.
  • Scalable. A good design system lets multiple users access the system – making the developers much more efficient.
  • Standard Way Of Working. With a design system, you standardize the process of writing code. You can help new people get up to speed on existing code and simplify the code maintenance process.
  • Consistency. In the end, you can look to the design system to ensure consistent behavior (visual, functional) from your applications.

Marie showed a loop of the design system at News UK. The components get developed and maintained in Storybook. Developers can grab elements and add them into applications being built. The playground feature in Storybook makes it easy for developers and designers to play with Storybook components to mock up the functioning web application before it gets built.

As Marie pointed out, consistency in the components simplifies both development and testing.

Testing A Design System – Requirements

If a design system should make code easier to create and maintain, how do you test a design system?


Marie started outlining the testing requirements developed by News UK.

  • Test different components easily. Expect the system to mature and develop over time. Some components will be entirely visual, and some may include audio. Make sure all this works.
  • Test cross-browser. News UK needed this capability as they knew their content got consumed on mobile devices and a range of platforms.
  • Visual Tests – Write visual tests with less maintenance in mind to reduce impact on testing workflow and speed the process of testing small changes that touch lots of components.
  • Deliver a high-performing build pipeline – build plus test concludes within 10 minutes
  • Integrate design review earlier in the process to improve collaboration, find misunderstandings and differences between design and development early in the process.
  • Test for accessibility on both the component and site level for all users.
  • Catch functional issues early.
  • Have all tests written before deploying a feature. There are 2 full-time QA engineers on the Product Platforms team, so they need to share QA responsibility with developers.

Testing A Design System – Strategy

From here, Marie outlined the strategy to run tests of the design system.

  • First, unit testing. Developers must write unit tests for each component and component modification.
  • Second, snapshot testing. Capture snapshots and validate the status of component groups.
  • Third, component testing. All components need to be validated for functionality, visual behavior, and accessibility.
  • Fourth, front-end testing.  Make sure the site behaves correctly with updated components. Validate for functionality, visual behavior, and accessibility.
  • Fifth – cross-browser tests. Ensure there are no unexpected differences on customer platforms.

Testing A Design System – Challenges

Marie described some of the challenges with different test approaches.


Purely functional tests can include lots of code. Marie’s pseudocode shows this problem. The more comprehensive your functional tests, the more code that exists in those tests. Assertion code – the code used to inspect the DOM for visual elements – becomes a burden for your team going forward.
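Marie’s pseudocode slide isn’t reproduced in this archive, but a sketch along the following lines illustrates the same problem. (The selectors and expected values are hypothetical, and the example uses Selenium-Java for illustration even though Marie’s team works in Cypress.)

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class ShareBarAssertions {
    // A purely functional check of one small component already needs many DOM assertions,
    // every one of which must be updated whenever the component changes.
    static void assertShareBar(WebDriver driver) {
        assertTrue(driver.findElement(By.cssSelector(".share-bar")).isDisplayed());
        assertEquals(4, driver.findElements(By.cssSelector(".share-bar .icon")).size());
        assertTrue(driver.findElement(By.cssSelector(".share-bar .icon--twitter")).isDisplayed());
        assertTrue(driver.findElement(By.cssSelector(".share-bar .icon--facebook")).isDisplayed());
        assertTrue(driver.findElement(By.cssSelector(".share-bar .icon--whatsapp")).isDisplayed());
        assertEquals("rgba(234, 101, 150, 1)",
                driver.findElement(By.cssSelector(".share-bar")).getCssValue("background-color"));
        // ...and none of these assertions catches a misaligned icon or an overlapping label.
    }
}
```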


Visual testing serves a strategic function, except that most visual testing approaches suffer from inaccuracy. Marie showed an example of a “spot-the-differences” game, which highlighted the challenges of a manual visual test. Then, she showed pixel differences, which she found become problematic on cross-browser tests. From a user’s perspective, the pages looked fine. The pixel differences highlighted differences that, after time-consuming inspection, her team judged as inconsequential pixel variations.

Another visual testing inaccuracy Marie described involved visual testing of dynamic data. On news sites, content changes frequently as news stories get updated. When the data changes, does the visual test fail?

Marie and her team had chosen to use available open-source tools for visual testing. Marie showed some of the visual errors that got through her testing system. These had passed functional tests but weren’t caught visually.  

So, Marie and her team discovered that their existing tests let visual bugs through. They knew they needed to solve their visual testing problem.

Choosing New Tools

Marie’s team looked at three potential solutions to their visual testing problem: Screener, Applitools, and Happo. After putting all three through their paces, the team settled on Applitools for its accuracy. That accuracy advantage helped Marie make the case for News UK to purchase a commercial tool instead of adopting an open-source solution.

The team also looked at UI testing tools. They looked at Puppeteer, Selenium, and Cypress for driving web application behavior. As a team, they chose Cypress. They could have used any of these tools with Applitools. Marie’s team chose Cypress because its developer-friendly user experience made it easy for developers to write and maintain tests.  


The final test suite combined Cypress for driving the application with Applitools for visual validation.

Using Applitools

Next, Marie shared the approach her team used for deploying Applitools.

Prior to using any part of Applitools, the team needed to deploy an API key. This key, found on the Applitools console, permits users to access the Applitools API. Once read into the test environment, the key grants the tests access to the Applitools service.

The team needed to add the Eyes code to Storybook for component tests and to Cypress for the site-level tests.

Component Tests


Next, Marie demonstrated the code for validating the Storybook components. The tests cycled through Storybook and had Applitools capture each component. Individual component tests either matched their baselines in Applitools or showed differences. The test team would walk through the inspected differences and either approve the changes, updating the baseline image with the new capture, or reject the change and send the component back to the developers.

Cypress Tests


Similar to the component tests, the Cypress tests integrated Applitools into captures of the built site using the new components. Again, Applitools compared each capture against the existing baseline to find differences.

For Marie’s team, one great part about using Applitools involved the built-in cross-browser testing using the Applitools Ultrafast Grid. Simply by specifying a file of target browsers and viewport sizes, Applitools could automatically capture images for the targets separately and compare each against its baseline.

Auto Maintenance

Marie talked about one of the great features in Applitools – Auto Maintenance. When Applitools discovers a visual difference, it looks for similar differences on other pages captured during a test run. When an Applitools user finds a visual difference and approves it, Auto Maintenance lists the other captures for which the identical difference exists. The Applitools user can then batch-approve the identical changes found elsewhere. A single user, in a single step, can approve site-wide menu, logo, color, and other common changes all at once.

Handling Dynamic Changes

Another benefit of Applitools involves pages with dynamic data. In addition to the example of news items updating regularly, Marie showed an example of the new Internet radio service offered by News UK. The player page can sometimes show different progress in a progress bar during different captures, depending on data being read when taking a screen capture.

Applitools has a layout mode that ensures that all the items exist in a layout, including the relative location of the items, but layout mode ignores content changes within the layout.
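Marie’s team implemented this with the Applitools Cypress SDK; as a rough sketch of the idea in the Selenium-Java SDK (the check name and page are illustrative), a layout-mode check looks like this:

```java
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;

public class LayoutCheckExample {
    // Layout mode verifies that the expected elements exist and keep their relative
    // positions, while ignoring the changing content (e.g., a moving progress bar).
    static void checkPlayerPage(Eyes eyes) {
        eyes.check("Radio player page", Target.window().layout());
    }
}
```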

Accessibility Tests

Next, Marie talked about accessibility tests.

Marie demonstrated component accessibility testing with Cypress AXE. She showed that, once integrated with Cypress, AXE can cycle through components. Unfortunately, AXE and other automated tools uncover only about 20% of accessibility issues.

Lighthouse and other checks get run manually to validate the rest of the accessibility requirements.

She also showed the Safari screen reader accessibility testing.

Workflow Integration

Marie then described workflow, and how the workflow integration mattered to the Product Platforms team.


She made the team’s first point – quality is everyone’s responsibility. For the product platforms team, the two quality engineers serve as leads on approaches and best practices. Developers must deliver quality in the design system.


To accentuate this point, she explained that the team had developed pull request guidelines. Check-ins and pull requests required documentation and a testing checklist covering unit, component, and page-level tests. Everyone agreed to this level of work for a pull request.


Next, Marie showed the workflow for a pull request. Each pull request at the component level required a visual validation of the component before merging. She explained how Applitools could maintain separate baselines for each branch and manage checkpoints independently. Then, she showed the full development workflow build pipeline.


Finally, she showed how the GitHub integration linked visual testing into the entire CircleCI build. She also showed how the build process linked to Slack, so that the team could be notified if the build or testing encountered problems. The build, including all the tests, needed to complete within 10 minutes.

Overall Feedback


Marie provided her team’s general feedback about using Applitools. They concluded that they required Applitools for visual validation of the component-level and site-level tests. Developers appreciated how easily they could use Applitools with Cypress, and how they could run 60 component tests in under 5 minutes across a range of browsers. The design team also uses Applitools to validate the design, and they found the learning curve for reviewing the visual results to be short.

As users, they did have feedback for improvement to share with the Applitools product team. One of the most interesting came from the design team, who wondered if they could use UI design tools (Sketch, Figma, Abstract, etc.) to seed the initial baseline for an application.

Beyond Applitools, the accessibility testing has helped ensure that News UK can deliver visual and audio accessibility for their users.

Conclusion

Marie Drake made a strong case for using a design system whenever there are multiple design and development groups working independently on a common web application. The design system eliminates redundancy and helps speed the rate of change whenever groups want to roll out application enhancements.

She also made a strong case for building testing into every phase of the design system, from component-level unit, functional, visual, and accessibility tests all the way to page-level tests of the same. For testing speed, testing accuracy, ease of test maintenance, and cross-browser tests, Marie made a strong case for using Applitools as the visual test solution for the News UK design system.

For More Information

The end of Smoke, Sanity and Regression
https://applitools.com/blog/end-smoke-sanity-regression/ – Tue, 30 Jun 2020


Test Categories

When it comes to a test strategy for any reasonably sized or complex product, one aspect is always present – the categories of tests that will be created, and (hopefully) automated: for example, Smoke, Sanity, Regression, etc.

Have you ever wondered why these categories are required? Well, the answer is quite simple. We want to get quick feedback from our tests (re-executed by humans, or better yet – by machines in the form of automated tests). These categories offer a stepping-stone approach to getting feedback quickly.

But, have you wondered, why is your test feedback slow for functional automation? In most cases, the number of unit tests would be a decent multiple (10-1000x) of the number of automated functional tests. YET, I have never ever seen categories like smoke, sanity, regression being created for the unit tests. Why? Again, the answer is very simple – the unit tests run extremely fast and provide feedback almost immediately.

The next question is obvious – why are the automated functional tests slow in running and providing feedback? There could be various reasons for this:

  • The functional tests have to launch the browser / native application before running the test
  • The test runs against a fully or partially integrated system (as opposed to a stubbed / mocked unit test)

However, there are also other factors that contribute to the slow functional tests:

  • Non-optimal tool sets used for automation
  • Skills / capabilities on the team do not match what is required to automate effectively
  • Test automation strategy / criteria is not well defined.
  • The test automation framework is not designed and architected correctly, making it inefficient, slow, and prone to inconsistent results
  • Repeating the execution of the same set of functional automated tests on a variety of browsers and viewport sizes

Getting faster feedback from Automated Functional Tests

There are various techniques & practices that can be used (appropriately, and relevant to the context of product-under-test) to get faster, reliable, consistent feedback from your Functional Tests.

Some of these are:

  • Evolve your test strategy based on the concept of Test Automation Pyramid 
  • Design & architect your Functional Test Automation Framework correctly. This post on “Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework.
  • Include Visual AI solution from Applitools that speeds up test implementation, reduces flakiness in your Functional Automation execution, includes AI-based Visual Testing and eliminates the need for Cross-browser testing – i.e. removes the need to execute the same functional tests in multiple browsers

Optimizing Functional Tests using Visual AI

Let’s say I want to write tests to log in to GitHub.

[Screenshot: GitHub login screen]

Expected functionality is that if I click on “Sign In” button without entering any credentials, I see the error message as below:

[Screenshot: GitHub login screen with error message]

To implement a Selenium-based test for such validation, I would write it as below:
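The code itself isn’t reproduced in this archive; the following is a rough sketch of such an assertion-based test, with illustrative locators and expected strings:

```java
import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class GitHubLoginTest {
    private final WebDriver driver = new ChromeDriver();

    @Test
    public void showsErrorWhenSigningInWithoutCredentials() {
        driver.get("https://github.com/login");

        // Submit the form without entering a username or password
        driver.findElement(By.name("commit")).click();

        // Assertion-based validation of the page title and the error banner
        assertEquals("SignIn", driver.getTitle());
        assertTrue(driver.findElement(By.cssSelector(".flash-error"))
                .getText().contains("Incorrect username or password."));
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```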


Now, when I run this test against a new build, which has added functionality, let’s see the status of the test result.


Sure enough, the test failed. If we investigate why, you will see that the functionality has evolved.

[Screenshot: GitHub login screen with errors and labels]

However, our implemented test failed for the 1st error it found, i.e.

The title of the page has changed from “SignIn” to “Sign in to Github”

I am sure you have also experienced these types of test results.

The challenges I see with the above are:

  • The product-under-test is always going to evolve. That means your test is always going to report incorrect details
  • In this case, the test reported only the 1st failure it came across, i.e. the 1st assertion failure. The rest of the issues that could have been captured by the test were not even executed.
  • The test would not have been able to capture the color changes
  • The new functionality did not have any relevance to the automated test

Is there a better way to do this?

YES – there is! Use Applitools Visual AI!

After signing up for a free Applitools account, I integrated the Applitools Selenium-Java SDK into my test, following the tutorial.

My test now looks like this:
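Again, the original code isn’t reproduced here; this sketch shows what the Eyes-based version of the test could look like, with illustrative app and test names:

```java
import com.applitools.eyes.selenium.Eyes;
import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class GitHubLoginVisualTest {
    private final WebDriver driver = new ChromeDriver();
    private final Eyes eyes = new Eyes();

    @Test
    public void showsErrorWhenSigningInWithoutCredentials() {
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        eyes.open(driver, "GitHub Login", "Sign in with no credentials");

        driver.get("https://github.com/login");
        eyes.checkWindow("Login page");              // visual checkpoint replaces the title assertion

        driver.findElement(By.name("commit")).click();
        eyes.checkWindow("Empty sign-in submitted"); // captures the entire error state visually

        eyes.close();
    }

    @After
    public void tearDown() {
        driver.quit();
        eyes.abortIfNotClosed();
    }
}
```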


As you can see, my test code has the following changes:

  • The assertions I had before are gone
  • There are, hence, fewer locators I need to worry about
  • As a result, the test code is more stable, faster, cleaner, and simpler

The test still fails in this case as well, because of the new build. However, the reason for the failures is very interesting.

When I look at the Applitools dashboard for these mismatches reported, I am able to see the details of what went wrong, functionally & visually!


Here are the details of the errors

Screen 1 – before login


Screen 2 – after login with no credentials provided


From this point, I can report the failures in functionality / user experience as defects using the Jira integration, or accept the new functionality and update the baseline appropriately – all with simple clicks in the dashboard.

Scaling the test execution

A typical way to scale the test is to set up your own infrastructure with different browser versions, or to use a cloud provider which will manage the infrastructure & browsers. There are a lot of disadvantages in either approach – from a cost, maintenance, security & compliance perspective.

To use any of the above solutions, you first need to ensure that your tests can run successfully and deterministically against all the supported browsers. That is a substantial added effort.

This approach of scaling seems flawed to me.

If you think about it, where are the actual bugs coming from? In my opinion, the bugs are related to:

  • Server bugs which are device / browser independent

Ex: A broken DB query, logical error, backend performance issues, etc.

  • Functional bugs – 99% of which are device / browser agnostic. This is because:
    • Modern browsers conform to the W3C standard
    • Logical bugs occur in all client environments.

Examples: not validating an input field, reversed sorting order, etc.

That said, though the browsers are W3C standard compliant, they still have their own implementation of the rendering engines, which means, the real value of running tests in different browsers and varying viewport sizes is in finding visual bugs!

By using Applitools, I get access to another awesome feature that lets me avoid running the same set of tests on multiple browsers and viewport sizes, which saves test implementation, execution, and maintenance time. That feature is the Applitools Ultrafast Grid.


See this quick video about the Applitools Ultrafast Grid.

To enable the Applitools Ultrafast Grid, I just needed to add a few configuration details when instantiating Eyes. In my case, I added the below to my Eyes configuration.
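The exact configuration from the post isn’t reproduced here; the snippet below is a sketch of the kind of setup involved, with example browsers and viewports (import paths can vary slightly between SDK versions):

```java
import com.applitools.eyes.selenium.BrowserType;
import com.applitools.eyes.selenium.Configuration;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.visualgrid.model.DeviceName;
import com.applitools.eyes.visualgrid.model.ScreenOrientation;
import com.applitools.eyes.visualgrid.services.VisualGridRunner;

public class UltrafastGridSetup {
    static Eyes createEyes() {
        // The Ultrafast Grid renders each checkpoint against every configured environment.
        Eyes eyes = new Eyes(new VisualGridRunner(10)); // up to 10 concurrent renders

        Configuration config = new Configuration();
        config.addBrowser(1200, 800, BrowserType.CHROME);
        config.addBrowser(1200, 800, BrowserType.FIREFOX);
        config.addBrowser(1600, 1200, BrowserType.EDGE_CHROMIUM);
        config.addBrowser(1024, 768, BrowserType.SAFARI);
        config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);

        eyes.setConfiguration(config);
        return eyes;
    }
}
```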


When I ran my test again on my laptop, and checked the results in the Applitools dashboard, I saw the awesome power of the Ultrafast Grid.

NOTE: The test ran just once on my laptop; however, the Ultrafast Grid captured the relevant page’s DOM and CSS, rendered it in each of the browser and viewport combinations I provided above, and then did a visual comparison for each. As a result, in a little more than the regular test execution time, I got functional and visual validation done for ALL my supported configurations. Isn’t that neat!


Do we need Smoke, Sanity, Regression suites?

To summarize:

  • Do not blindly start with classifying your tests in different categories. Challenge yourself to do better!
  • Have a Test Automation strategy and know your test automation framework objective & criteria (“Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework)
  • Choose the toolset wisely
  • After taking all the correct (subjective) approaches, if your test execution (in a single browser) is still taking more than, say, 10 minutes, then run your tests in parallel and, if needed, split the test suite into smaller suites that give you a progressive indication of quality
  • Applitools, with its AI-powered algorithms, can make your functional tests lean, simple, and robust, while adding UI / UX validation
  • The Applitools Ultrafast Grid removes the need for cross-browser testing: with a single test execution run, it validates functionality and UI / visual rendering for all supported browsers and viewports

When Quality Engineering Meets Product Management
https://applitools.com/blog/quality-product-management/ – Fri, 14 Feb 2020


How do you bridge the gap between quality engineering and product management?

In October 2019, Evan Wiley from Capital One presented on this topic for an Applitools webinar. Evan introduced a number of cool tools that bridge the gap between the product specified by the product managers and the nuts and bolts of testing. Interested? Read on.

Evan’s Background

Evan Wiley spent six years in Quality Engineering at Capital One before moving into product management. In his engineering time, Evan discovered that product managers and quality engineers share complementary views on their products. Product managers envision behaviors that quality engineers execute in their tests.

Evan experienced this relationship directly when product managers invited him to attend “empathy interviews.” In these interviews, team members speak with customers to understand the customer’s environment, needs, fears, and expectations. By attending these interviews, Evan heard first-hand from the customers who were using the results of his work. These interviews both informed his work in quality engineering and later fueled his move to product management.

What Is Quality Engineering?


Evan described the work of quality engineering as finding bugs within products early and often. Noting that the job varies from organization to organization and company to company, Evan described the responsibilities including:

  • Manual testing – ensuring that the product can be exercised and behaves as expected
  • Test automation – creating automated tests that can be run to assure behavioral coverage without manual intervention
  • Production testing – verifying behavior with a production-ready product.
  • Test case design – ensuring that tests cover both positive cases, or normal function, as well as negative cases, or handling problematic inputs, internal inconsistencies, and other errors.
  • Test execution – running tests and reporting results
  • Penetration testing – running tests and reporting results for tests based on known threat vectors
  • Accessibility testing – ensuring that customers with known visual and other disabilities can navigate and use the product successfully.

Evan noted that the nature of this work changes as products, product delivery platforms, and customer environments evolve.  And, things change constantly.

What is Product Management?


Evan next dove into describing the role of product management. Frankly, describing product management can take a two-day course, and even then not cover what everyone thinks product management might do at their company. I know this as I remain a poorly-hidden closet product manager.

Evan does not try to describe product management comprehensively. Instead, he focuses on how empathy interviews drive product managers to make product decisions.  Primarily, Evan says, empathy interviews help product managers become the “voice of the customer.”

One part of empathy interviews guides testing. For example, does your test environment match the customer’s environment? Do your test conditions match the customer’s test conditions?

A larger set of questions helps product managers understand problems their customers try to solve, how to prioritize solutions to these problems, which of these problems need to be higher on the near-term backlog versus further back, and how customers might respond to different marketing messages.

And, when product managers take their products to the field, they can compare a customer’s reaction against the expectations set during empathy interviews, which helps make future interviews even more effective. The initial empathy interview forms the basis for product management key performance indicators.

Ultimately, the needs of product managers and quality engineers diverge significantly. But involving quality engineering in customer empathy interviews helps the overlap succeed.


Quality Engineering and Product Manager Needs Overlap

Quality over Quantity


Evan spends a bit of time discussing how Capital One prioritizes quality over quantity. Evan points out that the company makes this choice, and that the decision permeates the company culture. At Capital One, that goal extends to all engineering – not just quality engineering.

Evan explains with an example from another company.

“If there’s a quality engineer at Facebook, they might have a lot of test cases, and in some cases, they can stand in for the end-user with the knowledge of being a user. So, if they’re testing, say, logout from Facebook, they might think, ‘I can make that simpler for an end-user because I’d want it to be.’ And this insight empowers the quality engineers to work directly with the developers to tweak a behavior for an end-user.”

To this end, Evan sees that the whole of engineering contributes to quality over quantity culture. Hiring processes, skill selection, and testing approaches involve transparency that allows for a breadth of experience and diversity of perspectives on the team.

And, this mix of backgrounds leads to cross-training product managers with quality engineers. Bringing both groups together inside Capital One leads, in Evan’s perspective, to better outcomes for customers.

Evan gave the example of testing a set of features across multiple browsers alongside product managers. He was able to show the product managers where the different browsers handled certain functions differently as a way to build the culture of quality at Capital One.

Gherkin Scenarios


Next, Evan demonstrated the use of Gherkin scenarios for writing stories that describe the behavior of a product. If you don’t know about Gherkin, its basic statements are:

  • Scenario
  • Given
  • When
  • Then
  • And

So, for example, Evan talks about the Google home page. He imagines the product manager writes something like this:
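The slide with his example isn’t reproduced in this archive, but judging from the questions he raises later, it ran along these lines (a reconstruction, not his exact wording):

```gherkin
Scenario: Search from the Google home page
  Given I am on the Google home page
  And I have entered valid text into the search box
  When I click the "Google Search" button
  Then I see a page of search results for my text
```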

These scenarios have several useful properties.

First, they help the product manager describe the detailed behavior a user will experience before any software gets written. Product managers can validate these stories with prospective customers and identify any issues that might impact the behavior of the product.

Second, developers get an understanding of the intended user experience and outcome to plan their coding activity.

Third, engineers can use their experience to determine scenarios that might not have been described and what should be the intended behavior in those situations.  And, engineers can ask relevant questions involving both design as well as behavior.

A Gherkin Example

Imagine some questions that come from the scenario listed above. Here are a few:

  • How much time should the user experience between clicking the “Google Search” button and getting a response back?
  • What happens when the user has 600ms latency between the browser and server?
  • Since the scenario specifies that the test uses valid text for a Google search, define valid text.
  • What about the scenario when the user enters invalid text?

These questions lead to more scenarios, which lead to more complete product specifications, which lead to more complete products. The larger number of scenarios also leads to more complete testing.  Quality engineers read these scenarios and begin planning their acceptance tests based on these different scenarios and conditions.

So, much of the engineering-product management conversation ends up as quality engineering talking with product management about scenarios – tightening the connection between product management and quality engineering.

Evan did not talk about it, but a tool like Cucumber reads in Gherkin scenarios to help build test automation.
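For instance, a Cucumber step definition class maps each Gherkin line to executable test code. A minimal Cucumber-Java sketch (not from Evan’s talk; the locators are illustrative) might look like this:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.Assert.assertTrue;

public class SearchSteps {
    private final WebDriver driver = new ChromeDriver();

    @Given("I am on the Google home page")
    public void iAmOnTheGoogleHomePage() {
        driver.get("https://www.google.com");
    }

    @Given("I have entered valid text into the search box")
    public void iHaveEnteredValidText() {
        driver.findElement(By.name("q")).sendKeys("automated visual testing");
    }

    @When("I click the {string} button")
    public void iClickTheButton(String label) {
        driver.findElement(By.xpath("//input[@value='" + label + "']")).click();
    }

    @Then("I see a page of search results for my text")
    public void iSeeSearchResults() {
        assertTrue(driver.findElement(By.id("search")).isDisplayed());
    }
}
```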

Visual Validation Baselines

From there, Evan moved on to talk about visual validation. And, for visual validation, Evan talked about Applitools – a tool he uses at Capital One.

First, Evan spoke about the idea of user acceptance testing. When you run through the scenarios with product managers, you will end up with the screens that were described. You want to capture the screen images and go through them with product managers to make sure they meet the needs of the customers as understood by the product management team.

So, part of the testing involves capturing images, and that means following your Gherkin scenarios to make sure you capture all the right steps. Evan showed some examples based on a Google page, describing how the test steps in the Gherkin scenarios became captured visual images. Evan pointed out that these visual images begin to define how the application should behave – a baseline for the application.


As you go through the different pages, you can complete the baseline acceptance tests. Once you have a saved good baseline, you know the state of an accepted page going forward.

If you find a behavior you don’t like, you can highlight the behavior and share it with the developers.

You can find problems during visual testing that do not appear in development. For instance, someone realizes you need to test against some smaller viewport sizes. Or, you test a mobile app, but not the mobile browser version.

So, you build your library of baseline images for each test condition. You make sure to include behaviors on target browsers, operating systems, and viewport sizes in your Gherkin scenarios. As it turns out, with Applitools, your collection of completed baselines makes your life much easier going forward.

Visual Validation Checkpoints

Next, Evan dove into using Applitools for subsequent tests.


Once you develop your test cases and run them through Applitools, you have baselines. As you continue to develop your product, you continue to run tests through Applitools. Each test you run and capture in Applitools can be compared with your earlier tests. A run with no differences shows as a pass. A run with differences shows up as “unresolved.”


Evan showed how to deal with an unresolved difference. He inspected one difference, saw it was due to an expected change, and accepted the difference by clicking the “thumbs-up” button in the UI. In this case, the checkpoint automatically becomes the new baseline image. He inspected another difference. In this case, the difference wasn’t one he wanted. He clicked the “thumbs-down” button in the UI. He showed how this information becomes a “failed” test, and how it can get routed back to developers.

Unlike other visual testing tools you may have used, Applitools uses computer vision to identify visual differences. The Visual AI engine can ignore rendering changes at the pixel level that do not result in user-identifiable visible differences. And, there are other capabilities for maintaining applications, such as handling changes that displace the rest of the content on pages or managing identical changes that occur on multiple pages.

Quality over Velocity

Evan went back to discuss the company culture about prioritizing quality. Capital One developed an engineering culture over time to focus on quality. Any decision to emphasize delivery over quality must be documented and validated. Release decisions at Capital One end up being team decisions, as the whole team is responsible for both the content and quality of a release.  So, the entire decision to deliver quality products brings the product management, product development, and quality engineering teams together with a common purpose.

Evan noted that, in his experience, other companies approach these problems in different ways. The culture at Capital One makes this kind of prioritization possible. Cross-training makes this delivery possible because cross-training makes all team members aware of priorities, approaches, and tools used to deliver quality. The result, says Evan, is a high-performing team and consistency of meeting customer expectations.

Questions about Quality Engineering

A questioner asked Evan if Quality Engineering at Capital One had sole responsibility for quality. Evan said no. Evan pointed out that he spoke from his perspective, and while Quality Engineering came up with the approach to validate product quality, the whole team – product management, development, and quality engineer – participated in the testing. The approach helped the team deliver a higher quality product.

Another questioner asked about the benefit of getting customer knowledge directly to Quality Engineering. That’s valuable, Evan said. For example, during an empathy interview, a customer raises a specific problem they hit when trying to accomplish a particular task. During the interview, the interviewer dives deeper into this issue. The result is a more complete understanding of the customer use case, the expected behavior, and the actual behavior observed. This results in both better test cases and future enhancements.

Questions about Visual Testing and Tools

A questioner asked if Gherkin scenarios made sense in all situations. Not always, said Evan. Gherkin scenarios make great sense when fitting into behavior for development to create and quality engineering to test. Evan thought about cases, such as technical debt cases, for which the intended behavior may not be user behavior.

Another questioner asked about the value of visual testing to Capital One. Evan talked about finding ways to exercise a behavior, capture the results, and share the results with product people.  Test pass/fail results could not capture the user experience, but visual test results do so as part of the visual testing solution.  One example Evan gave was for a web app that had an unexpected break on a mobile browser, due to the different browser behavior on a different operating system.  Without visual testing, the error condition would likely not have been caught in-house. If Capital One were only using manual tests, the condition might not have been covered if the specific device version was not included in the test conditions. With the automated visual tests, they found the problem, saved the image, and used that as a new Gherkin scenario in the next release.

Questions about Product Management and Quality Engineering

Next, Evan was asked how to integrate product management and quality engineering more closely. He said he wasn’t sure how to do this in the general case. At Capital One, engineers and product managers collaborate on issue grooming, and being able to capture visual behavior during a test run made it much easier for them to agree on which issues needed to be addressed, at what priority, and for what purpose.

Finally, Evan was asked how to get Product Management to involve engineering more closely. Evan focused on empathy interviews as ways to align engineering and product management, and Gherkin scenarios as tools to bring a common language for both development and test requirements. Evan also talked about his own transition from Quality Engineer to Product Manager – and how he went from being tool-and-outcome focused to customer-and-strategy focused.

About the Webinar

Evan’s Slides (Slideshare)

Evan’s Presentation (YouTube)

 

For More Information about Applitools

The post When Quality Engineering Meets Product Management appeared first on Automated Visual Testing | Applitools.

]]>
DevOps Plus Visual Testing: Abel Wang of Microsoft https://applitools.com/blog/devops-plus-visual-testing/ Thu, 23 May 2019 18:11:50 +0000 https://applitools.com/blog/?p=5130 Abel Wang, Principal Cloud Advocate for Microsoft Azure DevOps, spoke in an Applitools webinar recently. He called his webinar: DevOps and Quality in the Era of CI-CD: An Inside Look...

The post DevOps Plus Visual Testing: Abel Wang of Microsoft appeared first on Automated Visual Testing | Applitools.

]]>
Abel Wang

Abel Wang, Principal Cloud Advocate for Microsoft Azure DevOps, spoke in an Applitools webinar recently. He called his webinar: DevOps and Quality in the Era of CI-CD: An Inside Look at How Microsoft Does It. Because he knows the workings of the team, Abel knows what it takes to operate Azure DevOps. In this webinar, Abel described why the build-test-release model was too slow, and he explained what drove the changes in their development team that enabled them to deliver software with both speed and quality. He explained that the combination of Azure DevOps plus visual testing results in better developer productivity and higher quality, even with frequent releases on a rapid cadence. And Abel described how he came across Applitools as a way to provide visual testing for the Microsoft Azure DevOps development team, and why he likes it so much.

I get sweaty palms thinking about some things that just freak me out. For example, my hands sweated profusely when I watched Free Solo, about climber Alex Honnold, who climbed El Capitan in Yosemite National Park without the benefit of a rope. I also get sweaty palms considering a change from a build-test-release model to a continuous integration – continuous deployment (CI-CD) model. So I appreciate hearing from someone who has done something successfully and is willing to share lessons they learned along the way.


Requirements Conflict With Conventional Wisdom

Conventional wisdom says that, for any project, you can set targets for scope, quality, and resources. Two of those targets can be held fixed, but the third must give way. Applied to build-test-release, this means you are going to miss either your scope, your date, or your quality target.


Companies that consider a shift to CI-CD are doing so hoping to break this conventional wisdom. Abel described the following as key issues that the Azure DevOps team at Microsoft needed to address:

  • Rapid Change – Customers demand new capabilities, and Azure DevOps wants to respond. The platform changes every three weeks, with bug fixes released in the interim.
  • Quality – Customers expect existing features to continue to behave properly, and they expect new features not to impact existing behavior.
  • Customer Choice – In today’s Microsoft, the company cannot dictate which tools customers can or should use. Customers want to choose their own tools and interface with other tools using standards.
  • Reliability – All of the above, with no downtime

So, any CI-CD solution requires flexibility, quality, speed, and reliability – which clearly violates conventional wisdom. Microsoft maps out all the capabilities needed, and the software tools that can serve each one, in a map like this:

Image courtesy Abel Wang, Microsoft

Clearly, Microsoft Azure DevOps expects customers to use any kind of tool for any purpose and have it work in their flow.

Why Waterfall Won’t Work

Next, Abel explained the old way of doing things at Microsoft, and why it just wouldn’t work for Azure DevOps.

Abel related his stories of development at Microsoft – specifically his motivations when he was a developer. His job, he recalled, was to develop code. Once it was done, checked in, and validated, he was done – and hopefully onto another project that was interesting and exciting. Testing was QA’s responsibility. Developers didn’t know much about the quality of any unit tests they wrote.

QA spent the bulk of their time writing and running functional tests. Small products could take months. Large products like a Windows release could take as long as a year. During the testing time, bugs were discovered and fixed, and the bug list burned down. Eventually, the release date would arrive, and the big issue for the product team was whether they thought customers could live with the remaining bugs.

The time scales

Development (Months) > QA (Months) > Release

cannot work for services like Azure DevOps. Microsoft needed a different workflow for developing software that brought speed and quality into the development process.

Microsoft’s Solution: Shift Left

Abel then relayed the five strategic changes that Microsoft employed to deliver speed and quality to their software development. They were:

 

  • Make developers responsible for software quality
  • Make developers feel the pain of their bad code
  • Employ heavy use of Code Reviews / Pull Requests
  • Move away from functional testing toward unit testing
  • Feature Flags

This approach, commonly called “Shift Left” among developers, places the responsibility for software quality squarely on the developers.  Compared with the way things used to happen at Microsoft, the new approach for CI-CD required a radical shift for developers. And, as Abel said, most people were freaked out by their responsibilities.

Responsible for Quality

Making developers responsible for software quality meant that they had to think beyond their piece. Developers were required to consider how their code would be used in the real world. As a result, they started having to measure code coverage in their unit tests. Most of them had never considered coverage seriously, and at first they were frustrated by the extra work this entailed. That changed once they experienced the real pain caused by the code they had written.

Feel The Pain

What pain? Microsoft required each team to assign a single developer to be responsible for the team’s code during off hours, with the assignment rotating from person to person. If a production bug was found, the responsible engineer received a notification and had to join a bridge call – with all the executives for the business on the call. And, while the executives were pushing for outcomes, it was up to the software engineer to fix the problem, push the fix into production, and ensure that the root cause was identified and resolved so the problem never, ever happened again.

Abel related from experience that, when you knew that the top people of the cloud business at Microsoft were on the call and demanding outcomes, that was both stressful and painful. The engineer who resolved that bug never wanted to be on a call like that again. And that pain made its way back to the team.

Use Code Reviews/Pull Requests

Once engineers had first-hand experience of the pain of an executive-level bridge call due to bad code, they all understood the need for quality. This drove the next behavior – code reviews and pull requests. Teams agreed that individuals could not simply push code into a build. Quality depended on the team doing reviews to ensure that the code would work across all the cases the team was developing for.

Move To Better Unit Tests

The pain from bridge calls also drove another quality behavior in development – better unit tests. While the initial response to being responsible for the quality of their own code was frustration, developers changed their behavior. They started understanding the value of comprehensive test code to validate their work. And they began writing code with more comprehensive coverage. As a result, team code improved, and the unit tests could be run during code check-in.

These better unit tests also reduced the need for large functional test suites. Functional tests, which could take hours to run and even longer to root-cause when they failed, were needed for fewer and fewer cases. In most situations, the unit tests validated functional behavior well enough that separate feature tests were unnecessary. Functional testing had accounted for a huge chunk of QA time in the build-test-release model, and the increased coverage of unit testing cut the amount of functional testing required.
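To ground what “better unit tests” means in practice, here is a hypothetical xUnit example; the NutritionCalculator class and the numbers are invented purely for illustration and do not come from Abel’s talk. A small, fast test like this covers both the happy path and a failure case – exactly the coverage that used to wait for the functional suite:

```csharp
using System;
using Xunit;

// Hypothetical class under test, invented for this example.
public class NutritionCalculator
{
    public int TotalCalories(int servings, int caloriesPerServing)
    {
        if (servings < 0)
            throw new ArgumentOutOfRangeException(nameof(servings));
        return servings * caloriesPerServing;
    }
}

public class NutritionCalculatorTests
{
    [Fact]
    public void TotalCalories_MultipliesServingsByCaloriesPerServing()
    {
        var calc = new NutritionCalculator();
        Assert.Equal(600, calc.TotalCalories(3, 200));   // functional (happy-path) case
    }

    [Fact]
    public void TotalCalories_RejectsNegativeServings()
    {
        var calc = new NutritionCalculator();
        // Failure case: invalid input must be rejected, not silently computed.
        Assert.Throws<ArgumentOutOfRangeException>(() => { calc.TotalCalories(-1, 200); });
    }
}
```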

Use Feature Flags

The last Shift Left process change involved feature flags. New features could be released to production behind a flag, so only certain users could exercise them. As a result, new features could be tested in production! If a feature behaved well, the flag could be flipped so everyone could use it. If errors were discovered, the feature could be removed or fixed in the next build without impacting paying customers.
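The mechanics are easy to sketch. The flag name, targeting rule, and classes below are hypothetical; a real team would typically back this with a flag service rather than an in-memory dictionary, but the control flow is the point:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical feature-flag sketch: the flag name and targeting rule are invented.
public class FeatureFlags
{
    private readonly Dictionary<string, Func<string, bool>> _rules = new()
    {
        // "new-checkout" is live in production but visible only to internal users.
        ["new-checkout"] = userId => userId.EndsWith("@internal.example.com")
    };

    public bool IsEnabled(string flag, string userId) =>
        _rules.TryGetValue(flag, out var rule) && rule(userId);
}

public class CheckoutPage
{
    private readonly FeatureFlags _flags = new();

    public string Render(string userId) =>
        _flags.IsEnabled("new-checkout", userId)
            ? "new checkout experience"   // flagged-on users exercise the new code in production
            : "existing checkout";        // everyone else stays on the proven path
}
```

Changing the rule to always return true releases the feature to everyone; removing the flag entry keeps everyone on the existing path, all without redeploying the old code.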

Test for Real Eyeballs: Applitools

While the developer process changes with Shift Left resulted in faster delivery of features with quality, one big gap remained: visual errors. A small CSS change could have a big impact on the visual behavior of the application, and neither unit testing nor functional testing could spot this kind of error consistently.

In the past, Microsoft needed to run the functional tests in front of a group of manual testers. They could see the behavior of the application on-screen and determine whether or not the application behaved as expected. An army of manual testers had to run the tests and do comparisons to see how individual pages responded on individual browser/operating system combinations.

Abel relates that, in this quest for tools, Microsoft came across Applitools. As Abel said:

“There is a really cool technology from Applitools (it’s not even a Microsoft product) that really helps with this. It uses artificial intelligence to do visual testing. Basically, you write your automated UI test (I write my tests using Selenium), and you can use Applitools to take pictures of your screen for you. Then it can do comparisons between what your baseline image should be and the images from your latest code check-in, and it can flag those differences for you.”

“Or, if you want to keep those differences, you can say, ‘You know what, this is correct. The change is correct. Let’s go ahead and set that as my new baseline moving forward.’ So you have the power to do that. And what is so incredibly cool about this is that you’re shifting even further left. Even your testing is being shifted left – your manual or automated UI testing has shifted left – where instead of needing human eyeballs all the time, you can use Applitools to act as your human eyeballs. Only if there is a difference will it let you know, and then you can decide, ‘Is this a change I want, or did somebody mess up my CSS file and now things look wonky?’ So it is super powerful and super useful.”

Demonstrating Azure DevOps Plus Visual Testing with Applitools

Abel then went into a demo of his own environment.

“So what I want to do is show you just how easy it is to use Applitools and to integrate that into our Azure DevOps pipelines.”

Using Page Objects in Selenium

“I’m using Selenium to write my tests. And when I write my automated UI tests, I use a page object type of pattern, where I create an object for every single page that my application has. The page object is just a simple object, but it has all the actions that you can do on that particular screen, and it also has all the verifications that you can do on that screen. So, for instance, actions would be like: you can click on the exercise link, or maybe I can enter text into this specific text box. Those are all the actions that you can do on that screen.”

View the code on Gist.

“I also include the verification that you can do. So maybe I can enter this text, click on next, and then there should be a text box that pops up, or whatever. I need to verify that that text box shows up. I write tests that can actually show me that as well.

“When you write your tests in this page object pattern, maintaining your code is incredibly easy. You separate the test code from the page object. If your page changes, you just need to modify your page object.”

Reading Selenium Tests

“The other benefit is that my automated UI tests become incredibly easy to read as well. For instance, here’s my sample Healthy Food web app, and I’m going to go ahead and launch it. I pass in the browser, which is going to be Chrome, and then…

“Let’s go ahead and browse to the home page of the application. I pass in the URL. Then I verify the home page has been reached. I click on the Nutrition link in the app, I remove all the donuts from the fields, and next I create a new entry. Now I verify the page, and since I added a donut, I verify that the donut shows up in the table. So even if you’re not a coder, you can look at this and be like: ‘Oh, I know exactly what this test is doing.’ It just makes for a very, very clean way to write your automated UI tests.

“If we go ahead and jump into my launch code, you can see this is where I actually do my Selenium work. The first thing I do is check the browser, and if the browser passed in is Chrome, I create a Chrome driver. Then I launch the browser, set the size of the window, and browse to my home page. When I browse to my home page, I pass in the URL and I just say driver.Navigate, go to the URL, and return my home page object, passing in the driver.”
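Abel’s actual code lives in the Gists linked above; as a stand-in, here is a hedged C# sketch of the same page-object idea, with invented selectors and class names (HomePage, NutritionPage) rather than a reproduction of his code:

```csharp
using OpenQA.Selenium;

// Hypothetical page objects in the spirit Abel describes; selectors and names are invented.
public class HomePage
{
    private readonly IWebDriver _driver;
    public HomePage(IWebDriver driver) => _driver = driver;

    // "Browse to my home page": navigate, then hand back the page object.
    public static HomePage Open(IWebDriver driver, string url)
    {
        driver.Navigate().GoToUrl(url);
        return new HomePage(driver);
    }

    // Verification available on this page
    public bool IsLoaded() => _driver.Title.Contains("Healthy Food");

    // Action available on this page
    public NutritionPage ClickNutritionLink()
    {
        _driver.FindElement(By.LinkText("Nutrition")).Click();
        return new NutritionPage(_driver);
    }
}

public class NutritionPage
{
    private readonly IWebDriver _driver;
    public NutritionPage(IWebDriver driver) => _driver = driver;

    public void AddItem(string name)
    {
        _driver.FindElement(By.Id("item-name")).SendKeys(name);
        _driver.FindElement(By.Id("add-item")).Click();
    }

    public bool TableContains(string name) =>
        _driver.FindElement(By.Id("items-table")).Text.Contains(name);
}
```

A test then reads almost like the narration above: open the home page, verify it loaded, click through to Nutrition, add a donut, and verify it appears in the table. If the page markup changes, only the page object needs updating.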

Adding Applitools

“Now that I have these tests, I want to add Applitools, the ability to take pictures of my screens as I run through my tests, so that I can compare future runs with my baseline images to make sure nothing has changed, and things haven’t turned wonky or weird with my UI from a visual perspective. It’s actually super easy to add Applitools into your tests.”

View the code on Gist.

“If you look at my test now, there are a couple of things that I need to gather before I even run. Number one, I need to have a specific Applitools key. I need that key if I want to run my Applitools steps in my build and release pipeline in Azure DevOps. I also grab a batch name and batch ID, which are automatically populated as environment variables. Let me add that little chunk of code.

“Now let’s go into my test. I launch my web app. Next, I create my Chrome driver, but now I also grab the batch name and the batch ID. Then I create an Eyes object, which is part of the Applitools API that you can just get. This is the important code: I create my Eyes object and I call eyes.Open. I set Firefox to my window size, and here I set Chrome to the same window size as well. The next thing I do is browse to my home page and verify that the home page has been reached. Then I have this method called ‘take a visual picture.’”

View the code on Gist.

“When I take a visual picture, it’s literally one line of code: I call eyes.CheckWindow and, of course, I pass in a tag that will show me all the data, just helping organize stuff a little bit easier. So what that does is, on this particular page, Applitools will snap a screenshot and can set it as my baseline, or I can manually set it as the baseline. Then on future tests it will compare the pictures it takes with the baseline and say whether anything has changed, and if it has, is this a mistake or do you want to set your new changes as your new baseline? So it is super, super simple to do once I add this check.

“If something looks wonky even though I’m on the right page, like, for instance, the CSS got changed, it’s not a big deal, because guess what, Applitools will flag it and let you know. Super powerful, super easy to use. The code change that you make is almost nothing. So what does this look like? How do you set this up so it’s automated in your build and release pipelines? Well, let me go ahead and jump in and show you what that looks like.”
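Again, Abel’s exact code is in the Gists; the sketch below shows the same integration points he calls out (the API key and batch values coming from environment variables, eyes.Open, a one-line visual check), with illustrative names and a simplified structure:

```csharp
using System;
using System.Drawing;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using Applitools;            // BatchInfo
using Applitools.Selenium;   // Eyes

class HealthyFoodVisualTestSketch
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        var eyes = new Eyes();

        // In the pipeline, the Applitools build task and pipeline variables supply these values.
        eyes.ApiKey = Environment.GetEnvironmentVariable("APPLITOOLS_API_KEY");
        eyes.Batch = new BatchInfo(
            Environment.GetEnvironmentVariable("APPLITOOLS_BATCH_NAME") ?? "Local run");

        try
        {
            driver = eyes.Open(driver, "Healthy Food", "Home page visual check", new Size(1280, 800));
            driver.Navigate().GoToUrl("https://healthy-food.example.com");   // placeholder URL

            // "Take a visual picture": one call captures the page and compares it to the baseline.
            eyes.CheckWindow("Home page");

            eyes.Close();   // reports differences back to the Applitools dashboard and the build
        }
        finally
        {
            eyes.AbortIfNotClosed();
            driver.Quit();
        }
    }
}
```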

 Azure DevOps Plus Visual Testing with Applitools

“Here’s my build pipeline, and here’s my Applitools build; I’ll go ahead and edit it to show you what that looks like. This build should look incredibly similar to the previous build because it’s literally the same thing, but I added a couple of things. I added the Applitools build task, which you can get from the marketplace. Once it’s installed, I can drag and drop it where I need to: add it right before you compile your application. Then I run my unit tests, and if everything looks good, I go ahead and deploy my application into a CI environment. And now I run my automated UI tests using Selenium, and that is literally all I have to do.”

“So we can come in here and look at the build summary. We see that 100 percent of our tests passed (hooray for that). That looks good. And if we look at the Applitools tab (you get a brand new Applitools tab now), it will actually show you all the pages that were checked. Everything looked good; everything passed. If I click on these, you’ll be able to see a screenshot from every time I took a picture.”

Resolving Visual Testing Differences with Applitools

“We’ll go ahead and jump into my summary, and you’ll notice here that 90.9 percent of my tests have passed. Two of them are in the ‘Other’ category. Well, that’s weird. You’ll notice two of my automated UI tests are coming back ‘inconclusive’. If I look at Applitools (because that’s what my automated UI tests are running), I’ll notice that things are starting to look weird and Applitools flagged it for me. And if I jump in here and actually take a look, you’ll notice that the table is no longer nicely formatted. Somebody must’ve messed up the CSS. Logically everything still works, but visually it doesn’t look good. And I didn’t need a human to tell me that; Applitools told me. So now I can either say, ‘This is the way I want it,’ or not. Thumbs up, it’s OK. Thumbs down, nope.”

“Let’s go ahead and thumbs it down right now. It will mark this as failed, just like that. Now future builds will be able to figure out what’s going on.”

“This makes me so incredibly happy because it’s really powerful. It shifts automated UI testing to the left. And it makes our pipelines go faster and smoother. Super cool. Super, super useful. I am not an Applitools expert. Not even close to it. I’m just a code slinger. I ran into this toolset.  It was freaking amazing how easy it was to use and how useful.”

 

See The Full Webinar

The webinar is also covered in this blog post.

Abel’s full code examples on GitHub.

The original slide deck on slideshare.com:

Find out more about Applitools

Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

If you liked reading this, here are some more Applitools posts and webinars for you.

  1. Visual UI Testing as an Aid to Functional Testing by Gil Tayar
  2. How to Do Visual Regression Testing with Selenium by Dave Haeffner
  3. Why Visual UI Testing and Agile are a Perfect Fit
  4. The ROI of Visual Testing by Justin Rohrman

 

The post DevOps Plus Visual Testing: Abel Wang of Microsoft appeared first on Automated Visual Testing | Applitools.

]]>
Quality, Coverage, and Teamwork: The Holy Trinity Of USA Today’s Winning Dev/Test Team https://applitools.com/blog/building-a-winning-dev-test-team/ Fri, 17 May 2019 18:35:22 +0000 https://applitools.com/blog/?p=4973 Watch Joe Colantonio interview Greg Sypolt — Director of Quality Engineering @ Gannett | USA Today — and learn how he built his Dev/Test team for success – including: hiring...

The post Quality, Coverage, and Teamwork: The Holy Trinity Of USA Today’s Winning Dev/Test Team appeared first on Automated Visual Testing | Applitools.

]]>
Greg Sypolt (Gannett | USA Today) and Joe Colantonio (TestTalks)

Greg Sypolt (Gannett | USA Today) and Joe Colantonio (TestTalks)

Watch Joe Colantonio interview Greg Sypolt — Director of Quality Engineering @ Gannett | USA Today — and learn how he built his Dev/Test team for success – including: hiring test engineers, organizational culture, must-have tools, work processes, and best practices. 

Improving coverage, quality, and teamwork — which lead to faster, better releases — requires a seamless merger of leadership skills, engineering savvy, advanced best-in-market tools, and cross-team collaboration and communication. But although most organizations claim they’re committed to agile development and digital transformation — including fully automated CI-CD processes — this is not a simple task: 75% of companies that have started this journey report failing to achieve those goals.

So what is the secret of success of the 25% of companies that have triumphed with implementing this change?

Greg Sypolt — Director of Quality Engineering @ Gannett | USA Today Network — will share the steps he took on his successful quest to conquer a fully automated CI-CD release pipeline, while supporting dozens of different apps, across a myriad of digital platforms covering thousands of end-user engagement points.

In this live interview hosted by Joe Colantonio — founder of the Guild Conferences and TestTalks podcast — you will discover Greg’s secret recipe for building a winning Dev/Test team that delivers quality software products, faster.

Watch this live interview and learn:

  • Hiring for success: building a winning Dev/Test team
  • Establishing and maintaining a culture of teamwork and effective communication
  • Improving test coverage with more accuracy and less code
  • Accelerating release cycles while improving quality and staying on budget
  • Tools of the trade: what is in Greg’s tech stack that he can’t — and won’t — do without

 

Full Webinar Recording:

Greg is hiring for his team!

Have you got what it takes? Check out the job requirements and apply here

Additional Resources

Here is some recommended reading to learn more:

From Greg Sypolt

From Joe Colantonio

Special Offer — 30% Off on Secure Guild Conf Tickets!

As thanks for joining us on this session, please enjoy 30% off when you sign up for Secure Guild — an online conference dedicated 100% to security testing, taking place May 20-21.

Enter the coupon code applitoolsecure in the ‘click here to enter your code‘ link at checkout to receive your 30% off.

From Applitools’ Content Editors

  1. Webinar: DevOps & Quality in The Era Of CI-CD: An Inside Look At How Microsoft Does It — with Abel Wang of Microsoft Azure DevOps
  2. How to Run 372 Cross Browser Tests In Under 3 Minutes — post by Jonah Stiennon
  3. How Visual Regression Testing Can Help You Deliver Better Apps — post by Jay Phelps
  4. Challenges of Testing Responsive Design – and Possible Solutions — post by Justin Rohrman
  5. Webinar: Creating a Flawless User Experience, End-to-End, Functional to Visual — Practical Hands-on Session w. Greg Bahmutov (Cypress.io) and Gil Tayar (Applitools)
  6. How to Do Visual Regression Testing with Selenium — by Dave Haeffner
  7. Release Apps with Flawless UI: Open your Free Applitools Account, and start visual testing today.
  8. Improve your skills and build your resume for Success — with Test Automation University! The most-talked-about test automation initiative of 2019: online education platform led by Angie Jones, offering free test automation courses by industry leaders. Enroll, and start showing off your test automation certificates and badges!

— HAPPY TESTING —

 

The post Quality, Coverage, and Teamwork: The Holy Trinity Of USA Today’s Winning Dev/Test Team appeared first on Automated Visual Testing | Applitools.

]]>
How Applitools changed visual UI testing at LexBlog https://applitools.com/blog/visual-ui-testing-lexblog/ Wed, 20 Sep 2017 10:47:22 +0000 http://blog.applitools.com/how-applitools-changed-product-development-at/ Guest post by Jared Sulzdorf, Director of Product Development at LexBlog There is an art to managing websites that many do not appreciate. The browser wars of the late 90s...

The post How Applitools changed visual UI testing at LexBlog appeared first on Automated Visual Testing | Applitools.

]]>
Change

Guest post by Jared Sulzdorf, Director of Product Development at LexBlog

There is an art to managing websites that many do not appreciate.

The browser wars of the late 90s carried over into the early 2000s, and while web standards have helped align how these browsers render front-end code, there are still dozens of versions of the five major browsers (Edge has joined the crew!), all with their own understanding of how the web should look and act. Add to that the fact most digital properties ship with many templates, each with their own way of displaying content, and those templates in turn may run on dozens or hundreds of websites, and you’re left wondering if this internet thing is really worth it.

At LexBlog, where I hang my digital hat as the lead of the product development team, we manage well over one thousand websites. From the front to the back-end, it’s our responsibility to ensure things are running smoothly for our clients who range from solo lawyers to some of the largest law firms in the world. With that in mind, it’s no surprise that expectations are high, and our team works to meet those standards during every update.

While our product development team makes great use of staging environments, functional testing, performance monitoring/server error logs, and unit tests to catch issues through the course of the development cycle, it still isn’t enough. The reality of developing on the web is that you can’t unit test how CSS and HTML will render in a browser, and humans can only look at so many screenshots before losing focus.

Prior to finding Applitools, LexBlog used Fake (http://fakeapp.com/), a web automation tool, to visit our sites in a staging environment after a front-end update and take a screenshot. One of our support teammates would then leaf through these screenshots in an effort to find unexpected changes and report back when they were done. Unfortunately, this approach was just not scaling, and our team was running into a myriad of issues:

  • Deployments would wait for weeks while manual testing was performed
  • Manual tests would invariably miss errors and when the code was deployed, hotfixes would quickly follow
  • The overhead in managing the humans and systems necessary for performing manual tests was untenable
  • All of the above led to low morale

The first major breakthrough was finding Selenium to replace Fake. This allowed the product team to better manage the writing and reviewing of tests. However, now that the ball was fully in our court, there was still the problem of reviewing all these screenshots.

Being technically inclined, we looked for programmatic solutions for comparing two screenshots (one taken before an update, the other after) and finding the diff. Myriad tools such as PhantomJS (http://phantomjs.org/) and BackstopJS (https://garris.github.io/BackstopJS/) were researched and set aside until we found Applitools.
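For readers curious what “finding the diff” looks like at its most naive, here is a small C# sketch (file names are placeholders, and this is emphatically not how Applitools works). The brittleness of exact pixel comparison, where anti-aliasing and rendering noise count as “differences,” is what pushed us to keep looking:

```csharp
using System;
using System.Drawing;   // System.Drawing.Common (bitmap APIs; Windows-oriented)

// Naive pixel-by-pixel screenshot diff; file names are placeholders.
class ScreenshotDiff
{
    static void Main()
    {
        using var before = new Bitmap("before.png");
        using var after = new Bitmap("after.png");

        if (before.Width != after.Width || before.Height != after.Height)
        {
            Console.WriteLine("Screenshot dimensions differ.");
            return;
        }

        long changed = 0;
        for (int y = 0; y < before.Height; y++)
            for (int x = 0; x < before.Width; x++)
                if (before.GetPixel(x, y) != after.GetPixel(x, y))
                    changed++;   // any rendering noise at all counts as a change

        double percent = 100.0 * changed / ((long)before.Width * before.Height);
        Console.WriteLine($"{changed} pixels changed ({percent:F2}%).");
    }
}
```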

Applitools had everything we were looking for:

  • An SDK that supported languages our team was familiar with
  • UI for reviewing screenshot comparisons
  • A cloud-based portal allowing multiple teammates to log in and view the results of the same test

After spending some time investing in a host of desktop scripts written in Python, we noticed immediate improvements. Deployments for front-end changes went out the door faster and with fewer errors, and our team was able to focus on higher-purpose work, leading to happier clients.

Over the years, we’ve consistently updated our approach to visual regression testing to the point where we can now introduce large changesets in code that renders on the front-end and catch nearly every regression in staging environments before deploying to production. Applitools, in combination with Selenium, Node, React, and the WordPress REST API, has allowed us to create a fully-fledged visual regression testing application that saves our team countless hours and headaches all while ensuring happy clients. It really is a match made in heaven.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.


The post How Applitools changed visual UI testing at LexBlog appeared first on Automated Visual Testing | Applitools.

]]>
How Concur Technologies (a SAP company) Leverages Visual Testing for Localization Tests https://applitools.com/blog/webinar-recording-how-concur-technologies-a-sap/ Wed, 16 Dec 2015 10:36:26 +0000 http://162.243.59.116/2015/12/16/webinar-recording-how-concur-technologies-a-sap/ Watch test automation expert Peter Kim, and learn how he and his team at Concur Technologies (an SAP company) successfully leverage visual testing to handle thousands of validations per localization....

The post How Concur Technologies (a SAP company) Leverages Visual Testing for Localization Tests appeared first on Automated Visual Testing | Applitools.

]]>

Watch test automation expert Peter Kim, and learn how he and his team at Concur Technologies (an SAP company) successfully leverage visual testing to handle thousands of validations per localization.
Peter also shares how much coverage he gained from adding visual testing, as well as the amount of QA resources saved.

This webinar covers: 

  • How a well-designed, command-based automation framework will help you scale your test coverage more reliably and faster
  • Overcoming the challenges of validating thousands of pages/forms across more than a dozen languages, plus the hurdles of validating all your unique clients’ custom-themed web apps!
  • Live demo of automated visual testing, using a command-based test strategy that scales across different browsers, operating systems, and resolutions, leveraging Applitools Eyes
  • 3 simple automation design concepts to help you make your own framework more reliable and scalable

Watch it here:

Peter’s slidedeck can be found below:

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

Increase Coverage - Reduce Maintenance - with Automated Visual Testing

The post How Concur Technologies (a SAP company) Leverages Visual Testing for Localization Tests appeared first on Automated Visual Testing | Applitools.

]]>
Visual Test Automation with PayPal’s Nemo and Applitools Eyes https://applitools.com/blog/webinar-recording-visual-test-automation-with/ Thu, 29 Oct 2015 08:21:46 +0000 http://162.243.59.116/2015/10/29/webinar-recording-visual-test-automation-with/ In our last webinar, we took an inside look at how PayPal is managing its test automation efforts with Nemo: open source node.js-based Selenium-webdriver wrapper, and Applitools Eyes automated visual...

The post Visual Test Automation with PayPal’s Nemo and Applitools Eyes appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Eyes

In our last webinar, we took an inside look at how PayPal is managing its test automation efforts with Nemo: open source node.js-based Selenium-webdriver wrapper, and Applitools Eyes automated visual testing.

Watch this webinar, hosted by test automation experts Matt Edelman from PayPal and Daniel Puterman from Applitools, to learn about:  

  • Intro to Nemo: learn how to leverage its advanced capabilities for your automation needs
  • Intro to Automated Visual Testing with Applitools Eyes
  • Live demo of Nemo with Applitools Eyes: learn how to easily avoid UI bugs and visual regressions across devices and browsers
  • Best practices for automated visual testing

Full Recording: 

Matt’s “Intro to Nemo and Nemo-Eyes” slide deck: 

Daniel’s “Intro to Automated Visual Testing” slide deck: 

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

 

Automation Expert - You should be automating your visual testing!

The post Visual Test Automation with PayPal’s Nemo and Applitools Eyes appeared first on Automated Visual Testing | Applitools.

]]>
Test Automation at Mozilla – by Andreas Tolfsen https://applitools.com/blog/test-automation-at-mozilla-video-by-andreas/ Tue, 23 Jun 2015 09:39:55 +0000 http://162.243.59.116/2015/06/23/test-automation-at-mozilla-video-by-andreas/ Ever wondered how big companies handle automation? Watch this talk by Andreas Tolfsen – Sr. Software Engineer at Mozilla and core contributor to the Selenium project – to learn about...

The post Test Automation at Mozilla – by Andreas Tolfsen appeared first on Automated Visual Testing | Applitools.

]]>

Ever wondered how big companies handle automation? Watch this talk by Andreas Tolfsen – Sr. Software Engineer at Mozilla and core contributor to the Selenium project – to learn how Mozilla automates its products and helps move the web forward to achieve greater interoperability.

This presentation includes a demonstration of the next-generation replacement for FirefoxDriver, known as Marionette, and the web-based mobile operating system Firefox OS.

Watch Andreas’s talk right here: 


Talk was recorded at the Selenium TLV Meetup, on Jan 29, 2015.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

Automation Expert - You should be automating your visual testing!

The post Test Automation at Mozilla – by Andreas Tolfsen appeared first on Automated Visual Testing | Applitools.

]]>
Applitools Mentions/Reviews on Twitter https://applitools.com/blog/applitools-mentionsreviews-on-twitter/ Sun, 26 Oct 2014 08:51:00 +0000 http://162.243.59.116/2014/10/26/applitools-mentionsreviews-on-twitter/ These Twitter mentions made us very happy indeed, so we would like to thank these Applitools users that gave us a shoutout, telling us and their followers how much they...

The post Applitools Mentions/Reviews on Twitter appeared first on Automated Visual Testing | Applitools.

]]>

These Twitter mentions made us very happy indeed, so we would like to thank the Applitools users who gave us a shoutout, telling us and their followers how much they enjoy our visual test automation solution!

Andreas Tolfsen 

WebdriverIO

Shahar Talmi

Julien Muetton

Thank you all for the mentions!

If you also enjoy using Applitools Eyes, be sure to mention us on Twitter, so we can give you a shoutout on our next “Thank You” blog post, or simply re-tweet any of the above mentions.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

 

Increase Coverage - Reduce Maintenance - with Automated Visual Testing

The post Applitools Mentions/Reviews on Twitter appeared first on Automated Visual Testing | Applitools.

]]>
Selenium TLV Meetup #5: Selenium Tips, and Wix Automation Secrets https://applitools.com/blog/selenium-meetup-5-selenium-tips-and-wix/ Mon, 28 Jul 2014 10:47:00 +0000 http://162.243.59.116/2014/07/28/selemium-meetup-5-selenium-tips-and-wix/ Last week, we held the 5th Israeli Selenium Meetup at Google TLV Campus. Both meetup lectures were dedicated to automation of test processes: Wix rocked the room with their state-of-the-art test automation infrastructure...

The post Selenium TLV Meetup #5: Selenium Tips, and Wix Automation Secrets appeared first on Automated Visual Testing | Applitools.

]]>
Selenium TLV Meetup Group

Last week, we held the 5th Israeli Selenium Meetup at Google TLV Campus. Both meetup lectures were dedicated to automating test processes: Wix rocked the room with their state-of-the-art test automation infrastructure, and Yanir from Applitools shared Selenium tips and tricks.

In the first session, Yanir Taflev, Applitools Customer Success Engineer, shared Selenium tips, tricks, and best practices useful for both beginners and experienced testers.

Yanir Taflev from Applitools at Selenium TLV Meetup

Yanir’s talk, which covered topics such as working with frames, handling multiple windows, explicit and implicit waits, and taking screenshots, is available here.
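For readers who missed the talk, the snippet below gathers quick C# illustrations of those topics; the selectors, frame name, and URL are placeholders rather than examples from Yanir’s slides:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;   // WebDriverWait (Selenium.Support package)

// Quick illustrations of the meetup topics; element IDs, frame name, and URL are placeholders.
class SeleniumTipsSketch
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("https://example.com");

            // Implicit wait: applied to every element lookup.
            driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(5);

            // Explicit wait: block until a specific condition holds.
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            IWebElement submit = wait.Until(d => d.FindElement(By.Id("submit")));

            // Frames: switch into an iframe before locating elements inside it, then back out.
            driver.SwitchTo().Frame("payment-frame");
            driver.SwitchTo().DefaultContent();

            // Multiple windows: iterate the handles and switch to the one you need.
            foreach (string handle in driver.WindowHandles)
                driver.SwitchTo().Window(handle);

            // Screenshots: capture the current viewport and save it to a file.
            Screenshot shot = ((ITakesScreenshot)driver).GetScreenshot();
            shot.SaveAsFile("page.png");
        }
        finally
        {
            driver.Quit();
        }
    }
}
```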

If you have any questions or comments, please contact Yanir here.

Following Wix’s excellent talk on large-scale Selenium automation, many of you asked to learn more about its Java-based automation framework. Wix obliged, and the meetup’s second session discussed this topic in depth.

 

Ilan Tal and Roi Ashkenazi from Wix Automation Team at Selenium TLV Meetup

Ilan Tal and Roi Ashkenazi from Wix’s Automation Team dove deep into the infrastructure of Wix’s Selenium-based automation, covering the test lab structure, parallel test execution, and test context initialization.

Check out Wix’s presentation here.
Check out Wix presentation here.

Wix presentation at Selenium TLV Meetup

For updates and future events, join the Selenium TLV Meetup group.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

The post Selenium TLV Meetup #5: Selenium Tips, and Wix Automation Secrets appeared first on Automated Visual Testing | Applitools.

]]>