Preparing for a Technical QA Engineer Job Interview
https://applitools.com/blog/preparing-for-a-technical-qa-engineer-job-interview/

When you see a technical assessment listed as one of the stages of an interview, what do you do? Does the panic set in? The best way to gain confidence in technical tasks is to practice and to tackle them head on.

In this blog, I will walk through:

  • My experience as a candidate, having previously held job titles including QA Engineer, SDET, and QA Automation Lead.
  • The different types of technical tasks I’ve encountered, from the classic whiteboard FizzBuzz programming task to a take-home task building a test framework from scratch.
  • Advice as a hiring manager, drawn from my experience as QA Lead at a healthcare startup in the UK. I’ll touch on not only the technical aspects of the interview but also the behavioral and situational questions we ask candidates, with tips on how to prepare.

All tips and advice are based on my experience as a hiring manager recruiting for QA Engineer roles.


My experience as a candidate

I’ve never been the most technical person. I graduated from university with a psychology degree and managed to land a graduate scheme job that taught me the skills on the job while I worked as a Test Analyst. So whenever I see a technical part of a job interview process, my anxiety definitely sets in. However, having been through quite a few of these during my career, I’ve come up with a few different tactics.

My approach is to refresh my skills either in the programming language or automation test framework and practice! This may mean accepting multiple job interviews in order to use some companies’ technical assignments just as a form of rehearsal. If that’s not possible, Test Automation University (TAU) provides code samples and assignments to refresh your skills.

Of course, every job has its own spin on the technical task, but they generally follow a similar pattern. Interviews for QA Engineer and Test Automation roles often focus on end-to-end testing with frameworks based on Selenium, Puppeteer, Cypress, or Playwright. I always spent too long on these tasks, focusing on using the latest language features or abstracting any duplicate code into helper functions.

Some of the tasks I encountered as a candidate I was convinced I had failed even as I completed them, especially the whiteboard technical tasks. The first one was FizzBuzz, and another was sorting and filtering a list. It’s very difficult for me to perform these tasks on a whiteboard in pseudocode, without a keyboard in front of me and without Stack Overflow. But the interviewer often uses these types of tasks to understand your thought process and how you communicate throughout the activity.

Advice from my candidate experience

These tasks often don’t relate to the daily activities an automation tester or SDET will be performing. In my opinion, the interviewer shouldn’t be assessing the completion of the task or the actual solution. From my experience, my advice for these types of programming tasks:

  • Don’t overthink it: Get a solution down and rewrite it later.
  • Ask questions: It can buy some time for thinking.
  • Become comfortable with silence: The interviewer won’t be comfortable with the silence either.

The technical assessment

As a hiring manager, I have seen some bad and some bizarre submissions as part of the technical test. Here are some red flags I have witnessed while reviewing our technical tasks:

  • One of the tasks was missing from the submission
  • No assertions in automated checks
  • Assertions within tests wrapped in try / catch (see the sketch after this list)
  • “Negative” scenario provided is actually a positive scenario
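
To make the try / catch red flag concrete, here is a minimal sketch in Jest-style JavaScript (the endpoint and names are hypothetical): the first test can never fail because the catch block swallows the assertion error, while the second lets the assertion do its job.

// Red flag: the catch swallows the assertion error, so this
// check passes even when the API returns the wrong status.
test('returns chocolate ingredients (can never fail)', async () => {
  const res = await fetch('http://localhost:5000/ingredients/chocolate')
  try {
    expect(res.status).toBe(200)
  } catch (e) {
    // assertion failure silently ignored
  }
})

// Better: let the assertion throw and fail the test.
test('returns chocolate ingredients', async () => {
  const res = await fetch('http://localhost:5000/ingredients/chocolate')
  expect(res.status).toBe(200)
})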

Getting the basics right is so important. TAU has many courses to help refresh and upskill in preparation for technical jobs.

How I evaluate technical assessments

I will walk through how I evaluate a technical task, which should help you approach the task effectively:

  • Firstly, once I have received the technical task back from the candidate, I open up the README.md file and try to run the project. If the instructions are clear and easy to understand, this gives a great first impression. Making the hiring manager’s job easier by describing how to run the tests is a really good start.
  • Secondly, I run the tests (hopefully, they are green) and read the names given to them. Good test names define what is actually being tested and demonstrate understanding of the task itself.
  • Thirdly, looking at the new code, do the assertions answer the right question? Especially around the negative scenario: does the test handle unwelcome behavior? Too often, negative testing is misunderstood.
  • Fourthly, when the test is focused on UI test automation, what element locators have been used? The hierarchy of best practices in the Cypress documentation is a good reference. Obviously, the best locators are not always available to the candidate, but a comment noting what should be used will not go unnoticed. This is one place where comments in the code are welcome and should definitely be used to help the hiring manager understand your thoughts.
  • Fifthly, has the code been structured to facilitate maintenance and stability? For example, moving setup steps to beforeEach shows you know how to make tests independent, and refactoring code to be easier to read and understand demonstrates your experience of working with unfriendly code. (See the sketch after this list.)
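
To illustrate points two to five together, here is a minimal Cypress-style sketch of the shape I like to see in a submission (the page and selectors are hypothetical):

describe('Registration form', () => {
  beforeEach(() => {
    // Shared setup keeps each test independent (point five)
    cy.visit('/register')
  })

  // The test name states the behaviour under test (point two)
  it('rejects a username shorter than three characters', () => {
    // data-* attributes are the most stable locators per the Cypress docs (point four)
    cy.get('[data-testid="username-input"]').type('ab')
    cy.contains('button', 'Register').click()
    // The assertion answers the question the negative scenario asks (point three)
    cy.get('[data-testid="username-error"]')
      .should('contain.text', 'at least 3 characters')
  })
})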

These are some key areas which I focus on during the review process, but overall, I’m looking for simplicity, clarity, and following best practices. I always request candidates don’t take too long on a task. Comments with future improvements are enough in this scenario.

The behavioral assessment

Often the interview will be split into multiple sections, one of which may be behavioral or situational style questions. For example, “How do you deal with a situation when a developer says a bug is actually a feature?” The role of an automation tester involves working as part of a team, so it’s important to prepare for these questions. As with the coding exercises, practice can help you prepare for this style of interview question. By rehearsing examples from your experience, you’ll articulate your answers more fluently.

How I evaluate behavioral assessments

Take the example of dealing with a challenging developer who questions a bug. Some things I look for:

  • Firstly, how closely a tester works with developers and what kind of relationship they have. Working at a startup, it is very important that the QA Engineers and developers work together to solve problems.
  • Secondly, persuading and influencing peers: whether they involve other stakeholders, and how much information is gathered before presenting the argument. Again, within a startup environment, we are looking for people who solve their own problems. Involving managers and other stakeholders is still appropriate in certain circumstances, but trying to resolve the issue on your own first shows proactivity and independence.
  • Thirdly, attention to detail when it comes to acceptance criteria, and at what stage this conversation happens within the software development lifecycle (SDLC). In particular, I am looking for someone who promotes “3 Amigos” conversations (ideally all 3, but 2 is good enough). These conversations help prevent requirements from being misunderstood before development starts.

These behavioral or situational questions relate to daily activities for a tester, how someone works within a team, and especially their communication skills. Obviously as a hiring manager, I want to hear about real experiences candidates have had. However, including your opinion on how the team or process could be improved is also valued. Describing the kind of environment the candidate would like to work in helps differentiate between previous and desired experience.


Tips from a hiring manager

Having interviewed many candidates and reviewed lots of technical assessments, these are a few of my tips to think about when interviewing:

  1. Don’t be afraid to ask questions. A core attribute of a good tester is asking good questions, so this is encouraged in my interviews: the more questions or clarifications, the better.
  2. Show your workings. Just like in a math exam at school, how you got to the answer matters, whether that is in comments within your code or verbally when explaining your solution.
  3. Admit what you don’t know. It’s better to state what you are unsure about than to guess; the interviewer can only interpret what you say. An honest person is very well received from my perspective: I can teach a skill to a prospective candidate, but the other attributes are more difficult to coach.

To conclude

As a hiring manager, I am not looking for the finished article. Everyone has had different experiences and opportunities, and this should always be taken into consideration. What’s important to demonstrate within the interview process is how you communicate, how you work as part of a team, and your technical skills. To do that, explain your thought process, offer your opinion, and be clear about what you still need to learn.

ICYMI: Get connected, be inspired, and sharpen your skills with on-demand expert session recordings from Test Automation University Conference 2023.

Acceptance Test-Driven Development for Front End
https://applitools.com/blog/acceptance-test-driven-development-for-front-end/

I was first introduced to Acceptance Test-Driven Development (ATDD) at a meetup run by ASOS technology. I loved the theory and ended up joining ASOS, where I was able to see it in practice, working within a cross-functional development team through pair programming or working at the same time on feature code and test code. This was an amazing environment to work in and I learnt a lot, especially about the whole team owning the quality of their software. ATDD encourages QA engineers and developers to work together on implementing the behavioural tests, meaning edge cases and the tester’s mindset are applied right from the beginning of the development cycle. Behaviour is documented as part of the feature development as well, meaning you have live documentation about what was actually implemented.

Gojko Adzic defines ATDD as:

“Write a single acceptance test and then just enough production functionality/code to fulfill that test”

Flow chart of ATDD

In this article I will aim to distinguish between ATDD and Test-Driven Development (TDD), and also explain the differences between ATDD and Behaviour-Driven Development (BDD). I will also:

  • Explain how ATDD fits into your agile testing strategy.
  • Detail the flow of ATDD with a simple example built on frontend technologies, and note how it can be applied to the backend.
  • Share the importance of not forgetting about visual changes when it comes to test coverage.

Finally, if you want to learn more, you can take my course on Test Automation University: Acceptance Test Driven Development for the Front End.

My introduction to ATDD

We would create pull requests which included the feature alongside unit, component, and acceptance tests. I was fortunate to be onboarded onto the team by a brilliant engineer, who walked me through the process step by step. One of those steps was using Gherkin syntax (Given, When, Then) to create the acceptance test scenarios right at the start of development of a feature. This allowed the QA Engineer to share the Gherkin scenarios with the whole team, including the Business Analyst, Solutions Architect, and Developers. Everyone understood and “accepted” the test cases before they were implemented, avoiding wasted time and energy.

How ATDD differs from other strategies

The focus of ATDD is on the behaviour of the user. Acceptance criteria are always written from the user’s perspective and aim to capture the user’s requirements so they can be translated into software (this is harder than it sounds). The difficulty with writing code is that you can lose sight of the user requirements during the development process: you’re thinking about design patterns, writing clean functions and classes, and engineering for future scaling. It’s only human to think like this and get lost in the details. Hence, the addition of acceptance tests can help align the code implementation with the acceptance criteria.

TDD vs ATDD

TDD focuses on the implementation of units of code or isolated functions. TDD is used within ATDD as the inner circle (refer to ATDD diagram). For example, where you have a username field on the registration page, the username input tracks changes when the user types into the text box. This may be abstracted within the code to a function which could be tested in isolation. This is an implementation detail which would not concern the user; they just expect to be able to use the input field without concerning themselves with the implementation detail. ATDD focuses on the overarching behaviour from the perspective of the user.
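
As a sketch of that inner TDD loop, a unit-level test for the username input might look like this, using React Testing Library (the Registration component is hypothetical, and the jest-dom matchers are assumed):

import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import Registration from './Registration' // hypothetical component under test

// Unit level: verifies the implementation detail (the input tracks changes),
// not the overarching user behaviour covered by the acceptance test.
test('username input tracks what the user types', async () => {
  render(<Registration />)
  const input = screen.getByLabelText(/username/i)
  await userEvent.type(input, 'lewis')
  expect(input).toHaveValue('lewis')
})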


BDD vs ATDD

BDD can be used as part of the ATDD cycle to describe the acceptance criteria. BDD describes the process of how business requirements are documented or designed, and is often referenced when using Gherkin syntax. The Gherkin requirements can be translated into acceptance tests. The difference is that BDD relates to the business phase while ATDD relates to the development phase.
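
As a rough sketch of that relationship, a Gherkin scenario agreed in the business phase maps onto an acceptance test in the development phase (Cypress-style, hypothetical selectors):

// Gherkin scenario from the business phase:
//   Given I am on the registration page
//   When I enter a valid username
//   Then the username is accepted

// Acceptance test in the development phase:
it('accepts a valid username on the registration page', () => {
  cy.visit('/register')                                               // Given
  cy.get('[data-testid="registration-input-username"]').type('lewis') // When
  cy.contains('button', 'Register').click()
  cy.get('[data-testid="registration-error-username"]').should('not.exist') // Then
})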

Agile testing vs ATDD

Agile testing is the strategy or practice of which ATDD can be a part. Agile testing involves testing throughout the software development lifecycle; for example, within a scrum agile environment, you would think about everything from testing requirements through to testing in production. When analysing the test strategy for an agile team, I would include ATDD as part of continuous testing, as it enables teams to deliver quickly and with quality.

Example of ATDD in action

As previously mentioned, we are using a registration page as an example. We want to build the username field using ATDD, starting with the Minimum Viable Product (MVP): a username input which accepts any value – no validation or username duplication checks.

  1. Write the acceptance test for the input box.
  2. Run the acceptance test – which fails.
  3. Write the unit test for the input box.
  4. Run the unit test – which fails.
  5. Implement the MVP username field.
  6. Run both acceptance and unit tests.
  7. Refactor code if required.
  8. Add visual tests for CSS / Responsive design.

At this point, you continue with the loop, adding additional features to the username field, such as validation, using the ATDD cycle. A sketch of steps 1 and 2 follows.
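
As that sketch, the first acceptance test might look like this in Cypress (selectors hypothetical, using the data-testid convention discussed below); it stays red until the MVP field is implemented in step 5:

// Step 1: the acceptance test, written from the user's perspective.
// Step 2: run it - it fails because the username field doesn't exist yet.
describe('Registration - username MVP', () => {
  it('accepts any value typed into the username field', () => {
    cy.visit('/register')
    cy.get('[data-testid="registration-input-username"]').type('any value at all')
    cy.contains('button', 'Register').click()
    cy.get('[data-testid="registration-error-username"]').should('not.exist')
  })
})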

Registration example

Writing your acceptance tests

You can write your acceptance tests using whichever framework or style you wish: using native programming constructs with plain-English test names, wrapping test scenarios in well-described statements, or using a BDD framework which offers a Gherkin syntax abstraction. I would recommend only using the abstraction if you are communicating the scenarios with business users or, even better, collaborating with them on the scenarios. Sometimes a test report with clearly described tests is just as easy to read, without the complexity and maintenance costs of a BDD abstraction.

Using data test ids

As mentioned in the example (step 1), a way to make acceptance tests easier to write before the code is implemented is to default to using data-testids. First, decide on a standard for your test ids within your team (e.g. feature_name-{element_type}-{unique_identifier}, filling in the curly-bracket parts to make each id unique). Then, whenever you want to reference an element, you can work out what the data-testid will be based on the standard. Even if you don’t do this upfront, you can easily substitute the id quickly after the component is implemented. The other way to achieve this is to make sure the implemented code follows semantic HTML, so you can reference the appropriate HTML tag or attribute.
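
A small sketch of that convention in practice (names hypothetical):

// Agreed standard: feature_name-{element_type}-{unique_identifier}
const testId = (feature, elementType, identifier) =>
  `${feature}-${elementType}-${identifier}`

// The id is predictable before the component is even implemented:
cy.get(`[data-testid="${testId('registration', 'input', 'username')}"]`).type('lewis')

// Or lean on semantic HTML and reference the element by its tag:
cy.contains('label', 'Username')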

Benefits of ATDD

As described in the example, ATDD means you can make changes to the component in development without risk of breaking other parts of the feature. Additional benefits include:

  • Collaboration on test cases through plain-English acceptance test scenarios
  • Acceptance tests as part of the definition of done
  • Code coverage doesn’t get added to the backlog after the sprint
  • Developers and testers working together on acceptance tests

Code coverage !== test coverage

As mentioned in the benefits, ATDD helps achieve code coverage at the time of development. However, this does not mean all cases are covered, especially when it comes to considerations like responsiveness and visual appearance. This is where visual testing really helps, covering cases that functional checks alone can’t reach. Using Applitools, we can easily configure our automated checks to run against multiple browsers and devices to see how the feature looks. The flexibility to use your testing framework of choice and run the tests in the Ultrafast Grid means you can capture a wide range of issues not covered by functional tests. Building visual testing into the ATDD cycle means it’s not an afterthought, and the backlog doesn’t fill up with bugs related to responsive design.
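
For example, with the Applitools Eyes SDK for Cypress, a visual check in the ATDD cycle can target several browsers and a mobile device in one run (a sketch; the browser list and names are illustrative):

it('username field renders correctly across browsers and devices', () => {
  cy.eyesOpen({
    appName: 'Registration',
    testName: 'Username field - visual',
    // Rendered in the Ultrafast Grid against multiple targets
    browser: [
      { width: 1280, height: 800, name: 'chrome' },
      { width: 1280, height: 800, name: 'firefox' },
      { deviceName: 'iPhone X', screenOrientation: 'portrait' },
    ],
  })
  cy.visit('/register')
  cy.eyesCheckWindow('Registration page')
  cy.eyesClose()
})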


Conclusion

I hope this blog has given you takeaways on how you can engineer this within your team. Hopefully I have articulated that you can be flexible in how you deliver it: it is an approach rather than a defined set of rigid steps. ATDD does require discipline and collaboration to follow the process for every feature. If you want to learn more and walk through the steps using React, Testing Library, and Cypress, then head over to my course on Test Automation University, Acceptance Test Driven Development for the Front End.

How to Simplify UI Tests with Bi-Directional Contract Testing
https://applitools.com/blog/how-to-simplify-ui-tests-bi-directional-contract-testing/

Learn how you can improve and simplify your UI testing using micro frontends and new Pactflow bi-directional API contract testing.

End-to-End Testing within Microservices

When you are writing end-to-end tests with Cypress, you want to make sure your tests are not flaky, run quickly, and are independent of any dependencies. What if you could add contract tests to stabilise, speed up, and isolate your UI tests? There’s a new kid on the block: this is now possible with Pactflow’s new feature, bi-directional contracts. UI tests offer confidence that the application works end-to-end, and utilising contract tests can eliminate some of the challenges mentioned above. To simplify the explanation of this testing approach, I’m using a recipe web application to describe the interactions between the consumer (web app) and provider (API service). If you want to learn more about API contract testing, check out the Pactflow docs.


The term “microservices” was first coined in 2011, and microservices have since become a popular way to build web services. With the adoption of microservices, testing techniques have had to adapt as well. Integration tests become really important when testing microservices, ensuring that any changes don’t impact consuming services or applications.

Micro frontends started being recognised around 2016. Often when building microservices you need a micro frontend to make the application truly independent. In this setup, the integration between the web app and the API service is much easier to test in isolation. The benefit of an architecture that uses micro frontends and microservices together is that you can release changes quickly and with confidence. Add contract testing to the mix, and you can apply the same independent approach to end-to-end testing as well.

Traditionally, running end-to-end tests looks something like this:

diagram of end-to-end test flow

How to Simplify Your UI Tests with Contract Testing

Using this traditional approach, integration points are covered by the end-to-end tests, which can take quite a while to run, are difficult to maintain, and are often costly to run within the continuous integration pipeline. Contract testing does not replace the need for integration tests, but it minimises the number of tests needed at that level.

The introduction of bi-directional contract tests means you can now generate contracts from your UI component tests or end-to-end tests. This is a great opportunity to utilise the tests you already have, providing confidence that the application works end-to-end without running a large suite of end-to-end tests. Once generated, the contracts can also be used as stubs within your Cypress tests.

In my podcast, I spoke to a developer advocate from Pactflow who told me they had realised there was a barrier to getting started with contract testing: engineers already had tools defining contract interactions, such as mocks or pre-defined OpenAPI specifications. The duplication of adding Pact code to generate these contracts seemed like a lot of work when the contracts had already been defined. Often, development teams see the potential of introducing contracts between services but don’t quite know how to get started or what the true benefits are.

What Benefits Do API Contract Tests Bring to Your UI Tests?

  • End-to-end tests can run in isolation, while retaining the confidence of fully integrated tests
  • Service providers will verify any API changes before deploying, making dependent applications more stable
  • How the consumer app interacts with the API service is visualised and better understood as a result
  • Versioning and tagging contracts allows you to deploy safely between environments

In a world of micro frontends and microservices, it’s important to isolate services while ensuring quality is not impacted. By adding contract tests to your UI testing suite, you not only gain the benefits listed above, you also save time and money. Running tests in isolation means your tests are faster to run, with a shorter feedback loop and no need to rely on a dedicated integration environment, reducing environment costs.

The Benefits of Bi-Directional Contract Testing


When building the example recipe app, two teams were involved in defining the API schema. An API contract, which presents the ingredients for a specific cake recipe, was documented on the teams’ wiki. Both teams go away and build their parts of the application in line with the API documentation.

The frontend team uses mocks to test and build the recipe micro frontend¹. They want to deploy their micro frontend to an environment to see whether they can successfully integrate with the ingredients API service². During the development process they also realised they needed another field within the ingredients service³, so they communicated with the API team, and a developer on that team made the change in the code, which generated a new Swagger OpenAPI document⁴ (however, they didn’t update the documentation).

From this scenario there are a couple of things to draw attention to (see numbers 1-4 above):

  1. Mocks are often used to test integrations, and these mocks can be utilised within bi-directional contract testing as test scenarios
  2. With contract testing you don’t need a dedicated environment in order to test the interactions between web app and API service
  3. Specifications defined before development often change during implementation which can be documented and continuously updated within a centralised contract store such as Pactflow
  4. OpenAPI specifications generated from code can be uploaded to the Pact broker as well, where they can be compared directly with the frontend mocks

As mentioned earlier, the introduction of bi-directional contract testing allows you to generate contracts from your existing tests. Pactflow now provides adaptors which you can use to generate contracts from your mocks for example using Cypress:

describe('Great British Bake Off', () => {
    beforeEach(() => {
        // Register the consumer/provider pair for the Pact adapter.
        cy.setupPact('bake-off-ui', 'ingredients-api')
        // Stub the ingredients API. Cypress resets intercepts between
        // tests, so this belongs in beforeEach rather than before.
        cy.intercept(`http://localhost:5000/ingredients/chocolate`,
        {
          statusCode: 200,
          body: ["sugar"],
          headers: { 'access-control-allow-origin': '*' }
        }).as('ingredients')
    })

    it('Cake ingredients', () => {
        cy.visit('/ingredients/chocolate')
        cy.get('button').click()
        // Record the stubbed interaction into the pact contract.
        cy.usePactWait('ingredients').its('response.statusCode').should('eq', 200)
        cy.contains('li', 'sugar').should('be.visible')
    })
})

Once you have generated a contract from your end-to-end tests, the interactions with the service are passed to the API provider via the contract store hosted in Pactflow. Sharing the contracts means you can check that how the web app behaves after implementation still aligns with the API service, and catch any changes that occur after initial development. Think of it like sharing test scenarios with the backend engineers, which they will replay on the service they have built. The contract document looks similar to this:

{
    "consumer": {
        "name": "bake-off-ui"
    },
    "provider": {
        "name": "ingredients-api"
    },
    "interactions": [
        {
            "description": "Cake ingredients",
            "request": {
                "method": "GET",
                "path": "/ingredients/chocolate",
                "headers": {
                    "accept": "application/json"
                }
            },
            "response": {
                "status": 200,
                "body": [
                    "sugar"
                ]
            }
        }
    ]
}

Once the OpenAPI specification has been uploaded by the API service and the contracts have been uploaded by the web application to Pactflow, there is just one more step: calling can-i-deploy, which compares both sides and checks that everything is as expected. Voila, the process is complete! You can now safely run tests which are verified by the API service provider and reflect the actual behaviour of the web application.

Changing the Mindset of API Test Responsibility

I know it’s a lot to take in, and it can be a bit confusing to get your head around this testing approach, especially when you are used to the traditional way of testing integrations with a dedicated test environment or by calling the endpoints directly from within your tests. I encourage you to read more about contract testing on my blog, and to listen to my podcast where we talk about how to get started with contract testing.

When you are building software, quality is everyone’s responsibility and everyone is working towards the same goal. Looked at that way, integration points are the responsibility of everyone. I have often been involved in conversations where the development team building the API service have said that what happens outside their code is not their responsibility, and vice versa. Introducing contracts to your UI tests allows you to break down this perception and start having conversations with the API development team in the same language.

For me, the biggest benefit that comes from implementing contract tests is the conversations that come out of it. Having these conversations about API design early, with clear examples, makes developing microservices and micro frontends much easier.

How to Build a Successful QA Team in a Rapid Growth Startup
https://applitools.com/blog/how-to-build-qa-team-startup/

Learn how to build an effective QA team that thrives in an environment where things change quickly.

Within a startup, projects can be the highest priority one minute and sidelined the next. As the QA Lead at a growing startup, recruiting testers was my number one priority in order to keep up with the pace of new projects and the demands on my current testers. I’d been told a new project was top priority, so I’d just offered a role to a tester who I thought would be a great fit to hit the ground running on it.

Luckily, I hired a person who was resilient and took the immediate pressure in stride, but putting so much pressure on them with such a high priority project and requiring them to learn the domain super quickly was not how I would onboard a tester in normal circumstances.

And then, three months of hard work later, the project was scrapped. In a startup, this happens sometimes. It’s never fun, but that doesn’t mean you and your QA team can’t learn from each stage of the experience.

Dealing with Changing Priorities, as a Domain Expert


Testers often gain considerable domain knowledge while navigating the application from the user’s perspective multiple times a day. However, this can be difficult to sustain when priorities change so frequently. During this project there were changes to designs, user journeys, and business logic right up until the last minute.

How does a tester keep on top of all these changes while maintaining tests and finding bugs which are relevant and up-to-date with the latest requirements? Within this environment it can be hard to stay motivated and perform your role at the level you aspire to. As a manager in this situation, I made sure that the tester knew their work would not be compared to normal circumstances and I was aware that all the changing requirements would lead to delays. Dealing with these challenges requires a person with certain hardy attributes.

It’s All a Learning Experience

Having tests or code you’ve written deprioritized can be disheartening, but looking at it as a learning opportunity helps. The tester involved saw opportunities to reuse some code they had written for future tests and felt they could use the test data spreadsheet as a template for other pages of the site. I was really proud of how they dealt with changing priorities and saw opportunities to take learnings with them to future testing scenarios.

It’s Okay to Write Poor Quality Automated Tests

In this fast moving environment, where designs are likely to change and user journeys are not set in stone, it’s okay to write poor quality tests.

When writing code, you want it to be easy to understand, maintain, and reuse. In this environment that isn’t possible, so writing code that you know you will need to refactor later is absolutely fine. The automation tester would often request a code review with a long description explaining why they’d duplicated code or hadn’t moved a piece of logic to a separate function. I always approved their pull requests and suggested they leave a TODO comment to revisit once the designs and user journeys were more stable, always reiterating that I wasn’t judging them for this approach.

Get Comfortable with Tests Being Red

The tests were red, often. Seeing your tests fail less than 24 hours after you’ve created them can be quite a demoralising feeling. Especially when it’s due to the developer refactoring some part of the UI and forgetting to keep the test-id you needed for your tests. This can be very frustrating and make maintenance difficult.

In a fast moving environment, it’s okay for your tests to be red. The important thing is to keep adding tests as these will be your lifeline when you are getting ready for a release and you have a short amount of time available. These tests will be critical in the lead up to go live.

Dealing with Design Changes, as an Automation Tester


Designs are often not complete before you start development, particularly when working to overly optimistic deadlines. In this project, even the user experience (UX) research wasn’t complete at this stage, meaning the foundations of the frontend development process were not finalised. As mentioned previously, designs changed on a regular basis during this project. This impacts the automation tester quite significantly and can make them feel they should wait until the frontend is stable; indeed, the usual recommendation is not to automate while the frontend is changing frequently.

So what do you focus on in this scenario, without becoming annoyed by all the wasted effort? Building the structure for the tests, covering visual, accessibility, and performance checks. As the automation tester knew they couldn’t rely on elements or any specific locators, they focused on whole-page visual snapshots, accessibility reports, and page render performance metrics.

Visual Snapshots before Functional Checks

As the website was going to be changing on a daily basis, functional tests would be so brittle that we sought other alternatives.

Visual testing seemed like a great alternative as we could easily replace the base snapshots when the designs had stabilised. With this approach we weren’t targeting specific parts of the page or components, which is how I would usually use visual snapshots in order to ignore dynamic content. 

To combat this, within the content management system (CMS) we could create test pages with the same layout as the homepage, for example, to run our visual tests against the whole page. This way the content wouldn’t change on the page and we could make comparisons quickly across different resolutions. This saved us a lot of time and effort compared to functional tests.

Whole Page Accessibility Audits

Images were being swapped out and colour and font changes were happening frequently, meaning developers forgetting about accessibility was a common occurrence.

The accessibility audits on the page allowed developers to get instant feedback on quick fixes they needed to make to ensure the accessibility wasn’t impacted with their most recent changes.
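
A minimal sketch of such an audit, assuming the cypress-axe plugin (with axe-core) is installed and registered:

it('page has no detectable accessibility violations', () => {
  cy.visit('/')
  cy.injectAxe()  // inject axe-core into the page under test
  cy.checkA11y()  // fail the test and list any violations found
})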

Page Render Performance as a Smoke Test

Marketing would frequently send over a new image, then a developer would update the image on the site, often skipping the optimization step.

Using Google Lighthouse as a smoke test, it was easy to identify images that hadn’t been optimised. Perhaps the image was the wrong size or wasn’t suitable for mobile. This meant we could quickly go back to marketing and ask them to provide a new image. Catching these performance issues as early as possible means you don’t have hundreds of images to optimise at the last minute before release.
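
A minimal sketch of such a smoke test, assuming the cypress-audit plugin is installed and wired into the Cypress config (the threshold is illustrative):

it('home page meets the performance budget', () => {
  cy.visit('/')
  // Fails if the Lighthouse performance score drops below 85,
  // catching unoptimised images as soon as they land
  cy.lighthouse({ performance: 85 })
})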

Dealing with Projects Being Scrapped, as a Person


Days after the website was released to the world, we got the news. Due to time pressures and designs changing right up until days before the site went live, we hadn’t delivered the highest quality website. There were bugs, the user journey was a bit clunky, and the search results weren’t very accurate. None of this was down to the team that worked on the website; there were some real superstars, and people worked weekends and late nights to deliver the website on time. However, business stakeholders had decided to engage an agency, with a view to outsourcing the website.

This came as a real shock to the team and wasn’t quite the news everyone was expecting just days after working hard towards a common goal. All the tech debt and automated test coverage we had left for post-release was now put on hold. So how would you react when, overnight, the domain knowledge you’ve acquired, the code you’ve written, and the technology you’ve learnt are no longer required? It can be very disheartening to hear your project has been scrapped, and the hard work you put in can seem like it was for nothing.

Lessons Learned, Delivering Rapidly and for Nothing

It’s not all doom and gloom. There are many lessons learned along the way which will help you develop into a resilient member of the team and teach you how to work in a rapidly changing environment, which is quite useful if you are working for a startup.

One of the most important lessons I learned was to focus on what I could control, such as working with the automation tester to come up with solutions to the fast-moving changes. I couldn’t control the deadline or the scope changes two days before going live, but I could offer my advice, as someone who has been in these situations before, on the risks late changes would cause.

Another positive to come out of this was the focus on visual, accessibility, and performance testing in a holistic fashion. Usually I would focus on making my tests robust, target specific components, and use them at the end of the project for regression purposes. Now I have another use case for these testing types.

Testing in this setting is not ideal and requires some sacrifices in terms of quality. Leading QA on this project wasn’t an enjoyable experience; it required careful management and is one I will not forget anytime soon. But I learned far more on this project than I would have if it had gone smoothly.
