Automated Visual Testing | Applitools
https://applitools.com/
Applitools delivers the next generation of test automation powered by AI-assisted computer vision technology known as Visual AI.

Recap: Streamlining Your Tech Stack
https://applitools.com/blog/recap-streamlining-your-tech-stack/ (Tue, 26 Mar 2024)

Webinar recap of a novel approach to streamlining a test automation tech stack that led to faster test execution times, flexibility across CI providers, and more.

During a recent webinar, Streamlining Your Tech Stack: A Blueprint for Enhanced Efficiency and Coverage in Challenging Economic Times, Applitools DevRel Dave Piacente and Mike Millgate from EVERFI presented an enterprise case study showcasing strategies to optimize capabilities and enhance value. The approach involved putting current tools to multiple uses and collaborating with third-party providers for extended coverage.

Dave and Mike dove into how EVERFI effectively maneuvered through the economic downturn by optimizing its tech infrastructure and processes. The implementation brought test execution times under five minutes, cut costs by favoring affordable options, maintained quality despite test suite growth, ensured flexibility across CI providers, and improved efficiency through parallel test execution.

Dave and Mike took a deep dive into the key advantages of efficient browser testing, including:

  • Cost savings: Traditional grid-based browser testing can be costly. Distributing tests horizontally and opting for more affordable solutions can cut infrastructure expenses.
  • Swift feedback: By streamlining test execution and using concurrency, you can get quicker feedback on test results, helping identify and resolve issues faster.
  • Cross-browser coverage: With the Applitools Ultrafast Grid, test your application across multiple browsers without extra costs or complexity (see the sketch after this list).

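As a rough illustration of the fan-out approach, here is a minimal sketch using the Applitools `eyes-selenium` Python SDK and its Ultrafast Grid runner. The app name, test name, URL, and concurrency value are placeholders, and the exact configuration API can vary between SDK versions:

```python
from selenium import webdriver
from applitools.selenium import (
    BrowserType,
    Configuration,
    Eyes,
    Target,
    VisualGridRunner,
)

# One runner re-renders each captured page on many browsers in parallel.
runner = VisualGridRunner(10)  # placeholder concurrency: up to 10 renders at once

config = Configuration()
# A single local run fans out to every browser/viewport registered here.
config.add_browser(1200, 800, BrowserType.CHROME)
config.add_browser(1200, 800, BrowserType.FIREFOX)
config.add_browser(1200, 800, BrowserType.SAFARI)

eyes = Eyes(runner)
eyes.set_configuration(config)

driver = webdriver.Chrome()
try:
    eyes.open(driver, "Demo App", "Checkout flow")  # placeholder names
    driver.get("https://example.com/checkout")      # placeholder URL
    eyes.check("Checkout page", Target.window())    # one capture, many browsers
    eyes.close_async()
finally:
    driver.quit()
    print(runner.get_all_test_results())
```

Because the grid renders from a single page capture, the test runs once locally while the cross-browser checks execute concurrently in the cloud, which is where the speed and cost savings in the list above come from.
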
For practitioners and executives looking to enhance the efficiency and resilience of their Software Development Life Cycle (SDLC) operations amid financial constraints, a detailed example is ready for your team’s use on the event archive page, including a project repository for trying the method out. Or better yet, book a session with one of our testing experts today for a deeper look into Applitools’ unparalleled parallel test automation capabilities!

Recap: A Test Automation Platform Designed for the Future
https://applitools.com/blog/recap-a-test-automation-platform-designed-for-the-future/ (Thu, 29 Feb 2024)

What exactly does “platform” mean in today’s software world? At Applitools, it means more than just a tool – it’s a comprehensive solution enabling you to test like the best in the business. In the world of test automation, having a platform that can take your practice to the next level is crucial.

With more than a decade of experience serving top engineering teams and leading companies worldwide, we have taken a fresh approach to developing our platform. Our journey started with building the world’s first and best visual test automation solution, and we didn’t stop there. We continuously leveraged our insights to create more products that address the evolving needs of the industry. Now, companies using our platform can achieve higher quality and faster results than ever before, empowering developers to work smarter and push the boundaries of their test automation practice.

During our recent webinar, we unveiled our Intelligent Testing Platform and shared some key highlights, including:

  • Easy one-click setup – Just point Autonomous at your website, and you are done. Everything you need is available out of the box.
  • Automatic website/app discovery – Automatically create self-adjusting test suites that detect new, missing, changed, or faulty pages and components on every run.
  • Natural language test builder – Describe complex end-to-end flows using nothing more than plain English. No coding or element-locating skills are required.
  • Cross-device and browser testing – Test your public and internal apps on any device, browser, and OS using the world’s most modern test infrastructure available out of the box.
  • Flexible test orchestration – Run tests on demand from your CI/CD or a webhook, or use our built-in test scheduler. No DevOps skills required.
  • AI-assisted test maintenance – Self-heal broken locators, avoid repetitive maintenance activities, and group similar UI changes and issues together.

Ready to see more? Join Effortless Testing with Applitools Autonomous: A Hands-On Webinar. Dave Piacente will showcase more intricate use cases that demonstrate not just the platform’s technical prowess but its ability to transform your testing landscape with plain English.

With the Applitools Intelligent Testing Platform, you can reduce risk, enhance delivery velocity, and provide superior digital experiences that consistently exceed consumer expectations. It’s the ultimate tool to stay ahead in today’s ever-evolving landscape. The days of doing things the same old way are gone. It’s time to shake things up, test smarter, and embrace a new era of test automation.

Watch the webinar on-demand here, and join us as we continue to pave the way for the future of test automation!

NLP-Driven Test Automation with Applitools’ Alex Berry | Techstrong Interview
https://applitools.com/blog/intelligent-testing-platform-applitools-techstrong-interview/ (Thu, 22 Feb 2024)

CEO Alex Berry shares news about the Applitools Intelligent Testing Platform, including Autonomous, in this interview with Techstrong.

We recently announced the launch of the Applitools Intelligent Testing Platform. Composed of three powerful solutions – Autonomous, Eyes, and Preflight – our platform redefines the future of AI-powered test automation.

Alex Berry, Applitools CEO, sat down with Alan Shimel of TechStrong TV to share more about the Applitools Intelligent Testing Platform and what drove him to join the company as CEO last year. Most notably? Alex was drawn to the strength of our vision and mission to create an end-to-end testing platform and his belief that our technology can be a game-changer in the test automation market.

As for the Intelligent Testing Platform, Alex notes that unlike traditional testing providers, whose tools require a developer with advanced coding skills, our latest offering, Autonomous, creates opportunities for greater collaboration and opens testing to a new cohort of users, such as marketing, finance, and security teams, that may not be as tech-savvy.

Learn more about Alex and the latest from Applitools by watching the full interview:

techstrong.tv

The Rise of Generative QA
https://applitools.com/blog/the-rise-of-generative-qa/ (Mon, 12 Feb 2024)

Explore how Applitools Autonomous revolutionizes testing by replicating the intelligence and accuracy of the best QA practitioners at scale.

In the ever-accelerating digital product landscape, the speed of development and deployment has become a critical factor for success. As businesses push for faster time-to-market, traditional QA and testing methodologies have increasingly become bottlenecks, unable to keep pace with the rapid development cycles. Enter Applitools Autonomous, a groundbreaking solution designed to transform the QA process and ensure that businesses can deliver flawless digital experiences faster and more efficiently than ever before.

Why Applitools Autonomous Matters

Accelerated Development, Uncompromised Quality

In the current competitive digital environment, the ability to quickly launch new products and features is a significant advantage. However, the necessity for thorough testing has traditionally slowed this process, creating a tension between the need for speed and the demand for quality. Applitools Autonomous addresses this issue head-on by leveraging AI to automate test creation, execution, maintenance, and reporting, significantly reducing the time and resources required for comprehensive testing.

Frontend Excellence as a Differentiator

Today’s consumers expect not just functionality but excellence in design and user experience. Visual defects or poor UI/UX can severely damage a brand’s reputation. Applitools Autonomous enhances collaboration among designers, developers, and product teams, enabling the seamless integration of tools like Figma and Storybook to elevate frontend experiences and ensure they meet the highest standards of quality and design.

The Problem with Traditional Testing

Businesses face significant challenges in ensuring their web applications perform correctly across various screens and devices. The dynamic nature of web content, frequent updates, and the vast array of devices make comprehensive testing a daunting task. Traditional testing tools, designed for a less complex web environment, fall short in providing the necessary coverage and efficiency, leading to bugs slipping into production, reduced brand integrity, and slow testing cycles.

The Solution: Applitools Autonomous

Applitools Autonomous revolutionizes QA by replicating the intelligence and accuracy of the best QA practitioners at scale. It automates the entire testing process, from test creation to maintenance, using AI. This AI-driven approach allows teams to generate test cases with a single click, create end-to-end tests in plain English, and utilize Visual AI to increase test coverage while reducing maintenance efforts. By integrating seamlessly into CI/CD pipelines, Autonomous enables continuous testing and monitoring, ensuring that any changes or new bugs are detected and addressed promptly.

Key Features of Applitools Autonomous

  • Generative Testing: Automatically creates test cases for your site, improving test coverage instantly.
  • Natural Language Test Builder: Allows for the creation of robust tests using plain English, making QA accessible to more teams.
  • Contextual UI Testing: Enhances test reliability by leveraging contextual and semantic cues from the UI.
  • Visual AI: Validates thousands of UI elements instantly, improving test coverage and reducing manual testing efforts.
  • Intelligent Test Infrastructure: Features self-healing tests that adapt to UI changes, ensuring continuous operation.
  • Flexible Execution: Supports on-demand testing, scheduled tests, and integration with CI/CD pipelines.

Ideal Customer Profile

Applitools Autonomous is particularly beneficial for large websites and applications that are content-rich or frequently updated. This includes e-commerce platforms, media and publishing houses, educational institutions, financial institutions, travel and hospitality companies, healthcare providers, and government and NGO websites. These organizations face unique challenges in maintaining quality and functionality due to the dynamic nature of their digital content, making Autonomous an ideal solution.

Transform Your QA with Applitools Autonomous

Applitools Autonomous is not just a tool; it’s a paradigm shift in digital quality assurance. By automating the testing process and leveraging AI, businesses can now ensure their digital experiences are flawless, without the traditional bottlenecks of QA. Embrace the future of testing with Applitools Autonomous and deliver superior digital products with confidence and speed.

Introducing The Intelligent Testing Platform
https://applitools.com/blog/introducing-intelligent-testing-platform/ (Wed, 07 Feb 2024)

We are thrilled to announce the launch of the Applitools Intelligent Testing Platform, a groundbreaking advancement in AI-powered test automation. As we step into a new era of quality assurance, Applitools is leading the charge with innovative solutions designed to revolutionize the way businesses approach testing across applications and documents. With the introduction of three powerful solutions—Autonomous, Eyes, and Preflight—our platform is redefining industry standards for flexibility, coverage, and ease of use.

Why Applitools? 

In the fast-paced world of digital innovation, ensuring the quality of web apps, mobile apps, and documents is mission-critical. Traditional testing tools led teams down a path of unsustainable quality, where each unit of development required a unit or more of testing to validate the change. Applitools works differently at scale and makes validating your digital products remarkably intuitive and efficient, catering to a diverse range of testing requirements. Whether you’re a seasoned coder or someone with minimal testing experience, the platform empowers every team member to contribute to the quality assurance process.

Key Features of the Applitools Platform:

Dynamic Test Authoring: Say goodbye to the tedious aspects of test creation. Applitools allows for dynamic authoring of tests with AI, a codeless recorder, or your favorite framework. Integrate with popular tools like Selenium and Cypress to enable comprehensive ‘shift left’ testing directly from development.

Comprehensive Validation: With Visual AI, you can ensure your user interface works impeccably and looks exactly as intended. From functional and visual validation to accessibility and cross-browser testing, we cover every aspect to guarantee a seamless user experience.

Scalable Execution: Run your tests at an unprecedented scale with our cloud testing capabilities. Applitools’ self-healing locators and selectors correct tests on the fly, reducing maintenance and ensuring your tests evolve with your application.

Advanced Analysis & Maintenance: Dive deep into test analysis with automated grouping, root cause analysis, and powerful dashboards. Our predictive analytics help you stay one step ahead, ensuring that your testing strategy is as dynamic and innovative as your products.

Empowering Every Team Member 

One of the most significant advantages of the Applitools platform is its accessibility to a broad range of personas at your company. By reducing the reliance on coding expertise and offering various test creation and execution methods, we’re democratizing quality assurance. Now, everyone from QA professionals to digital marketers can efficiently build and maintain tests, contributing to a high-quality, reliable product.

By enabling product experts and other “users” of the application interface to contribute, we expect tests to become more comprehensive, thoughtful, and robust.

For the Future of Your Business

In the competitive landscape of digital products, the balance between speed, quality, and innovation is crucial. The Intelligent Testing Platform is more than a tool—it’s a strategic asset. For engineering and product teams, Applitools means reduced risk, improved delivery velocity, and superior digital experiences that align with consumer expectations and business goals.

As we launch the Applitools Intelligent Testing Platform, we invite you to join us in embracing the future of testing. With our commitment to innovation, community, and quality, we’re excited to partner with you in delivering excellence and driving success in your digital endeavors. Welcome to a new standard of testing—welcome to Applitools.

What is Visual Testing?
https://applitools.com/blog/visual-testing/ (Fri, 26 Jan 2024)

Learn what visual testing is, why visual testing is important, the differences between visual and functional testing, and how you can get started with automated visual testing today.

Editor’s Note: This post was originally published in 2019, and has been recently updated for accuracy and completeness.

What is Meant By Visual Testing?

Visual testing evaluates the visible output of an application and compares that output against the results expected by design. In other words, it helps catch “visual bugs” in the appearance of a page or screen, which are distinct from strictly functional bugs. Automated visual testing tools, like Applitools, can speed up visual testing and reduce the errors that occur with manual verification.

You can run visual tests at any time on any application with a visual user interface. Most developers run visual tests on individual components during development, and on a functioning application during end-to-end tests.

In today’s world of HTML, web developers create pages that appear on a mix of browsers and operating systems. Because HTML and CSS are standards, frontend developers want to feel comfortable with a ‘write once, run anywhere’ approach to their software, which often translates to “let QA sort out the implementation issues.” QA is still stuck checking each possible output combination for visual bugs.

This explains why, when I worked in product management, QA engineers would ask me all the time, “Which platforms are most important to test against?” If you’re like most QA team members, your test matrix has probably exploded: multiple browsers, multiple operating systems, multiple screen sizes, multiple fonts — and dynamic responsive content that renders differently on each combination.

If you are with me so far, you’re starting to answer the question: why do visual testing?

Why is Visual Testing Important?

We do visual testing because visual errors happen — more frequently than you might realize. Take a look at this visual bug on Instagram’s app:

The text and ad are crammed together. If this was your ad, do you think there would be a revenue impact? Absolutely.

Visual bugs happen at other companies too: Amazon. Google. Slack. Robinhood. Poshmark. Airbnb. Yelp. Target. Southwest. United. Virgin Atlantic. OpenTable. These aren’t cosmetic issues. In each case, visual bugs are blocking revenue.

If you need to justify spending money on visual testing, share these examples with your boss.

All these companies are able to hire some of the smartest engineers in the world. If it happens to Google, or Instagram, or Amazon, it probably can happen to you, too.

Why do these visual bugs occur? Don’t they do functional testing? They do — but it’s not enough.

Visual bugs are rendering issues. And rendering validation is not what functional testing tools are designed to catch. Functional testing measures functional behavior.

Why can’t functional tests cover visual issues?

Sure, functional test scripts can validate the size, position, and color scheme of visual elements. But if you do this, your test scripts will soon balloon in size due to checkpoint bloat.

To see what I mean, let’s look at an Instagram ad screen that’s properly rendered. By my count, there are 21 visual elements: various icons and text. (This ignores iOS elements at the top like WiFi signal and time, since those aren’t controlled by the Instagram app.)


If you used traditional checkpoints in a functional testing tool like Selenium Webdriver, Cypress, WebdriverIO, or Appium, you’d have to check the following for each of those 21 visual elements:

  1. Visible (true/false)
  2. Upper-left x,y coordinates
  3. Height
  4. Width
  5. Background color

That means you’d need the following number of assertions:

21 visual elements x 5 assertions per element = 105 lines of assertion code
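
To make the checkpoint bloat concrete, here is a sketch of those five assertions for a single element, written with Selenium WebDriver in Python. The locator and the expected values are invented for illustration; a real suite would repeat this block for all 21 elements, on every platform combination:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/ad-screen")  # placeholder URL

# Five assertions for ONE element. Multiply by 21 elements per screen,
# then by every OS/browser/screen-size combination you support.
heart = driver.find_element(By.CSS_SELECTOR, "[data-test=like-button]")  # hypothetical locator
assert heart.is_displayed()                            # 1. visible
assert heart.location == {"x": 16, "y": 540}           # 2. upper-left x,y
assert heart.size["height"] == 24                      # 3. height
assert heart.size["width"] == 24                       # 4. width
assert (                                               # 5. background color
    heart.value_of_css_property("background-color") == "rgba(0, 0, 0, 0)"
)

driver.quit()
```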

Even with all this assertion code, you wouldn’t be able to detect all visual bugs, such as a visual element that can’t be accessed because it’s being covered up, which blocked revenue in the above examples from Yelp, Southwest, United, and Virgin Atlantic. And you’d miss subtleties like the brand logo, or the red dot under the heart.

But it gets worse: if the OS, browser, screen orientation, screen size, or font size changes, your app’s appearance will change as a result. That means you have to write another 105 lines of functional test assertions. For EACH combination of OS/browser/screen size/screen orientation/font size.

You could end up with thousands of lines of assertion code — any of which might need to change with a new release. Trying to maintain that would be sheer madness. No one has time for that.

You need visual testing because visual errors occur. And you need visual testing because you cannot rely on functional tests to catch visual errors.

What is Manual Visual Testing?

Because automated functional testing tools are poorly suited for finding visual bugs, companies find visual glitches using manual testers. Lots of them (more on that in a bit).

For these manual testers, visual testing behaves a lot like this spot-the-difference game:

To understand how time-consuming visual testing can be, get out your phone and time how long it takes for you to find all six visual differences. I took a minute to realize that the writing in the panels doesn’t count. It took me about 3 minutes to spot all six. Or, you can cheat and look at the answers.

Why does it take so long? Some differences are difficult to spot. In other cases, our eyes trick us into finding differences that don’t exist.

Manual visual testing means comparing two screenshots, one from your known good baseline image, and another from the latest version of your app. For each pair of images, you have to invest time to ensure you’ve caught all issues. Especially if the page is long, or has a lot of visual elements. Think “Where’s Waldo”…

Challenges of manual testing

If you’re a manual tester or someone who manages them, you probably know how hard it is to visually test.

If you are a test engineer reading this paragraph, you already know this: web page testing only starts with checking the visual elements and their function on a single combination of operating system, browser, browser orientation, and browser dimensions. Then you continue on to other combinations. And that’s where a huge amount of test effort lies – not in the functional testing, but in the inspection of visual elements across the combinations of operating system, browser, screen orientation, and browser dimensions.

To put it in perspective, imagine you need to test your app on:

  • 5 operating systems: Windows, macOS, Android, iOS, and ChromeOS.
  • 5 popular browsers: Chrome, Firefox, Internet Explorer (Windows only), Microsoft Edge (Windows only), and Safari (Mac only).
  • 2 screen orientations for mobile devices: portrait and landscape.
  • 10 standard mobile device display resolutions and 18 standard desktop/laptop display resolutions from XGA to 4K.

If you’re doing the math, that’s the browsers running on each platform (a total of 21 combinations) multiplied by the two orientations of the ten mobile resolutions (2×10 = 20) plus the 18 desktop display resolutions:

21 x (20+18) = 21 x 38 = 798 Unique Screen Configurations to test

That’s a lot of testing — for just one web page or screen in your mobile app.

Except that it’s worse. Let’s say your app has 100 pages or screens to test.

798 Screen Configurations x 100 Screens in-app = 79,800 Screen Configurations to test
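
The arithmetic is easy to sanity-check in a few lines of Python, following the article’s simplified model (which treats every screen configuration as reachable from every browser/OS combination):

```python
browser_os_combos = 21    # browsers available per OS, summed across the 5 OSes
mobile_configs = 2 * 10   # portrait/landscape x 10 mobile resolutions
desktop_configs = 18      # standard desktop/laptop resolutions

per_screen = browser_os_combos * (mobile_configs + desktop_configs)
print(per_screen)         # 798 unique screen configurations
print(per_screen * 100)   # 79,800 for an app with 100 pages or screens
```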

Meanwhile, companies are releasing new app versions into production as frequently as once a week, or even once a day.

How many manual testers would you need to test 79,800 screen configurations in a week? Or a day? Could you even hire that many people?

Wouldn’t it be great if there was a way to automate this crazy-tedious process?

Well, yes there is…

What is Automated Visual Testing?

Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover visual defects.

Automated visual testing piggybacks on your existing functional test scripts running in a tool like Selenium WebDriver, Cypress, WebdriverIO, or Appium. As your script drives your app, your app creates web pages with static visual elements. Each step of a functional test changes those visual elements, so each step creates a new UI state you can visually test.

Automated visual testing evolved from functional testing. Rather than descending into the madness of writing assertions to check the properties of each visual element, automated visual testing tools check the appearance of an entire screen with just one assertion statement. This leads to test scripts that are MUCH simpler and easier to maintain.
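
For contrast with the 105-assertion example above, here is a minimal sketch of the same check as a single visual assertion, using the Applitools `eyes-selenium` Python SDK (the app name, test name, and URL are placeholders):

```python
from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()  # reads the APPLITOOLS_API_KEY environment variable

try:
    eyes.open(driver, "Instagram", "Ad screen renders correctly")  # placeholder names
    driver.get("https://example.com/ad-screen")                    # placeholder URL
    # One statement replaces the ~105 lines of per-element assertions:
    eyes.check("Ad screen", Target.window())
    eyes.close()
finally:
    eyes.abort_if_not_closed()  # clean up if the test aborted mid-run
    driver.quit()
```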

But, if you’re not careful, you can go down an unproductive rat hole. I’m talking about Snapshot Testing.

What is Snapshot Testing?

First generation automated visual testing uses a technology called snapshot testing. With snapshot testing, a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap.

Snapshot testing algorithms are very simplistic: iterate through each pixel pair, then check if the color hex code is the same. If the color codes are different, raise a visual bug.
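
That really is the whole algorithm. Here is an illustrative sketch of a first-generation pixel comparison using Pillow; it is not any particular vendor’s implementation, and the file names are hypothetical:

```python
from PIL import Image

def pixel_diff(baseline_path: str, latest_path: str) -> list[tuple[int, int]]:
    """Return the coordinates of every pixel that differs between two screenshots."""
    baseline = Image.open(baseline_path).convert("RGB")
    latest = Image.open(latest_path).convert("RGB")
    if baseline.size != latest.size:
        raise ValueError("snapshots must have identical dimensions")

    diffs = []
    for y in range(baseline.height):
        for x in range(baseline.width):
            # Naive rule: any color mismatch at all counts as a visual bug.
            if baseline.getpixel((x, y)) != latest.getpixel((x, y)):
                diffs.append((x, y))
    return diffs

if pixel_diff("baseline.png", "latest.png"):  # hypothetical file names
    print("Visual bug raised!")
```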

Because they can be built relatively easily, there are a number of open-source and commercial snapshot testing tools. Unlike human testers, snapshot testing tools can spot pixel differences quickly and consistently. And that’s a step forward. A computer can highlight the visual differences in the Hocus Focus cartoon easily. A number of these tools market themselves as enabling “pixel perfect testing”.

Sounds like a good idea, right?

What are Problems With Snapshot Testing?

Alas, pixels aren’t visual elements. Font smoothing algorithms, image resizing, graphics cards, and even rendering algorithms generate pixel differences. And that’s just static content. The actual content can vary between any two interfaces. As a result, a comparison that expects exact pixel matches between two images can be flooded with spurious pixel differences.

If you want to see some examples of bitmap differences affecting snapshot testing, take a look at the blog post we wrote on this topic last year.

Unfortunately, while you might think snapshot testing makes intuitive sense, practitioners like you are finding that the conditions for running successful bitmap comparisons require a stationary target, while your company continues to develop dynamic websites across a range of browsers and operating systems. You can try to force your app to behave a certain way – but you may not always succeed.

Can you share some details of Snapshot Testing Problems?

For example, when testing on a single browser and operating system:

  • Identify and isolate (mute) fields that change over time, such as radio signal strength, battery state, and blinking cursors (a masking sketch follows this list).
  • Ignore user data that might otherwise change over time, such as visitor count.
  • Determine how to support testing content on your site that must change frequently – especially if you are a media company or have an active blog.
  • Consider how different hardware or software affects antialiasing.
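
In a naive snapshot workflow, “muting” a region usually means painting over it in both images before comparing. A minimal Pillow sketch, continuing the pixel-diff example above, with an invented status-bar region:

```python
from PIL import Image, ImageDraw

def mute_region(image: Image.Image, box: tuple[int, int, int, int]) -> Image.Image:
    """Paint over a dynamic region (e.g., a clock) so it cannot trigger diffs."""
    muted = image.copy()
    ImageDraw.Draw(muted).rectangle(box, fill="black")
    return muted

STATUS_BAR = (0, 0, 390, 44)  # hypothetical region: x0, y0, x1, y1

baseline = mute_region(Image.open("baseline.png").convert("RGB"), STATUS_BAR)
latest = mute_region(Image.open("latest.png").convert("RGB"), STATUS_BAR)
# ...then run the pixel comparison on the muted copies.
```

The obvious drawback is that every muted region is a region you no longer test, which is part of why this approach breaks down for heavily dynamic pages.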

When doing cross-browser testing, you must also consider:

  • Text wrapping, because you cannot guarantee the locations of text wrapping between two browsers using the same specifications. The text can break differently between two browsers, even with identical screen size.
  • Image rendering software, which can affect the pixels of font antialiasing as well as images and can vary from browser to browser (and even on a single browser among versions).
  • Image rendering hardware, which may render bitmaps differently.
  • Variations in browser font size and other elements that affect the text.

If you choose to pursue snapshot testing in spite of these issues, don’t be surprised if you end up joining the group of experienced testers who have tried, and then ultimately abandoned, snapshot testing tools.

Can I See Some Snapshot Testing Problems In Real Life?

Here are some quick examples of these real-life bitmap issues.

If you use pixel testing for mobile apps, you’ll need to deal with the very dynamic data at the top of nearly every screen: network strength, time, battery level, and more:

When you have dynamic content that shifts over time (news, ads, user-submitted content), you want to check that everything is laid out with proper alignment and no overlaps. Pixel comparison tools can’t test for these cases. Twitter’s user-generated content is even more dynamic, with new tweets, like, retweet, and comment counts changing by the second.

Your app doesn’t even need to change to confuse pixel tools. If your baselines and test screenshots were captured on different machines with different display settings for anti-aliasing, that can turn pretty much the entire page into a false positive, like this:

Source: storybook.js.org

If you’re using pixel tools and you still have to track down false positives and expose false negatives, what does that say about your testing efficiency?

For these reasons, many companies throw out their pixel tools and go back to manual visual testing, with all of its issues.

There’s a better alternative: using AI — specifically computer vision — for visual testing.

How Do I Use AI for Automated Visual Testing?

The current generation of automated visual testing uses a class of artificial intelligence algorithms called computer vision as a core engine for visual comparison. Typically these algorithms are used to identify objects within images, such as in facial recognition. We call them visual AI testing tools.

AI-powered automated visual testing uses a learning algorithm to interpret the relationship between the intended display of visual elements and the elements and locations actually rendered on the page. Like pixel tools, AI-powered automated visual testing takes page snapshots as your functional tests run. Unlike pixel-based comparators, AI-powered automated visual test tools use these algorithms instead of raw pixels to determine when errors have occurred.

Unlike snapshot testers, AI-powered automated visual testing tools do not need special environments that remain static to ensure accuracy. Testing and real-world customer data show that AI testing tools have a high degree of accuracy even with dynamic content because the comparisons are based on relationships and not simply pixels.

Here’s a comparison of the kinds of issues that AI-powered visual testing tools can handle compared to snapshot testing tools:

Visual Testing Use Case    | Snapshot Testing | Visual AI
---------------------------|------------------|----------
Cross-browser testing      | No               | Yes
Account balances           | No               | Yes
Mobile device status bars  | No               | Yes
News content               | No               | Yes
Ad content                 | No               | Yes
User-submitted content     | No               | Yes
Suggested content          | No               | Yes
Notification icons         | No               | Yes
Content shifts             | No               | Yes
Mouse hovers               | No               | Yes
Cursors                    | No               | Yes
Anti-aliasing settings     | No               | Yes
Browser upgrades           | No               | Yes

Some AI-powered test tools have been tested at a false positive rate of 0.001% (or 1 false positive in every 100,000 checks).

AI-Powered Test Tools In Action

An AI-powered automated visual testing tool can test a wide range of visual elements across a range of OS/browser/orientation/resolution combinations. Running a first baseline rendering and functional test on a single combination is sufficient to guide an AI-powered tool to test results across the whole range of potential platforms.

Here are some examples of how AI-powered automated visual testing improves visual test results by awareness of content.

This is a comparison of two different USA Today homepage images. When an AI-powered tool looks at the layout comparison, the layout framework matters, not the content. Layout comparison ignores content differences; instead, it validates the existence of the content and its relative placement. Compare that with a bitmap comparison of the same two pages (also called “exact comparison”):

Literally, every non-white space (and even some of the white space) is called out.

Which do you think would be more useful in your validation of your own content?

When Should I Use Visual Testing?

You can do automated visual testing with each check-in of front-end code, after unit testing and API testing, and before functional testing — ideally as part of your CI/CD pipeline running in Jenkins, Travis, or another continuous integration tool.

How often? On days ending with “y”. 🙂

Because of the accuracy of AI-powered automated visual testing tools, they can be deployed beyond pre-production functional and visual testing. AI-powered automated visual testing can help developers understand how visual element components will render across various systems. In addition to running in development, test engineers can also validate new code against existing platforms and new platforms against running code.

AI-powered tools like Applitools allow different levels of smart comparison.
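
For example, with the `eyes-selenium` Python SDK you can choose a match level per check: Layout validates structure while ignoring content, and Strict (the default) also compares content using Visual AI rather than raw pixels. The names below follow that SDK and may vary between versions; the app details are placeholders:

```python
from selenium import webdriver
from applitools.selenium import Eyes, Target

driver = webdriver.Chrome()
eyes = Eyes()

try:
    eyes.open(driver, "USA Today", "Homepage comparison modes")  # placeholder names
    driver.get("https://www.usatoday.com/")
    # Layout mode: checks that content exists and sits in the right place,
    # ignoring what the headlines, ads, and photos actually contain.
    eyes.check("Homepage layout", Target.window().layout())
    # Strict mode (the default): also compares content, but intelligently,
    # so rendering noise like antialiasing is not flagged.
    eyes.check("Homepage content", Target.window().strict())
    eyes.close()
finally:
    eyes.abort_if_not_closed()
    driver.quit()
```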

AI-powered visual testing tools are a key validation tool for any app or web presence that requires regular changes in content and format. For example, media companies that change their content as frequently as twice per hour use AI-powered automated testing to isolate the real errors that affect paying customers, without drowning in expected content changes. And AI-powered visual test tools are key tools in the test arsenal for any app or web presence going through brand revision or merger, as the low error rate and high accuracy let companies identify and fix problems associated with the major DOM, CSS, and JavaScript changes that are core to those updates.

Talk to Applitools

Applitools is the pioneer and leading vendor in AI-powered automated visual testing. Applitools has a range of options to help you become incredibly productive in application testing. We can help you test components in development. We can help you find the root cause of the visual errors you have encountered. And, we can run your tests on an Ultrafast Grid that allows you to recreate your visual test from one environment across a number of others on various browser and OS configurations. Our goal is to help you realize the vision we share with our customers – you need to create functional tests for only one environment and let Applitools run the validation across all your customer environments after your first test has passed. We’d love to talk testing with you – feel free to contact us anytime.

More To Read About Visual Testing

If you liked reading this, here are some more Applitools posts and webinars for you.

  1. Visual Testing for Mobile Apps by Angie Jones
  2. Visual Assertions – Hype or Reality? – by Anand Bagmar
  3. The Many Uses of Visual Testing by Angie Jones
  4. Visual UI Testing as an Aid to Functional Testing by Gil Tayar
  5. Visual Testing: A Guide for Front End Developers by Gil Tayar

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

Should We Fear AI in Test Automation?
https://applitools.com/blog/should-we-fear-ai-in-test-automation/ (Mon, 04 Dec 2023)

Richard Bradshaw explores fears around the use of AI in test automation shared during his session—The Fear Factor—at Future of Testing.

At the recent Future of Testing: AI in Automation event hosted by Applitools, I ran a session called ‘The Fear Factor’ where we safely and openly discussed some of our fears around the use of AI in test automation. At this event, we heard from many thought leaders and experts in this domain who shared their experiences and visions for the future. AI in test automation is already here, and its presence in test automation tooling will only increase in the very near future, but should we fear it or embrace it?

During my session, I asked the attendees three questions:

  • Do you have any fears about the use of AI in testing?
  • In one word, describe your feelings when you think about AI and testing.
  • If you do have fears about the use of AI in testing, describe them.

Do you have any fears about the use of AI in testing?

Where do you sit?

I’m in the Yes camp, and let me try to explain why.

Fear can mean many things, but one of them is the threat of harm. It’s that which concerns me in the software testing space. But that harm will only happen if teams/companies believe that AI alone can do a good enough job. If we start to see companies blindly trusting AI tools for all their testing efforts, I believe we’ll see many critical issues in production. It’s not that I don’t believe AI is capable of doing great testing—it’s more the fact that many testers struggle to explain their testing, so to have good enough data to train such a model feels distant to me. Of course, not all testing is equal, and I fully expect to see many AI-based tools doing some of the low-hanging fruit testing for us.

In one word, describe your feelings when you think about AI and testing.

It’s hard to disagree with the results from this question—if I were to pick two myself, I would have gone with ‘excited and skeptical.’ I’m excited because we seem to be seeing new developments and tools each week. On top of that, though, we are starting to see developments in tooling using AI outside of the traditional automation space, and that really pleases me. Combine that with the developments we are seeing in the automation space, such as autonomous testing, and the future tooling for testing looks rather exciting.

That said, though, I’m a tester, so I’m skeptical of most things. I’ve seen several testing tools now that are making some big promises around the use of AI, and unfortunately, several that are talking about replacing or needing fewer testers. I’m very skeptical of such claims. If we pause and look across the whole of the technology industry, the most impactful use of AI thus far is in assisting people. Various GPTs help generate all sorts of artifacts, such as code, copy, and images. Sometimes the output is good enough on its own, but the majority of the time it is helping a human be more efficient. This use of AI, and such messaging, excites me.

If you do have fears about the use of AI in testing, describe them here.

We got lots of responses to this question, but I’m going to summarise and elaborate on four of them:

  • Job security
  • Learning curve
  • Reliability & security
  • How it looks

Job Security

Several attendees shared they were concerned about AI replacing their jobs. Personally, I can’t see this happening. We had the same concern with test automation, and that never really materialized. Those automated tests don’t maintain themselves, or write themselves, or share the results themselves. The direction shared by Angie Jones in her talk Where Is My Flying Car?! Test Automation in the Space Age, and by Tariq King in his talk Automating Quality: A Vision Beyond AI for Testing, is AI that assists the human, giving them superpowers. That’s the future I hope for, and believe we’ll see: one where we are able to do our testing a lot more efficiently by having AI assist us. Hopefully, this means we can release even quicker, with higher quality for our customers.

Another concern shared was about skills that we’ve spent years and a lot of effort learning suddenly being replaced by AI, or made significantly easier with AI. I think this is a valid concern but also inevitable. We’ve already seen AI have a significant benefit for developers with tools like GitHub Copilot. However, I’ve got a lot of experience with Copilot, and it only really helps when you know what to ask for—this is the same with GPTs. Therefore, I think the core skills of a tester will be crucial, and I can’t see AI replacing those.

Learning Curve

If we are going to be adding all these fantastic AI tools into our tool belts, I feel it’s going to be important we all have a basic understanding of AI. This concern was shared by the attendees. For me, if I’m going to trust a tool to do testing for me or generate test artefacts for me, I definitely want that basic understanding. So, that poses the question: where are we going to get this knowledge from?

On the flip side of this, what if we become over-reliant on these new AI tools? A concern shared by attendees was that the next generation of testers might not have some of the core skills we consider important today. Testers are known for being excellent thinkers and practitioners of critical thinking. If the AI tools are doing all this thinking for us, we run the risk of those skills losing their focus and no longer being taught. This could lead to us being over-reliant on such tools, but also to the tools biasing the testing that we do. But given that the community is focusing on this already, I feel it’s something we can plan to mitigate and ensure this doesn’t happen.

Reliability & Security

Data, data, data. A lot of fears were shared over the use and collection of data. The majority of us work on applications where data, security, and integrity are critical. I absolutely share this concern. I’m no AI expert, but the best AI tools I’ve used thus far are ones that are contextual to my domain/application, and to do that, we need to train them on our data. This could lead to data bleeding and exposure of private data, and that is a huge challenge I think the AI space has yet to solve.

One of the huge benefits of AI tooling is that it’s always learning and, hopefully, improving. But that brings a new challenge to testing. Usually, when we create an automated test, we are codifying knowledge and behavior to create something that is deterministic: we want it to do the same thing over and over again. This provides consistent feedback. However, an AI-based tool won’t always do the same thing over and over again—it will try to apply its intelligence, and here’s where the reliability issues come in. What it tested last week may not be the same this week, but it may give us the same indicator. This, for me, emphasizes the importance of basic AI knowledge, but also of using these tools as an assistant to our human skills and judgment.

How It Looks

Several attendees shared concerns about how these AI tools are going to look. Are they going to be a completely black box, where we enter a URL or upload an app and just click Go? Then the tool will tell us pass or fail, or perhaps it will just go and log the bugs for us. I don’t think so. As per Angie’s and Tariq’s talks I mentioned before, I think it’s more likely these tools will focus on assistance.

These tools will be incredibly powerful and capable of doing a lot of testing very quickly. However, what they’ll struggle to do is to put all the information they find into context. That’s why I like the idea of assistance, a bunch of AI robots going off and collecting information for me. It’s then up to me to process all that information and put it into the context of the product. The best AI tool is going to be the one that makes it as easy as possible to process the masses of information these tools are going to return.

Imagine you point an AI bot at your website, and within minutes, it’s reporting accessibility issues to you, performance issues, broken links, broken buttons, layout issues, and much more. It’s going to be imperative that we can process that information as quickly as possible to ensure these tools continue to support us and don’t drown us in information.

Visit the Future of Testing: AI in Automation archive

In summary, AI is here, and more is coming. These are very exciting times in the software testing tooling space, and I’m really looking forward to playing with more new tools. I think we need to be curious with these new tools, try them, and see what sticks. The more tools we have in our tool belts, the more options we have to solve our increasingly complex testing challenges.

Future of Testing: AI in Automation Recap
https://applitools.com/blog/future-of-testing-ai-in-automation-recap/ (Tue, 28 Nov 2023)

Recap of the Future of Testing: AI in Automation conference. Watch the on-demand sessions to learn actionable steps to implement AI in your software testing strategy, key considerations around ethics and philosophical considerations, the importance of quality and security, and much more.

The latest edition of the Future of Testing events, held on November 7, 2023, was nothing short of inspiring and thought-provoking! Focused on AI in Automation, attendees learned how to leverage AI in software testing with top industry leaders like Angie Jones, Tariq King, Simon Stewart, and many more. All of the sessions are available now on-demand, and below, we take a look back at these groundbreaking sessions to give you a sneak peek of what to expect before you watch.

Opening Remarks

Joe Colantonio from TestGuild and Dave Piacente from Applitools set the stage for a thought-provoking discussion on reimagining test automation with AI. As technology continues to evolve at a rapid pace, it’s important for software testing professionals to adapt and embrace new tools and techniques. Joe and Dave encouraged attendees to explore the potential of AI in test automation and how it can enhance their current processes. They also touched upon the challenges faced by traditional test automation methods and how AI-powered solutions can help overcome them.

Dave shared one of our latest updates – the integration of Applitools Eyes with Preflight! Learn more about Preflight.

Keynote—Reimagining Test Automation with AI by Anand Bagmar

In this opening session, Anand Bagmar explored how to reimagine your test automation strategies with AI at each stage of the test automation life cycle, including a live demo showcasing the power of AI in test automation with Applitools.

Anand first introduced the transition from Waterfall to Agile software delivery practices, and while we can’t imagine going back to a Waterfall way of working, he addressed the challenges Agile brings to the software testing life cycle. Each iteration brings more room for error across analysis, maintenance, and validation of tests. This is why testers should turn toward AI-powered test automation, with the help of tools like Applitools, to help ease the pain of Agile testing.

The session is aimed at helping testers understand the importance of leveraging AI technology for successful test automation, as well as empowering them to become more effective in their roles. Watch now.

From Technical Debt to Technical Capital by Denali Lumma

In this session, Denali Lumma from Modular dived into the concept of technical debt and proposed a new perspective on how we view it: technical capital. She walked attendees through key mathematical concepts that help calculate technical capital, as well as examples comparing PyTorch vs. TensorFlow, MySQL vs. Postgres, frameworks vs. code editors, and more.

Attendees gained insights into calculating technical capital and how it can impact the valuation of a company. Watch now.

Automating Quality: A Vision Beyond AI for Testing by Tariq King

Tariq King of EPAM Systems took attendees on a journey through the evolution of software testing and how it has been impacted by generative AI. He shared his vision for the future of automated quality, one that looks beyond just AI to also prioritize creativity and experimentation. Tariq emphasized the need for quality and not just using AI to “go faster.” The more quality you have, the more productive you will be.

Tariq also dove into the ethical implications of using AI for testing and how it can be used for good or evil. Watch the full session.

Leveraging ChatGPT with Cypress for API Testing: Hands-On Techniques by Anna Patterson

In this session, Anna Patterson of EVERFI explored practical techniques and provided hands-on examples of how to harness the combined power of Cypress and ChatGPT to create robust API tests for your applications.

Anna guided us through writing descriptive and clear test prompts using HTTP status codes, with a pet store website as an example. She showed in real time how meaningful prompts in ChatGPT can help you create a solid API test suite, while also considering the security requirements of your company. Watch now.

PANEL—Testing in the AI Era: Opportunities, Hurdles, and the Evolving Role of Engineers

Joe Colantonio, Test Guild • Janna Loeffler, mParticle • Dave Piacente, Applitools • Stephen Williams, Accenture

As the use of AI in software development continues to grow, it is important for engineers and testers to stay ahead of the curve. In this panel discussion led by Joe Colantonio from Test Guild, panelists Janna Loeffler from mParticle, Dave Piacente from Applitools, and Stephen Williams from Accenture came together to discuss the current state of AI implementation and its impact on testing.

They talked about how AI is still in its early stages of adoption and why there may always be some level of distrust in AI technology. The panel emphasized the importance of first understanding why you might implement AI in your testing strategy so that you can determine what the technology will help to solve vs. jumping in right away. Many more incredible takes and insights were shared in this interactive session! Watch now.

The Fear Factor with Richard Bradshaw

The Friendly Tester, Richard Bradshaw, addressed the common fears about AI and automation in testing. Attendees heard Richard’s open and honest discussion on the challenges and concerns surrounding AI and automation in testing. Ultimately, he calmed many fears around AI and gave attendees a better understanding of how they can begin to use it in their organization and to their own advantage. Watch now.

Tests Too Slow? Rethink CI! by Simon Stewart

Simon Stewart from the Selenium Project discussed the latest updates on how to speed up your testing process and improve the reliability of your CI runs. He shared insights into the challenges and tradeoffs involved in this process, as well as what is to come with Selenium and Bazel. Attendees learned how to rethink their CI approach and use these tools to get faster feedback and more reliable testing results. Watch now.

Revolutionizing Testing: Empowering Manual Testers with AI-Driven Automation by Dmitry Vinnik

Dmitry Vinnik explored how AI-driven automation is revolutionizing the testing process for manual testers. He showed how Applitools’ Visual AI and Preflight help streamline test maintenance and reduce the need for coding.

Dmitry shared the importance of test maintenance, no code solutions for AI testing, and a first-hand look at Applitools Preflight. Watch this session to better understand how AI is transforming testing and empowering manual testers to become more effective in their roles. Watch the full session.

Keynote—Where Is My Flying Car?! Test Automation in the Space Age by Angie Jones

In her closing keynote, Angie Jones of Block took us on a trip into the future to see how science fiction has influenced the technology we have today. The Jetsons predicted many futuristic inventions such as robots, holograms, 3D printing, smart devices, and drones. She explored these predictions and showed how far we have come regarding automation and technology in the testing space.

As technology continues to evolve, it is important for testers to stay updated and adapt their strategies accordingly. Angie dove into the exciting world of tech innovation and imagined the future for test automation in the space age. Watch now.


Visit the full Future of Testing: AI in Automation on-demand archive to watch now and learn actionable steps to implement AI in your software testing strategy, key considerations before you start, other ideas around ethics and philosophical considerations, the importance of quality and security, and much more.

AI and The Future of Test Automation with Adam Carmi | A Dave-reloper’s Take
https://applitools.com/blog/ai-and-the-future-of-test-automation-with-adam-carmi/ (Mon, 16 Oct 2023)

We have a lot of great webinars and virtual events here at Applitools. I’m hoping posts like this give you a high-level summary of the key points with plenty of room for you to form your own impressions.

Dave Piacente

Curious if the software robots are here to take our jobs? Or maybe you’re not a fan of the AI hype train? During a recent session, The Future of AI-Based Test Automation, CTO Adam Carmi discussed—in practical terms—the current and future state of AI-based test automation, why it matters, and what you can do today to level up your automation practice.

  • He describes how AI can be used to overcome common everyday challenges in end-to-end test automation, how the need for skilled testers will only increase, and how AI-based tooling can help supercharge any automated testing practice.
  • He also puts his money where his mouth is by demonstrating, with concrete examples (e.g., visual validation and self-healing locators), how the never-ending maintenance overhead of tests can be mitigated using AI-driven tooling that already exists today.
  • He also discusses the role that AI will play in the future, including the development of autonomous testing platforms. These platforms will be able to automatically explore applications, add validations, and fill gaps in test coverage. (Spoiler alert: Applitools is building one, and Adam shows a bit of a teaser for it using a real-time in-browser REPL to automate the browser which uses natural language similar to ChatGPT.)

You can watch the full recording and find the session materials here, and I’ve included a quick breakdown with timestamps for ease of reference.

  • Challenges with automating end-to-end tests using traditional approaches (02:34-10:22)
  • How AI can be used to overcome these challenges (10:23-44:56)
  • The role of AI in the future of test automation (e.g., autonomous testing) (44:57-58:56)
  • The role of testers in the future (58:57-1:01:47)
  • Q&A session with the speaker (1:01:48-1:12:30)

Want to see more? Don’t miss Future of Testing: AI in Automation.

Driving Successful Test Automation at Scale: Key Insights
https://applitools.com/blog/driving-successful-test-automation-at-scale-key-insights/ (Mon, 25 Sep 2023)

The post Driving Successful Test Automation at Scale: Key Insights appeared first on Automated Visual Testing | Applitools.

]]>

Scaling your test automation initiatives can be daunting. In a recent webinar, Test Automation at Scale: Lessons from Top Performing Distributed Teams, panelists from Accenture, Bayer, and Eversana shared their insights for overcoming common challenges. Here are their top recommendations.

Establish clear processes for collaboration.
Daily standups, sprint planning, and retrospectives are essential for enabling communication across distributed teams. “The only way that you can build a quality product that actually satisfies the business requirements is [through] that environment where you’ve got the different teams coming together,” said Ariola Qeleposhi, Test Automation Lead at Accenture.

Choose tools that meet current and future needs.
Consider how tools will integrate and the skills required to use them. While a “one-size-fits-all” approach may seem appealing, it may not suit every team’s needs. Think beyond individual products to the overall solution, advised Anand Bagmar, Senior Solution Architect at Applitools. Each product team should have a test pyramid, and tests should run at multiple levels to get real value from your automation.

Start small and build a proof of concept.
Demonstrate how automation reduces manual effort and finds defects faster to gain leadership buy-in. “Proof of concepts will really help to provide a form of evidence in a way to say that, okay, this is our product, this is how we automate or can potentially automate, and what we actually save from that,” said Qeleposhi.

Consider a “quality strategy” not just a “test strategy.”
Involve all roles like business, product, dev, test, and DevOps. “When you think about it as quality, then the role does not matter,” said Bagmar.

Leverage AI and automation as “seatbelts,” not silver bullets.
They enhance human judgment rather than replace it. “Automation is a lot, at least in this instance, it’s like a seatbelt. You don’t think you’ll need it, but when you need it you better have it,” said Kyle Penniston, Senior Software Developer at Bayer.

Build, buy, and reuse.
Don’t reinvent the wheel. Use open-source tools and existing frameworks. “There will be great resources that you can use. Open-source resources, for example, frameworks that might be there that you can use to quickly get started and build on top of that,” said Bagmar.

Provide learning resources for new team members.
For example, Applitools offers Test Automation University with resources for developing automation skills.

Measure and track metrics to ensure value.
Look at reduced manual testing, faster defect finding, test coverage, and other KPIs. “You need to get some metrics really, and then you need to use that from an automation side of things,” said Qeleposhi.

The key to building a solid foundation for scaling test automation is taking an iterative, collaborative approach focused on delivering value and enhancing quality. With the right strategies and tools in place, teams can overcome common challenges and achieve automation success. Watch the full recording.

The post Driving Successful Test Automation at Scale: Key Insights appeared first on Automated Visual Testing | Applitools.

]]>
Functional Testing’s New Friend: Applitools Execution Cloud https://applitools.com/blog/functional-testings-new-friend-applitools-execution-cloud/ Mon, 11 Sep 2023 19:59:03 +0000 https://applitools.com/?p=51735 Dmitry Vinnik explores how the Execution Cloud and its self-healing capabilities can be used to run functional test coverage.

The post Functional Testing’s New Friend: Applitools Execution Cloud appeared first on Automated Visual Testing | Applitools.

]]>

In the fast-paced and competitive landscape of software development, ensuring the quality of applications is of utmost importance. Functional testing plays a vital role in verifying the robustness and reliability of software products. As applications grow more complex, with long lists of use cases and ever-faster release cycles, organizations are challenged to conduct thorough functional testing across different platforms, devices, and screen resolutions.

This path to higher-quality software is where Applitools, a leading provider of functional testing solutions, becomes a must-have with its innovative offering, the Execution Cloud.

Applitools’ Execution Cloud is a game-changing platform that revolutionizes functional testing practices. By harnessing the power of cloud computing, the Execution Cloud eliminates the need for resource-heavy local infrastructure, providing organizations with enhanced efficiency, scalability, and reliability in their testing efforts. The cloud-based architecture integrates with existing testing frameworks and tools, empowering development teams to execute tests across various environments effortlessly.

This article explores how the Execution Cloud and its self-healing capabilities can be used to run our functional test coverage. We demonstrate the platform’s features, like auto-fixing selectors that break after a change in the production code.

Why Execution Cloud

As discussed, the Applitools Execution Cloud is a great tool to enhance any team’s quality pipeline.

One of the main features of this cloud platform is that it can “self-heal” our tests using AI. For example, if, during refactoring or debugging, one of the web elements had its selectors changed and we forgot to update the related test coverage, the Execution Cloud would automatically fix our tests. The platform uses one of the previous runs to deduce another relevant selector and lets our tests continue running.

This self-healing capability of the Execution Cloud allows us to focus on actual production issues without getting distracted by outdated tests. 

Functional Testing and Execution Cloud

It’s fair to say that Applitools has been one of the leading innovators and pioneers in visual testing with its Eyes platform. With the Execution Cloud in place, however, Applitools offers its users broader, more scalable test capabilities. The platform lets us focus on all types of functional testing, including non-visual testing.

One of the best features of the Execution Cloud is that it’s effortless to integrate into any test case with just one line. There is also no requirement to use the Applitools Eyes framework. In other words, we can run any functional test without creating screenshots for visual validation while utilizing the self-healing capability of the Execution Cloud.

Adam Carmi, Applitools CTO, demos the Applitools Execution Cloud and explores how self-healing works under the hood in this on-demand session.

Writing Test Suite

As we mentioned earlier, the Execution Cloud can be integrated with most test cases we already have in place! The only consideration is that, at the time of writing, the Execution Cloud only supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. More test frameworks will be supported in the near future.

Fortunately, Selenium is a widely used testing framework, giving us plenty of room to demonstrate the power of the Execution Cloud and functional testing.

Setting Up Demo App

Our demo application will be a documentation site built using the Vercel Documentation template. It’s a simple app that uses Next.js, a React framework created by Vercel, a cloud platform that lets us deploy web apps quickly and easily.

To note, all the code for our version of the application is available here.

First, we need to clone the demo app’s repository: 

git clone git@github.com:dmitryvinn/docs-demo-app.git

We will need Node.js version 10.13 or later to work with this demo app; it can be installed by following the steps here.

After we set up Node.js, we should open a terminal and navigate into the project’s directory:

cd docs-demo-app

Then we run the following command to install the necessary dependencies:

npm install

The next step is to start the app locally:

npm run dev

Now our demo app is accessible at ‘http://localhost:3000/’ and ready to be tested.

Docs Demo App 

Deploying Demo App

While the Execution Cloud allows us to run the tests against a local deployment, we will simulate the production use case by running our demo app on Vercel. The steps for deploying a basic app are very well outlined here, so we won’t spend time reviewing them. 

After we deploy our demo app, it will appear as running on the Vercel Dashboard:

Demo App Deployed on Vercel

Now, we can write our tests against the production URL of our demo application, available at `https://docs-demo-app.vercel.app/`.

Setting Up Test Automation

Execution Cloud offers great flexibility when it comes to working with our tests. Rather than rewriting our test suites to run against this self-healing cloud platform, we simply need to update a few lines of code in the setup part of our tests.

For our article, our test case will validate navigating to a specific page and pressing a counter button. 

To make our work even more effortless, Applitools offers a great set of quickstart examples that were recently updated to support the Execution Cloud. We will start with one of these samples using JavaScript with Selenium WebDriver and Jest as our baseline.

We can use any Integrated Development Environment (IDE) to write tests like IntelliJ IDEA or Visual Studio Code. Since we use JavaScript as our programming language, we will rely on NPM for the build system and our test runner.

Our tests will use Jest as their primary testing framework, so we must add a configuration file called `jest.config.js`. We can copy-paste a basic setup from here; in its shortest form, the required configuration is the following.

module.exports = {
  clearMocks: true,
  coverageProvider: "v8",
};

Our tests will also require a `package.json` file, which should include the Jest, Selenium WebDriver, and Applitools packages. The dependencies section of the `package.json` file should eventually look like the one below:

"dependencies": {

      "@applitools/eyes-selenium": "^4.66.0",

      "jest": "^29.5.0",

      "selenium-webdriver": "^4.9.2"

    },
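In addition to the dependencies, we will want a test script so that the `npm test` command (used later in this article) invokes Jest. A minimal sketch of that entry, assuming the default Jest runner:

"scripts": {
  "test": "jest"
},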

After we install the above dependencies, we are ready to write and execute our tests.

Writing the Tests

Since we are running a purely functional Applitools test with its Eyes disabled (meaning we do not have a visual component), we will need to initialize the test and have a proper wrap-up for it.

In `beforeAll()`, we can set our test batching and naming along with configuring an Applitools API key.
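A minimal sketch of that setup might look like the following (the `APP_NAME` constant and batch name are illustrative, not taken from the original post):

const { Eyes, BatchInfo } = require('@applitools/eyes-selenium');
const { Builder, By } = require('selenium-webdriver');

const APP_NAME = 'Documentation Demo App'; // illustrative name

let batch;
let driver;

beforeAll(async () => {
  // Group all tests from this suite into a single batch in the dashboard.
  // The API key itself is read from the APPLITOOLS_API_KEY environment
  // variable, which we set later in this article.
  batch = new BatchInfo(APP_NAME);
});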

To enable the Execution Cloud for our tests, we need to ensure that the platform is activated at the account level. After that’s done, in our tests’ setup, we initialize the WebDriver using the following code:

let url = await Eyes.getExecutionCloudUrl();
driver = new Builder().usingServer(url).withCapabilities(capabilities).build();
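The `capabilities` object here is a standard Selenium capabilities map; a minimal sketch (the browser choice is illustrative) could be:

const capabilities = {
  browserName: 'chrome',
};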

For our test case, we will open a demo app, navigate to another page, press a counter button, and validate that the click incremented the value of clicks by one.

describe('Documentation Demo App', () => {

…

  test('should navigate to another page and increment its counter', async () => {
    // Arrange - go to the home page
    await driver.get('https://docs-demo-app.vercel.app/');

    // Act - go to another page and click a counter button
    await driver.findElement(By.xpath("//*[text() = 'Another Page']")).click();
    await driver.findElement(By.className('button-counter')).click();

    // Assert - validate that the counter was clicked
    const finalClickCount = await driver.findElement(By.className('button-counter')).getText();
    expect(finalClickCount).toContain('Clicked 1 times');
  });

…
Another critical aspect of running our test is that it’s a non-Eyes test. Since we are not taking screenshots, we need to tell the Execution Cloud when a test begins and ends. 

To start the test, we should add the following snippet inside the `beforeEach()` that will name the test and assign it to a proper test batch:

await driver.executeScript('applitools:startTest', {
  'testName': expect.getState().currentTestName,
  'appName': APP_NAME,
  'batch': { 'id': batch.getId() }
});

Lastly, we need to tell our automation when the test is done and what its results were. We will add the following code, which sets the status of our test, in the `afterEach()` hook:

await driver.executeScript('applitools:endTest', { 'status': testStatus });
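Putting it together, the full hook might look like this minimal sketch, assuming the suite tracks a `testStatus` variable (the exact status values and bookkeeping are illustrative):

afterEach(async () => {
  // Report the result to the Execution Cloud, then close the browser session.
  await driver.executeScript('applitools:endTest', { 'status': testStatus });
  await driver.quit();
});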

Now, our test is ready to be run on the Execution Cloud.

Running test

To run our test, we need to set the Applitools API key. We can set it in a terminal session or define it as a global environment variable:

export APPLITOOLS_API_KEY=[API_KEY]

In the above command, we need to replace [API_KEY] with the API key for our account. The key can be found in the Applitools Dashboard, as shown in this FAQ article.

Now, we need to navigate to the directory where our tests live and run the following command in the terminal:

npm test

This command triggers the test suite, which can be seen on the Applitools Dashboard:

Applitools Dashboard with Execution Cloud enabled

Execution Cloud in Action

It’s a well-known fact that apps go through a lifecycle. They get created, accumulate bugs, change, and are ultimately shut down. This ever-changing lifecycle is what causes our tests to break. Whether it’s due to a bug or an accidental regression, it’s common for a test to fail after a change in an app.

Let’s say a developer working on the counter button component changes its class name from the original `button-counter` to `button-count`. There are many reasons such a change might happen; regardless, modifications like these to the production code are extremely common.

What’s even more common is that the developer who made the change might forget to update, or simply not find, all the tests that use the original class name, `button-counter`, to validate this component. As a result, these outdated tests would start failing, distracting us from investigating real production issues that could significantly impact our users.

Execution Cloud and its self-healing capabilities were built specifically to address this problem. The platform can “self-heal” tests that were previously running against the class name `button-counter`: rather than failing them, the Execution Cloud finds another selector that hasn’t changed. With this highly scalable solution, our test coverage remains intact, letting us focus on issues that are actually causing a regression in production.
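For illustration, the breaking change might look like this in a hypothetical React counter component (the `handleClick` handler and `count` state are illustrative, not taken from the demo app’s source):

// Before: the class name our test selector depends on
<button className="button-counter" onClick={handleClick}>
  Clicked {count} times
</button>

// After: renamed to 'button-count', which silently breaks
// By.className('button-counter') in the test above
<button className="button-count" onClick={handleClick}>
  Clicked {count} times
</button>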

Although we are running non-Eyes tests, the Applitools Dashboard still gives us several valuable artifacts, like a video recording of our test and the ability to export WebDriver commands!

Want to see more? Request a free trial of Applitools Execution Cloud.

Conclusion

Whether you are a small startup that prioritizes quick iterations or a large organization that focuses on scale, the Applitools Execution Cloud is a perfect choice for any scenario. It offers a reliable way for tests to become what they should be: the first line of defense in ensuring the best customer experience for our users.

With the self-healing capabilities of the Execution Cloud, we get to focus on real production issues that actively affect our customers. With this cloud platform, we move toward a world where tests are no longer something we accept as constantly failing or as a drag on developer velocity. Instead, we treat our test coverage as a trusted companion that raises problems before our users do.

With these capabilities, Applitools and its Execution Cloud quickly become a must-have for any developer workflow, supercharging the productivity and efficiency of every engineering team.

The post Functional Testing’s New Friend: Applitools Execution Cloud appeared first on Automated Visual Testing | Applitools.

]]>
Welcome Back, Selenium Dave! https://applitools.com/blog/welcome-back-selenium-dave/ Tue, 05 Sep 2023 18:53:47 +0000 https://applitools.com/?p=51615 Let me tell you a story. It’s one I haven’t told before. But to do it, let’s first get acquainted. Hi – I’m Dave Piacente. You may know me from...

The post Welcome Back, Selenium Dave! appeared first on Automated Visual Testing | Applitools.

]]>
Dave Piacente

Let me tell you a story. It’s one I haven’t told before. But to do it, let’s first get acquainted.

Hi – I’m Dave Piacente. You may know me from a past life when I went by the name Dave Haeffner and my past works with Selenium. I’m the new DevRel and Head of Community at Applitools—Andy’s moved on to a tremendous bucket-list job opportunity elsewhere, and we wish him all the best! I’ve been working closely with him behind the scenes to learn the ropes to help make this a smooth transition and to ensure that all of the great work he’s done and the community he’s grown will continue to flourish. And to loosely paraphrase Shakespeare – A DevRel (or a Dave) by any other name would be just as sweet.

Now, about that story…

I used to be known for a thing – “Selenium Dave,” as they would say. I worked hard to earn that rep. I had one aim: to be helpful. I was trying to solve a problem that vexed me early on in my career in test automation (circa 2009), when open-source test automation and grid providers were on a meteoric rise. The lack of clear and concise guidance on how to get started and grow into a mature test automation practice was profound. But the fundamentals weren’t that challenging to master (once you knew what they were), and the number of people gnashing their teeth as they white-knuckled their way through it was eye-popping.

So, back in 2011, after working in the trenches at a company as an SDET (back before that job title was a thing), I left to start out on my own with a mission to help make test automation simpler. It started simply enough with consulting. But then the dominos began to fall when I started organizing a local test automation meetup.

While running the meetup I realized I kept getting asked the same questions and offering the same answers, so I started jotting them down and putting them into blog posts which later became a weekly tip newsletter (Elemental Selenium, which eventually grew to a readership of 30,000 testers). Organically, that grew into enough content (and confidence) to write a book, The Selenium Guidebook.

I then stepped out of meetup organization and into organizing the Selenium conference, where I became the conference chair from 2014 to 2017. My work on the conference opened the door for me to become part of the Selenium core team. From there it was a hop-skip-and-a-jump to working full-time as a contributor on Selenium IDE at Applitools.

Underpinning all of this, I was doing public speaking at meetups and conferences around the world (starting with my first conference talk back in 2010). I felt like I had summited the mountain—I was in the best possible position to be the most helpful. And I truly felt like I was making a difference in the industry.

But then I took a hard right turn and stopped doing it all. I felt like I had accomplished what I’d set out to do – I had helped make testing simpler (at least for people using Selenium). So I stepped down from the Selenium project, I stopped organizing the Selenium conference, I stopped doing public speaking, I sold my content business (e.g., the newsletter & book) to a third party, and I even changed my last name (from Haeffner to Piacente – although for reasons unrelated to my work). By all marks, I had closed that chapter of my life and was happily focusing on being a full-time Software Developer in the R&D team at Applitools.

While I was doing that, the test automation space continued to grow and evolve as I watched from the sidelines. Seemingly every enterprise was now shifting left (not just the more progressive ones), alternative open-source test automation frameworks to Selenium continued to gain ground in adoption, some new-and-noteworthy entrants started popping up, and the myriad of companies selling their wares in test automation seemed to grow exponentially. And then, Generative AI waltzed into the public domain like the Kool-Aid man busting through a wall. “Oh yeah!”

I started to realize that the initial problem I had strived to make a dent in—making testing simpler—was a moving target. Some things are far simpler now than when I started out, but some are more complex. There are new problems constantly emerging, and the ground underneath our feet is shifting.

So perhaps my work is not done. Perhaps there is more that I can do to help make test automation simpler. To return to public speaking and content creation. To return to being helpful. But this time with the full weight of a company behind me, instead of just as a one-man show.

I’m thrilled to be back, and I’m excited for what’s to come!

The post Welcome Back, Selenium Dave! appeared first on Automated Visual Testing | Applitools.

]]>