
Graphical user interface testing

GUI testing is a form of system-level testing that verifies the functionality, usability, and reliability of the graphical front-end of applications by simulating user interactions, such as clicks, keystrokes, and drags on widgets like buttons, menus, and text fields, to ensure correct event handling and state transitions. GUI testing plays a critical role in software quality assurance because graphical interfaces are ubiquitous in modern applications, from desktop programs to mobile apps, and faults in GUIs often account for a substantial portion of reported software defects, impacting user experience and system reliability. The process addresses the event-driven nature of GUIs, where user inputs trigger complex behaviors that must align with expected outputs, and it typically consumes 20-50% of total development costs due to the need for thorough validation. Effective GUI testing helps prevent issues like incorrect responses, display inconsistencies, or failures that could lead to broader system errors.

Key techniques in GUI testing include manual testing for exploratory validation, capture-and-replay tools that record and replay user actions for regression checks, and model-based approaches that generate test sequences from abstract models of the GUI's states and events to achieve higher coverage. Capture-and-replay tools, such as those using scripting for event simulation, are widely adopted in industry for their simplicity, while model-based methods, supported by tools like GUITAR, dominate research for handling the combinatorial explosion of possible interactions. Advanced variants incorporate visual recognition to test cross-platform GUIs without relying on underlying code, enabling language-agnostic automation.

Despite these advancements, GUI testing faces significant challenges, including the vast, potentially infinite space of event sequences that leads to incomplete coverage, high maintenance efforts for automated scripts amid frequent UI changes, and difficulties in defining reliable test oracles to verify outcomes. Maintenance alone can consume up to 60% of automation time, influenced by factors like test complexity and tool stability, often resulting in a return on investment only after multiple project cycles. Ongoing research emphasizes hybrid techniques, such as AI-driven exploration, to mitigate these issues and improve scalability for evolving platforms like mobile and web applications; as of 2025, this includes machine learning-based self-healing tests and large language model-assisted test generation for enhanced defect detection and coverage.

Overview

Definition and Scope

Graphical user interface (GUI) testing is the process of systematically evaluating the front-end of software applications to ensure that their graphical elements function correctly, provide an intuitive user experience, and align with visual design and usability standards. This form of testing verifies interactions with components such as buttons, menus, windows, icons, and dialog boxes, confirming that user inputs produce expected outputs without errors in layout, rendering, or behavior. The scope of GUI testing encompasses functional validation—ensuring that interface actions trigger appropriate application behaviors—usability assessments to evaluate ease of navigation and user satisfaction, and compatibility checks across devices, operating systems, and screen resolutions. It focuses exclusively on the client-side presentation and interaction layers, deliberately excluding backend logic, database operations, or server-side processing, which are addressed in other testing phases like unit or integration testing. Unlike unit testing, which isolates and examines individual code modules for internal correctness, GUI testing adopts a black-box approach centered on the end-user perspective, simulating real-world scenarios to detect issues arising from the integration of UI components with the underlying system.

GUI testing originated in the 1980s alongside the proliferation of graphical windowing systems, beginning with experimental platforms like the Alto workstation developed in 1973 at Xerox PARC, which introduced concepts such as windows, icons, and mouse-driven interactions. This evolution accelerated with commercial releases, including the Xerox Star in 1981 and Apple's Macintosh in 1984, necessitating dedicated methods to validate the reliability and consistency of these novel interfaces in production software.

Importance and Challenges

Graphical user interface (GUI) testing plays a pivotal role in software development by ensuring user satisfaction through the detection and prevention of UI bugs, which constitute a significant portion of user-reported issues. Studies indicate that UI issues represent approximately 58% of the most common bugs encountered by users in mobile applications. Furthermore, in analyses of functional bugs in Android apps, UI-related defects account for over 60% of cases, including display issues like missing or distorted elements and interaction problems such as unresponsive components. This emphasis on GUI testing is crucial for maintaining accessibility, as it verifies compliance with standards for users with disabilities, such as screen reader compatibility, and ensures seamless cross-device compatibility amid diverse hardware and operating systems.

From a business perspective, rigorous GUI testing reduces the incidence of post-release defects, which are substantially more expensive to address than those identified pre-release. Fixing a defect after product release can cost up to 30 times more than resolving it during the design phase, according to industry data, due to factors like user impact, deployment efforts, and potential revenue loss. By integrating GUI testing into agile and DevOps cycles, organizations can achieve faster iteration and continuous validation, enabling automated UI checks within CI/CD pipelines to support rapid releases without compromising quality. This approach not only minimizes defect leakage but also aligns with the demands of modern development practices for timely market delivery.

Despite its value, GUI testing faces several key challenges that complicate its implementation. One major obstacle is test fragility, where even minor UI changes, such as updates to selectors or DOM structures, can cause automated tests to fail, leading to high maintenance overhead; empirical studies show an average of 5.81 modifications per test across GUI suites. Platform variability exacerbates this, as rendering differences across operating systems—like Windows versus macOS—demand extensive cross-environment validation to ensure a consistent user experience. Additionally, handling dynamic UI elements, such as animations or asynchronously loading content, introduces flakiness and non-determinism, making reliable verification difficult in evolving applications. These issues highlight the need for robust strategies to sustain effective GUI testing amid frequent updates.

Test Design and Generation

Manual Test Case Creation

Manual test case creation in graphical user interface (GUI) testing is a human-led process where testers analyze requirements, such as user stories and functional specifications, to design detailed, step-by-step scenarios that simulate real user interactions with the interface. This involves identifying key GUI elements—like buttons, forms, and menus—and outlining actions such as "click the login button, enter valid credentials, and verify successful navigation to the dashboard," ensuring the scenarios cover both positive and negative outcomes. Prioritization occurs based on risk, where test cases targeting critical paths, such as payment processing in an e-commerce app, receive higher focus to maximize defect detection efficiency.

Common techniques for manual GUI test case design include exploratory testing, which allows testers to dynamically investigate the interface without predefined scripts, fostering ad-hoc discovery of issues and unexpected behaviors in dynamic environments like web applications. Another key method is boundary value analysis, a black-box technique that targets edge cases, such as entering maximum-length text in a form or submitting invalid characters in input validation, to uncover errors at the limits of acceptable inputs. Best practices emphasize creating checklists to ensure comprehensive coverage of all UI elements, navigation workflows, and cross-browser compatibility, while documenting cases in structured tools like Excel spreadsheets or test management systems for traceability and reuse. Test cases should remain concise, with 5-10 steps per scenario, incorporating preconditions and expected results to facilitate clear execution and review.

This approach offers advantages in capturing nuanced user behaviors and intuitive insights that scripted methods might overlook, particularly for complex visual layouts or accessibility features. However, it is time-intensive, prone to subjectivity from tester experience, and scales poorly for repetitive testing across multiple platforms. For instance, in testing a dropdown menu, a case might involve selecting options in various browsers to verify correct loading and display without truncation, highlighting compatibility issues early. These cases can transition to automated scripts for enhanced scalability in larger projects.
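As an illustration of the structured documentation practice described above, the following is a minimal sketch of how a manually designed case might be recorded in a machine-readable form before any automation; the field names and the login scenario are illustrative assumptions, not a standard schema:

from dataclasses import dataclass

@dataclass
class ManualTestCase:
    """Structured record for a manually designed GUI test case."""
    case_id: str
    title: str
    preconditions: list[str]
    steps: list[str]            # 5-10 concise, ordered user actions
    expected_results: list[str]
    priority: str = "high"      # risk-based prioritization

# Hypothetical login scenario following the conventions described above.
login_case = ManualTestCase(
    case_id="TC-001",
    title="Valid login navigates to dashboard",
    preconditions=["User account exists", "Application shows the login screen"],
    steps=[
        "Enter valid credentials in the username and password fields",
        "Click the login button",
    ],
    expected_results=["User is navigated to the dashboard without errors"],
)

Recording cases this way keeps preconditions and expected results explicit, which eases the later transition to automated scripts mentioned above.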

Automated Test Case Generation

Automated test case generation in graphical user interface (GUI) testing involves programmatic techniques to create executable test scripts systematically, leveraging rule-based and data-driven methods to improve efficiency and repeatability over manual approaches. These methods focus on separating test logic from data and actions, enabling scalable generation of test cases for web, desktop, and mobile GUIs without relying on exploratory human input.

Data-driven testing separates test data from the core script, allowing variations in inputs—such as user credentials or form values—to be managed externally, often via spreadsheets or CSV files, to generate multiple test instances from a single script template. This approach facilitates rapid iteration for boundary value analysis or equivalence partitioning in GUI elements like input fields, reducing redundancy in test maintenance. For instance, a spreadsheet might define positive and negative input sets for a login form, with the script iterating through each row to simulate submissions and validate outcomes.

Keyword-driven frameworks build test cases by composing reusable keywords that represent high-level actions, such as "click" on a button, "enter text" into a field, or "verify text" in a dialog, stored in tables or scripts for easy assembly without deep programming knowledge. These keywords map to underlying code implementations, promoting modularity and collaboration between testers and developers; for example, a test for checkout might sequence keywords like "select item," "enter shipping details," and "confirm payment" to cover end-to-end flows. Keyword-driven tools such as Robot Framework integrate such keywords to automate interactions across platforms.

Integration with tools like Selenium for web GUIs and Appium for mobile applications enables script-based generation, where locators and actions are defined programmatically to simulate user events without AI assistance. Selenium scripts, for example, use WebDriver APIs to navigate DOM structures and execute sequences, while Appium extends this to native and hybrid apps via similar command patterns. Model-based testing complements these by deriving test paths from formal models, such as state diagrams representing GUI transitions (e.g., from login screen to dashboard), to automatically generate sequences that exercise valid and invalid flows.

The process typically begins by parsing UI models, such as DOM trees for web applications, to identify interactable elements and possible event sequences, then applying rules to generate paths that achieve coverage goals like 80% of state transitions or event pairs. Generated cases are executed via the integrated tools, with assertions verifying expected GUI states, such as element visibility or text content. A specific example is using XPath locators in Selenium to auto-generate click sequences for form validation: an expression like //input[@name='email'] targets the email field, followed by sequential locators for the password field and submit button, iterating data-driven inputs to test validation errors like "invalid format."

Despite these benefits, automated generation requires significant upfront scripting effort to define rules and models, often demanding domain expertise for accurate UI representation. It also struggles with non-deterministic UIs, where timing issues, asynchronous loads, or dynamic elements (e.g., pop-ups) cause flaky tests that fail intermittently despite identical inputs. Simple GUI changes can necessitate 30-70% script modifications, rendering many cases obsolete. These methods can be further enhanced by planning systems for handling complex, interdependent scenarios.
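The data-driven pattern with XPath locators described above can be sketched concretely in Selenium. The following is a minimal illustration, assuming a hypothetical login page at https://example.com/login, a CSV file named login_inputs.csv, and a .error element for validation messages—none of which come from the sources above:

import csv
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

# Each CSV row (columns: email, password, expected_error) generates one test
# instance from the same script template.
with open("login_inputs.csv", newline="") as f:
    for row in csv.DictReader(f):
        driver.get("https://example.com/login")
        driver.find_element(By.XPATH, "//input[@name='email']").send_keys(row["email"])
        driver.find_element(By.XPATH, "//input[@name='password']").send_keys(row["password"])
        driver.find_element(By.XPATH, "//button[@type='submit']").click()
        # Assert the expected validation outcome, e.g. "invalid format".
        error = driver.find_element(By.CSS_SELECTOR, ".error").text
        assert error == row["expected_error"], f"{row['email']}: got {error!r}"

driver.quit()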

Advanced Techniques

Planning Systems

Planning systems in graphical user interface (GUI) testing employ formal planning techniques to sequence test actions, framing the testing process as a search problem within a state space where GUI configurations represent states and user interactions denote transitions between them. This approach automates the generation of test sequences by defining initial states, goal states, and operators that model possible actions, enabling the planner to derive paths that achieve coverage objectives while minimizing redundant steps. By treating test design as a planning problem, these systems reduce manual effort and improve thoroughness compared to ad-hoc scripting.

The historical development of planning systems for GUI testing traces back to 1990s advancements in AI planning research, such as the Iterative Partial-Order Planning (IPP) algorithm, which was adapted for software testing contexts. Early applications to GUIs emerged around 2000, with tools like PATHS (Planning Assisted Tester for grapHical user interface Systems) integrating planning to automate test case creation for complex interfaces. Commercial tools, such as TestOptimal, further popularized model-driven planning variants by the early 2000s, leveraging state-based models to generate execution paths. These evolutions built on foundational AI work to address the combinatorial explosion in GUI state spaces.

Key planning paradigms include hierarchical task network (HTN) planners, which decompose high-level UI tasks into sub-tasks for efficient handling of hierarchical structures, and partial-order planning, which produces flexible sequences by establishing only necessary ordering constraints among actions. In HTN planning, GUI events are modeled as operators at varying abstraction levels—for instance, a high-level "open file" task decomposes into primitive actions like menu navigation and dialog confirmation—allowing planners to resolve conflicts and generate concise plans. Partial-order planning complements this by enabling non-linear test paths that account for parallel or conditional GUI behaviors, producing multiple linearizations from a single partial plan to enhance coverage. These systems optimize for requirements like event-flow coverage by searching state-transition graphs derived from the GUI model.

In application to GUIs, planning systems model the interface as a graph of states (e.g., screen configurations) and transitions (e.g., button clicks), then generate optimal test paths that traverse critical edges to verify functionality. For example, to test a multi-step workflow such as navigating a menu, selecting an option, and confirming a dialog, an HTN planner might decompose the goal into subtasks, yielding a sequence like "click File > New > OK" while pruning invalid paths to avoid redundant actions and ensure minimal test length; a toy sketch of this decomposition appears below. This method has demonstrated scalability, reducing operator counts by up to 10:1 in benchmarks on applications like Microsoft WordPad, facilitating test maintenance by isolating affected subplans.
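The HTN decomposition just described can be sketched as a recursive expansion of compound tasks into primitive GUI events. The task hierarchy and method names here are illustrative assumptions, not the operator set of a specific planner such as IPP or PATHS:

# Compound tasks decompose into subtasks; primitive actions are emitted as-is.
METHODS = {
    "test_workflow": ["open_file", "verify editor state"],
    "open_file": ["click File menu", "click New", "click OK in dialog"],
}

def plan(task: str) -> list[str]:
    """Recursively expand a task into a linear sequence of primitive GUI events."""
    if task not in METHODS:      # primitive action: no further decomposition
        return [task]
    steps: list[str] = []
    for subtask in METHODS[task]:
        steps.extend(plan(subtask))
    return steps

print(plan("test_workflow"))
# ['click File menu', 'click New', 'click OK in dialog', 'verify editor state']

A real HTN planner additionally checks operator preconditions and resolves conflicts among alternative decompositions; this sketch shows only the hierarchical expansion that yields sequences like "click File > New > OK".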

AI-Driven Methods

Artificial intelligence-driven methods in graphical user interface (GUI) testing leverage machine learning techniques to predict and target failure-prone UI elements, enhancing the efficiency of test prioritization. By analyzing historical test data, UI layouts, and interaction logs, learned models identify components susceptible to defects, such as buttons or menus prone to logical errors due to event handling issues. For instance, algorithms trained on datasets of GUI screenshots and failure reports can classify elements by risk level, allowing testers to focus on high-probability failure areas and reduce overall testing effort by up to 30% in empirical studies.

Reinforcement learning (RL) approaches enable dynamic exploration of GUI states by treating test generation as a sequential decision process, where an agent learns optimal actions (e.g., clicks, swipes) to maximize coverage or fault detection rewards. In RL-based frameworks, the environment consists of the GUI's state space, with actions simulating user interactions and rewards based on newly discovered states or detected bugs; deep Q-networks or policy gradient methods adapt the agent's policy over episodes to handle non-deterministic UI behaviors like pop-ups or animations. This method has demonstrated superior state coverage compared to traditional random exploration, achieving 20-50% more unique paths in mobile apps.

Genetic algorithms (GAs) apply evolutionary principles to optimize test sequence generation, initializing a population of candidate test scripts and iteratively evolving them through selection, crossover, and mutation to improve fitness. In GUI contexts, chromosomes represent sequences of UI events, with fitness evaluated to balance coverage and fault revelation; a common formulation is Fitness = α·Coverage + β·FaultDetection, where α and β are tunable weights emphasizing exploration versus bug finding. This population-based search has been effective for repairing and generating feasible test suites, increasing fault detection rates by evolving diverse interaction paths in complex applications.

Convolutional neural networks (CNNs) facilitate visual UI analysis by processing screenshots as images to detect and locate interactive elements, enabling the generation of image-based tests that bypass traditional accessibility tree dependencies. These networks extract features like edges and textures to identify widgets or layout anomalies, supporting end-to-end automation where actions are predicted from visual inputs alone. In mobile GUI testing, CNN-driven models have improved robustness against UI changes, achieving over 85% accuracy in element localization for dynamic interfaces.

Post-2020 advancements integrate large language models (LLMs) for natural language-driven test scripting, where prompts describe user intents (e.g., "navigate to settings and adjust privacy") to generate executable GUI test scripts via code synthesis. These LLMs combine textual understanding with code generation to produce adaptive tests, outperforming rule-based generators in handling ambiguous scenarios. As of 2025, integrations with advanced LLMs, such as those in updated frameworks like TestGPT, have enhanced script generation for web and cross-platform GUIs. A notable 2023 example involves RL-augmented adaptive exploration for mobile GUIs, where LLMs guide exploration to target rare states, boosting bug discovery in real-world apps by 40%. Recent 2024-2025 developments, including ICSE 2025 papers on LLM-RL hybrids, report up to 50% improvements in coverage for evolving mobile apps.
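The weighted fitness formulation above can be made concrete with a small evolutionary loop. The following is a minimal sketch; the event pool, sequence length, and the evaluate() stub are illustrative assumptions rather than a published algorithm:

import random

EVENTS = ["click_login", "enter_text", "open_menu", "select_item", "submit_form"]
ALPHA, BETA = 0.7, 0.3  # tunable weights for coverage vs. fault detection

def evaluate(seq):
    """Stub: in practice, replay `seq` on the GUI and measure coverage/faults."""
    coverage = len(set(seq)) / len(EVENTS)         # fraction of distinct events hit
    fault_detection = random.random() * coverage   # placeholder fault signal
    return ALPHA * coverage + BETA * fault_detection

def crossover(a, b):
    cut = random.randrange(1, len(a))              # single-point crossover
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.1):
    return [random.choice(EVENTS) if random.random() < rate else e for e in seq]

# Chromosomes are sequences of UI events; evolve the population over generations.
population = [[random.choice(EVENTS) for _ in range(6)] for _ in range(20)]
for _ in range(30):
    population.sort(key=evaluate, reverse=True)
    parents = population[:10]                      # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
    population = parents + children

print(max(population, key=evaluate))               # best-evolved event sequence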
Execution of these AI-generated cases often integrates with automation tools such as Selenium or Appium for validation. AI-driven methods also face challenges, including potential biases in training data that may overlook diverse UI designs (e.g., accessibility features in non-Western languages), leading to incomplete fault detection. Mitigation strategies, such as diverse dataset augmentation and fairness audits, are increasingly emphasized in recent research as of 2025 to ensure equitable testing outcomes.
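Complementing the reinforcement-learning formulation described earlier, the following is a minimal tabular Q-learning sketch that rewards discovery of new GUI states; the action set and the step() stub are illustrative assumptions standing in for real interaction with an app:

import random
from collections import defaultdict

ACTIONS = ["tap_button", "swipe_left", "open_menu", "back"]
Q = defaultdict(float)                 # Q[(state, action)] -> expected return
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
visited = set()

def step(state, action):
    """Stub: in practice, execute `action` on the app and observe the next screen."""
    return hash((state, action)) % 50  # fake transition to one of 50 screen ids

state = 0
for _ in range(500):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)                      # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit
    nxt = step(state, action)
    reward = 1.0 if nxt not in visited else 0.0              # reward novel states
    visited.add(nxt)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

print(f"Discovered {len(visited)} unique GUI states")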

Test Execution

User Interaction Simulation

User interaction simulation in graphical user interface (GUI) testing involves programmatically mimicking human actions such as clicks, drags, and keystrokes to exercise the interface as a real user would during automated test execution. This approach ensures that tests can replicate end-to-end workflows without human intervention, enabling reliable validation of GUI functionality across various platforms. By leveraging application programming interfaces (APIs), testers can inject events directly into the system, bypassing the need for physical interactions while maintaining fidelity to actual user behaviors.

Core methods for simulation include sending mouse events for clicks and drags, keyboard inputs for text entry, and gesture simulations for touch-based interfaces. For instance, clicks are emulated by dispatching mouse-down and mouse-up events at specific coordinates or elements, while drags involve sequential move events between start and end points. Keystrokes are simulated by generating key-down and key-up events with corresponding character codes. In mobile contexts, multi-touch interactions, such as pinches or two-finger swipes, are handled through APIs that coordinate multiple contact points simultaneously. These techniques rely on underlying libraries like OpenCV for visual targeting in image-based tools, ensuring precise event delivery even in dynamic layouts.

Platform-specific implementations adapt these methods to native APIs for optimal performance and compatibility. On desktop systems, particularly Windows, the Microsoft UI Automation framework exposes control patterns that allow scripts to invoke actions like button clicks or list selections by navigating the UI element tree and applying patterns such as Invoke or Selection. For web applications, JavaScript's UI Events API dispatches synthetic events like MouseEvent for clicks or KeyboardEvent for typing directly on DOM elements, enabling browser-based automation tools to trigger handlers without altering the page source. In mobile testing, Android Debug Bridge (ADB) facilitates simulations via shell commands, such as input tap x y for single touches or input swipe x1 y1 x2 y2 for gestures, often integrated with frameworks like Appium for cross-device execution. iOS equivalents use XCTest or XCUITest for similar event injection.

Synchronization is critical to handle asynchronous behaviors in modern GUIs, where elements may load dynamically via scripting or network calls. Static waits involve fixed delays, such as sleeping for a set duration (e.g., 2 seconds) after an action to allow UI updates, though this can lead to inefficiencies in variable-response scenarios. Condition-based waits, conversely, poll for conditions like element visibility or presence until a timeout, using mechanisms such as checking DOM readiness or attribute changes. Dynamic synchronization techniques adaptively wait for state changes, reducing execution time by up to 87% compared to static delays while minimizing flakiness in test runs. Polling until an element appears, for example, repeatedly queries the UI tree at intervals until the target is locatable.

These methods address key challenges, particularly timing issues in asynchronous UIs where unsynchronized events can cause tests to fail prematurely or interact with stale states. For instance, in single-page applications, a simulation might precede content rendering, leading to missed interactions; condition-based polling mitigates this by ensuring readiness before proceeding. An example script snippet using pywinauto for a click simulation on a Windows application demonstrates this:
from pywinauto import Application

# The "uia" backend is required for control_type-based element lookup.
app = Application(backend="uia").connect(title="Notepad")
window = app.Notepad
button = window.child_window(title="OK", control_type="Button")
button.click_input()  # simulates a native left mouse click
This code connects to a running application, locates the button via its title and control type, and invokes a native click, with waiting handled by the library's built-in polling.

The evolution of interaction simulation traces from rudimentary 1990s capture-replay recorders, which scripted basic mouse and keyboard events for static interfaces, to sophisticated 2020s AI-assisted approaches that generate natural, context-aware behaviors like exploratory swipes or adaptive gestures. Early tools focused on simple event logging and playback, limited by platform silos, but the 2000s saw model-based expansions using event-flow graphs for scalable simulations across desktop and web apps. By the 2010s, mobile proliferation drove ADB and Appium integrations for touch simulation, while recent advancements incorporate computer vision and machine learning for robust, vision-based interactions resilient to layout changes. This progression, documented in a bibliometric analysis spanning 744 publications from 1990 to 2020, reflects a shift toward automated, intelligent execution that parallels the growth of GUI complexity.
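To make the condition-based waits described earlier concrete, the following is a minimal sketch using Selenium's WebDriverWait; the page URL and element id are illustrative assumptions:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/dashboard")

# Poll the DOM (default every 500 ms) for up to 10 s until the element is
# visible, instead of sleeping for a fixed interval; this avoids acting on a
# stale or not-yet-rendered state in asynchronous UIs.
status = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "status-banner"))
)
assert "Ready" in status.text
driver.quit()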

Event Capture and Verification

Event capture in GUI testing involves monitoring and recording user interactions and system responses to ensure accurate replay and analysis during automated validation. Techniques typically hook into underlying event streams provided by operating systems, such as using the Win32 function GetCursorPos to retrieve the current mouse cursor position in screen coordinates, which is essential for validating interactions like drag-and-drop operations where precise positioning must be confirmed. In systems employing the X Window System, event queues are manipulated using functions like XWindowEvent to search for and extract specific events matching a target window and event mask, thereby preserving sequence integrity for complex GUI behaviors. These captured events, often logged as sequences of primitive actions (e.g., clicks, hovers), form the basis for replay analysis, as demonstrated in event-flow models where tools like GUI Ripper reverse-engineer applications to build graphs of event interactions.

Verification follows capture by asserting that the GUI reaches expected states post-interaction, completing the test execution cycle. Common methods include checking UI element properties such as text content matching via the Name property or visibility through the IsOffscreen property, leveraging accessibility APIs like Microsoft UI Automation for robust, programmatic access to these states without relying on brittle screen coordinates. For visual fidelity, pixel-level comparison of baseline and current GUI screenshots detects regressions, a technique that gained prominence with the rise of CI/CD pipelines and tools addressing dynamic content challenges. Assertions on captured data yield pass/fail outcomes, with studies showing visual regression tools achieving up to 97.8% accuracy in fault detection, though flakiness from timing or environmental variances necessitates retries in empirical analyses to stabilize results without masking underlying issues. To mitigate capture inconsistencies, such as asynchronous event processing, testing frameworks integrate retries and caching mechanisms from APIs like UI Automation, ensuring reliable state checks even in flaky environments. Overall, these practices emphasize comprehensive event traces for post-execution review, with event-flow models enabling coverage metrics where each event is verified multiple times across generated test cases.
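A pixel-level comparison of the kind described above can be sketched with the Pillow imaging library; the baseline and current screenshot file names are illustrative assumptions:

from PIL import Image, ImageChops

baseline = Image.open("baseline_dialog.png").convert("RGB")
current = Image.open("current_dialog.png").convert("RGB")

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()  # None when the two renders are pixel-identical

if bbox is None:
    print("PASS: GUI matches the baseline")
else:
    # Crop and save the differing region for post-execution review.
    diff.crop(bbox).save("regression_region.png")
    print(f"FAIL: visual regression detected in region {bbox}")

Strict pixel equality is brittle against anti-aliasing and font rendering differences, which is why production tools layer perceptual or AI-based comparison on top of this basic mechanism.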

Tools and Frameworks

Capture-Replay Tools

Capture-replay tools are software utilities designed to automate graphical user interface (GUI) testing by recording user interactions, such as clicks, inputs, and other events, and then generating scripts that replay those actions to verify application behavior. These tools facilitate the creation of automated tests without requiring extensive programming knowledge, making them accessible for testers to simulate user sessions on web, desktop, or mobile applications. By capturing events during manual exploration, the tools produce scripts that can be replayed repeatedly to detect regressions or inconsistencies in the GUI.

Prominent examples include Selenium IDE, an open-source tool originating in the mid-2000s for web-based testing, which allows users to record browser interactions and export them as code in languages like Java or Python. Another is Sikuli, an image-based automation tool developed in the early 2010s that uses image recognition powered by OpenCV to identify and interact with elements via screenshots, proving useful for applications where traditional locators fail, such as legacy systems or those with dynamic visuals. For mobile environments, Appium stands out as a cross-platform framework supporting iOS, Android, and hybrid apps, enabling record-replay of touch gestures and device-specific events through a unified API. Emerging tools like Playwright, released in 2020, enhance capture-replay for web applications with improved cross-browser support and integration into CI/CD pipelines, as of 2025.

The typical workflow begins with the recording phase, where testers perform actions on the application under test while the tool logs events and element identifiers; this generates a raw script that can then be edited to add parameters, loops, or conditional logic. Replay involves executing the script against the application, often incorporating assertions to validate outcomes like element visibility or text content, which supports reuse of tests for regression or exploratory validation. This approach excels in scenarios requiring quick setup, as it bridges manual testing with automation, allowing non-developers to contribute to test suites efficiently.

Despite their ease of use, capture-replay tools suffer from fragility, as scripts tied to specific UI layouts or coordinates often break with even minor interface changes, such as element repositioning or styling updates. Maintenance overhead is significant, requiring frequent script revisions to adapt to evolving applications, which can negate initial time savings and limit scalability for complex or long-running tests. These tools remain widely adopted for automated GUI testing in the 2020s, with empirical studies showing their prevalence in open-source projects for straightforward web and mobile validation, though adoption patterns highlight a shift toward model-based and AI-driven approaches for robustness.
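The image-based replay style pioneered by Sikuli can be sketched in Python with the pyautogui library; the captured screenshot "ok_button.png" and the typed text are illustrative assumptions (the confidence parameter additionally requires OpenCV to be installed):

import pyautogui

try:
    # Locate the button by template matching against the recorded screenshot.
    location = pyautogui.locateCenterOnScreen("ok_button.png", confidence=0.9)
except pyautogui.ImageNotFoundException:  # newer versions raise instead of returning None
    location = None

if location is None:
    raise AssertionError("OK button not found: UI may have changed")

pyautogui.click(location)                    # replay the recorded click
pyautogui.typewrite("hello", interval=0.05)  # replay recorded keystrokes

This illustrates both the appeal of the approach—no locators or source access needed—and its fragility: any visual change to the button breaks the match.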

Model-Based and AI Tools

Model-based testing tools leverage formal models, such as state transition diagrams or graphs, to systematically generate and execute test cases for graphical user interfaces (GUIs), enabling comprehensive coverage of user interactions without manual scripting of every scenario. GraphWalker, an open-source tool, facilitates this by interpreting directed graph models to produce test paths that simulate GUI workflows, often integrated with automation frameworks like Selenium for web applications. These tools can generate test paths directly from UML diagrams, such as state machines, ensuring that transitions between GUI states are validated against expected behaviors.

AI-powered tools advance GUI testing by incorporating machine learning to enhance reliability and reduce maintenance overhead, particularly in dynamic environments where UI elements frequently change. Testim employs ML algorithms for self-healing locators that automatically detect and adapt to modifications in element attributes or positions, minimizing test failures due to UI evolution. Applitools utilizes visual AI to perform pixel-perfect comparisons of GUI screenshots, identifying layout discrepancies through computer vision techniques that go beyond traditional pixel matching. Mabl, developed post-2015, orchestrates end-to-end testing with AI-driven insights, including predictive analytics for test prioritization and automated healing of brittle scripts across web and mobile platforms. As of 2025, advancements in agentic AI are integrating autonomous test agents into these tools for more adaptive exploration in complex GUIs.

Key features of these tools include automatic adaptation to UI changes via self-healing mechanisms, where AI models retrain on updated DOM structures or visual cues to maintain locator stability. For visual validation, AI employs perceptual hashing algorithms to compute differences between screenshots, deriving hash values from structural features (e.g., a perceptual hash of edge-detected images) that tolerate minor variations like font rendering while flagging significant layout shifts; a sketch of this comparison follows below. In the 2020s, these tools have increasingly integrated with CI/CD pipelines, enabling seamless automated testing within DevOps workflows and supporting mobile-specific challenges, such as cross-platform GUIs in Flutter apps where AI assists in generating device-agnostic test scenarios. This integration addresses gaps in traditional testing by handling dynamic mobile layouts, with tools like Mabl providing cloud-based execution that scales across emulators and real devices.

A notable line of work involves adapting genetic algorithms to GUI contexts for repairing and evolving test suites. In one approach, a framework repairs broken GUI tests by evolving locators and sequences through mutation and selection, applied to seven synthetic programs mimicking common GUI constraints, achieving 99-100% feasible coverage with minimal human intervention. This method demonstrates how search-based techniques can optimize test maintenance in evolving GUIs.
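As a concrete sketch of the perceptual-hash comparison mentioned above, the following uses the imagehash library; the screenshot file names and the tolerance threshold are illustrative assumptions, not the algorithm of any particular commercial tool:

from PIL import Image
import imagehash

baseline_hash = imagehash.phash(Image.open("baseline.png"))
current_hash = imagehash.phash(Image.open("current.png"))

# Subtracting two hashes yields their Hamming distance: small values tolerate
# minor rendering noise (e.g., font anti-aliasing), larger values flag layout shifts.
distance = baseline_hash - current_hash
THRESHOLD = 8  # tunable tolerance, an assumption for this sketch

if distance > THRESHOLD:
    print(f"Visual regression suspected (distance={distance})")
else:
    print(f"Within tolerance (distance={distance})")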
