
User interface

A user interface (UI) is the component of an interactive system that enables communication between a human user and a machine, such as a computer, software application, or electronic device, by facilitating the input of commands and the presentation of output through sensory channels like sight, sound, or touch. It represents the only portion of the system directly perceptible to the user, serving as the boundary where intentions are translated into actions and vice versa. User interfaces encompass a range of elements designed to support effective interaction, often modeled through frameworks like DIRA, which breaks them down into four core components: devices (hardware for input/output, such as keyboards or screens), interaction techniques (methods like clicking or swiping), representations (visual or auditory depictions of data), and assemblies (how these elements combine into cohesive structures). Effective UI design prioritizes usability, consistency, and reduced cognitive load, guided by principles such as placing the user in control, minimizing memory demands through intuitive cues, and ensuring uniformity across interactions to prevent errors and enhance efficiency.

The evolution of user interfaces parallels the history of computing, originating with rudimentary batch processing systems in the mid-20th century, where users submitted jobs via punched cards without direct feedback. By the 1960s, command-line interfaces (CLI) emerged as the dominant form, allowing text-based input and output on terminals, as seen in systems developed for mainframes. The late 1960s and 1970s marked a pivotal shift toward graphical user interfaces (GUI), pioneered by Douglas Engelbart's 1968 demonstration of the mouse and windows, and further advanced at Xerox PARC through innovations like icons, menus, and direct manipulation by Alan Kay and his team. These developments laid the foundation for modern GUIs commercialized in the 1980s by products like the Apple Macintosh and Microsoft Windows, transforming computers from expert-only tools into accessible platforms for broad audiences.

Contemporary user interfaces extend beyond traditional GUIs to include diverse types tailored to specific contexts and technologies. Graphical user interfaces (GUI) rely on visual elements like windows and icons for mouse- or keyboard-driven navigation, while touchscreen interfaces enable direct manipulation via fingers on mobile devices. Voice user interfaces (VUI), powered by speech recognition, allow hands-free interaction as in virtual assistants like Siri and Alexa, and gesture-based interfaces use body movements for control in immersive environments like virtual reality. Command-line interfaces (CLI) persist in technical domains for precise, scriptable operations, and menu-driven interfaces guide users through hierarchical options in embedded systems. Recent advancements, including multimodal and adaptive UIs, integrate multiple input methods and personalize experiences based on user context, reflecting ongoing research in human-computer interaction (HCI) to improve accessibility and inclusivity.

Fundamentals

Definition and Scope

A user interface (UI) serves as the medium through which a human user interacts with a machine, software application, or device, enabling the bidirectional exchange of information to achieve intended tasks. This interaction point encompasses the mechanisms that translate user intentions into machine actions and vice versa, forming the foundational layer of human-machine communication. The scope of user interfaces is broad, spanning digital computing environments, such as software applications and hardware peripherals, and extending to non-digital contexts, including physical controls on everyday appliances like stoves and washing machines, as well as instrument panels in vehicles that provide drivers with essential operational feedback. Over time, UIs have evolved from predominantly physical affordances, such as mechanical switches and dials, to increasingly digital and multimodal forms that support seamless integration across these domains.

At its core, a UI comprises input methods that capture user commands, including traditional devices like keyboards and pointing devices as well as modern techniques such as touch gestures and voice recognition; output methods that deliver system responses, ranging from visual displays and textual readouts to auditory cues and tactile vibrations; and iterative feedback loops that confirm user actions, highlight discrepancies, or guide corrections to maintain an effective dialogue between user and system. While closely related, UI must be distinguished from user experience (UX), which addresses the holistic emotional, cognitive, and behavioral outcomes of interaction; UI specifically denotes the concrete, perceivable elements and pathways of engagement that users directly manipulate.

Key Terminology

In user interface (UI) design, affordance refers to the perceived and actual properties of an object or element that determine the possible actions a user can take with it, such as a button appearing clickable due to its raised appearance or shadow. This concept, originally from ecological psychology and adapted to HCI by Donald Norman, emphasizes how design cues signal interaction possibilities without explicit instructions. An interface metaphor is a conceptual device that leverages familiar real-world analogies to make abstract digital interactions intuitive, such as the desktop metaphor, where files appear as icons that can be dragged like physical documents. This approach reduces cognitive load by transferring users' existing knowledge to the interface, as outlined in foundational HCI literature.

An interaction paradigm describes a fundamental style of user engagement with a system, exemplified by direct manipulation, where users perform operations by directly acting on visible representations of objects, such as resizing a window by dragging its edge, providing immediate visual feedback. Coined by Ben Shneiderman in 1983, this paradigm contrasts with indirect methods like command-line inputs and has become central to graphical interfaces.

UI-specific jargon includes widget, an interactive control element in graphical user interfaces, such as buttons, sliders, or menus, that enables input or displays dynamic information. Layout denotes the spatial arrangement of these elements on the screen, organizing content hierarchically to guide attention and navigation, often using grids or flow-based systems for visual consistency. State represents the current condition of the interface, encompassing the data, modes, and visual properties of elements that dictate rendering and behavior at any moment, such as a loading spinner indicating ongoing processing.

Key distinctions in UI discourse include UI versus UX, where UI focuses on the tangible elements users interact with—the "what" of buttons, layouts, and visuals—while UX encompasses the overall emotional and practical experience—the "how it feels" in terms of ease, satisfaction, and efficiency. Similarly, front-end refers to the client-facing layer of development handling UI rendering via technologies like HTML, CSS, and JavaScript, whereas back-end manages server-side logic, data storage, and security invisible to users. The Xerox Alto computer, developed at Xerox PARC in 1973, introduced overlapping resizable windows as a core component of its pioneering graphical user interface, enabling multitasking through spatial organization of content.
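The notion of interface state can be made concrete with a small, toolkit-agnostic sketch. The ButtonState fields below are illustrative inventions, not any specific framework's API; the point is that a widget's rendering at any moment is a pure function of its current state:

```python
from dataclasses import dataclass

# A minimal sketch: the "state" of a widget is the data and flags that
# determine how it renders at a given moment. Field names are hypothetical.
@dataclass
class ButtonState:
    label: str = "Submit"
    enabled: bool = True      # rendered greyed out when False
    visible: bool = True      # omitted from the layout when False
    loading: bool = False     # render a spinner instead of the label

def render(state: ButtonState) -> str:
    """Produce a text depiction of the button from its state."""
    if not state.visible:
        return ""
    if state.loading:
        return "[ ... spinner ... ]"
    suffix = "" if state.enabled else " (disabled)"
    return f"[ {state.label}{suffix} ]"

print(render(ButtonState()))              # [ Submit ]
print(render(ButtonState(loading=True)))  # [ ... spinner ... ]
```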

Historical Evolution

Early Batch and Command-Line Interfaces (1945–1980s)

The earliest user interfaces in computing emerged during the post-World War II era with batch processing systems, which dominated from the mid-1940s to the 1960s. These systems relied on punched cards or tape as the primary input medium for programs and data, processed offline in non-real-time batches on massive mainframe computers. The ENIAC, completed in 1945 as the first general-purpose electronic digital computer, used plugboards and switches for configuration, but subsequent machines like the UNIVAC I (delivered in 1951) standardized punched cards for job submission, where operators would queue decks of cards representing entire programs for sequential execution without user intervention during runtime. This approach maximized hardware efficiency on expensive, room-sized machines but enforced a rigid, one-way interaction model, with output typically printed on paper after hours or days of processing.

The transition to command-line interfaces began in the 1960s with the advent of time-sharing systems, enabling multiple users to work interactively via teletype terminals connected to a central mainframe. The Compatible Time-Sharing System (CTSS), developed at MIT in 1961 under Fernando Corbató, ran on a modified IBM 7090 and allowed up to 30 users to edit and execute programs concurrently through typed commands, marking a shift from batch queues to real-time responsiveness. This model influenced subsequent systems, culminating in UNIX, initiated in 1969 at Bell Labs by Ken Thompson and Dennis Ritchie as a lightweight, multi-user operating system written initially in assembly language. UNIX's command-line paradigm emphasized a shell for interpreting text-based commands, fostering modular tools like pipes for chaining operations, which streamlined programmer workflows on the PDP-7 and later PDP-11 minicomputers.

Key advancements in the 1970s further refined command-line access, including the Bourne shell, introduced in 1977 by Stephen Bourne at Bell Labs as part of UNIX Version 7. This shell provided structured scripting with variables and control-flow structures, serving as the default interface for issuing commands like file manipulation (e.g., ls for listing directories) and process management, thereby standardizing interactive sessions across UNIX installations. DARPA's ARPANET, operational since its first connection in 1969, extended remote access by linking university and research computers over packet-switched networks, allowing users to log in from distant terminals and execute commands on remote hosts via protocols like Telnet, which democratized access to shared resources beyond local facilities.

Despite these innovations, early batch and command-line interfaces suffered from significant limitations, including a profound lack of visual feedback: users received no immediate graphical confirmation of actions, relying instead on text output or printouts that could take hours or days to appear. Error proneness was rampant due to the unforgiving nature of punched cards, where a single misalignment or mispunched character invalidated an entire job deck, necessitating manual re-entry and resubmission in batch systems. Command-line errors, such as mistyped commands in CTSS or UNIX shells, produced terse messages like "command not found," exacerbating issues without intuitive aids, and required users to memorize opaque syntax without on-screen hints. In social context, these interfaces were explicitly designed for expert programmers and engineers rather than general end-users, reflecting the era's view of computers as specialized tools for scientific and engineering computation.
High learning curves stemmed from the need for deep knowledge of machine architecture and low-level syntax, with interactions optimized for batch efficiency or throughput over accessibility; non-experts were effectively excluded, as systems like ENIAC demanded physical reconfiguration by technicians, and even time-sharing systems prioritized resource allocation for skilled operators. This programmer-centric focus, prevalent through the 1970s, underscored a design philosophy in which usability was secondary to raw computational power, limiting broader adoption until subsequent interface evolutions.

Emergence of Graphical and Text-Based Interfaces (1960s–1990s)

The emergence of graphical user interfaces (GUIs) in the 1960s marked a pivotal shift from purely text-based interactions, enabling direct manipulation of visual elements on screens. Ivan Sutherland's Sketchpad system, developed in 1963 as part of his doctoral thesis, introduced foundational concepts such as interactive drawing with a light pen, constraint-based object manipulation, and zoomable windows, laying the groundwork for modern computer-aided design (CAD) and user-driven design tools. This innovation demonstrated how users could intuitively create and edit diagrams, influencing subsequent research in human-computer interaction. Building on this, Douglas Engelbart's oN-Line System (NLS) at the Stanford Research Institute in 1968 showcased the "Mother of All Demos," featuring a mouse-driven interface with hypertext links, shared screens, and collaborative editing capabilities that foreshadowed networked computing environments.

The 1970s saw further advancements at Xerox PARC, where the Alto computer, released in 1973, integrated windows, icons, menus, and a pointer—core elements of the emerging WIMP (windows, icons, menus, pointer) paradigm—allowing users to manage multiple applications visually on a bitmapped display. Developed by researchers including Alan Kay and Butler Lampson, the Alto emphasized direct manipulation and desktop metaphors, such as file folders represented as icons, which made abstract computing tasks more accessible to non-experts. These systems, though experimental and limited to research labs, proved GUIs could enhance productivity by reducing reliance on memorized commands.

Parallel to graphical innovations, text-based interfaces evolved toward greater standardization in the 1980s to improve consistency across applications. Microsoft's MS-DOS, introduced in 1981 for the IBM PC, provided a command-line environment with rudimentary text menus and batch files, enabling early personal computing but still requiring users to type precise syntax. IBM's Systems Application Architecture (SAA), launched in 1987, addressed fragmentation by defining common user interface standards for menus, dialogs, and keyboard shortcuts across its DOS, OS/2, and mainframe systems, promoting interoperability in enterprise software, including early word processors. This framework influenced text UIs in productivity tools, making them more predictable without full graphical overhead.

The commercialization of GUIs accelerated in the mid-1980s, with Apple's Lisa computer in 1983 introducing one of the first commercial GUIs for office use, featuring pull-down menus, icons, and a mouse-driven pointer on a bitmapped display. Despite its high cost of $9,995, the Lisa's bitmapped screen and desktop metaphor drew from Xerox PARC innovations to support drag-and-drop file management. The Apple Macintosh, released in 1984 at a more accessible $2,495, popularized these elements through its "1984" advertisement and intuitive design, rapidly expanding GUI adoption among consumers and small businesses. The WIMP paradigm, refined at PARC and implemented in these systems, became the dominant model, emphasizing visual feedback and pointer-based navigation over text commands.

Despite these breakthroughs, early GUIs faced significant challenges from hardware constraints and adoption hurdles. Low-resolution displays, such as the Alto's 72 dpi bitmap screen or the Macintosh's 72 dpi built-in display, limited visual fidelity and made complex interactions cumbersome, often requiring users to tolerate jagged graphics and slow redraws.
In enterprise settings, resistance stemmed from the high cost of GUI-capable hardware—exemplified by the Lisa's commercial failure, owing largely to its price—and entrenched preferences for efficient text-based systems that conserved resources on mainframes. Outside the West, Japan's research contributed uniquely; for instance, NEC's PC-8001 series in the late 1970s incorporated early graphical modes for word processing with kanji support, adapting GUI concepts to handle complex scripts amid the rise of dedicated Japanese word processors such as Toshiba's JW-10, introduced in 1979. These developments helped bridge cultural and linguistic barriers, fostering GUI experimentation in Japan during the personal computing boom.

Modern Developments (2000s–Present)

The 2000s marked a transformative era for user interfaces with the advent of mobile and touch-based systems, shifting interactions from physical keyboards and styluses to direct, intuitive finger inputs. Apple's iPhone, released in 2007, pioneered a capacitive multi-touch display that supported gestures like tapping, swiping, and multi-finger pinching, enabling users to manipulate on-screen elements in a fluid, natural manner without intermediary tools. This innovation drew from earlier research in capacitive touch sensing but scaled it for consumer devices, fundamentally altering mobile interaction by prioritizing gestures over command-line or button-based navigation. Google's Android platform, launched in 2008, complemented this by introducing an open-source ecosystem that emphasized UI customization, allowing users to modify home screens, widgets, and themes through developer tools and app integrations, which democratized interface personalization across diverse hardware. The transition from stylus-reliant devices, such as PDAs in the 1990s, to gesture-based smartphones exemplified this evolution; the pinch-to-zoom gesture, popularized on the iPhone, permitted effortless content scaling via two-finger spreading or pinching, reducing interaction effort and enhancing precision for visual tasks like map navigation or photo viewing.

Entering the 2010s, web user interfaces evolved toward responsiveness and dynamism, driven by standards and frameworks that supported seamless cross-device experiences. The HTML5 specification, finalized as a W3C Recommendation in 2014, introduced native support for multimedia, canvas rendering, and real-time communication via APIs like WebSockets, eliminating reliance on plugins like Adobe Flash and enabling interactive elements such as drag-and-drop and video playback directly in browsers. This facilitated responsive design principles, where UIs adapt layouts fluidly to screen sizes using CSS media queries, a cornerstone for mobile-first web applications. Concurrently, Facebook's React framework, open-sourced in 2013, revolutionized single-page applications (SPAs) by employing a virtual DOM for efficient updates, allowing developers to build component-based interfaces that render dynamically without full page refreshes, thus improving performance and user engagement on social platforms and e-commerce sites.

The 2020s have integrated artificial intelligence and machine learning capabilities into user interfaces, fostering adaptive and context-aware interactions that anticipate user needs. Apple's Siri, debuted in 2011 as an early consumer voice assistant, leveraged natural language processing to handle queries via speech, marking an early step toward conversational UIs; by 2025, it had evolved into a multimodal system incorporating voice, text, visual cues, and device sensors for integrated responses across apps and ecosystems. In November 2025, Apple reportedly planned to integrate a custom version of Google's Gemini AI model to further enhance Siri's reasoning, context awareness, and language processing while maintaining privacy through on-device and private cloud computing. In parallel, augmented and virtual reality interfaces advanced with zero-touch paradigms, as seen in Apple's Vision Pro headset launched in 2024, which uses eye-tracking, hand gestures, and voice controls for spatial computing, allowing users to manipulate 3D content through natural movements without physical controllers, blending digital overlays with real-world environments for immersive productivity and entertainment. Overarching trends in this period include machine learning-driven personalization, where algorithms analyze user data to tailor interfaces, such as recommending layouts or content based on behavior, enhancing relevance but amplifying privacy risks through pervasive tracking.
Ethical concerns have intensified around manipulative designs known as dark patterns, which exploit cognitive biases to nudge users toward unintended actions like excessive data sharing or subscriptions; these practices prompted regulatory responses, including the European Union's General Data Protection Regulation (GDPR), in force since 2018, which enforces transparent consent interfaces and prohibits deceptive UIs to safeguard user autonomy in digital interactions.

Types of Interfaces

Command-Line and Text-Based Interfaces

Command-line interfaces (CLIs) and text-based user interfaces (TUIs) represent foundational paradigms for interacting with computer systems through textual input and output, primarily via keyboard commands processed in a shell environment. In CLIs, users enter commands in a sequential, line-by-line format, which the system interprets and executes, returning results as text streams to the console. This mechanic enables direct, precise control over system operations without reliance on visual metaphors or pointing devices. For instance, the Bourne Again SHell (Bash), developed by Brian Fox for the GNU Project and first released in 1989, exemplifies this approach by providing an interactive shell for Unix-like systems that processes typed commands and supports command history and editing features. Similarly, Microsoft PowerShell, initially released in November 2006 as an extensible automation engine, extends CLI mechanics to Windows environments, allowing object-oriented scripting and integration with .NET for administrative tasks. These interfaces remain integral to modern computing as of 2025, powering routine operations in Linux distributions and server management.

The advantages of CLIs and TUIs lie in their efficiency for experienced users, minimal resource demands, and robust support for automation through scripting. Expert operators can execute complex sequences rapidly by typing concise commands, often outperforming graphical alternatives in speed and precision for repetitive or remote tasks. Unlike graphical interfaces, which require rendering overhead, text-based systems consume fewer computational resources, making them suitable for resource-constrained environments and enabling operation on headless servers. A key enabler of scripting is the pipeline mechanism in Unix systems, invented by Douglas McIlroy and introduced in Version 3 Unix in 1973, which chains command outputs as inputs to subsequent commands (e.g., ls | grep file), facilitating modular, composable workflows without intermediate files. This philosophy of small, specialized tools connected via pipes promotes reusable scripts, enhancing productivity in programming and system administration, as sketched below.
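The same chaining can be reproduced programmatically. A minimal sketch, using Python's standard subprocess module on a Unix-like system to mirror ls | grep py (the filter pattern "py" is just an example):

```python
import subprocess

# Recreate `ls | grep py`: the stdout of one process becomes the stdin of
# the next, with no intermediate file -- the essence of the Unix pipeline.
ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "py"], stdin=ls.stdout,
                        stdout=subprocess.PIPE, text=True)
ls.stdout.close()  # lets ls receive SIGPIPE if grep exits early
matches, _ = grep.communicate()
print(matches)
```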
Variants of text-based interfaces include terminal emulators and TUIs that add structure to the basic CLI model. Terminal emulators simulate hardware terminals within graphical desktops, providing a windowed environment for text I/O; xterm, created in 1984 by Mark Vandevoorde for the X Window System, was an early example, emulating DEC VT102 terminals to run legacy applications. TUIs build on this by incorporating pseudo-graphical elements like menus, windows, and forms using text characters, often via libraries such as ncurses. Originating from the original curses library, developed around 1980 at the University of California, Berkeley, to support screen-oriented games like Rogue, ncurses (as a modern, portable implementation) enables developers to create interactive, block-oriented layouts in terminals without full graphical support. These variants maintain text-only constraints while improving usability for configuration tools and editors.

In contemporary applications, CLIs and TUIs dominate DevOps practices and embedded systems due to their automation potential and reliability in non-interactive contexts. Tools like the AWS Command Line Interface (AWS CLI), generally available since September 2, 2013, allow developers to manage cloud resources programmatically, integrating with CI/CD pipelines for tasks such as deploying infrastructure. In CI/CD workflows, AWS CLI commands enable scripted orchestration of services like EC2 and S3, reducing manual intervention and supporting scalable automation. For embedded systems, CLIs provide lightweight debugging and control interfaces over serial connections, allowing engineers to test features without graphical overhead; for example, UART-based shells in microcontrollers facilitate real-time diagnostics and configuration in resource-limited devices like IoT sensors. These uses underscore the enduring role of text-based interfaces in high-efficiency, backend-oriented computing as of 2025.
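As a hedged sketch of such scripted orchestration: aws s3 sync is a real AWS CLI subcommand that uploads only changed files, but the bucket and directory names below are placeholders, and a real pipeline would add credentials handling and error reporting:

```python
import subprocess

def deploy_static_site(build_dir: str, bucket: str) -> None:
    """Push a local build directory to an S3 bucket (names are illustrative)."""
    # check=True raises CalledProcessError if the CLI exits non-zero,
    # so a failed deploy fails the pipeline step instead of passing silently.
    subprocess.run(["aws", "s3", "sync", build_dir, f"s3://{bucket}"],
                   check=True)

deploy_static_site("./build", "example-bucket")
```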

Graphical User Interfaces

Graphical user interfaces (GUIs) represent a paradigm in human-computer interaction that employs visual elements to facilitate user engagement with digital systems, primarily through desktop operating systems and web browsers. Originating from research at Xerox PARC in the 1970s, GUIs shifted computing from text-based commands to direct manipulation via graphical metaphors, enabling users to interact with on-screen representations of objects and actions. This approach, formalized as the WIMP model—standing for windows, icons, menus, and pointer—became the foundational structure for modern visual interfaces, allowing intuitive navigation without requiring memorized syntax.

The core structure of GUIs revolves around elements designed to support efficient multitasking and object-oriented interaction. Windows provide resizable, overlapping frames for running multiple applications simultaneously, enabling users to organize and switch between tasks seamlessly. Icons serve as visual shortcuts to files, folders, or programs, allowing quick selection and manipulation through point-and-click actions. Menus offer hierarchical lists of options, typically accessed via pull-down or context mechanisms, to present commands in a structured manner. The pointer, controlled by devices like the mouse, acts as the primary selection tool, translating physical gestures into precise on-screen movements for dragging, dropping, and highlighting.

Prominent examples of WIMP-based GUIs include Microsoft's Windows 11, released in 2021, which enhances multitasking with features like Snap Layouts for arranging windows in predefined grids and virtual desktops for workspace segregation. Similarly, the GNOME desktop environment, initially developed in 1997 as part of the GNU Project, embodies WIMP principles in Linux distributions; its 2025 update in version 49 introduces refined window management, adaptive theming, and improved pointer interactions for high-resolution displays. These implementations demonstrate how WIMP elements persist as the backbone of desktop GUIs, adapting to contemporary hardware and user needs.

In web environments, GUIs evolved through browser technologies that extended WIMP concepts to distributed applications. Cascading Style Sheets (CSS), standardized by the W3C in 1996, enabled the separation of visual presentation from content, allowing developers to create icon-like elements, window-resembling panels, and menu structures using layout properties. JavaScript, introduced in 1995 by Netscape, added dynamic interactivity, powering pointer-driven events such as hover effects and drag-and-drop functionalities that mimic desktop behaviors. Responsive design principles further advanced web GUIs by ensuring adaptability across devices; for instance, Bootstrap, launched in 2011 by Twitter engineers, provides a mobile-first grid system and component library that facilitates consistent WIMP-style interfaces on varying screen sizes.

GUIs offer distinct advantages, particularly in accessibility for novice users, by leveraging visual feedback and spatial metaphors that align with human perceptual strengths, reducing the cognitive effort needed to learn and perform tasks compared to command-line alternatives. This intuitiveness stems from direct manipulation, where users see immediate results of actions like resizing windows or selecting icons, fostering a sense of control and reducing error rates in routine operations.
Hardware enablers have been crucial: the mouse, invented by Douglas Engelbart in 1964 at the Stanford Research Institute, provided the precise pointer control essential for WIMP interactions, though it gained widespread adoption only in the 1980s with the Apple Macintosh's integration of graphical displays. High-DPI screens, popularized since Apple's Retina displays in 2010, enhance visual clarity by rendering finer icons and text, improving feedback precision on modern devices without straining user eyesight. Despite these benefits, GUIs face challenges such as visual clutter, where dense arrangements of windows, icons, and menus can overwhelm users with excessive stimuli, leading to slower decision-making and higher cognitive load during complex tasks. Achieving consistency across platforms remains problematic; for example, macOS employs a dock-based menu paradigm with uniform window controls, while Linux environments like GNOME allow extensive customization that can result in divergent pointer behaviors and icon placements, complicating user transitions between systems. These issues underscore the need for streamlined design to balance expressiveness with usability in evolving GUI ecosystems.
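The WIMP structure described above can be illustrated with a minimal, runnable sketch using Python's bundled tkinter toolkit. This is not how Windows 11 or GNOME implement WIMP internally, only a compact demonstration of the four elements: a window, a menu, a widget acting on click, and pointer tracking:

```python
import tkinter as tk

# Window: a resizable top-level frame managed by the desktop.
root = tk.Tk()
root.title("WIMP sketch")

# Menu: a hierarchical pull-down list of commands.
menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

# Widget (here a label plus a button) giving immediate visual feedback.
status = tk.Label(root, text="Click the button or move the pointer")
status.pack()
button = tk.Button(root, text="Press me",
                   command=lambda: status.config(text="Button pressed"))
button.pack()

# Pointer: <Motion> events fire as the mouse moves over the window.
root.bind("<Motion>", lambda e: root.title(f"Pointer at ({e.x}, {e.y})"))

root.mainloop()
```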

Emerging and Multimodal Interfaces

Emerging user interfaces extend beyond traditional visual and textual paradigms by incorporating diverse input modalities such as touch, voice, gestures, and even neural signals, enabling more natural and intuitive interactions. Touch-based interfaces gained prominence with the introduction of capacitive screens in the iPhone in 2007, which allowed direct manipulation through finger gestures on a responsive surface, revolutionizing mobile interaction. Swipe gestures, enabling fluid navigation like scrolling or dismissing content, became standard in mobile applications following early implementations in touchscreen devices from 2009 onward, with apps like Tinder popularizing left-right swipes for decision-making in 2012. To enhance feedback, haptic technology provides tactile responses; Apple's Taptic Engine, which debuted in 2015 with the Apple Watch and iPhone 6s, uses linear resonant actuators to deliver precise vibrations simulating button presses or textures, improving user confirmation without visual cues.

Voice and conversational interfaces leverage natural language processing (NLP) to facilitate hands-free interactions, shifting from rigid commands to fluid dialogues. Amazon's Alexa, launched in 2014 with the Echo device, pioneered widespread voice-activated control for tasks like music playback and smart home management, processing billions of interactions weekly by 2019 through cloud-based NLP. Advancements in generative AI have further evolved these systems; integrations of models like ChatGPT, starting in 2023, enable context-aware conversations in apps for productivity and customer service, allowing users to query complex information via natural speech rather than structured inputs.

Immersive interfaces place users in augmented or virtual environments, blending digital overlays with the physical world or creating fully synthetic spaces. Virtual reality (VR) headsets like the Oculus Rift, crowdfunded in 2012, introduced head-tracked 3D interfaces for gaming and simulation, using stereoscopic displays and motion sensors to simulate presence. Augmented reality (AR) and mixed reality have advanced with devices like Meta's Quest series; the 2025 Horizon OS update for Quest 3 introduces an evolved spatial interface with passthrough camera enhancements, allowing seamless blending of real-world vision with virtual elements for intuitive navigation via gaze and hand tracking. Brain-computer interfaces (BCIs) represent a frontier in direct neural interaction; Neuralink's prototypes achieved the company's first human implant in January 2024, enabling thought-controlled cursor movement for paralyzed individuals through wireless electrode arrays decoding brain signals. As of November 2025, Neuralink has implanted devices in at least 12 individuals, with users demonstrating capabilities such as controlling computers for gaming and communication.

Multimodal interfaces fuse multiple input types for richer experiences, such as combining voice commands with gestures in smart home systems, where users might say "dim the lights" while waving to adjust intensity, as seen in integrated platforms like Amazon Alexa with compatible smart home hubs. This fusion enhances accessibility and efficiency but introduces challenges, particularly privacy risks in always-on systems that continuously monitor audio, video, or biometric signals, potentially leading to unauthorized data collection without robust consent and security mechanisms.
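The voice-plus-gesture fusion just described can be sketched as a timing-windowed combination of two event streams. Everything here is a hypothetical illustration (the type names, the two-second window, the intensity mapping), not a real platform API:

```python
from dataclasses import dataclass
import time

@dataclass
class VoiceIntent:
    action: str        # e.g. "dim_lights", as produced by a speech recognizer
    timestamp: float

@dataclass
class GestureSample:
    intensity: float   # 0.0-1.0, e.g. a normalized wave amplitude
    timestamp: float

def fuse(voice: VoiceIntent, gesture: GestureSample,
         window_s: float = 2.0) -> dict | None:
    """Combine modalities only when they occur close together in time."""
    if abs(voice.timestamp - gesture.timestamp) > window_s:
        return None
    if voice.action == "dim_lights":
        # Voice supplies the command; the gesture supplies its parameter.
        return {"device": "lights",
                "brightness": round(1.0 - gesture.intensity, 2)}
    return None

now = time.time()
print(fuse(VoiceIntent("dim_lights", now), GestureSample(0.7, now + 0.5)))
# {'device': 'lights', 'brightness': 0.3}
```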

Design Principles

Core Principles of Interface Quality

Core principles of interface quality form the foundation for designing user interfaces that are intuitive, reliable, and effective in supporting user tasks. These principles, derived from human-computer interaction research, prioritize user needs by ensuring interfaces are predictable, unobtrusive, and responsive. Key among them are consistency, simplicity, efficiency with error prevention, and immediate feedback, which collectively reduce user frustration and enhance task completion rates.

Consistency ensures uniform behavior and appearance across interface elements, allowing users to apply learned interactions without relearning. For instance, standard icons, menu structures, and response patterns—such as using the same icon for the same action throughout an application—minimize cognitive effort and errors. This principle, articulated in Jakob Nielsen's usability heuristics, promotes adherence to platform conventions and internal standards to foster familiarity. Immediate feedback complements consistency by providing clear, real-time responses to user actions, such as visual confirmations of button presses or progress indicators during operations, which reassure users that their inputs are recognized and processed. Without such feedback, users may repeat actions unnecessarily, leading to inefficiency.

Simplicity focuses on minimizing cognitive load by presenting only essential information and controls, thereby avoiding overwhelming users with extraneous details. Techniques like progressive disclosure achieve this by initially showing basic features and revealing advanced options only when needed, such as expanding a collapsed menu for expert users. This approach, rooted in minimalist design principles, has been shown to reduce task completion time by deferring complexity and preventing information overload, particularly in complex software environments.

Efficiency and error prevention enable seamless interaction by accommodating varying user expertise while safeguarding against mistakes. For expert users, interfaces incorporate shortcuts like keyboard accelerators or customizable workflows to accelerate routine tasks, aligning with Nielsen's heuristic for flexibility and efficiency. To prevent errors, designs include forgiving mechanisms such as confirmation dialogs for destructive actions and undo functions, which allow recovery without penalty.

A quantitative foundation for efficiency in pointing-based interactions is provided by Fitts's law, which models the time required to acquire a target with a pointing device. The law states that movement time T is given by T = a + b \log_2 \left( \frac{D}{W} + 1 \right), where a and b are empirically determined constants, D is the distance to the target, and W is the target's width. This principle guides target sizing and placement in graphical interfaces, ensuring larger or closer elements are easier and faster to select, thereby optimizing usability in touch and mouse-driven environments.
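The formula can be applied directly. The sketch below uses purely illustrative constants, since a and b must be measured empirically for each device and user population; the comparison shows how doubling a target's width lowers the predicted movement time:

```python
import math

def fitts_movement_time(a: float, b: float,
                        distance: float, width: float) -> float:
    """Shannon formulation of Fitts's law: T = a + b * log2(D / W + 1)."""
    return a + b * math.log2(distance / width + 1)

# a=0.1 s and b=0.15 s/bit are made-up values for illustration only.
print(fitts_movement_time(a=0.1, b=0.15, distance=512, width=32))  # ~0.71 s
print(fitts_movement_time(a=0.1, b=0.15, distance=512, width=64))  # ~0.58 s
```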

User-Centered Design Models

User-centered design models emphasize frameworks that integrate psychological insights and iterative processes to align interfaces with users' cognitive processes, expectations, and needs. These models shift focus from technical specifications to human needs, ensuring interfaces facilitate intuitive interactions and minimize errors. Key contributions from seminal works in the 1980s onward provide structured approaches to achieve this alignment.

The Principle of Least Astonishment (POLA), also known as the Principle of Least Surprise, posits that user interfaces should behave in ways that match users' preconceived expectations to avoid confusion or unexpected outcomes. This principle advocates for designs that do not "astonish" users by adhering to familiar conventions, thereby enhancing predictability and trust in the system. For instance, in menu systems, options should respond in expected manners, such as confirming deletions only when explicitly requested, to prevent erroneous actions.

Don Norman's contributions to user-centered design introduce psychological models that address how users perceive and interact with interfaces. In his 1988 book The Design of Everyday Things, Norman describes affordances as properties of objects that suggest possible actions, such as a button's raised edge implying it can be pressed, drawing from perceptual psychology to guide user intuition. Complementing affordances are signifiers, which provide explicit cues about how to use those affordances, like icons or labels that clarify functionality and prevent misinterpretation. These elements support habit formation by making interfaces self-evident, allowing users to develop reliable interaction patterns over time.

Norman's model further incorporates the Gulf of Execution and Gulf of Evaluation to explain interaction challenges. The Gulf of Execution represents the gap between a user's intentions and the actions required by the system, bridged by clear mappings and constraints that translate goals into executable steps. Conversely, the Gulf of Evaluation covers the difficulty in interpreting system feedback, addressed through immediate and unambiguous responses that confirm outcomes. By minimizing these gulfs, designs promote seamless cycles of action and assessment, fostering user confidence and reducing errors in habitual use. This framework, rooted in cognitive science, underscores the need for interfaces to mirror users' mental models.

Peter Morville's UX Honeycomb, introduced in 2004, offers a multifaceted framework for evaluating and designing user experiences beyond mere usability. Represented as a hexagonal diagram, it outlines seven interconnected facets that collectively define a robust UX: useful (solving real needs), usable (easy to navigate), desirable (emotionally engaging), findable (locatable content), accessible (inclusive for diverse users), credible (trustworthy presentation), and valuable (delivering business or personal worth). Morville emphasizes that these facets are interdependent, requiring balanced attention to create holistic experiences; for example, a highly usable but non-credible interface may fail to retain users. This model serves as a diagnostic tool for designers, highlighting trade-offs and priorities in user-centered projects.

The Double Diamond model, developed by the British Design Council in 2003, provides an iterative framework for design thinking, visualized as two diamonds representing divergent and convergent phases.
It consists of four stages: Discover (exploring user needs and insights through research), Define (synthesizing findings to frame problems), Develop (ideating and prototyping solutions), and Deliver (testing, refining, and implementing). This non-linear process encourages cycles of iteration, allowing teams to revisit earlier stages based on user feedback, thereby ensuring designs evolve in response to real behaviors and contexts. Widely adopted in UX practice, it promotes empathy-driven innovation while accommodating complexity in modern interfaces.

Evaluation and Usability

Usability Metrics and Testing

Usability metrics provide quantitative and qualitative measures to evaluate the effectiveness of user interfaces, focusing on how well they enable users to achieve goals. Key metrics include task success rate, which assesses the percentage of users who complete intended tasks without assistance, often serving as a foundational indicator of overall effectiveness. Error rates measure the frequency of user mistakes, such as incorrect inputs or navigation failures, highlighting potential interface flaws that lead to frustration or inefficiency. Task completion time quantifies the duration required to finish a task, revealing whether the interface supports efficient workflows; shorter times generally indicate better usability, though context like task complexity must be considered. Satisfaction metrics capture subjective user perceptions, with the System Usability Scale (SUS) being a widely adopted tool consisting of a 10-item questionnaire scored from 0 to 100, where higher scores reflect greater perceived ease of use. Developed in 1986, SUS offers a quick, reliable benchmark for comparing interfaces across studies.

International standards like ISO 9241-11 (2018) formalize usability as the extent to which a product can be used to achieve specified goals with effectiveness (accuracy and completeness of task achievement), efficiency (resources expended in relation to accuracy), and satisfaction (comfort and acceptability) in a specified context. These standards guide metric selection by emphasizing balanced evaluation of objective performance and subjective satisfaction.

Testing methods complement metrics by identifying issues through structured approaches. Heuristic evaluation involves experts reviewing interfaces against established principles, such as Jakob Nielsen's 10 usability heuristics from 1994, which include visibility of system status, user control and freedom, and error prevention, to detect potential problems efficiently without user involvement. A/B testing compares two interface variants by exposing user groups to each and measuring performance differences, often using metrics like task success or engagement to determine the superior design quantitatively. Eye-tracking, advanced with accessible tools in the post-2000s era, records gaze patterns to visualize attention distribution, fixations, and saccades, uncovering mismatches between user focus and interface elements like overlooked buttons or confusing layouts.

Practical tools facilitate these evaluations, particularly for digital interfaces. Google Analytics, launched in 2005, tracks web usability through metrics like bounce rates, session duration, and conversion paths, enabling indirect assessment of navigation efficiency and user drop-off points. Remote testing platforms such as UserTesting, founded in 2007, allow unmoderated studies where participants record sessions, providing video, audio, and think-aloud feedback to analyze real-time interactions and compute metrics like error rates remotely. These tools democratize usability evaluation, supporting iterative improvements aligned with ISO standards and core design principles.
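SUS scoring follows a fixed published rule: odd-numbered (positively worded) items contribute response minus 1, even-numbered (negatively worded) items contribute 5 minus response, and the sum is scaled by 2.5 onto a 0-100 range. A short sketch with a made-up response set:

```python
def sus_score(responses: list[int]) -> float:
    """Compute the System Usability Scale score from ten 1-5 Likert responses."""
    assert len(responses) == 10, "SUS requires exactly ten items"
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based: item 1 is odd
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Example: a fairly positive questionnaire lands above the commonly cited
# average of about 68.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```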

Accessibility and Inclusivity

Accessibility in user interfaces ensures that digital systems and applications can be perceived, operated, understood, and robustly interacted with by people with disabilities, promoting equal access to information and services. This is critical given that an estimated 1.3 billion people, or 16% of the global population, experience significant disabilities, a figure projected to rise due to aging populations and chronic health conditions. The primary international standard for web accessibility is the Web Content Accessibility Guidelines (WCAG) 2.2, developed by the World Wide Web Consortium (W3C), which outlines success criteria across four core principles known as POUR: Perceivable (content must be presented in ways users can perceive), Operable (interfaces must be navigable and usable), Understandable (information and operation must be comprehensible), and Robust (content must work with current and future technologies, including assistive tools). These guidelines apply to a wide range of disabilities, including visual, auditory, motor, cognitive, and neurological impairments, and emphasize techniques such as sufficient color contrast, keyboard navigation support, alt text for images, and captions for multimedia.

Inclusivity in design broadens this scope to address diverse user needs beyond disabilities, incorporating variations in age, culture, language, gender, and situational contexts to create equitable experiences for all. Inclusive design is defined as an approach that proactively recognizes potential exclusions, learns from diverse perspectives, and solves specific challenges to benefit broader audiences—a method encapsulated in Microsoft's three foundational principles: recognize exclusion, learn from diversity, and solve for one, extend to many. This mindset overlaps with accessibility by ensuring interfaces are flexible and adaptable; for instance, features like resizable text or multilingual support not only aid those with low vision or non-native speakers but also enhance overall user satisfaction across demographics. Unlike accessibility, which often focuses on legal compliance and accommodations for disabilities, inclusivity emphasizes proactive design and equity, such as avoiding biased algorithms in AI-driven interfaces or designing for low-bandwidth environments in global contexts.

Key practices for achieving accessibility and inclusivity involve user-centered testing with diverse participants, including those with disabilities, and adhering to established frameworks. For example, WCAG conformance levels (A, AA, AAA) guide implementation, with AA being the common target for most websites to ensure broad usability without overwhelming complexity. Inclusive design principles further recommend providing comparable experiences across devices, prioritizing essential content, and offering user control over preferences like animation speeds or input methods. Real-world applications include Microsoft's Xbox Adaptive Controller, which uses modular components for customizable input, benefiting gamers with motor impairments while appealing to hobbyists seeking personalization. Similarly, curb cuts in urban design—originally for wheelchair users—illustrate how inclusive solutions extend utility to parents with strollers, delivery workers, and cyclists, a concept paralleled in UI by features like voice navigation that assist not just the visually impaired but also hands-free users in vehicles.

Challenges in implementing these aspects include balancing innovation with compliance, as emerging technologies like virtual reality may introduce new barriers for users with vestibular disorders, necessitating ongoing updates to standards like WCAG.
Legal mandates, such as the Americans with Disabilities Act (ADA) in the United States and the European Accessibility Act in the European Union, reinforce these practices by requiring accessible digital interfaces in public and commercial sectors. Ultimately, integrating accessibility and inclusivity from the design phase—rather than as an afterthought—yields more robust, marketable products, as evidenced by studies showing that accessible websites rank higher in search engines and reduce support costs through intuitive navigation.
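WCAG's Perceivable principle includes quantitative criteria like color contrast. A minimal sketch, assuming the standard WCAG 2.x formulas (relative luminance of linearized sRGB channels, and a 4.5:1 AA threshold for normal-size text):

```python
def _channel(c8: int) -> float:
    """Linearize an 8-bit sRGB channel per the WCAG relative-luminance formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int],
                   bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background reaches the maximum 21:1 ratio,
# comfortably above the 4.5:1 required for WCAG AA normal text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```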

References

  1. [1]
    Introduction to user interfaces
    Aug 21, 2025 · A user interface can be broken down into four constitutive elements according to the devices, interaction techniques, representations, and ...
  2. [2]
    DIRA: A model of the user interface - ScienceDirect.com
    Similarly, in a textbook on UI design, Lauesen (2005) wrote that “the user interface is the part of the system that you see, hear and feel” (p. 4). These ...
  3. [3]
    None
    ### Extracted and Summarized Content on User Interface Design
  4. [4]
    (PDF) User interface history - ResearchGate
    Apr 5, 2008 · User Interfaces have been around as long as computers have existed, even well before the field of Human-Computer Interaction was established.
  5. [5]
    A review of existing and potential computer user interfaces for ...
    May 16, 2018 · In the 1960s, the command-line interface (CLI) was the only way to communicate with computers.
  6. [6]
    (PDF) User Friendly: A Short History of the Graphical User Interface
    This paper examines the historical development of the graphical user interface (GUI) in the United States from 1970 to 1993, culminating in the release of ...
  7. [7]
    User interface history | CHI '08 Extended Abstracts on Human ...
    User Interfaces have been around as long as computers have existed, even well before the field of Human-Computer Interaction was established.
  8. [8]
    What Is a User Interface (UI)? | Definition from TechTarget
    Apr 30, 2024 · Types of user interfaces. The various types of UI include the following: Graphical user interface (GUI). Web UIs and other digital products ...
  9. [9]
    What is User Interface (UI)? (Types & Features) - BrowserStack
    Here are different types of UI design: 1. Command-line Interface (CLI): CLI is a UI to interact with computers- run programs, manage files, etc. ex. command ...
  10. [10]
    What is User Interface (UI)? Meaning & Types | Simpplr
    What is a user interface? A user interface (UI) is the point of interaction between a user and a digital product, such as a website, application or software.Types of user interfaces · History of user interface · Important elements of a user...Missing: sources - - | Show results with:sources - -<|control11|><|separator|>
  11. [11]
    Defining Recommendations to Guide User Interface Design - NIH
    Sep 30, 2022 · This study aimed to analyze and synthesize existing user interface design recommendations and propose a practical set of recommendations that guide the ...
  12. [12]
  13. [13]
    Human-Machine Interaction - an overview | ScienceDirect Topics
    In a complex control system, the human–machine interface attempts to give users the means to perceive and manipulate huge quantities of information under ...
  14. [14]
    The Definition of User Experience (UX) - NN/G
    Aug 8, 1998 · "User experience" (UX) encompasses all aspects of the end-user's interaction with the company, its services, and its products.
  15. [15]
    [PDF] Conceptual Models & Interface Metaphors - Stanford HCI Group
    Feb 16, 2022 · Definition ? “The transference of the relation between one set of ... • We use metaphor in UI design to leverage existing conceptual models.
  16. [16]
    The Role of Metaphors in User Interface Design - ScienceDirect.com
    Interface metaphors help establish user expectations and encourage predictions about system behavior. A good example is the desktop metaphor. This metaphor ...
  17. [17]
    [PDF] Direct Manipulation: - UMD Computer Science
    and users can concentrate on their tasks. Direct Manipulation: A Step Beyond Programming. Languages. Ben Shneiderman, University of Maryland.
  18. [18]
    Direct manipulation: A step beyond programming languages ...
    Direct manipulation involves three interrelated techniques:1. Provide a physically direct way of moving a cursor or manipulating the objects of interest.2.Missing: paradigm | Show results with:paradigm
  19. [19]
    What is a widget? – Definitions from TechTarget.com
    Nov 16, 2022 · A widget is an element of a graphical user interface that displays information or provides a specific way for a user to interact with the operating system (OS) ...
  20. [20]
    State holders and UI state | App architecture - Android Developers
    Sep 3, 2025 · UI state is not a static property, as application data and user events cause UI state to change over time. Logic determines the specifics of the ...
  21. [21]
    What Is User Experience (and What Is It Not)? - Nielsen Norman Group
    Nov 15, 2024 · Don Norman and Jakob Nielsen summed up the distinction between UX and UI nicely with this example: “Consider a website with movie reviews.Missing: source | Show results with:source
  22. [22]
    Front End vs Back End - Difference Between Application Development
    Frontend development focuses on creating fully functional, responsive, and well-designed user interfaces. Backend development involves creating reliable ...
  23. [23]
    50 Years Later, We're Still Living in the Xerox Alto's World
    Mar 1, 2023 · The Alto was a wild departure from the computers that preceded it. It was built to tuck under a desk, with its monitor, keyboard, and mouse on top.Missing: origin | Show results with:origin<|control11|><|separator|>
  24. [24]
    Punch Cards for Data Processing | Smithsonian Institution
    Punch cards became the preferred method of entering data and programs onto them. They also were used in later minicomputers and some early desktop calculators.
  25. [25]
    1961 | Timeline of Computer History
    CTSS was developed by the MIT Computation Center under the direction of Fernando Corbató and was based on a modified IBM 7090, then later 7094, mainframe ...
  26. [26]
    [PDF] The Evolution of the Unix Time-sharing System* - Nokia
    However, Unix was born in 1969 not 1974, and the account of its development makes a little-known and perhaps instructive story. This paper presents a technical ...
  27. [27]
    [PDF] An Introduction to the UNIX Shell - CL72.org
    Nov 1, 1977 · The shell is a command programming language that provides an interface to the UNIX† operating system. Its features include control-flow ...Missing: Stephen | Show results with:Stephen
  28. [28]
    [PDF] A History of the ARPANET: The First Decade - DTIC
    Apr 1, 1981 · The techniques for remote control of computers in the field developed within the ARPANET project are probably more broadly applicable to the ...
  29. [29]
    [PDF] The Compatible Time-Sharing System - Bitsavers.org
    Fano. In November 1961 an experimental time- sharing system, which was an early version of CTSS, was demonstrated at MIT, and in May 1962 a paper describing it ...
  30. [30]
    [PDF] THE HISTORICAL CONTINUITY OF INTERFACE DESIGN - Microsoft
    Early CRTs cost over. $10,000. The first text editors were line-oriented editors designed for programmers; general use of computers for word processing was ...
  31. [31]
    How the Graphical User Interface Was Invented - IEEE Spectrum
    Sep 1, 1989 · Sketchpad, created in 1962 by Ivan Sutherland at Massachusetts Institute of Technology's Lincoln Laboratory in Lexington, is considered the ...
  32. [32]
    (PDF) The Xerox Star: A Retrospective - ResearchGate
    Aug 5, 2025 · PDF | A description is given of the Xerox 8010 Star information system, which was designed as an office automation system.<|separator|>
  33. [33]
    Programmer's Technical Reference for MSDOS and the IBM PC
    This manual is intended to replace the various (expensive) references needed to program for the DOS environment.
  34. [34]
    [PDF] IBM Systems Application Architecture (SAA)
    The version of Cobol implemented for SAA systems is based largely on IBM's understanding of the. ANSI Standard X3. ... DOS based as well as. OS/2 EE based.
  35. [35]
    The Lisa: Apple's Most Influential Failure - Computer History Museum
    Jan 19, 2023 · Key elements of the WIMP GUI paradigm, especially overlapping windows and popup menus, were invented by Alan Kay's Learning Research Group ...
  36. [36]
    Macintosh: 25 Years - NN/G
    characterized by windows, icons, menus, and a user-controlled pointer (that is, WIMP) — was also not new.Missing: 1983 paradigm
  37. [37]
    A History of the GUI - Ars Technica
    May 4, 2005 · I'll be presenting a brief introduction to the history of the GUI. The topic, as you might expect, is broad, and very deep.Other Guis During The 1980s · More Guis Of The 1980s · The 1990s And Beyond<|separator|>
  38. [38]
    Brief History-Computer Museum
    Early Japanese input used teletypes and multilevel keyboards. Later, tablet interfaces and kana-kanji conversion methods were developed. Word processors grew ...
  39. [39]
    Apple Reinvents the Phone with iPhone
    Jan 9, 2007 · iPhone introduces an entirely new user interface based on a large multi-touch display and pioneering new software, letting users control iPhone ...
  40. [40]
    HTML5 Recommendation - W3C
    Oct 28, 2014 · This specification defines the 5th major revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML).
  41. [41]
    Community Round-up #5 – React Blog
    Jul 23, 2013 · We launched the React Facebook Page along with the React v0.4 launch. 700 people already liked it to get updated on the project :) ...Missing: single- | Show results with:single-
  42. [42]
    Digital Transformation
    The Future Directions Committee (FDC) Initiative on Digital Reality felt there is a need to look at the. Digital Transformation from the point of view of a ...<|separator|>
  43. [43]
    Apple Vision Pro
    Featuring the new powerful M5 chip and comfortable Dual Knit Band, Apple Vision Pro seamlessly blends digital content with your physical space.Apple (AU) · Apple (CA) · Apple (SG) · Apple (UK)Missing: zero- touch
  44. [44]
    Apple announces new accessibility features, including Eye Tracking
    May 15, 2024 · Users can control Vision Pro with any combination of their eyes, hands, or voice, with accessibility features including Switch Control, Sound ...Missing: zero- touch AR/ VR
  45. [45]
    Dark Patterns and the Legal Requirements of Consent Banners
    These new legal standards have brought with them new opportunities to define unethical or unlawful design decisions, alongside new requirements that impact both ...
  46. [46]
    Circumvention by design - dark patterns in cookie consent for online ...
    Oct 26, 2020 · The analysis uncovered a variety of strategies or dark patterns that circumvent the intent of GDPR by design. We further study the presence and ...
  47. [47]
    Bash - GNU Project - Free Software Foundation
    GNU Bash. Bash is the GNU Project's shell—the Bourne Again SHell. This is an sh-compatible shell that incorporates useful features from the Korn shell (ksh) ...
  48. [48]
    PowerShell Support Lifecycle - Microsoft Learn
    Windows PowerShell release history ; Windows PowerShell 5.0, Feb-2016, Released in Windows Management Framework (WMF) 5.0 ; Windows PowerShell 4.0, Oct-2013 ...Getting support · Supported platforms
  49. [49]
    Top 4 advantages of a command-line interface - TechTarget
    May 29, 2018 · Using detailed commands through a command-line interface can be faster and more efficient than scrolling across GUI tabs and dialogs. This ...
  50. [50]
    Graphical user interface (GUI) vs command line interface (CLI)
    Rating 3.0 (2) Sep 15, 2025 · CLI is lighter and more efficient than GUI. It does not need a graphical interface and can run with minimal resources. While CLI can be ...
  51. [51]
    When was pipelining introduced? - Unix & Linux Stack Exchange
    May 22, 2016 · The pipeline concept was invented by Douglas McIlroy and first described in the man pages of Version 3 Unix. McIlroy noticed that much of the ...How did UNIX programs interact with each other, before the ...View a range of bash history - Unix & Linux Stack ExchangeMore results from unix.stackexchange.com
  52. [52]
    Unix Is Born and the Introduction of Pipes - CSCI-E26
    Pipes had been created by the time the Version 3 Unix Manual appeared in February 1973. The date listed for the creation of pipes is January 15, 1973. Not ...
  53. [53]
    XTERM – Terminal emulator for the X Window System
    It was originally developed in the mid-1980s to provide DEC VT102 and Tektronix 4014 compatible terminals for programs that cannot use the window system ...
  54. [54]
    LJ 17: ncurses: Portable Screen-Handling for Linux
    The first curses library was hacked together at the University of California at Berkeley in about 1980 to support a screen-oriented dungeon game called rogue.
  55. [55]
    aws/aws-cli: Universal Command Line Interface for Amazon ... - GitHub
    The AWS CLI version 1 was made generally available on 09/02/2013 and is currently in the full support phase of the availability life cycle. For information ...
  56. [56]
    AWS CLI task - AWS Toolkit for Microsoft Azure DevOps
    The AWS CLI uses a multipart structure on the command line. It starts with the base call to AWS. The next part specifies a top-level command, ...
  57. [57]
    Embedded Command Line Interfaces and Why You Need Them
    Oct 23, 2024 · A CLI allows developers and manufacturers to exercise various features and functions without the need for debuggers or deep software knowledge.
  58. [58]
    [PDF] The GUI and the Rise of Microsoft
    At Xerox PARC, a research team codified the WIMP (windows, icons, menus and pointing device) paradigm, which eventually appeared commercially in the Xerox ...
  59. [59]
    16.1 Xerox PARC – Computer Graphics and Computer Animation
    The idea for GUI was actually first developed by Alan Kay from the University of Utah who went to work at PARC on the Alto project in 1970. Kay and Ed Cheadle ...
  60. [60]
    Firsts: The Mouse - Doug Engelbart Institute
    Doug Engelbart invented the computer mouse in the early 1960s in his research lab at Stanford Research Institute (now SRI International).
  61. [61]
    Introducing Windows 11 | Windows Experience Blog
    Jun 24, 2021 · New in Windows 11, we're introducing Snap Layouts, Snap Groups and Desktops to provide an even more powerful way to multitask and stay on top of ...
  62. [62]
    GNOME Release Calendar
    Release dates: 48.6 (old-stable, 2025-10-11); 49.2 (stable, 2025-11-22); 48.7 (old-stable, 2025-11-22); 50.alpha (unstable, 2026-01-03).
  63. [63]
    20 Years of CSS - W3C
    On December 17, 1996, W3C published the first standard for CSS. And thus from December 17, 2016 until one year later, CSS is 20 years old.
  64. [64]
    Introduction - JavaScript - MDN Web Docs - Mozilla
    Jul 19, 2025 · JavaScript is a cross-platform, object-oriented scripting language used to make webpages interactive (e.g., having complex animations, clickable buttons, popup ...
  65. [65]
    About - Bootstrap
    Originally released on Friday, August 19, 2011, we've since had over twenty releases, including two major rewrites with v2 and v3. With Bootstrap 2, we added ...
  66. [66]
    Graphical user interfaces | Introduction to Human-Computer Interaction
    Aug 21, 2025 · This chapter presents the graphical user interface, its history, and common design objectives for graphical user interfaces. It introduces the ...
  67. [67]
    The computer mouse and interactive computing - SRI International
    In 1964, SRI International's Douglas Engelbart invented the computer mouse as part of a system for organizational learning & global collaboration.
  68. [68]
    [PDF] The Effect of Interface Consistency and Cognitive Load on User ...
    The study found interactions between interface consistency and cognitive load, suggesting consistency's effects depend on task difficulty.
  69. [69]
    In-app Gestures and Mobile App Usability | by Nick Babich | UX Planet
    Mar 7, 2016 · Tinder changed the industry with the swipe gesture. They literally taught the whole world what swiping right means. This interface choice is about as ...
  70. [70]
    Apple's 'force touch' and 'taptic engine' explained - The Guardian
    Mar 11, 2015 · The new trackpad also incorporates a new type of haptic feedback – a physical response to a virtual action typically on a touchscreen. In this ...
  71. [71]
    Alexa at five: Looking back, looking forward - Amazon Science
    From Echo's launch in November 2014 to now, we have gone from zero customer interactions with Alexa to billions per week. Customers now interact with Alexa in ...
  72. [72]
    Will ChatGPT-like interfaces ever replace graphical user interfaces?
    Jun 11, 2023 · The coexistence and integration of different interface paradigms will likely continue to shape the future of user interaction.
  73. [73]
    Oculus Rift: Step Into the Game - Kickstarter
    Jan 30, 2016 · Oculus Rift is a new virtual reality (VR) headset designed specifically for video games that will change the way you think about gaming forever.
  74. [74]
    Quest v83 PTC Has The Evolved Horizon OS UI Meta Teased At ...
    Oct 28, 2025 · Now, with Horizon OS v83 PTC, Meta is rolling out the evolved version of Navigator which it teased at Connect 2025. The evolved Navigator has a ...
  75. [75]
    A Year of Telepathy | Updates - Neuralink
    Feb 5, 2025 · The implant, or Link, is our fully implantable, cosmetically invisible, wireless brain-computer interface (BCI) designed to restore autonomy to ...
  76. [76]
    Multimodal Design: Elements, Examples and Best Practices - UXtweak
    Mar 22, 2024 · A great example of a multimodal design is a voice-controlled smart home assistant. The user entering their home can give voice commands to ...
  77. [77]
    Multimodal Interfaces: Importance, Effects & Examples - Ramotion
    Nov 1, 2023 · Smart home speakers focus on voice interaction and enable users to perform various home tasks with voice commands. Users can perform tasks ...
  78. [78]
    10 Usability Heuristics for User Interface Design - NN/G
    Apr 24, 1994 · Jakob Nielsen's 10 general principles for interaction design. They are called "heuristics" because they are broad rules of thumb and not specific usability ...
  79. [79]
    Progressive Disclosure - NN/G
    Dec 3, 2006 · In a system designed with progressive disclosure, the very fact that something appears on the initial display tells users that it's important.
  80. [80]
    The Eight Golden Rules of Interface Design - Ben Shneiderman
    1. Strive for consistency. · 2. Seek universal usability. · 3. Offer informative feedback. · 4. Design dialogs to yield closure. · 5. Prevent errors. · 6. Permit ...
  81. [81]
    The Design of Everyday Things, Revised and Expanded Edition
    Applying Affordances, Signifiers, and Constraints to Everyday Objects. The Problem with Doors; The Problem with Switches; Activity-Centered Controls.
  82. [82]
    The Two UX Gulfs: Evaluation and Execution - NN/G
    Mar 11, 2018 · The gulf of evaluation and the gulf of execution describe two major challenges that users must overcome to successfully interact with any device.
  83. [83]
    User Experience Design - Semantic Studios
    Jun 21, 2004 · User Experience Honeycomb, article by Peter Morville. Information Architect, article by Peter Morville. The Elements of User ...
  84. [84]
    The Double Diamond - Design Council
    The Double Diamond is a visual representation of the design and innovation process. It's a simple way to describe the steps taken in any design and innovation ...
  85. [85]
    Success Rate: The Simplest Usability Metric - NN/G
    Jul 20, 2021 · Success rate is the percentage of users who complete a task, representing the UX bottom line, and is a simple binary metric.
  86. [86]
    10 Essential Usability Metrics - MeasuringU
    1. Completion Rates: Often called the fundamental usability metric, or the gateway metric, completion rates are a simple measure of usability.
  87. [87]
    5 Essential Usability Metrics in UX Research - Userlytics
    How to Calculate the Error Rate: (Total Number of Misinterpretations / Total Number of Attempts) x 100. For example, if 25 errors occur during 200 task attempts ...
  88. [88]
    (PDF) SUS: A quick and dirty usability scale - ResearchGate
    This chapter describes the System Usability Scale (SUS) a reliable, low-cost usability scale that can be used for global assessments of systems usability.
  89. [89]
    ISO 9241-11:1998 - Ergonomic requirements for office work with ...
    ISO 9241-11:1998 Ergonomic requirements for office work with visual display terminals (VDTs), Part 11: Guidance on usability. Withdrawn (Edition 1, 1998).
  90. [90]
    ISO 9241-11:1998(en), Ergonomic requirements for office work with ...
    ISO 9241-11 defines usability and explains how to identify the information which is necessary to take into account when specifying or evaluating usability of a ...
  91. [91]
    A/B Testing 101 - NN/G
    Aug 30, 2024 · A/B testing is a quantitative research method that tests two or more design variations with a live audience to determine which variation performs best.
  92. [92]
    Introduction to Eyetracking: Seeing Through Your Users' Eyes
    Dec 6, 2005 · Eyetracking can show which parts of your user interfaces users see and which parts seem to be invisible to them—not just by observing users and ...
  93. [93]
    Google Analytics - Web Design Museum
    In April 2005, Google took over the Urchin Software Corporation, which was developing a tool for the acquisition of statistical data about website users.
  94. [94]
    Human Insights, artificial intelligence - UserTesting
    UserTesting Timeline. 2007. Founded by Dave Garr and Darrell Benatar (July). 2012. Received $3 million Series B funding from Kern Whelan Capital (September) ...
  95. [95]
    Disability - World Health Organization (WHO)
    Mar 7, 2023 · An estimated 1.3 billion people – or 16% of the global population – experience a significant disability today. This number is growing.
  96. [96]
    WCAG 2 Overview | Web Accessibility Initiative (WAI) - W3C
    WCAG 2.2 has 13 guidelines. The guidelines are organized under 4 principles: perceivable, operable, understandable, and robust. For each guideline, there ...
  97. [97]
  98. [98]
    Accessibility, Usability, and Inclusion - W3C
    There are guidelines, standards, and techniques for web accessibility, such as the Web Content Accessibility Guidelines (WCAG, which is the international ...
  99. [99]
    What is Inclusive Design? — updated 2025
    Summary of Inclusive Design ...
  100. [100]
    Guide to Accessible Web Design & Development - Section508.gov
    This guide recaps relevant Web Content Accessibility Guidelines (WCAG) requirements and calls out specific considerations for content, design, and development.
  101. [101]
    Accessibility and Inclusivity: Study Guide - NN/G
    Sep 17, 2023 · Accessibility and inclusivity are branches of usability. They are mindsets rather than collections of procedures, regulations, or checklists.