
Input device

An input device is a hardware peripheral that enables users to interact with a computer by sensing physical actions—such as keystrokes, movements, or touches—and converting them into data or signals that the computer can process. These devices form a critical bridge in human-computer interaction (HCI), allowing input of commands, text, images, or audio to facilitate tasks ranging from basic data entry to complex simulations.

Common examples of input devices include the keyboard, which detects key presses to input alphanumeric characters and symbols; the mouse, a pointing device that tracks relative cursor movement on a surface; touchscreens, which register direct finger or stylus contact for intuitive navigation; and microphones, which capture audio for voice commands or recording. Other notable types encompass joysticks for directional control in gaming or simulations, scanners for digitizing printed images, and trackballs for precise cursor manipulation in compact spaces. The design and performance of these devices are often evaluated using principles like Fitts' Law, which quantifies the time required for pointing tasks based on target distance and size, influencing modern ergonomics and efficiency.

The evolution of input devices traces back to early computing milestones, beginning with punch cards and teletypewriters in the mid-20th century for batch data entry, followed by interactive innovations like the light pen in 1957 for direct screen interaction at MIT's Lincoln Laboratory. A pivotal advancement occurred in 1964 when Douglas Engelbart and his team at SRI International developed the first computer mouse, a wooden prototype with wheels that revolutionized graphical user interfaces by enabling precise pointing and selection. Subsequent developments included the first capacitive touchscreen in 1965 by E.A. Johnson and the ball mouse in 1968, paving the way for widespread adoption in personal computing during the 1980s.
Today, input devices continue to advance with multi-touch gestures, gesture recognition via cameras, and AI-enhanced voice interfaces, adapting to ubiquitous and mobile computing environments while prioritizing accessibility and low-latency performance to minimize user frustration.
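The Fitts' Law model mentioned above can be made concrete with a short calculation. The sketch below uses the common Shannon formulation, MT = a + b·log2(D/W + 1); the coefficients a and b are illustrative placeholders, since real values are fit empirically for each device:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict pointing time (seconds) via the Shannon formulation of
    Fitts' Law: MT = a + b * log2(D/W + 1).
    a and b are illustrative device constants, normally fit empirically."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A distant small target takes longer than a near large one:
near_large = fitts_movement_time(distance=100, width=50)  # ID = log2(3) bits
far_small = fitts_movement_time(distance=800, width=20)   # ID = log2(41) bits
```

The model's practical consequence is that doubling a target's size buys roughly the same speedup as halving its distance, which is why frequently used controls sit at screen edges (effectively infinite width along one axis).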

Fundamentals

Definition and Principles

An input device is any hardware component or peripheral that enables users or environmental sources to transmit data, commands, or signals to a computer or electronic system for processing and interpretation. These devices serve as the interface between the physical world and digital systems, facilitating the entry of information in forms such as text, coordinates, or sensor readings that the system can manipulate. In essence, input devices bridge human intent or external stimuli with machine-readable formats, forming a critical layer in human-computer interaction for applications ranging from simple command entry to complex simulations.

The fundamental principle of input devices involves transduction, the process by which physical actions or phenomena are converted into electrical or digital signals suitable for computational processing. This conversion typically occurs through sensors or mechanisms that detect changes in pressure, motion, light, or sound, transforming analog physical inputs into digital data via analog-to-digital converters. Input modalities can be categorized as discrete, producing binary or event-based outputs like on/off states, or continuous, capturing analog variations such as positional gradients over time. During digitization, signals undergo sampling at a specified rate to capture temporal detail without loss, followed by quantization to map continuous values to finite levels, ensuring fidelity in the digital representation.

Key characteristics of input devices include latency, which measures the delay between physical input and system recognition; resolution, denoting the granularity of detectable increments; and accuracy, reflecting how closely the device mirrors the actual input. Ergonomics addresses user comfort and fatigue reduction through design, while compatibility ensures seamless integration with operating systems via standardized protocols. These attributes directly influence usability, with low latency and high resolution being essential for real-time interactions, though trade-offs often arise in balancing performance against cost and complexity.
In human-computer interaction (HCI), input devices embody the "input" phase of the classic input-process-output model, where they empower users to exert agency by directing system behavior and providing contextual data for processing. This role extends beyond mere data entry to enable iterative dialogue between user and machine, supporting tasks from command issuance to continuous data capture and fostering intuitive control in software ecosystems. By facilitating diverse interaction paradigms, input devices underpin the adaptability of systems to varied user needs and contexts.
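The sampling-and-quantization pipeline described above can be sketched in a few lines; the signal frequency, sample rate, and bit depth here are arbitrary illustrative values:

```python
import math

def sample_and_quantize(signal_hz, sample_rate_hz, duration_s, bits):
    """Sample a sine wave at sample_rate_hz, then quantize each sample
    to 2**bits discrete levels, illustrating analog-to-digital conversion."""
    levels = 2 ** bits
    n_samples = int(sample_rate_hz * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate_hz                            # sampling: discrete time steps
        value = math.sin(2 * math.pi * signal_hz * t)     # analog value in [-1, 1]
        code = round((value + 1) / 2 * (levels - 1))      # quantization: finite integer levels
        samples.append(code)
    return samples

# A 5 Hz tone sampled at 100 Hz for 0.1 s with 8-bit resolution:
codes = sample_and_quantize(signal_hz=5, sample_rate_hz=100, duration_s=0.1, bits=8)
# codes holds 10 integer samples, each in the range [0, 255]
```

The sample rate bounds the highest representable frequency (the Nyquist limit, half the rate), while the bit depth bounds the amplitude precision; both are the trade-off knobs behind the latency and resolution characteristics described above.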

Historical Evolution

The development of input devices traces back to 19th-century precursors that enabled manual data entry and mechanical processing. Telegraph keys, introduced in the mid-1800s for Morse code transmission, served as early binary input mechanisms, allowing operators to encode messages through on-off switches that influenced later digital signaling techniques. Punch cards emerged as a significant advancement, inspired by Joseph Marie Jacquard's 1801 loom but refined for data tabulation by Herman Hollerith in his 1890 electromechanical tabulating machine, which automated U.S. Census processing by reading punched holes to represent information.

The 1940s and 1950s marked the birth of electronic computing, where input relied on rudimentary hardware. The ENIAC, completed in 1945, used plugboards, switches, and patch panels for programming and data entry, requiring physical reconfiguration for each task. Teleprinters, evolved from 1920s teletypewriters, became standard inputs by the late 1940s, incorporating keyboards to punch paper tape or directly transmit data to computers like the UNIVAC I in 1951. Into the 1960s, these devices persisted alongside early keyboards on interactive systems, bridging the mechanical and electronic eras.

The 1970s and 1980s saw the rise of personal computing, standardizing interactive inputs. The QWERTY keyboard layout, developed in the 1870s by Christopher Sholes for typewriters to prevent jamming, gained prominence in PCs with machines such as the Altair 8800 in 1975 and the IBM PC in 1981. Douglas Engelbart invented the computer mouse in 1964 at the Stanford Research Institute, demonstrating it publicly in 1968 during "The Mother of All Demos," which showcased collaborative computing interfaces. Xerox PARC advanced these concepts in the 1970s with the Alto workstation, integrating the mouse into graphical user interfaces that influenced Apple's Lisa and Macintosh. Joysticks proliferated for gaming, popularized by the Atari 2600 in 1977, enabling directional control in arcade-style titles. From the 1990s onward, technological shifts emphasized precision and mobility.
Optical mice emerged in the early 1980s, with a key 1983 patent enabling the first commercial optical mouse from Mouse Systems Corporation. Microsoft popularized LED-based optical models in 1999, improving reliability over mechanical designs. Touchscreens entered commercial use with the HP-150 computer in 1983, employing infrared grids, but widespread adoption followed capacitive multitouch in devices like the 2007 iPhone. Wireless inputs also emerged, exemplified by Logitech's 1998 cordless mouse using radio-frequency links, enabling untethered operation with laptops and peripherals.

The 2010s to 2025 integrated AI and multimodal sensing, expanding beyond traditional hardware. Microsoft's Kinect, launched in 2010 for the Xbox 360, introduced full-body gesture recognition via depth-sensing cameras, transforming body motion into game inputs. Apple's Siri, debuted in 2011 on the iPhone 4S, pioneered mainstream voice assistants by processing natural language queries through on-device capture and cloud processing. Haptic feedback matured, with vibration motors in smartphones providing tactile confirmation since the early 2000s, evolving to advanced actuators in wearables for nuanced simulations. Brain-computer interfaces advanced with Neuralink's prototypes in the early 2020s, implanting electrodes to decode neural signals for direct thought-based control, though the technology remains experimental.

Text Entry Devices

Keyboards

Keyboards serve as primary input devices for text entry in computing systems, enabling users to input alphanumeric characters, symbols, and commands through physical or virtual key presses. They consist of an array of keys arranged in a matrix, where each key activates a switch to register input. The structure typically includes keycaps mounted on switches, supported by a controller board that processes signals and communicates with the host device. Common switch types include membrane switches, which use rubber domes and conductive membranes to complete electrical circuits upon depression, providing a cost-effective but softer feel; mechanical switches, featuring individual spring-loaded mechanisms like Cherry MX for tactile or clicky feedback and greater durability; and scissor switches, employing a scissor-like stabilizing mechanism for stability and shallow travel, often found in laptop keyboards for compact design. When a key is pressed, it generates a unique scan code via the switch closure, which the keyboard's microcontroller translates into a standardized report—such as a keycode under the USB Human Interface Device (HID) protocol—sent to the computer for mapping to characters based on the active layout and modifiers.

Standard keyboard layouts dictate key positioning to optimize typing for specific languages and use cases. The QWERTY layout, originating from Christopher Latham Sholes's 1870s designs and commercialized by Remington in 1874, arranges keys to separate frequently used letter pairs—such as "t" and "h"—reducing mechanical jamming in early typewriters by slowing rapid successive strikes on adjacent keys. The Dvorak Simplified Keyboard, patented in 1936 by August Dvorak, prioritizes efficiency by placing the most common English letters on the home row, minimizing finger travel and reportedly allowing mastery in one-third the time of QWERTY based on 1930s experiments with typing students. International variants adapt these for non-English languages, such as AZERTY in France and Belgium, which swaps Q and A (and W and Z) and eases access to accented characters like é and à, and QWERTZ in Germany and Austria, transposing Y and Z to align with German letter frequency.
Keyboards vary in connectivity and form to suit different use cases. Wired models connect via USB for low-latency, reliable input without batteries, while wireless variants use Bluetooth for multi-device pairing or 2.4 GHz RF receivers for dedicated, interference-resistant links. Ergonomic designs, such as split keyboards that angle halves to align with natural hand positions or curved layouts like the Logitech Wave Keys, reduce wrist strain during prolonged typing. Virtual on-screen keyboards, displayed via software on touchscreens, simulate key presses through taps, serving as alternatives for mobile devices though less efficient for extended text entry.

Specialized keyboards cater to niche needs, enhancing functionality or accessibility. Gaming keyboards incorporate programmable keys for complex command sequences, per-key RGB lighting for visual customization, and rapid-trigger switches for responsive input in competitive play. Compact variants include tenkeyless (TKL) layouts, omitting the numeric keypad to save desk space while retaining arrow and function keys—popular among gamers for mouse-movement freedom—and 60% layouts, further reducing size by integrating secondary functions into layers accessed via modifier keys. Braille keyboards, designed for visually impaired users, feature eight-dot input (using keys like F, D, and S for dots 1-3) to enter Grade 2 Braille characters, with models like the Perkins Brailler or the Bluetooth-enabled Orbit Writer providing haptic feedback and connectivity to computers or mobile devices for independent text entry. As of 2025, keyboards dominate text input on desktops and laptops, with QWERTY and its variants comprising the predominant layout on personal computers due to entrenched standards and user familiarity.
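The scan-code-to-character flow described above can be illustrated with a minimal decoder for the USB HID boot-protocol keyboard report (8 bytes: a modifier bitmask, a reserved byte, and up to six concurrent keycodes). The letter mapping below covers only the a–z usage range, a simplification of the full HID usage tables:

```python
def parse_boot_keyboard_report(report):
    """Decode an 8-byte USB HID boot-protocol keyboard report:
    byte 0 = modifier bitmask, byte 1 = reserved, bytes 2-7 = keycodes.
    Only the a-z usage range (0x04-0x1D) is mapped here for brevity."""
    modifiers = report[0]
    shift_held = bool(modifiers & 0x22)      # bits 1 and 5: left/right Shift
    chars = []
    for keycode in report[2:8]:
        if 0x04 <= keycode <= 0x1D:          # letter range in the usage table
            ch = chr(ord('a') + keycode - 0x04)
            chars.append(ch.upper() if shift_held else ch)
    return chars

# Report with left Shift held and the 'h' key (usage 0x0B) pressed:
print(parse_boot_keyboard_report([0x02, 0, 0x0B, 0, 0, 0, 0, 0]))  # ['H']
```

Real operating systems perform this same mapping against the active layout, which is why the identical physical key can produce different characters under QWERTY, AZERTY, or Dvorak.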

Alternative Keyless Methods

Handwriting recognition enables text input through stylus-based writing on touch-sensitive surfaces, where algorithms interpret the strokes and convert them to digital characters. A prominent early example is the Graffiti system introduced with the Palm Pilot in 1996, which used a simplified, single-stroke alphabet to reduce ambiguities and improve recognition rates compared to natural handwriting. This approach relied on pattern-matching techniques to achieve accuracies often exceeding 95% for trained users, though it required learning a proprietary symbol set distinct from standard cursive or print styles. For broader digitization of existing handwritten or printed text, optical character recognition (OCR) processes scanned documents by segmenting and classifying glyphs, with modern implementations employing convolutional neural networks (CNNs) for enhanced accuracy on varied scripts. Seminal work in this area includes LeCun et al.'s 1998 LeNet framework, which demonstrated CNNs achieving error rates below 1% on the MNIST dataset of handwritten digits by learning hierarchical features through backpropagation.

Gesture-based and swipe keyboards represent another keyless alternative, allowing users to trace paths across an on-screen keyboard to form words without lifting their finger, leveraging predictive engines to disambiguate traces. Swype, commercialized in 2010, pioneered this method for mobile devices, enabling entry speeds up to 50 words per minute by modeling traced paths against a dictionary of word shapes. These systems integrate predictive text for error correction, where recurrent neural networks or transformers analyze partial inputs and user history to suggest completions, adapting to individual styles over time for reduced errors. For instance, Google's Gboard employs a neural decoder to resolve gesture ambiguities, supporting multilingual swipes with context-aware predictions that boost efficiency on touchscreens. As of 2025, advancements include integration of large language models for more accurate next-word predictions in swipe typing.
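The shape-matching idea behind swipe decoding can be sketched as follows; the unit key coordinates and the crude pad-to-equal-length comparison are illustrative stand-ins for the elastic matching and language models production keyboards use:

```python
# Hypothetical unit coordinates for a QWERTY layout (x = column, y = row):
KEY_POS = {ch: (x, y)
           for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
           for x, ch in enumerate(row)}

def path_for(word):
    """Ideal swipe path: the sequence of key centers for the word's letters."""
    return [KEY_POS[c] for c in word]

def path_distance(trace, word):
    """Average distance between trace points and the word's key path,
    comparing point-by-point after padding to equal length (a crude
    stand-in for the elastic matching real decoders use)."""
    ideal = path_for(word)
    n = max(len(trace), len(ideal))
    trace = trace + [trace[-1]] * (n - len(trace))
    ideal = ideal + [ideal[-1]] * (n - len(ideal))
    return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a, b in zip(trace, ideal)) / n

def decode(trace, dictionary):
    """Pick the dictionary word whose ideal path best matches the trace."""
    return min(dictionary, key=lambda w: path_distance(trace, w))

# A trace passing over the t, h, and e keys should score 'the' best:
trace = [KEY_POS['t'], KEY_POS['h'], KEY_POS['e']]
print(decode(trace, ["the", "tie", "toe"]))  # the
```

Real decoders resample the trace, weight corners and dwell points, and combine the geometric score with a language-model probability, but the core ranking-by-path-similarity step is the same.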
Chorded keyboards facilitate compact text entry by requiring simultaneous presses of multiple keys to represent characters or syllables, minimizing physical size for portable or wearable use. Adaptations of stenotype machines, originally designed for rapid transcription, have been integrated into computing since the late 20th century, with systems like the Twiddler—a one-handed device from the 1990s—enabling chording for mobile use at rates approaching 20 words per minute after training. In stenographic adaptations, such as open-source tools for programmers, users chord phonetic outlines on a reduced keyset (typically around two dozen keys), achieving speeds over 200 words per minute in expert scenarios by mapping strokes to linguistic units rather than individual letters.

Projection and holographic keyboards project a virtual key layout onto any surface using laser or LED technology, detecting inputs via infrared sensors that capture finger proximity or motion without physical contacts. Commercial models emerged in the early 2000s, such as those from Canesta, combining a mini-projector with optical sensing to simulate a full interface on desks or walls, supporting entry speeds of around 38 words per minute. These systems process interruptions in projected beams to register "keystrokes," offering portability for laptops or tablets while avoiding mechanical wear, though accuracy depends on ambient light and surface flatness.

For users with motor impairments, accessibility-focused keyless methods include eye-tracking text entry and switch-based scanning. Eye-tracking systems, like those from Tobii Dynavox, use cameras to monitor pupil movements, enabling gaze-directed selection on on-screen keyboards for individuals with limited dexterity, with dwell times or blinks confirming choices at rates of 5-10 words per minute.
Switch-based scanning presents characters or word groups sequentially, activated by a single switch (e.g., a sip-and-puff or head switch), allowing motor-impaired users to build text progressively; techniques like row-column scanning achieve 1-2 words per minute but can incorporate predictive models to accelerate selection. These adaptations prioritize reliability over speed, often integrating error correction to support independent communication.
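Row-column scanning's speed limit follows directly from its sequential highlighting, as this small sketch shows (the grid layout and the 0.8-second interval are illustrative):

```python
def scan_steps(grid, target):
    """Count scan intervals needed to select `target` with row-column
    scanning: rows are highlighted one-by-one until a switch press selects
    a row, then its columns are highlighted until a second press selects
    the cell."""
    for r, row in enumerate(grid):
        if target in row:
            c = row.index(target)
            return (r + 1) + (c + 1)   # row passes, then column passes

# A simple alphabetic grid; selecting 'e' with a 0.8 s scan interval:
grid = ["abcde", "fghij", "klmno", "pqrst", "uvwxy"]
steps = scan_steps(grid, "e")          # 1 row pass + 5 column passes = 6
print(steps * 0.8)                     # 4.8 seconds for a single letter
```

This is why practical systems order frequent letters toward the top-left of the grid and add word prediction: both reduce the expected number of scan passes per selection.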

Pointing and Cursor Control Devices

Mechanical Pointing Devices

Mechanical pointing devices are hardware input tools that rely on physical rolling, tracking, or pressure-sensing mechanisms to control cursor movement on a computer screen, primarily through mechanical sensors that detect motion in two dimensions. These devices emerged as essential components for graphical user interfaces, enabling precise navigation without direct screen contact. Unlike later touch-based alternatives, mechanical devices emphasize durable, tangible interaction via moving parts or strain gauges, though they can suffer from wear over time.

The mouse, a foundational pointing device, was invented by Douglas Engelbart and his colleague Bill English in 1964 at the Stanford Research Institute (SRI), where the prototype consisted of a wooden block housing two perpendicular metal wheels connected to potentiometers for tracking X and Y movements. Early models used a ball-and-roller mechanism, in which a rubber ball rotated against two rollers to translate surface movement into electrical signals via quadrature encoding, a technique employing two out-of-phase signals from optical interrupters to determine both direction and distance. This design dominated until the 1990s, when optical mice supplanted it; these use an LED or laser to illuminate the surface, capturing successive images with a small camera sensor to compute motion without physical rollers, offering greater durability, as ball mice often accumulated dirt on their rollers, leading to erratic tracking and requiring periodic cleaning. Modern mice typically include 2-12 programmable buttons for actions like clicking or macros, a scroll wheel—popularized by Microsoft's IntelliMouse in 1996 for vertical document navigation—and adjustable DPI (dots per inch) settings ranging from 800 to 45,000 (as of 2025), allowing users to fine-tune sensitivity for tasks from precise editing to rapid movements. Recent models, as of 2025, feature high polling rates up to 8,000 Hz for minimal input lag in competitive gaming.
Trackballs represent an inverted variation of the mouse, where users manipulate a stationary ball with their thumb or fingers to control the cursor, reducing desk-space needs and wrist motion. Logitech's TrackMan, introduced in 1989, featured a thumb-operated ball for ergonomic comfort, positioning it as a breakthrough for prolonged use in design work. These devices excel in precision applications like computer-aided design (CAD), as the stationary base minimizes arm extension and allows fine adjustments without lifting the device; models vary between thumb-operated trackballs for intuitive one-handed control and finger-operated versions for multi-finger dexterity in detailed modeling.

Other mechanical pointing options include the TrackPoint, developed by IBM and debuted in 1992 on the ThinkPad 700 series laptops as a small rubber nub embedded in the keyboard center. It uses strain gauges to detect pressure and tilt, translating subtle finger pushes into cursor movement, enabling navigation without removing hands from the home-row keys. Joysticks, adapted from arcade controls, provide analog axes through potentiometers or Hall-effect sensors for proportional input, making them suitable for gaming genres like first-person shooters (FPS), where variable speed and direction—such as aiming in titles like Doom—enhance immersion.

At their core, these devices employ quadrature encoding in ball-based systems, where slotted wheels interrupt infrared beams to generate phase-shifted pulses, allowing microcontrollers to resolve movement down to sub-millimeter accuracy. Electromagnetic variants, like those in some joysticks, use magnetic fields for non-contact sensing, but traditional ball mice face wear from ball slippage and roller debris buildup, contrasting with optical mice's superior longevity due to fewer mechanical contacts. Mechanical pointing devices find broad applications in desktop navigation for graphical user interfaces, where mice facilitate drag-and-drop operations, and in gaming, where precise aiming requires low-latency tracking.
Ergonomic studies highlight their role in repetitive strain injury (RSI) prevention; for instance, using a mouse with forearm support maintains neutral wrist postures, potentially reducing upper-limb musculoskeletal disorders by up to 50% in office settings through adjusted positioning and regular breaks.
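The quadrature encoding used in ball mice and trackballs can be illustrated with a small decoder; the two out-of-phase channels form a 2-bit Gray code, and each valid transition steps the position count by one:

```python
# Quadrature decoding sketch: channels A and B, 90 degrees out of phase,
# yield a 2-bit Gray-code state; each valid transition moves position +/-1.
TRANSITIONS = {  # (previous state, new state) -> step direction
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode_quadrature(states):
    """Accumulate position from a sequence of sampled (A, B) channel states."""
    position = 0
    prev = states[0]
    for state in states[1:]:
        position += TRANSITIONS.get((prev, state), 0)  # ignore no-change/invalid
        prev = state
    return position

# One full forward cycle (4 steps) then one step backward:
samples = [0b00, 0b01, 0b11, 0b10, 0b00, 0b10]
print(decode_quadrature(samples))  # 3
```

Because direction is encoded in which channel leads, a single slotted wheel per axis suffices to report both distance and sign, which is exactly what the mouse's microcontroller forwards to the host as relative motion.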

Touch-Based Pointing Devices

Touch-based pointing devices enable users to control cursors and interact with interfaces through direct contact with touch-sensitive surfaces, primarily utilizing capacitive or resistive sensing technologies to detect finger positions or stylus inputs. These devices have become integral to laptops, smartphones, and tablets, offering intuitive navigation via gestures such as swiping and pinching. Unlike mechanical predecessors, they rely on electrical properties rather than physical movement, allowing for compact designs and seamless integration into portable electronics.

Touchpads, commonly found on laptops, consist of a flat surface that detects finger position and movement through a grid of electrodes monitoring changes in capacitance caused by the conductive properties of human skin. Synaptics introduced a pioneering commercial touchpad in the mid-1990s, revolutionizing laptop input by replacing bulkier trackballs with a sleek, integrated interface that supports precise cursor control and tap-to-click gestures. Modern touchpads incorporate multi-finger gesture recognition, enabling actions like two-finger scrolling for vertical or horizontal navigation and three-finger swipes for task switching, enhancing productivity without dedicated buttons.

Touchscreens embed pointing functionality directly into display surfaces, allowing users to interact with on-screen elements via touch. Resistive touchscreens, prevalent in early personal digital assistants (PDAs) from the 1990s, operate on pressure-based detection, where two flexible conductive layers connect upon touch to register input, offering durability and glove compatibility but limited to single-point interaction. In contrast, capacitive touchscreens, which sense disruptions in an electrostatic field from a finger's electrical charge, support multi-touch capabilities and were popularized by Apple's iPhone in 2007, enabling gesture-based interfaces on larger displays through projected-capacitance technology that scans electrode grids for precise multi-point detection.
Stylus support extends touch-based input for applications requiring finer control, such as digital drawing. Passive styluses function like a fingertip by conducting electricity to capacitive surfaces, providing basic pointing without additional hardware. Active styluses, however, employ electromagnetic resonance (EMR) technology, as developed by Wacom since the 1980s in graphic tablets, where a powered pen tip interacts with a digitizer grid to deliver high-resolution input independent of screen capacitance. These active systems offer pressure sensitivity up to 8,192 levels, allowing variable line thickness and opacity in creative software by measuring tip force via internal sensors.

Gesture recognition in touch-based devices involves software algorithms that interpret sequences of touch events—such as taps, swipes, and rotations—into actionable commands, often processed at the operating-system level for consistency across applications. For instance, Microsoft's Windows platform provides a gesture-handling layer that processes WM_GESTURE messages to detect and respond to patterns like pinch-to-zoom for scaling content or rotation for image manipulation. This layer abstracts raw touch data into intuitive controls, supporting up to ten simultaneous touch points in advanced implementations.

Recent advancements have enhanced the tactile and form-factor aspects of touch-based pointing. Haptic feedback, integrated post-2010 in devices like smartphones and touchpads, provides vibrational or force responses to simulate button presses or textures, improving user confirmation through linear resonant actuators that deliver precise, low-latency vibrations upon touch events. In the 2020s, foldable screens have adapted capacitive touch layers to flexible organic light-emitting diode (OLED) substrates, maintaining responsiveness across bending hinges in devices like Samsung's Galaxy Z series, which support uninterrupted gestures on unfolded 7-8 inch displays.
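The pinch-to-zoom gesture reduces to a simple geometric computation on two touch points, sketched here with illustrative coordinates:

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Compute the zoom factor of a pinch gesture: the ratio of the
    finger separation at the end of the gesture to that at the start."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_end, p2_end) / dist(p1_start, p2_start)

# Fingers moving from 100 px apart to 200 px apart doubles the content:
scale = pinch_scale((0, 0), (100, 0), (-50, 0), (150, 0))
print(scale)  # 2.0
```

Gesture layers like the WM_GESTURE handling described above compute essentially this ratio per frame from the raw touch-point stream, applying it as a scale transform to the content under the gesture's midpoint.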

Imaging and Visual Input Devices

Scanners

Scanners are input devices that convert physical documents, images, or objects into digital formats by capturing optical data through sensors, enabling the digitization of printed material for editing, archiving, or sharing. These devices primarily focus on static capture, distinguishing them from video capture tools, and are essential for transforming analog content into searchable digital assets. Common in offices, archives, and homes, scanners vary in form factor to handle different document sizes and types, with advancements improving speed, accuracy, and integration with software ecosystems.

Flatbed scanners, the most ubiquitous type, feature a flat glass platen where documents are placed face-down for scanning. They employ either Charge-Coupled Device (CCD) or Contact Image Sensor (CIS) technology to capture images. CCD sensors, which use a lens to focus reflected light onto a distant sensor array, offer superior image quality, deeper depth of field (up to 10 times that of CIS), and better color fidelity, making them suitable for high-end applications like photo reproduction. In contrast, CIS sensors sit directly against the document behind a row of closely spaced detectors, resulting in cheaper, more compact, and energy-efficient designs but with lower resolution and shallower depth of field, often limiting them to flat, thin documents. Flatbed scanners typically achieve optical resolutions up to 4800 dots per inch (DPI), sufficient for detailed prints and photographs, though interpolated resolutions can reach 6400 DPI for enhanced clarity. Color-depth standards include 24-bit RGB for standard true-color capture, representing over 16 million colors, with higher-end models supporting 48-bit input for professional editing.

The operation of a flatbed scanner involves a moving scan head beneath the glass that illuminates the document with a light source, such as light-emitting diodes (LEDs) in modern models for energy efficiency and instant-on capability, or earlier cold-cathode fluorescent lamps.
Light reflects off the document and is captured line-by-line by the sensor as the head traverses the platen, converting the intensity of reflected light into digital pixels via analog-to-digital conversion. This process builds a complete image raster, which software then processes for output in formats like JPEG or PDF. Integration with applications occurs through standards like the TWAIN protocol, an industry API that allows scanners to communicate directly with software for seamless image acquisition and control of settings such as resolution and color mode.

Beyond flatbeds, scanners include specialized types tailored to workflow needs. Sheet-fed scanners incorporate automatic document feeders (ADFs) to handle multi-page documents sequentially, ideal for bulk processing in offices, though they may struggle with fragile or bound materials due to mechanical feeding. Handheld scanners, portable and battery-powered, allow manual sweeping over documents for on-the-go capture but often yield lower accuracy from inconsistencies in user motion. 3D scanners, emerging prominently since the early 2000s, use triangulation—projecting a laser line or pattern onto an object and calculating surface geometry from the distortion observed by a camera—to create three-dimensional models, useful for object replication or analysis.

Scanners find wide applications in digitizing and managing paperwork, particularly for archiving historical or legal documents into searchable databases, where they preserve originals while enabling efficient retrieval. Integration with optical character recognition (OCR) software enhances this by converting scanned images to editable text, supporting features like area selection and format reconstruction via TWAIN-compatible drivers. In desktop medical settings, scanners digitize patient records, insurance cards, and forms for electronic health record (EHR) systems, ensuring compliance and quick access without compromising security.
The evolution of scanners traces back to drum scanners in the 1950s, such as the 1957 model developed by Russell Kirsch at the National Bureau of Standards, which rotated documents on a drum under a photomultiplier sensor for early computer image input. By the 1990s, flatbed designs proliferated with the adoption of USB connectivity following the 1996 USB 1.0 standard, enabling plug-and-play operation and powering compact, affordable units for consumer use. As of 2025, trends emphasize mobile scanning applications that leverage smartphone cameras for document capture, often with AI-enhanced OCR for instant text extraction, bridging traditional hardware with portable software solutions.
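The resolution and color-depth figures above determine raw scan sizes directly, as this back-of-envelope calculator shows:

```python
def raw_scan_size_mb(width_in, height_in, dpi, bit_depth=24):
    """Estimate the uncompressed size of a scan in mebibytes:
    pixel count times bytes per pixel, before JPEG/PDF compression."""
    pixels = (width_in * dpi) * (height_in * dpi)
    bytes_total = pixels * bit_depth / 8
    return bytes_total / (1024 ** 2)

# A letter-size page (8.5 x 11 in) at 300 DPI in 24-bit color:
size = raw_scan_size_mb(8.5, 11, 300, 24)
print(round(size, 1))  # ~24.1 MB before compression
```

Quadrupling either dimension of the calculation (say, 600 DPI or 48-bit color) multiplies the raw size accordingly, which is why archival workflows pick resolution and depth per document type rather than defaulting to maximums.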

Cameras and Video Capture

Cameras and video capture devices serve as essential input mechanisms for computers and digital systems by converting light into electrical signals that represent visual scenes, enabling applications from image processing to real-time analysis. These devices capture dynamic scenes through lens-based optics, distinguishing them from static scanning methods, and have evolved to support high-resolution inputs for computing tasks such as computer vision and video conferencing.

Digital cameras primarily utilize CMOS (Complementary Metal-Oxide-Semiconductor) image sensors, which dominate modern designs due to their lower power consumption, faster readout speeds, and integrated analog-to-digital conversion compared to older CCD (Charge-Coupled Device) sensors that require separate processing chips. Megapixel ratings in these cameras have advanced significantly, with 2025 models like the Fujifilm GFX100RF featuring a roughly 100MP medium-format sensor for detailed input, while smartphone-integrated cameras range from 12MP in budget devices to 108MP in recent flagships for computational preprocessing. Lenses in digital cameras, often variable zooms or fixed primes, focus light onto the sensor using multi-element glass constructions to minimize aberrations, with autofocus systems employing phase-detection or contrast-based methods to achieve sharp captures in milliseconds. Raw formats allow uncompressed sensor data input, preserving 12-14 bits per channel for post-capture adjustments in editing software, unlike processed JPEGs.

Webcams, designed for continuous video input, typically connect via USB interfaces for plug-and-play compatibility, though higher-end models support HDMI for direct display integration. The Logitech C920, introduced in 2012, established 1080p resolution as a standard for webcams, delivering Full HD video at 30 frames per second (fps) with a 78° field of view to capture individuals or small groups effectively. Frame rates commonly range from 30 fps at 1080p to 60 fps at 720p, ensuring smooth input for interactive applications without excessive bandwidth demands.
Video input from these devices relies on codecs for compression, evolving from H.264 (AVC), which became the dominant standard in the 2000s for its balance of quality and efficiency in Blu-ray and streaming, to AV1, a royalty-free codec introduced in 2018 that achieves roughly 30% better compression for 4K+ inputs while reducing bandwidth in surveillance systems. Streaming protocols like RTSP (Real Time Streaming Protocol) facilitate low-latency transmission over networks, commonly used in surveillance for live feeds from IP cameras, while conferencing platforms carry webcam video for calls. These inputs support applications such as remote conferencing, where real-time video enables face-to-face collaboration, and security systems, where motion-triggered captures feed into analytics software.

Advanced features in cameras include support for 4K (3840x2160) and 8K (7680x4320) resolutions, standardized by the UHD Alliance in 2016 and 2019 respectively, allowing high-fidelity input for professional editing and rendering. AI enhancements, emerging through the 2010s with machine-learning frameworks, enable on-device object detection by processing video frames to identify and track elements such as vehicles or persons in real time. Thermal and infrared cameras provide non-visible-spectrum input using uncooled microbolometer sensors, such as the 640x512 detector in FLIR PT-Series models, for detecting heat signatures in low-light or obscured conditions.

Integration of cameras extends to augmented and virtual reality systems, where devices like the Oculus Quest (launched 2019) use multiple wide-angle cameras for inside-out positional tracking and passthrough video to blend real-world visuals with digital overlays. In facial recognition preprocessing, camera inputs capture biometric features like facial landmarks and textures, which algorithms analyze for authentication in systems like Face ID on iOS devices, reducing computational load by filtering irrelevant frames on-device.
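The need for codecs like H.264 and AV1 follows from the raw numbers; this sketch computes uncompressed bandwidth for a typical webcam stream (the 5 Mbps H.264 figure in the comment is a typical ballpark, not a fixed property of the codec):

```python
def raw_video_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bandwidth in megabits per second, showing why
    codecs like H.264 or AV1 are essential for streaming over networks."""
    return width * height * fps * bits_per_pixel / 1_000_000

raw_1080p = raw_video_mbps(1920, 1080, 30)   # ~1493 Mbps uncompressed
# A typical H.264 stream of the same video runs around 5 Mbps,
# a compression ratio on the order of 300:1.
print(round(raw_1080p))  # 1493
```

Scaling the same arithmetic to 4K at 60 fps pushes raw bandwidth near 12 Gbps, which is why higher-resolution capture pipelines lean on more efficient codecs rather than faster links alone.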

Audio Input Devices

Microphones

Microphones are electroacoustic transducers that convert sound waves into electrical signals, serving as essential input devices for capturing audio in recording, communication, and voice-control systems. They operate by detecting acoustic pressure variations through a diaphragm or similar mechanism, producing an analog signal that is typically amplified and digitized for processing. This captured audio enables applications ranging from voice commands to professional recording, with design variations optimized for sensitivity, directionality, and environmental robustness.

The primary transducer types in microphones include dynamic, condenser, and microelectromechanical systems (MEMS). Dynamic microphones employ a diaphragm attached to a coil suspended in a magnetic field, generating voltage via electromagnetic induction; they are robust and handle high sound-pressure levels well, making them suitable for live performances and field use. Condenser microphones, also known as capacitor microphones, use a charged diaphragm and backplate to form a capacitor, where sound-induced capacitance changes produce the signal; these are highly sensitive and accurate, ideal for studio environments due to their wide frequency response and low noise. MEMS microphones integrate miniature capacitive sensors fabricated on silicon chips, enabling compact, low-power designs prevalent in smartphones and wearables, where space constraints demand high performance in small form factors.

Microphone directionality is defined by polar patterns, which describe sensitivity to sound arriving from different angles, influencing noise rejection and pickup focus. Omnidirectional patterns capture sound equally from all directions, useful for ambient recording but prone to background interference. Cardioid patterns exhibit heart-shaped sensitivity, prioritizing sound from the front while attenuating the sides and rear, providing effective noise rejection in directional scenarios like interviews. These patterns are tailored to the audible range of approximately 20 Hz to 20 kHz, ensuring comprehensive capture of speech and music without coloration or loss.
For digital integration, microphones interface with analog-to-digital converters (ADCs) to sample the signal, with 44.1 kHz serving as the standard rate for CD-quality audio to faithfully represent frequencies up to 22.05 kHz per the Nyquist theorem. Noise cancellation enhances signal quality through active techniques implemented via digital signal processing (DSP), where algorithms generate anti-phase waveforms to suppress ambient sounds in real time. Specialized types include lavalier microphones, which are small clip-on units for hands-free operation in presentations and broadcasting; shotgun microphones with interference tubes for hyper-cardioid focus, used in film and field recording to isolate distant sources; and microphone arrays employing beamforming—such as in the Amazon Echo introduced in 2014—to spatially filter and enhance voice signals from multiple elements. In voice-input applications, microphones preprocess audio for speech-to-text systems by providing clean input signals that improve recognition accuracy through noise reduction and feature extraction. They are also fundamental in music recording, where microphone choice affects tonal fidelity and character. As of 2025, trends emphasize AI-driven noise suppression, leveraging models like denoising autoencoders integrated with DSP pipelines for adaptive filtering, particularly in remote-work setups post-pandemic to mitigate echo and background noise in virtual meetings.
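The anti-phase principle behind active noise cancellation can be illustrated numerically: summing a sampled tone with a copy shifted by half a cycle (π radians) drives the residual toward zero. A toy sketch at the 44.1 kHz rate mentioned above, using a hypothetical 1 kHz "noise" tone:

```python
import math

SAMPLE_RATE = 44_100   # CD-quality rate: captures content up to 22.05 kHz (Nyquist)
TONE_HZ = 1_000        # hypothetical 1 kHz ambient tone standing in for noise

def sine(freq, n_samples, rate, phase=0.0):
    """Sample a sine wave at the given rate."""
    return [math.sin(2 * math.pi * freq * i / rate + phase) for i in range(n_samples)]

noise = sine(TONE_HZ, 441, SAMPLE_RATE)                   # 10 ms of the tone
anti  = sine(TONE_HZ, 441, SAMPLE_RATE, phase=math.pi)    # inverted (anti-phase) copy

residual = [a + b for a, b in zip(noise, anti)]
peak = max(abs(s) for s in residual)
print(f"peak residual after cancellation: {peak:.2e}")    # effectively zero
```

Real DSP pipelines must estimate the ambient signal adaptively and cope with latency and broadband content, which is why the production systems described above rely on adaptive filters or learned models rather than a fixed inverted copy.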

Digital Voice Recorders

Digital voice recorders represent standalone devices designed for capturing, storing, and managing audio input, transitioning from the analog tape-based systems prevalent in the mid-20th century to fully digital formats in the 1980s. This shift began with Sony's pioneering pulse-code modulation (PCM) technology, which digitized audio signals for improved fidelity and reliability over analog methods. In 1974, Sony developed the X-12DTC, its first PCM digital audio recorder using 2-inch tape and a fixed-head system, primarily for internal testing and demonstration at audio fairs. By 1977, the commercial PCM-1 processor allowed digital audio recording via consumer VCRs, marking the initial consumer accessibility of digital capture. The evolution continued in 1987 with Sony's introduction of Digital Audio Tape (DAT), a format achieving CD-equivalent sound quality through helical-scan recording on compact cassettes. In 1992, Sony extended DAT principles to voice-specific applications with the NT-1 digital microcassette recorder, which used postage-stamp-sized tapes for portable audio storage and playback. Contemporary digital voice recorders incorporate integrated hardware tailored for efficient audio input and retention, including built-in omnidirectional or stereo microphones for direct capture, non-volatile internal memory or expandable SD/MicroSD card slots supporting capacities up to 128 GB for thousands of hours of recordings, and power sources such as rechargeable lithium-ion batteries or AAA cells offering 17 to 110 hours of continuous recording depending on format and settings. Audio is typically encoded in uncompressed linear PCM (WAV) for fidelity or compressed MP3 for extended storage, with bit rates adjustable from 192 kbps to 3072 kbps to balance quality and duration. These devices often weigh under 100 grams for portability, featuring durable plastic or metal casings resistant to everyday handling.
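The trade-off between bit rate and recording duration is simple arithmetic: capacity in bits divided by bit rate gives seconds of audio. A sketch using the 128 GB and 192–3072 kbps figures cited above (assuming decimal gigabytes, as storage vendors quote them):

```python
def recording_hours(capacity_gb: float, bitrate_kbps: int) -> float:
    """Hours of audio that fit in the given storage at a constant bit rate."""
    capacity_bits = capacity_gb * 8 * 1_000_000_000   # decimal GB -> bits
    seconds = capacity_bits / (bitrate_kbps * 1_000)
    return seconds / 3600

# A 128 GB card at the two extremes of the cited bit-rate range:
print(f"{recording_hours(128, 192):,.0f} h at 192 kbps (compressed)")
print(f"{recording_hours(128, 3072):,.0f} h at 3072 kbps (high-resolution PCM)")
```

This is why low-bitrate modes advertise hundreds of hours per card while uncompressed high-resolution capture fits only a few days' worth; real capacities are slightly lower once file-system overhead is subtracted.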
Key features enhance usability for spontaneous and extended recording sessions, such as one-button operation to initiate capture instantly without menu navigation, voice-operated recording (VOR) that automatically starts upon detecting sound above a set threshold and pauses during silence to conserve space, and timestamping that embeds date and time into files for easy organization and reference. Basic onboard editing capabilities include trimming excess audio segments, adjusting playback speed for review, and A-B repeat functions for focused listening, often accessible via intuitive button interfaces or companion software. Noise-reduction algorithms further refine input by suppressing background noise, ensuring clearer voice capture during use. Digital voice recorders are categorized into portable handheld models suited for personal or mobile applications and professional variants for demanding environments. Handheld examples from the 2000s, like Sony's ICD-UX series, emphasize compactness with capacities such as 2 GB of internal storage in the ICD-UX200 model, built-in USB connectivity, and integrated microphones for everyday dictation or notes, as seen in the ICD-UX200's support for up to 535 hours in low-bitrate modes. Professional models, such as the Zoom H5 introduced in 2014, offer advanced inputs including XLR/TRS combo jacks for external microphones, four-track simultaneous recording at up to 24-bit/96 kHz resolution, and interchangeable capsule systems for versatile X/Y or mid-side configurations, catering to field-recording needs. Smartphone attachments, like clip-on microphone modules with integrated mics and USB-C or Lightning interfaces, extend functionality for hybrid mobile recording without dedicated hardware. The typical workflow involves direct audio capture on the device, followed by seamless file transfer to computers or cloud services via USB 2.0 ports acting as mass-storage drives or wireless pairing for quick exports in MP3/WAV formats.
Post-transfer, recordings integrate with AI-driven transcription platforms; for instance, Otter.ai has enabled uploading and automated conversion of audio files to searchable text since its launch in 2018, generating summaries and speaker identification for efficient review. This process supports archival in digital libraries or editing in audio software, streamlining the path from input to output. In modern applications, digital voice recorders facilitate journalism by enabling accurate capture of interviews without note-taking distractions, preserving nuances for verbatim transcription and ethical verification. They are also widely used in academic settings to record lectures, allowing students to revisit content for study or accessibility purposes, with features like VOR minimizing file bloat from pauses.

Sensor and Environmental Input Devices

Motion and Position Sensors

Motion and position sensors detect physical movement, orientation, and location to enable user input in computing systems, translating real-world dynamics into digital signals. These devices are essential for applications requiring spatial awareness, such as navigation, gaming, and human-computer interaction, by measuring linear acceleration, angular velocity, or absolute positioning. Accelerometers and gyroscopes form the core of many motion sensors, utilizing microelectromechanical systems (MEMS) technology to provide precise 3-axis measurements. Accelerometers quantify linear acceleration along the x, y, and z axes in units of g-forces (where 1 g ≈ 9.81 m/s²), detecting changes in velocity due to gravity or motion. Gyroscopes measure angular rotation rates in degrees per second across the same axes, enabling orientation tracking. The integration of an accelerometer in Apple's original iPhone in 2007 marked a pivotal advancement, allowing tilt-based interfaces like screen auto-rotation and motion gaming. Gyroscopes were added in the iPhone 4 in 2010. Inertial measurement units (IMUs) combine accelerometers and gyroscopes with magnetometers for comprehensive 6-degree-of-freedom (6DOF) tracking, fusing data via algorithms like the Kalman filter to estimate position and reduce noise. Global Positioning System (GPS) receivers complement IMUs by providing absolute location with typical accuracy of 5 meters under open-sky conditions, using satellite signals for trilateration. For indoor or GPS-denied environments, dead reckoning techniques in IMUs predict position by integrating acceleration and velocity over time, though they accumulate errors without periodic corrections. Notable examples include the Wii Remote, released in 2006, which paired a 3-axis accelerometer with infrared (IR) camera tracking for pointer-based motion input in games. In virtual reality, the Oculus Rift's 2016 Constellation system employed external IR sensors and internal IMUs for 6DOF head and controller tracking, achieving sub-millimeter precision.
Optical sensors, used in modern optical computer mice since the late 1990s, detect surface motion by imaging sequential frames and computing displacement, supporting cursor control at resolutions up to 20,000 DPI. These sensors enable diverse applications, including gesture-based control. In fitness tracking, devices like Fitbit's original tracker, launched in 2009, used accelerometers to monitor steps and activity via step-counting algorithms. Automotive systems incorporate steering-angle sensors and yaw-rate gyroscopes for input to advanced driver-assistance systems (ADAS), measuring yaw rates for stability control. Calibration is crucial to maintain accuracy, involving drift correction through periodic resets and sensitivity thresholds to filter noise from environmental vibrations. As of 2025, edge-AI advancements allow real-time processing on-device, using neural networks to fuse sensor data with minimal latency, enhancing applications in wearables and robotics.
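Sensor fusion of the kind described above is often introduced with a complementary filter, a lightweight alternative to a full Kalman filter: the gyroscope's integrated rate gives a smooth but drifting angle, while the accelerometer's gravity-derived tilt is noisy but drift-free, and a weighted blend keeps the best of both. A minimal sketch with hypothetical readings:

```python
import math

def complementary_filter(accel_samples, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer tilt (noisy, drift-free) with integrated gyro rate
    (smooth, but drifting) into a single pitch estimate in degrees."""
    pitch = 0.0
    estimates = []
    for (ax, ay, az), rate in zip(accel_samples, gyro_rates):
        # Pitch implied by the gravity vector alone:
        accel_pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        # Blend: mostly trust the short-term gyro, slowly pull toward accel.
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * accel_pitch
        estimates.append(pitch)
    return estimates

# Hypothetical readings: device held level, gyro reporting zero rotation.
accel = [(0.0, 0.0, 1.0)] * 50   # gravity entirely on the z axis -> 0 deg pitch
gyro = [0.0] * 50                # deg/s
print(f"final pitch: {complementary_filter(accel, gyro, dt=0.01)[-1]:.2f} deg")
```

The `alpha` weight sets the crossover between the two sources; production IMUs typically replace this blend with a Kalman or Madgwick filter and add magnetometer data for full 6DOF orientation.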

Biometric and Environmental Sensors

Biometric sensors serve as input devices by capturing unique physiological or behavioral characteristics for identity verification, enabling secure authentication without traditional passwords. Fingerprint scanners, one of the most common biometric inputs, utilize capacitive or optical technologies to detect ridge patterns on the skin. Capacitive scanners measure variations in electrical capacitance between ridges and valleys, while optical scanners employ light reflection to create a digital image of the fingerprint. Apple's Touch ID, introduced in the iPhone 5s in 2013, exemplifies capacitive fingerprint sensing integrated into consumer devices for unlocking and payments. Iris and retina scanners rely on infrared (IR) imaging to capture the intricate patterns of the eye's colored ring or vascular structure, respectively, offering high accuracy for access control in secure environments. Facial recognition systems, such as Apple's Face ID launched in the iPhone X in 2017, use 3D depth mapping via structured light or IR dot projection to distinguish faces from photos, enhancing security in mobile and surveillance applications. Environmental sensors provide input on ambient conditions, allowing devices to adapt responses to external factors like temperature, light, or proximity. Temperature sensors often employ thermistors—semiconductor devices whose resistance changes with temperature—for precise measurements, achieving accuracies around ±0.5°C in many integrated systems. Humidity sensors typically use capacitive or resistive elements whose electrical properties change with moisture absorption. Light sensors, typically photodiodes, detect ambient illumination levels to adjust display brightness or trigger adaptive features in electronics. Proximity sensors detect nearby objects without contact, using either IR reflection for short-range detection or ultrasonic waves for broader coverage, commonly found in smartphones to disable the screen during calls.
These sensors input data to inform device behavior, such as conserving power or preventing unintended inputs. Biometric processing involves creating and comparing digital templates derived from sensor data, with algorithms aligning input samples against stored references to verify identity. False acceptance rates (FAR) in modern systems are typically below 0.1%, balancing security and convenience through matching thresholds that minimize unauthorized access. Environmental data processing often incorporates sensor-fusion techniques, combining inputs from multiple sensors—like proximity and light—to provide contextual awareness; for instance, smartphones use this to detect placement and adjust screen state accordingly. Such fusion enhances reliability by compensating for individual sensor limitations, enabling features like automatic brightness or orientation adjustments informed by ambient conditions. These sensors find applications in security logins, where biometrics replace PINs for faster access; smart home systems, such as the Nest Learning Thermostat released in 2011, which uses temperature and humidity inputs for energy-efficient climate control; and health monitoring via wearables, including the Apple Watch's ECG feature introduced in 2018 for detecting irregular heart rhythms through bioelectric signals. Privacy concerns arise from the sensitive nature of biometric data, prompting standards like FIDO2, finalized in 2019, which supports encrypted, decentralized authentication to prevent template theft. By 2025, expansions under the EU AI Act and GDPR reinforce protections for biometric processing, prohibiting high-risk uses like real-time remote identification in public spaces and mandating impact assessments for data handling.
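The template-matching step described above reduces, in essence, to comparing a fresh feature vector against an enrolled one and accepting only above a similarity threshold. A toy sketch with hypothetical four-element feature vectors (real systems use high-dimensional embeddings and calibrated thresholds):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(template, sample, threshold=0.95):
    """Accept only if the sample is close enough to the enrolled template.
    Raising the threshold lowers the false acceptance rate (FAR) but
    raises the false rejection rate (FRR)."""
    return cosine_similarity(template, sample) >= threshold

enrolled = [0.9, 0.1, 0.4, 0.8]      # hypothetical enrolled feature vector
genuine = [0.88, 0.12, 0.41, 0.79]   # same user, slight capture variation
impostor = [0.1, 0.9, 0.7, 0.2]      # different user

print(verify(enrolled, genuine))     # True
print(verify(enrolled, impostor))    # False
```

The threshold is precisely the knob behind the sub-0.1% FAR figures quoted above: vendors pick an operating point on the FAR/FRR curve rather than a universal constant.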

Advanced and Specialized Devices

High-Degree-of-Freedom Devices

High-degree-of-freedom (DOF) input devices enable users to manipulate objects or interfaces along multiple independent axes of motion, typically six or more, encompassing three translational (x, y, z position) and three rotational (pitch, yaw, roll orientation) degrees in three-dimensional space. This contrasts sharply with traditional two-DOF devices like computer mice, which are limited to planar x-y movement for cursor control, restricting their utility in immersive or complex modeling environments. Such high-DOF devices facilitate intuitive manipulation of virtual or physical systems by mimicking natural movements, enhancing precision in tasks requiring spatial awareness. Prominent examples include 3D mice, such as the SpaceMouse series, first introduced in 1993 as an affordable six-DOF puck-shaped controller using a six-axis force/torque sensor for simultaneous position and rotation input. Haptic controllers extend this by incorporating force-feedback mechanisms through actuated motors that simulate tactile responses, allowing users to "feel" virtual interactions such as grasping or resistance. These devices often feature ergonomic designs with programmable buttons for workflow integration, supporting seamless navigation without disrupting traditional keyboard-mouse setups. Technologically, high-DOF devices rely on electromagnetic or optical tracking systems to capture positional data; electromagnetic methods use magnetic fields generated by base stations to detect sensor coils on the device, offering occlusion-free tracking up to several meters, while optical systems employ cameras and markers for high-precision line-of-sight measurements. Orientation is commonly represented using quaternions, four-dimensional vectors that avoid singularities like gimbal lock in Euler-angle representations, enabling smooth interpolation and computational efficiency in real-time applications. These tracking modalities achieve sub-millimeter accuracy in controlled settings, though electromagnetic systems may suffer from distortion in metallic environments.
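The quaternion representation mentioned above can be demonstrated in a few lines: a unit quaternion built from an axis and angle rotates a vector via q · v · q*, with no orientation at which the representation degenerates. A minimal sketch:

```python
import math

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    half = math.radians(angle_deg) / 2
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_multiply(q, r):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate vector v by quaternion q via q * v * q_conjugate."""
    qv = (0.0, *v)
    qc = (q[0], -q[1], -q[2], -q[3])
    return quat_multiply(quat_multiply(q, qv), qc)[1:]

# A 90-degree yaw about the z axis sends x-forward to y-forward.
yaw90 = quat_from_axis_angle((0, 0, 1), 90)
print([round(c, 6) for c in rotate(yaw90, (1, 0, 0))])   # approximately [0, 1, 0]
```

Because composition is just quaternion multiplication and interpolation (slerp) stays on the unit sphere, trackers can chain and smooth orientations cheaply, which is exactly why the gimbal-lock-prone Euler-angle form is avoided in 6DOF pipelines.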
In applications, high-DOF devices excel in computer-aided design (CAD) modeling, where the SpaceMouse integrates directly with software such as SolidWorks to enable intuitive pan, zoom, and rotate operations on 3D assemblies, improving efficiency compared to keyboard shortcuts. For virtual and augmented reality (VR/AR) manipulation, devices like the Leap Motion Controller, launched in 2013, provide optical tracking of the hand and individual fingers, supporting gesture-based interactions such as object grasping or precise placement in immersive simulations. Recent advancements as of 2025 include full-body exoskeletons for high-DOF teleoperation in robotics, such as the XoMotion demonstrated at CES 2025 and the HOMIE hardware combining arm exosuits with glove interfaces, which enable remote control of manipulators with haptic feedback for tasks like surgical assistance or industrial assembly. These systems enable loco-manipulation in unstructured environments, with data-driven AI enhancing motion prediction for beyond-human-scale operations.

Composite and Multimodal Devices

Composite and multimodal input devices integrate multiple sensory channels to facilitate more intuitive and versatile human-computer interaction (HCI). These systems combine distinct input modalities—such as touch, pen, motion, audio, and visual cues—either simultaneously or interchangeably, allowing users to leverage complementary strengths for enhanced expressiveness and efficiency. Unlike single-modality devices, multimodal setups process fused data streams to interpret complex user intents, reducing errors and supporting natural communication patterns akin to human behavior. Early examples include graphics tablets, which pioneered the blending of positional tracking with pressure sensitivity. In 1984, Wacom introduced the WT Series, the world's first cordless pen tablet, featuring electromagnetic resonance technology for stylus position detection and variable pressure sensitivity, enabling artists to simulate traditional drawing tools digitally. Gamepads further exemplify composite design through the integration of discrete buttons, analog joysticks, and triggers; the Xbox Controller S, released in 2001, combined these elements with force feedback for precise navigation and action in gaming, marking a shift toward ergonomic control. Sensor-fusion techniques enhance such integration by algorithmically combining raw data from disparate sensors—e.g., accelerometers for linear acceleration and gyroscopes for rotational tracking—to yield more accurate orientation estimates. The Nintendo Switch's Joy-Con controllers, launched in 2017, demonstrate this by fusing inertial measurement unit (IMU) data with button presses, supporting hybrid gameplay modes like motion-aiming in shooters. Contemporary devices like smartphones embody comprehensive multimodality within a compact form factor, aggregating capacitive touchscreens for gestural input, microphones for voice recognition, cameras for visual capture, and embedded sensors (e.g., accelerometers, proximity, and ambient light) for contextual awareness.
This fusion enables seamless transitions between modalities, such as tilting the device for navigation while issuing voice commands. Similarly, large-scale interactive surfaces like the Microsoft Surface table, unveiled in 2007, supported up to 52 simultaneous touch points and object tracking via infrared cameras, fostering collaborative interactions in environments like retail or hospitality. The advantages of composite and multimodal devices include minimized hardware proliferation, improved task efficiency through redundant input paths (e.g., voice fallback for imprecise gestures), and greater accessibility for diverse users, such as those with motor impairments benefiting from combined haptic and auditory cues. However, implementation challenges persist, particularly in synchronizing asynchronous data streams to avoid perceptual delays—latency below 100 ms is often critical for perceived responsiveness—and managing computational overhead from fusion algorithms. In the 2020s, AI has elevated multimodal inputs through large language models (LLMs) capable of orchestrating text, voice, and image processing in unified frameworks, as seen in systems like GPT-4o that interpret combined queries for responsive virtual assistance. These AI-driven approaches further mitigate synchronization issues via predictive fusion, enabling applications in immersive environments where gestural, vocal, and visual inputs converge fluidly.
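One common way to handle the synchronization problem above is to group events from different modalities whose timestamps fall within a short window (the 100 ms figure is a typical budget) and treat each group as a single intent. A simplified sketch, with hypothetical event names and payloads:

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str      # "touch", "voice", "gesture", ...
    payload: str
    timestamp_ms: int

SYNC_WINDOW_MS = 100   # events farther apart than this are separate intents

def fuse(events):
    """Group time-adjacent events from different modalities, approximating
    multimodal intent fusion with a sliding timestamp window."""
    groups = []
    for ev in sorted(events, key=lambda e: e.timestamp_ms):
        if groups and ev.timestamp_ms - groups[-1][-1].timestamp_ms <= SYNC_WINDOW_MS:
            groups[-1].append(ev)    # close enough: same intent
        else:
            groups.append([ev])      # too far apart: new intent
    return groups

events = [
    InputEvent("touch", "tap:map", 1_000),
    InputEvent("voice", "zoom in here", 1_050),   # 50 ms later: same intent
    InputEvent("touch", "tap:exit", 2_000),       # separate action
]
for group in fuse(events):
    print([e.payload for e in group])
```

Production systems refine this with per-modality latency compensation and, as noted above, predictive models that anticipate the lagging modality rather than waiting for it.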

Legacy and Niche Input Methods

Punched Media Systems

Punched media systems encompass early mechanical input methods that encoded data through perforations in paper cards or tapes, allowing machines to interpret information via physical or electrical detection. These systems emerged in the late 19th century as a means to automate data entry and processing, particularly for large-scale tabulations where manual methods were inefficient. Herman Hollerith developed punched cards specifically for the 1890 U.S. Census, using cards measuring 3¼ by 7⅜ inches (83 by 188 mm) with 24 columns and 12 rows of round holes to represent categorical data such as age, sex, and other demographic attributes. The perforations were created manually via a pantograph punch, and the cards were read by tabulating machines employing spring-loaded pins or electrical brushes that completed circuits through the holes, enabling rapid counting without human intervention; this innovation reduced census processing time from an estimated decade to about 2.5 years. Subsequent advancements standardized punch card formats, with IBM adopting 80-column rectangular-hole cards by the late 1920s for broader alphanumeric encoding in business and scientific applications. Input was typically performed using manual keypunch devices, such as the IBM 026 introduced in 1949, which featured a typewriter-like keyboard to punch holes corresponding to keystrokes while printing the characters along the top of the card for verification. Readers mechanically fed stacks of cards past sensing brushes dipped in mercury pools or conductive rollers, detecting hole positions to generate electrical signals for computer input; for example, early tabulators processed cards at rates of 150 to 1,000 per minute depending on the model. These systems were prone to errors from physical damage, such as tears in the card stock or misfeeds during reading, which could misalign holes and corrupt interpretation.
Punched paper tape, an alternative medium, traced its roots to 19th-century telegraphy and was adapted for data input in the early 20th century with teletype machines using 5-channel formats to encode characters via the Baudot code, where each row of holes across the tape's width represented one of 32 symbols. Variants included 7- or 8-channel tapes for expanded character sets like ASCII in later decades, with chadless designs that cut slits rather than removing circular chads to minimize debris and jamming. Tape punches operated similarly to keypunches but produced continuous rolls, while readers used photoelectric or mechanical sensors to scan perforations at speeds reaching up to 1,000 characters per second in high-performance systems integrated with early computers. In applications, punched media facilitated foundational data processing for censuses, payrolls, and inventory, while also serving as program input for computers; decks of punch cards encoded source code in the 1950s, allowing batch submission to mainframes for compilation and execution. By the 1970s and 1980s, these systems declined in favor of direct keyboard entry and magnetic storage, which offered faster, more reliable, and editable input without mechanical wear. Today, punched media persist in archival contexts, preserved in museums to demonstrate early input techniques.
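The 5-channel encoding above is easy to model: each tape row is five hole positions, a punched hole reads as a 1 bit, and 2^5 = 32 row values map to symbols (doubled by letters/figures shift codes). A sketch of the mechanism, using a deliberately hypothetical symbol table rather than the real ITA2 assignments:

```python
def row_value(holes):
    """Convert a row of 5 hole flags into its numeric value
    (first hole position = least significant bit)."""
    return sum(bit << i for i, bit in enumerate(holes))

# Illustrative placeholder mapping only -- NOT the authoritative ITA2 table,
# which also interleaves letters/figures shift codes across the 32 values.
DEMO_TABLE = {0b00001: "H", 0b00010: "I", 0b00100: " "}

def read_tape(rows):
    """Interpret each punched row as one character, like a tape reader would."""
    return "".join(DEMO_TABLE.get(row_value(r), "?") for r in rows)

tape = [
    (1, 0, 0, 0, 0),   # row value 0b00001 -> "H"
    (0, 1, 0, 0, 0),   # row value 0b00010 -> "I"
]
print(read_tape(tape))   # HI
```

A photoelectric reader performs exactly this lookup in hardware, with the shift state deciding which half of the 64-symbol repertoire the next rows select.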

Other Obsolete or Specialized Techniques

Switch panels, consisting of toggle switches and patch bays, served as primary input mechanisms in early electronic computers, allowing operators to manually configure wiring and set binary states for programming and control. For instance, the UNIVAC I, delivered in 1951, featured a control console with numerous toggle switches for entering data and instructions directly, supplemented by plugboards for interconnecting components. These systems required physical manipulation to load programs or alter machine states, often in conjunction with punched cards for bulk data entry, though the switches enabled real-time adjustments during operation. Light pens emerged as an innovative interaction tool for cathode-ray tube (CRT) displays in the mid-20th century, detecting the electron beam's position to enable direct screen manipulation. In Ivan Sutherland's 1963 Sketchpad system, developed on the TX-2 computer, the light pen allowed users to draw lines, select objects, and perform geometric transformations interactively, pioneering graphical user interfaces. By sensing the phosphor's glow from the scan, the device translated beam timing into coordinates, facilitating precise pointing and drawing without mechanical intermediaries, though it was limited to vector displays and became obsolete with raster screens. Among obsolete pointing devices, trackballs—early variants providing an inverted mouse-like control for cursor movement in constrained environments, such as arcade cabinets and industrial terminals in the 1970s and 1980s—exemplified by Atari's Trak-Ball controller used in games like Missile Command from 1980, rotated a ball to manipulate on-screen elements, offering durability over joysticks but suffering from precision issues due to mechanical wear. Foot pedals function as accessibility aids by mapping foot pressure to mouse or keyboard actions, enabling users with upper-body limitations to perform clicks or keystrokes; examples date to accessibility systems of the late 20th century.
In niche modern applications, brain-computer interfaces (BCIs) like Emotiv's EPOC EEG headset, introduced in 2010, capture brain signals via non-invasive electrodes to enable thought-based control of devices, translating neural patterns into commands for gaming or prosthetics as of 2025. These systems process electroencephalography (EEG) data to detect intentions, though accuracy remains challenged by signal noise; by 2025, advanced models like Emotiv's EPOC X provide 14-16 channel EEG with 16-bit resolution for improved applications in research and assistive control. Magnetic styluses, employing electromagnetic resonance (EMR) technology as in Wacom tablets since the 1990s, allow battery-free input by generating position data through coil interactions with the tablet's grid, supporting pressure-sensitive drawing in professional design workflows. Specialized domain inputs include MIDI keyboards, standardized in 1983 to transmit musical performance data as digital events—such as note velocity and duration—between synthesizers and computers, revolutionizing music production by unifying incompatible instruments. The protocol, developed by Sequential Circuits and others, uses a 5-pin DIN connector for low-latency serial transmission at 31.25 kbps, enabling real-time control in studios. Industrial jog pendants provide handheld control for machinery like CNC routers, featuring jog wheels and buttons for precise axis movement and tool adjustments, often wireless for operator mobility around large equipment. QR code scanners, invented in 1994 by Denso Wave for automotive part tracking, input encoded data via optical recognition, storing up to 7,089 numeric characters per code for rapid inventory and logistics applications.
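The MIDI events mentioned above are compact three-byte messages: a status byte whose high nibble names the message type and low nibble the channel, followed by two 7-bit data bytes. A minimal sketch constructing Note On/Off messages:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """3-byte MIDI Note On: status 0x9n (n = channel 0-15), then two
    7-bit data bytes for note number and velocity."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Note Off uses status 0x8n; release velocity 0 is a common convention."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) at velocity 100 on channel 0:
print(note_on(0, 60, 100).hex())   # 903c64
# At 31.25 kbps with 10 bits per byte on the wire, a 3-byte message
# occupies roughly 3 * 10 / 31_250 seconds, i.e. just under 1 ms.
```

That sub-millisecond wire time is what the "low-latency" claim rests on, and it is why the original serial transport remained viable for real-time performance for decades.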

References

  1. [1]
    [PDF] INPUT/OUTPUT DEVICES AND INTERACTION TECHNIQUES
    Pen input, via sketching, can be used to define 3D objects (Zeleznik ... Absolute input device An input device that reports its actual position, rather than ...
  2. [2]
    The design space of input devices
    An input device is part of the means used to engage in dialogue with a computer or other machine.Missing: definition | Show results with:definition
  3. [3]
    Input devices - Ada Computer Science
    Input devices provide data to a computer. Common examples include keyboards, mice, microphones, and touchscreens.
  4. [4]
    Some Milestones in Computer Input Devices: An Informal Timeline
    In the early 1950s, Robert Everett developed a light gun to read the position of a dot on the screen of the Whirlwind computer for diagnostic purposes.
  5. [5]
    [PDF] Input and Output - MSU Denver
    • Any data or instructions entered into a computer. • Input devices translate data into a form that the system. unit can process. • Some hardware input devices ...
  6. [6]
    [PDF] Best Practices for Open Sound Control - OpenSoundControl.org
    The transduction of a physical gesture into a digital representation requires measurement of the temporal trajectory of all relevant dimen- sions of the ...
  7. [7]
    [PDF] An Introduction to 3D User Interface Design
    Devices that combine both discrete and continuous events to form single, more flexible devices are called combination or hybrid input devices. Examples of ...
  8. [8]
    [PDF] ESE 150 – Lab 01: Sampling and Quantizing Audio Signals
    A device known as an analog-to-digital converter (ADC or A2D) is the component needed to digitize our signal. An. A2D will have a sampling rate: meaning how the ...
  9. [9]
    [PDF] Motion Tracking: No Silver Bullet, but a Respectable Arsenal
    As a result, fooling the human senses can prove exceed- ingly challenging, requiring high spatial accuracy and resolution, low latency, and high update rates.<|control11|><|separator|>
  10. [10]
    [PDF] Chapter 9 - FAA Human Factors
    Jan 9, 2019 · This section provides rules for keyboards, function keys, pointing devices, and some alternative input devices. The advantages and.
  11. [11]
    [PDF] INPUT DEVICES AND TECHNIQUES Robert J.K. Jacob, Tufts ...
    The principal means of human output or computer input today is through the user's hands, for example keyboards, mice, gloves, and 3D trackers; these are ...
  12. [12]
    Strategic Directions in Human Computer Interaction
    Nov 13, 1996 · It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer ...
  13. [13]
    [PDF] Interaction Styles and Input/Output Devices
    Input is a neglected field relative to output, particularly considering the great strides made in computer graphics, but for that reason it is also an area that ...
  14. [14]
    Digital Analogies:The Keyboard as Field of Musical Play
    Apr 1, 2015 · As Raykoff notes, Baudot introduced it as the input device for his multiplexed telegraph system, but it was denigrated as unwieldy and ...Missing: precursors | Show results with:precursors
  15. [15]
    [PDF] Introduction to Computer Technology, Network Economics, and ...
    Toward the end of the nineteenth century, a U.S. Census Bureau agent named Herman Hollerith developed a punched-card tabulating machine to automate the census.Missing: precursors | Show results with:precursors
  16. [16]
    The Modern History of Computing
    Dec 18, 2000 · ENIAC was not a stored-program computer, and setting it up for a new job involved reconfiguring the machine by means of plugs and switches. For ...
  17. [17]
    [PDF] A History of Computers
    inventor called Herman Hollerith, whose idea it was to use Jacquard's punched cards to ... Herman Hollerith's Tabulating Machines http://www.maxmon.com/1890ad.htm ...Missing: precursors | Show results with:precursors
  18. [18]
    The Typewriter – “that almost sentient mechanism” | Inside Adams
    Mar 19, 2015 · ... 1873 under the name Sholes and Glidden Type-Writer. This ... popularized the QWERTY layout we are still using on our computer keyboards.
  19. [19]
    The Mother of All Demos | Lemelson
    Dec 10, 2018 · The prototype was invented by Douglas Engelbart and Bill English in 1964 at the Stanford Research Institute (SRI), and is on loan to the museum ...
  20. [20]
    [PDF] A Brief History of Human-Computer Interaction Technology
    The mouse was then made famous as a practical input device by Xerox. PARC in the 1970s. It first appeared commercially as part of the Xerox Star (1981), the.
  21. [21]
    The Magnavox Odyssey predicted the future of video games
    Sep 19, 2022 · It was notably different from what Atari and arcade games would adopt: joysticks and one or a few buttons. Instead of a joystick, the ...
  22. [22]
    Steve Kirsch - IEEE Spectrum
    Aug 1, 2000 · From the UCLA computer room, Kirsch went on to invent the optical mouse, patent the method of tracking advertising impressions on the Internet ...
  23. [23]
    [PDF] Experience of Teaching Advanced Touch Sensing Technologies
    Oct 16, 2014 · In 1983, Hewlett-Packard introduced the HP-150, a home computer with touch screen technology. It had a built-in grid of infrared beams across ...
  24. [24]
    [PDF] Continuous connectivity, handheld computers, and mobile spatial ...
    Nov 19, 2013 · Here, I situate the emergence of continuous connectivity in the marketing of handheld computers in the late-1990s, to historicize the importance ...
  25. [25]
    [PDF] an abstract of the thesis of - Oregon State University
    May 30, 2012 · Microsoft released the Kinect for the Xbox 360 in 2010, a camera based system that tracks players' movement to allow for complex and natural ...Missing: assistants haptic
  26. [26]
    Serious About Siri | Brandeis Magazine
    Twenty years ago, at the nonprofit research institute SRI International, Cheyer developed the first prototype for Siri. In 2003, with funding from the U.S. ...
  27. [27]
    [PDF] Haptic Rendering: Introductory Concepts - Stanford University
    This article surveys current haptic systems and discusses some basic haptic-rendering algorithms. Page 2. device and perceives audiovisual feedback from audio.Missing: input | Show results with:input
  28. [28]
    Exclusive Q&A: Neuralink's Quest to Beat the Speed of Type
    A Q&A with Joseph O'Doherty of Neuralink about the company's attempt to break the record for brain-machine interface performance.
  29. [29]
    Membrane vs. Mechanical Keyboards: What's the Difference?
    Jan 1, 2022 · Membrane keyboards use the electrical contact between the membrane layers (that rubber-like sheet section) and PCB, while mechanical boards have small pins.
  30. [30]
    PSA: You Can Replace the Keys on Your Mechanical Keyboard
    Aug 26, 2022 · Instead of a mechanical switch, a membrane keyboard has a little ... keyboard right down to the style of the little scissor lifts under the keys.
  31. [31]
    [PDF] HID Usage Tables - Universal Serial Bus (USB)
    Oct 12, 2020 · 1.1rc1. October 13, 1998. Incorporate Keyboard Usage Table from the 1.0 HID Specification and HID Review. Requests 16, 34, 38, 40, 41, 42, 43, ...
  32. [32]
    The QWERTY Keyboard Will Never Die. Where Did the 150-Year ...
    Feb 25, 2025 · The invention's true origin story has long been the subject of debate. Some argue it was created to prevent typewriter jams, while others insist it's linked to ...
  33. [33]
    May 12, 1936: Dvorak Patents Keyboard - WIRED
    May 12, 2010 · Because touch typing had become widespread, Dvorak concluded that a new, more efficient layout needed to be devised to serve people with high ...
  34. [34]
    Windows keyboard layouts - Globalization - Microsoft Learn
    Sep 24, 2025 · Choose a keyboard below to view its layouts.
  35. [35]
    The 6 Best Gaming Keyboards of 2025 - RTINGS.com
    Jun 25, 2025 · The best keyboard for gaming we've tested at a budget price point is the Corsair K70 RGB TKL. Along with outstanding gaming performance, it ...
  36. [36]
    The 5 Best TKL Keyboards of 2025 - RTINGS.com
    Mar 18, 2025 · TKL or TenKeyLess keyboards are full-size keyboards without a numpad. This makes them a particularly popular choice for gamers as they free ...
  37. [37]
    What Are Braille Keyboards? 3 Top Braille Keyboards - Allyant
    Nov 30, 2023 · Braille keyboards are special keyboards that allow visually impaired and blind people who rely on braille to communicate by typing as a ...
  38. [38]
    The Word-Gesture Keyboard - Communications of the ACM
    Sep 1, 2012 · ... handwriting recognition as a text input method. The original 1996 Palm Pilot that successfully launched the PDA (personal digital ...
  39. [39]
    Text Entry Systems: Mobility, Accessibility, Universality | Guide ...
    Twiddler typing: One-handed chording text entry for mobile phones. ... (1990). Text compression . Upper ... keyboard, predictive keyboard, and handwriting.
  40. [40]
    Gradient-based learning applied to document recognition
    This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task.
  41. [41]
    The Machine Intelligence Behind Gboard - Google Research
    May 24, 2017 · An intelligent keyboard needs to be able to account for these errors and predict the intended words rapidly and accurately. As such, we built a ...
  42. [42]
    A Chorded Keyboard for Sighted, Low Vision, and Blind Mobile Users
    Stenotype is a chorded method still in use by court reporters as expert stenotyping speed is much faster than expert Qwerty [33].
  43. [43]
    A Survey of Virtual Keyboards | Request PDF - ResearchGate
    Virtual keyboard and mouse systems are proposed as a new generation of HCI devices and paradigms. A virtual keyboard and mouse is known as a touch-typing device ...
  44. [44]
    Text input for motor-impaired people - ACM Digital Library
    ... Twiddler typing: one-handed chording text entry for mobile phones. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '04 ...
  45. [45]
    Douglas Engelbart: Computer visionary - Berkeley Engineering
    He went on to invent the computer mouse in 1964, with a prototype that consisted of a block of pine, a circuit board and two metal wheels.
  46. [46]
    encoder - How does a ball mouse know the direction?
    Jun 15, 2011 · The trick is how the two receivers are placed, namely in quadrature. · This means that the pulses of one receiver precede the pulses of the other ...
  47. [47]
    How Optical Mice Came To Dominate Input Devices - Tedium
    May 19, 2024 · One of those innovations was the optical mouse, which one of its engineers, Richard F. Lyon, developed in the early 1980s. Lyon, thankfully for ...
  48. [48]
    Meet The Inventor of the Mouse Wheel - Coding Horror
    May 16, 2007 · Matt Young was kind enough to forward me a link finally revealing who invented the mouse wheel: Microsoft's Eric Michelman.
  49. [49]
    What Is Mouse DPI and Why Does it Matter for Gaming? - IGN
    Jan 31, 2024 · A high DPI setting of up to 3600, or higher, is useful for ultra-quick, flick-and-fire moves and trick shots. This can also reduce consistency ...
  50. [50]
    A Brief History of the Personal Computer Trackball
    ... trackball as an alternative to a mouse so Logitech introduced its first trackball, the original Trackman, in 1989. Plus, it was quickly pretty obvious that ...
  51. [51]
    Trackball History: Canada's Earliest Gift to Computing - Tedium
    Nov 12, 2021 · But the real turning point came in 1989, when the Swiss accessories manufacturer Logitech created the ergonomically minded Trackman, which had ...
  52. [52]
    What is a TrackPoint (pointing stick)? | Definition from TechTarget
    Jun 2, 2023 · History of the TrackPoint. The TrackPoint was invented by IBM in 1992. It was originally designed for use in the ThinkPad line of laptops ...
  53. [53]
  54. [54]
    Quadrature Decoder - fpga4fun.com
    Quadrature signals are two signals generated with a 90 degrees phase difference. They are used in mechanical systems to determine movement (or rotation) of an ...
  55. [55]
    Quadrature Encoders - The Ultimate Guide
    A quadrature encoder is a type of incremental encoder used in many general automation applications where sensing the direction of movement is required.
  56. [56]
    Ergonomic interventions for preventing work‐related ...
    We found physical ergonomic interventions, such as using an arm support with a computer mouse based on neutral posture, may or may not prevent work‐related MSDs ...
  57. [57]
    About Us - Synaptics
    Synaptics launched the first notebook PC touchpad, revolutionizing laptop navigation and replacing the mechanical trackball with a sleek, capacitive interface.
  58. [58]
    How IT works: Trackpad technology and multi-touch support
    Jan 19, 2017 · Capacitive trackpads are more common in laptops nowadays and were first commercialized by Apple in 1994 with its PowerBook 500 series. Prior ...
  59. [59]
    Touchscreen Types, History & How They Work - Newhaven Display
    Apr 11, 2023 · Unlike resistive touchscreens, capacitive touchscreens don't rely on screen pressure to detect a touch event. When a user touches the screen ...
  60. [60]
    A Brief History Of Touchscreen Technology: From The IPhone To ...
    Jul 20, 2022 · While capacitive touchscreens were the first to be invented, resistive touchscreens surpassed them in initial years. Dr. G. Samuel Hurst ...
  61. [61]
    What are the advantages of Active ES (Electrostatic) pen ... - Wacom
    Apr 8, 2021 · ... Active ES pen achieves superb performance such as high pen pressure sensitivity and accuracy expressing intricate details. Wacom's Active ES ...
  62. [62]
    Getting Started with Windows Touch Gestures - Win32 apps
    Aug 23, 2019 · To use Windows touch gestures, set up a window, handle WM_GESTURE messages, and interpret them using GetGestureInfo.
  63. [63]
    Exploring Haptic Technology's Impact on User Experience
    Jul 15, 2024 · Haptic feedback is the technology that applies touch and force feedback to allow users to feel and experience the touch when they are using ...
  64. [64]
    Foldable Smartphones: New Devices, New Opportunities
    Jan 10, 2025 · Foldable smartphones include fold-out phones (horizontal, tablet-sized when unfolded) and flip phones (vertical, compact when folded). Tri-fold ...
  65. [65]
    Flatbed Scanner - an overview | ScienceDirect Topics
    6. The light source is typically a cold-cathode tube, which is moved over the document, and the reflected light is sensed by photosensitive dot-sized cells, ...
  66. [66]
    Optical Scanner Technology - CIS and CCD Sensors Explained
    Dec 13, 2022 · CIS was/is more affordable than CCD technology but, initially, there was a drop-off in the image quality, as CIS scanners lacked the precision ...
  67. [67]
    Comparing the depth of field of two types of flatbed scanner, a CIS ...
    The CCD design has at least 10X the depth of field of the CIS. Although dof is still quite modest it can be sufficient for a range of 3D subjects.
  68. [68]
  69. [69]
    [PDF] CCD or CIS: The Technology Decision - Image Access
    Our CIS scanners come very close to our CCD scanners with respect to color fidelity and gamut. One issue remains and is a fundamental difference between the ...
  70. [70]
    The Best Scanners We've Tested (November 2025) | PCMag
    Specs & Configurations. Flatbed. Maximum Optical Resolution 4800 pixels. Mechanical Resolution 4800 pixels. Automatic Document Feeder. Ethernet Interface.
  71. [71]
    The Best Scanners of 2025 | Tested & Rated - Tech Gear Lab
    Jun 3, 2025 · SPECIFICATIONS. Scanner Type, Flatbed. Paper Sizes, Max: 8.5" x 11.7". Optical Resolution, 6400 DPI. Simplex/Duplex, Simplex. Automatic Document ...
  72. [72]
    Scanner Resolution and Color Depth - Lifewire
    Aug 28, 2024 · The typical optical resolution in multifunction printers with scanning capabilities is 300 dpi, which more than meets the needs of most people.
  73. [73]
    Scanning Resolution Explained - The Scanner Shop
    Oct 23, 2024 · 24-bit colour depth can capture over 16 million colours, which is the standard for most home and office scanners. 36-bit colour depth can ...
  74. [74]
  75. [75]
    What is a TWAIN Driver? The only guide you need for 2025
    Mar 13, 2025 · TWAIN drivers are the most accessible protocol for document scanning devices in today's market. Nearly all scanners come with a TWAIN driver.
  76. [76]
    HP Enterprise MFP - Install and Configure HP Scan Twain
    TWAIN is an industry standard interface between a scan hardware and a software application. The HP Scan Twain is a free common desktop software that enables ...
  77. [77]
  78. [78]
    A Complete Guide to Types of Scanner – CZUR TECH
    Aug 7, 2025 · 1.1 Flatbed Scanner · 1.2 Portable Scanner · 1.3 Sheet-fed Scanner · 1.4 Photo Scanner · 1.5 Book/Overhead Scanner · 1.6 Drum Scanner.
  79. [79]
    What is laser 3D scanning? - Artec 3D
    Feb 11, 2025 · Triangulation-based laser scanners operate by emitting laser light onto an object and capturing reflected light with an onboard camera sensor.
  80. [80]
  81. [81]
    OCR Scanning Explained - Record Nations
    Aug 9, 2023 · OCR scanning enables companies and individuals to turn their paper documents into an editable and searchable digital version.
  82. [82]
    Scanning to the OCR Editor - abbyy
    You can open images from a scanner or camera in the OCR Editor, where you will be able to: Draw and edit recognition areas manually; Check recognized text ...
  83. [83]
    Scanners for Healthcare | Epson US
    Award-winning imaging solutions that capture data safely and accurately either at the front desk, back office or department in your healthcare organization.
  84. [84]
  85. [85]
    The Evolution of Document Scanning - Scanbot SDK
    Nov 10, 2023 · In the 1960s, scanning technology advanced with IBM's 805 Test Scoring Machine and Russell Kirsch's Drum Scanner, the first flatbed scanner. The ...
  86. [86]
    How USB Came to Be - IEEE Spectrum
    Feb 22, 2022 · Initially intended to simplify attaching electronic devices to a PC, USB became a very successful low-cost, high-speed interface for home and business use.
  87. [87]
    The Best Scanning and OCR Apps We've Tested for 2025 | PCMag
    Make a digital copy of your deeds and titles, save other important documents, and turn tax paperwork into PDFs with the best scanning apps we've tested.
  88. [88]
  89. [89]
    The best compact camera for 2025: top choices to take anywhere
    Oct 6, 2025 · Best premium – Fujifilm GFX100RF: Taking image quality to new heights, the GFX100RF packs a 100MP 44x33mm sensor – that's larger than full-frame ...
  90. [90]
    The best compact cameras | Digital Camera World
    Sep 24, 2025 · The new FujiFilm X100VI is the best overall compact on the market with 40MP stills and 6K video capabilities.
  91. [91]
    Best Cameras for Beginners 2025: Our Top Picks to Help You Learn ...
    Oct 12, 2025 · Discover the best cameras for beginners 2025 for photos and video. See which Canon, Sony, and Nikon models deliver the most value for you.
  92. [92]
    Logitech C920e Business Webcam - Full HD 1080p
    Shop C920e Business Webcam. Features 1080p video, autofocus, automatic light correction, detachable privacy screen, versatile mounting options & more.
  93. [93]
    Logitech C920s PRO Full HD Webcam with Privacy Shutter
    C920s delivers remarkably crisp and detailed Full HD video (1080p at 30fps) with a full HD glass lens, 78° field of view, and HD auto light correction—plus ...
  94. [94]
    Logitech C920e Full HD 1080p Business Webcam Black 960-001384
    With a choice of 30 fps at 1080p or the hyperfast 60 fps at 720p, you can record or go live with vibrant, true-to-life video on channels like Twitch and YouTube ...
  95. [95]
    Video Codecs Explained: MPEG-2, H.264, HEVC & AV1 Evolution
    Sep 30, 2025 · The influence of H.264 is hard to overstate. It quickly became the backbone for Blu-ray discs, streaming platforms, and video conferencing.
  96. [96]
    The 6 Best Video Streaming Protocols and Streaming Formats in 2025
    May 8, 2025 · In contrast, RTSP is designed for real-time streaming with lower latency, typically used in surveillance or monitoring applications where quick, ...
  97. [97]
    How AV1 video encoding is transforming network video for the future
    Feb 24, 2025 · AV1 is a next-generation, device-agnostic video encoding standard poised to revolutionize the surveillance market.
  98. [98]
  99. [99]
    A Lightweight Real-Time Infrared Object Detection Model Based on ...
    Sep 12, 2024 · This framework employs the YOLO model to extract features from ground infrared videos and images taken by Forward-Looking Infrared cameras.
  100. [100]
  101. [101]
    Face Tracking for Movement SDK for Unity - Meta for Developers
    Aug 28, 2025 · The Face Tracking API provides FACS-based blendshapes that represent most of the face including nose, mouth, jaw, eyebrows, and areas close to the eye.
  102. [102]
    Niantic Spatial SDK Brings Immersive Reality to Meta Quest 3 with ...
    Aug 6, 2025 · Niantic Spatial SDK v3.15 now supports Meta Quest 3 with immersive mixed reality features powered by Meta's Passthrough Camera API—including ...
  103. [103]
    [PDF] CHAPTER 4 3D User Interface Input Hardware - People
    Many different characteristics can be used to describe input devices. One of the most important is the degrees of freedom (DOF) that an input device af- fords.
  104. [104]
    Haptic Input - an overview | ScienceDirect Topics
    The SpaceMouse (left) and the SpacePilot (right) are widespread examples for 6 DOF input devices. The controller is dragged to control the z-direction of the ...
  105. [105]
    About UK: Company profile, history and mission - 3Dconnexion
    A global patent was granted and in 1993, SpaceMouse®, the world's first affordable 3D mouse, was launched. The product was marketed under the name Magellan in ...
  106. [106]
    [PDF] CLAW: A Multifunctional Handheld Haptic Controller for Grasping ...
    CLAW is a handheld virtual reality controller that augments the typical controller functionality with force feedback and actuated movement to the index ...
  107. [107]
    A Robust Tri-Electromagnet-Based 6-DoF Pose Tracking System ...
    Magnetic pose tracking is a non-contact, accurate, and occlusion-free method that has been increasingly employed to track intra-corporeal medical devices ...
  108. [108]
    [PDF] An Improved Calibration Framework for Electromagnetic Tracking ...
    A quaternion based formulation provides a simple and fast computational framework for representing orientation errors. Our experimental apparatus consists of a ...
  109. [109]
    [PDF] Optical Tracking From User Motion To 3D Interaction
    Common tracking systems use magnetic or ultrasonic trackers in different variations as well as mechanical devices. All of these systems have drawbacks which ...
  110. [110]
    Dassault Systèmes SOLIDWORKS and 3Dconnexion
    The SpaceMouse, CadMouse are the ultimate combo for the digital designer utilizing offers from 3DEXPERIENCE Works. Whether designing in SOLIDWORKS desktop, or ...
  111. [111]
    A Comparative Study of Interaction Time and Usability of Using ...
    Sep 13, 2021 · The Leap Motion Controller is a low-cost hand-gesture-sensing device. It can be attached to a VR headset to interact with the virtual ...
  112. [112]
    Exoskeleton Technology Has A Strong Showing At CES 2025 - Forbes
    Jan 13, 2025 · Forbes contributors publish independent expert analyses and insights. Bobby covers exoskeletons, exosuits and wearable robotics. Follow Author.
  113. [113]
    The Robotics Breakout Moment | Salesforce Ventures
    Aug 7, 2025 · HOMIE is an exoskeleton teleoperation hardware that combines arms, gloves, and foot pedals for loco-manipulation teleoperation. Hand Technology.
  114. [114]
    Input devices | Introduction to Human-Computer Interaction
    Aug 21, 2025 · Input streams can also be multimodal, which means that multiple input streams are combined for the purpose of sensing user input. A useful ...
  115. [115]
    A Multimodal Human-Computer Interaction System and Its ...
    A multimodal human-computer interaction system is composed of the comprehensive usage of various input and output channels.
  116. [116]
    Multimodal Interfaces for Human-Computer Interaction - Tentackles
    What are Multimodal Interfaces? Multimodal interfaces allow you to interact with devices through multiple input methods, simultaneously or interchangeably.
  117. [117]
    Celebrate Wacom's 40th Anniversary
    The 1980s marked the dawn of personal computers, and keyboard-based text input was the norm. In 1984, Wacom launched the world's first pen tablet with cordless ...
  118. [118]
    (1989..1990) History of Pen, Touch and Gesture Computing
    Rand tablet invented by Tom Ellis around 1966: also an ... Product literature on Wacom force/pressure-sensitive pen stylus and cordless pen stylus tablet.
  119. [119]
    CES 2001: Hands-On With the Xbox Controller - GameSpot
    May 17, 2006 · The port closest to the face of the controller is for the 8MB memory card, while the back port will be for extra peripherals, such as a ...
  120. [120]
    [PDF] Multi-Modal Data Fusion in Enhancing Human-Machine Interaction ...
    Feb 15, 2022 · Multimodal data fusion (MMDF) integrates the input of different modalities to enhance the strengths and reduce the deficiencies of the ...
  121. [121]
  122. [122]
    How Nintendo Switch's Controllers Track Movement - SlashGear
    Jun 26, 2023 · The gyro sensor detects the orientation of the controller, whether you're holding it vertically, horizontally, upside-down, and so on. The ...
  123. [123]
    Sensor types | Android Open Source Project
    Oct 9, 2025 · A glance gesture sensor enables briefly turning the screen on to enable the user to glance content on screen based on a specific motion.
  124. [124]
    Smartphone Sensors for Health Monitoring and Diagnosis - PMC
    Ericsson R380 was the first device to use the mobile-specific Symbian operating system (OS) and only second to IBM's Simon to have a touchscreen in a phone.
  125. [125]
    Microsoft Surface: Behind-the-Scenes First Look - Popular Mechanics
    Jun 30, 2007 · Multitouch devices accept input from multiple fingers and multiple users simultaneously, allowing for complex gestures, including grabbing, ...
  126. [126]
    Multimodal AI's Impact on Human-Computer Interaction (HCI) - Sapien
    Dec 11, 2024 · Multimodal AI improves HCI by integrating multiple inputs, enhancing accessibility, creating seamless experiences, and enabling multitasking.
  127. [127]
    Multimodal System - an overview | ScienceDirect Topics
    Advantages of multimodal interaction include increased flexibility 15 , error reduction , user preference , task efficiency , and universal access. 1
  128. [128]
    How does multimodal AI enhance human-computer interaction?
    Multimodal AI enhances interaction by processing multiple inputs like text, speech, images, and sensor data, handling complex scenarios and improving context.
  129. [129]
    [PDF] Integration and Synchronization of Input Modes during Multimodal ...
    The purpose of this research was to conduct a comprehensive exploratory analysis of multimodal integration and synchronization patterns during pen/voice human- ...
  130. [130]
    Generative AI in Multimodal User Interfaces: Trends, Challenges ...
    Nov 15, 2024 · This paper explores the integration of Generative AI in modern UIs, examining historical developments and focusing on multimodal interaction, cross-platform ...
  131. [131]
    Multi-Modal AI: How LLMs Are Integrating Text, Image & Video ...
    Unlike text-only systems, multimodal AI needs perfectly matched data across different types of inputs. Getting images with accurate captions. Audio with ...
  132. [132]
    Douglas W. Jones's punched card index - University of Iowa
    The punched card as used for data processing, originally invented by Herman Hollerith, was first used for vital statistics tabulation.
  133. [133]
    The Hollerith Machine - U.S. Census Bureau
    Aug 14, 2024 · Herman Hollerith's tabulator consisted of electrically-operated components that captured and processed census data by reading holes on paper punch cards.
  134. [134]
    Making Sense of the Census: Hollerith's Punched Card Solution
    The 60 million cards punched in the 1890 United States census were fed manually into machines like this for processing. The dials counted the number of ...
  135. [135]
    The IBM punched card
    Hollerith's cards were used for the 1890 US Census, which finished months ahead of schedule and under budget. Punched cards emerged as a core product of what ...
  136. [136]
    IBM Key Punches - Columbia University
    IBM Key Punches. IBM 029 keypunch. Columbia's Herman Hollerith pioneered punch ... 1949, Alphanum, 80, Auto, Yes, BCD. 024, Card Punch, 1949, Alphanum, 80, Auto ...
  137. [137]
    Punch Cards for Data Processing
    By the time Hollerith tabulating equipment was used in the 1890 U.S. Census, holes were scattered across the cards, although their meaning was ...
  138. [138]
    Punch Cards for Data Processing | Smithsonian Institution
    The meaning of each hole was indicated on the card. By the time Hollerith tabulating equipment was used in the 1890 U.S. Census, holes were scattered across ...
  139. [139]
    How it was: Paper tapes and punched cards - EE Times
    The original computer tapes had five channels, so each data row could represent one of thirty-two different characters. However, as users began to demand more ...
  140. [140]
    [PDF] Elliott 503 Ultra High Speed Digital Computer for Science and Industry
    Input and output for the basic computer is on punched paper tape, read at 1000 characters per second and punched at one hundred characters per ... machine in 1950 ...
  141. [141]
    [PDF] operating principles of the - univac file-computer issued january 1 ...
    The Univac File-Computer System is a medium sized electronic system designed to combine efficiently electronic computing with large cap- acity internal magnetic ...
  142. [142]
    Univac I Computer System, Part 1
    The UNIVAC I was a stored program computer with the ability to modify its program instructions. Ten UNISERVO tape units were used for input and output data. It ...
  143. [143]
    [PDF] IVAN EDWARD SUTHERLAND B.S., Carnegie Institute of ...
    A Sketchpad user sketches directly on a computer display with a "light pen." The light pen is used both to position parts of the drawing on the display and to ...
  144. [144]
    Sketchpad | Interactive Drawing, Vector Graphics & CAD - Britannica
    Sketchpad displayed graphics on the CRT display, and a light pen was used to manipulate the line objects, much like a modern computer mouse. Various computer ...
  145. [145]
    The Failed Promise of the VCS/2600 Trak-Ball Controller
    Atari's 4.5" Trak-Ball (TM) controller was designed by Jerry Lichac and used in their Football, Basketball, Baseball, Soccer, and Missile Command arcade games.
  146. [146]
    Feet Controlled Alternative Computer Input Devices
    Mice and keyboard devices that are controlled by feet and foot pressure.
  147. [147]
    10 years of EPOC: A scoping review of Emotiv's portable EEG device
    Jul 14, 2020 · One of these devices, Emotiv EPOC, is currently used in a wide variety of settings, including brain-computer interface (BCI) and cognitive ...
  148. [148]
    Performance of the Emotiv Epoc headset for P300-based applications
    Jun 25, 2013 · ... Emotiv Epoc headset and the ANT medical/research EEG device is performed based on a standard P300 Brain-Computer Interface. This aims at ...
  149. [149]
    Magnetic Resonance) method incorporated in a pen tablet? - Wacom
    Apr 8, 2021 · When it moves, electricity runs through the coil inside the pen. The tablet then receives inductive signals generated by the magnetic field ...
  150. [150]
    MIDI History Chapter 6-MIDI Begins 1981-1983 – MIDI.org
    This article is the official definitive history of how MIDI got started between 1981 and 1983. Dave Smith, Bob Moog, Ikutaro Kakehashi and Tom Oberheim
  151. [151]
  152. [152]
    QR Code development story|Technologies|DENSO WAVE
    In 1994, there was an event that totally changed the concept of code reading, that is, the advent of the QR Code system. It was developed by engineers working ...