
Speech-generating device

A speech-generating device (SGD) is an electronic augmentative and alternative communication (AAC) system comprising hardware and software that converts user input—such as symbols, letters, or words selected via touchscreen, keyboard, switches, or eye tracking—into synthesized or digitized speech output, enabling individuals with severe speech impairments to express themselves audibly. These devices are prescribed as communication aids for users unable to meet functional speaking needs through natural speech due to conditions such as amyotrophic lateral sclerosis (ALS), cerebral palsy, autism spectrum disorders, or aphasia. SGDs range from dedicated portable units to applications running on tablets and smartphones, with output generated via text-to-speech synthesis or pre-recorded messages. The efficacy of SGDs in enhancing communication has been demonstrated in clinical studies, particularly for non-verbal adolescents with autism, where tailored interface designs improve message construction and verbal responsiveness. High-tech SGDs have shown superior outcomes in fostering social communication compared to low-tech alternatives in minimally verbal children with autism spectrum disorder. A notable application involves the physicist Stephen Hawking, who, after losing speech due to ALS progression in 1985, relied on an SGD interfaced with a computer, initially controlled by hand and later by cheek-muscle-activated switches, to compose texts, deliver lectures, and write bestselling books. This technology allowed Hawking to maintain intellectual productivity despite profound physical limitations. SGDs originated from mid-20th-century innovations like the Patient Operated Selector Mechanism (POSM) around 1960, which enabled switch-based selection for typed output, evolving through 1960s switch-based systems for environmental control and communication, to portable speech synthesizers in the 1980s and app-based solutions today. While early devices were bulky and limited, advancements in computing have increased portability, vocabulary capacity, and access methods, though challenges persist in user training and device abandonment rates due to inadequate follow-up support.
These tools promote independence by facilitating the expression of needs, social interaction, and participation in education or employment for affected individuals.

History

Precursors and Early Innovations (Pre-1990s)

Early precursors to speech-generating devices (SGDs) emerged in the mid-20th century as electronic aids to augment communication for individuals with severe motor impairments, building on manual systems like symbol boards. The Patient Operated Selector Mechanism (POSM), developed around 1960 by Reg Maling and colleagues at Stoke Mandeville Hospital in the United Kingdom, represented a foundational innovation. This switch-operated system allowed paralyzed patients to control a typewriter or teletype machine via breath switches or joysticks, producing printed text output for communication. In the United States, the Prentke Romich Company (PRC), founded in 1966 by engineers Ed Prentke and Barry Romich following their collaboration in the early 1960s, produced its first communication device in 1969—a modified Teletype machine enabling text-based typing for non-speaking users. During the 1970s, portable electronic aids advanced further, including the Talking Brooch (circa 1973), a wearable alphabetic display device, and the Lightwriter, which facilitated text input and output in a compact form suitable for everyday use. These devices primarily relied on visual or printed text rather than synthesized speech, serving as direct antecedents to voice-output systems. The integration of speech synthesis marked a pivotal early innovation in the early 1980s. PRC introduced the Express 3 in 1982, the company's first device with synthesized speech capabilities, allowing users to generate audible output from selected or typed messages via early text-to-speech technology. This era also saw broader adoption, exemplified by physicist Stephen Hawking's use of a customized computer with a speech synthesizer starting in 1985, which converted his input, made initially by hand and later via cheek movements, into text and then speech, highlighting the potential for such devices in enabling complex communication. These pre-1990s developments laid the groundwork for modern SGDs by combining adaptive input methods with emerging electronic output, though limited by bulky hardware and rudimentary voice quality.

Commercialization and Expansion (1990s-2010s)

The 1990s marked the commercialization of dynamic screen speech-generating devices (SGDs), enabling users to select symbols or text from customizable grids to generate speech output. Companies such as Prentke Romich Company (PRC) expanded their offerings, building on earlier synthesized speech innovations from 1982 by introducing devices with digitally recorded speech like the IntroTalker in 1988, followed by international expansion by 1990. DynaVox, established in 1983, contributed to this era by developing portable SGDs with dynamic displays, allowing for more flexible vocabulary organization and message construction compared to fixed-grid predecessors. These advancements facilitated broader clinical adoption for individuals with conditions like cerebral palsy and ALS, as portable hardware reduced dependency on bulky systems. During the 2000s, SGDs saw further expansion through improved portability, programmability, and integration of synthesized speech technologies, making devices more user-friendly and accessible. PRC and DynaVox released models like the Vantage and DynaVox V/Vmax, which supported approximately 40 vocabulary configurations and enhanced input methods such as scanning and direct selection. Saltillo Corporation emerged as another key player, producing dedicated SGDs that emphasized customizable interfaces for diverse user needs. This period witnessed increased regulatory recognition, with U.S. Medicare coverage for SGDs expanding to support home and community use, driving market growth and research into efficacy for long-term communication. By the 2010s, the proliferation of touchscreen interfaces and software compatibility with personal devices like tablets began blurring lines between dedicated SGDs and general-purpose tech, though specialized hardware from PRC, DynaVox (later Tobii Dynavox post-2014 merger), and others remained dominant for reliability in severe impairments.
Advancements in natural-sounding speech synthesis improved intelligibility, with studies confirming benefits for children and adults in educational and social settings. Commercial expansion included global distribution networks, reflecting a shift toward scalable production and clinician training programs to optimize device customization.

Contemporary Milestones (2020s Onward)

In the early 2020s, speech-generating devices (SGDs) saw accelerated integration of artificial intelligence (AI) for predictive text generation and more natural synthesized voices, enabling faster message formulation and improved user interaction efficiency. By 2024, collaborative research involving AAC users and large language models demonstrated potential for user-centered innovations, such as customized vocabulary expansion and adaptive interfaces that learn from individual input patterns to reduce selection errors by up to 30% in preliminary tests. This shift addressed longstanding limitations in static symbol grids, with algorithms analyzing usage data to prioritize frequent phrases, thereby enhancing real-time communication for users with motor impairments. Mid-decade developments emphasized multimodal access, including advanced eye tracking fused with AI-driven speech prediction, allowing devices to anticipate and vocalize incomplete inputs with context-aware accuracy exceeding 80% in controlled studies. In March 2025, manufacturers released updated SGD models with expanded hardware specifications, such as longer battery life (up to 12 hours) and enhanced features like haptic feedback for low-vision users, alongside software booklets summarizing integration protocols for clinicians. Tools like the INSTRUCT app emerged to mitigate external barriers, providing simplified training modules and remote customization to support implementation in resource-limited settings. Ethical and implementation challenges accompanied these advances, with discussions in 2024-2025 highlighting risks of data privacy breaches in cloud-based systems and the need for user consent in voice modeling to preserve authentic expression. Analysts underscored that while AI-enhanced SGDs improved independence for approximately 5 million potential U.S. beneficiaries, systemic barriers like device cost (often $5,000-15,000) and clinician training gaps persisted, prompting calls for policy reforms in Medicare and Medicaid coverage.
These milestones reflect a pivot toward scalable, personalized communication technologies, though empirical validation remains ongoing to ensure causal benefits outweigh implementation hurdles.

Technical Design

Input and Access Methods

Input and access methods for speech-generating devices (SGDs) enable users with speech impairments to select symbols, letters, words, or phrases for conversion into synthesized speech, accommodating a range of motor abilities from full dexterity to minimal movement. These methods are broadly classified into direct selection, where users target items immediately, and indirect selection, involving sequential presentation of options. Direct selection techniques predominate for users with sufficient motor control, including touchscreen tapping, physical or on-screen keyboards for text entry, head pointers, and optical laser devices. Eye-tracking systems represent an advanced direct method, using cameras to monitor pupil and corneal reflections for precise gaze-based selection after calibration, often with dwell or blink activation to confirm choices; such systems achieve selection rates comparable to manual pointing for proficient users. Head tracking employs cameras or sensors to translate head movements into cursor control, suitable for those with limited limb function. Indirect selection methods, essential for users with profound motor limitations, include partner-assisted scanning and automated scanning modes where the device highlights elements in rows, columns, or individually, requiring switch activation to navigate and select. Single switches—operated by hand pressure, breath control, proximity sensors, or muscle twitches like cheek clenching—facilitate this process, with customizable scan patterns to optimize speed and accuracy; for example, the early Patient Operated Selector Mechanism used switch inputs for message encoding. Encoding schemes further enhance efficiency by assigning numeric or alphabetic codes to predefined messages, reducing required activations to as few as two per utterance.
Contemporary SGDs integrate multiple access modalities, allowing seamless switching between methods like keyboard and mouse emulation, voice recognition for residual speech, or word-prediction algorithms to minimize inputs, with empirical studies confirming improved communication rates through hybrid approaches tailored to individual impairments.
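The row-column scanning described above can be illustrated with a short simulation. This is a minimal sketch under simplified assumptions (a fixed highlight interval and a flat grid); the function names and grid contents are hypothetical, not any vendor's API.

```python
from typing import List, Tuple

def row_column_scan_cost(grid: List[List[str]], target: str,
                         step_time: float = 0.8) -> Tuple[int, float]:
    """Estimate the cost of reaching `target` via row-column scanning
    with a single switch: rows are highlighted top to bottom until a
    press selects one, then items in that row are highlighted left to
    right until a second press selects the item."""
    for r, row in enumerate(grid):
        if target in row:
            c = row.index(target)
            steps = (r + 1) + (c + 1)  # highlights seen before each press
            return steps, steps * step_time
    raise ValueError(f"{target!r} not in grid")

def linear_scan_cost(grid: List[List[str]], target: str) -> int:
    """Baseline: every cell is highlighted one at a time."""
    flat = [item for row in grid for item in row]
    return flat.index(target) + 1
```

For a 3x3 grid, the worst-case item needs 6 highlights under row-column scanning versus 9 under linear scanning, which is why customizable scan patterns matter for selection speed.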

Message Formulation and Display Systems

Message formulation in speech-generating devices (SGDs) refers to the software processes enabling users to construct utterances, typically through selecting or inputting linguistic elements such as letters, words, symbols, or phrases from a displayed set. These systems distinguish between basic spelling-based formulation, which requires sequential letter selection to build words, and advanced multi-method approaches incorporating pre-stored phrases for faster message assembly. For instance, devices classified under Medicare code E2508 mandate spelling for all messages, limiting efficiency for users with motor or cognitive impairments, whereas E2510 codes support hybrid methods including symbol grids and predictive algorithms to reduce selection steps. Display systems serve as the visual interface for message formulation, presenting selectable elements in organized layouts to minimize cognitive and navigational demands. Static displays feature fixed grids of symbols or words, suitable for users with predictable needs but prone to clutter with expanded vocabularies exceeding 100-200 items. Dynamic displays, conversely, employ hierarchical navigational structures in which selecting a category item—such as "food"—transitions to context-specific sub-options, enhancing efficiency for literate users by leveraging semantic organization; empirical studies indicate dynamic layouts can reduce message generation time by 20-50% in adults with aphasia compared to static arrays. Visual scene displays integrate images with embedded hotspots for vocabulary elements, aiding recall in children with developmental disabilities by aligning formulation with natural scene perception, though research shows mixed outcomes on attention allocation without optimized icon placement.
Advanced formulation aids embedded in display systems include word prediction, which ranks likely completions based on frequency and context to curtail selections—e.g., ranking probable next words after "I want"—and syntactic support that suggests grammatical completions, documented to boost utterance complexity in pediatric users by up to 30% in controlled trials. Customizable interfaces allow clinicians to tailor grid sizes, color coding, and layout density to user profiles, with evidence from interface design research emphasizing high-contrast, low-density displays (e.g., 4-9 items per screen) to sustain visual attention in individuals with autism or intellectual disabilities. Retrieval mechanisms, such as search functions or topic-based folders, further streamline formulation by enabling rapid access to stored phrases, though over-reliance on navigation depth can increase error rates if hierarchies exceed three levels, as observed in usability studies of utterance-based AAC systems. Overall, effective display design prioritizes motor planning efficiency and linguistic predictability, with peer-reviewed evaluations underscoring the need for empirical validation of layouts to match individual cognitive profiles rather than generic templates.
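The frequency-and-context ranking behind word prediction can be sketched with a toy bigram model. This is an illustrative simplification, not the algorithm of any particular device; the class name and training corpus are hypothetical stand-ins for a device's stored language model.

```python
from collections import Counter, defaultdict
from typing import List

class BigramPredictor:
    """Ranks next-word candidates by how often they followed the
    previous word in a training corpus, falling back to overall
    word frequency when the context is unseen."""

    def __init__(self, corpus: List[str]):
        self.unigrams = Counter()
        self.bigrams = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.lower().split()
            self.unigrams.update(words)
            for prev, nxt in zip(words, words[1:]):
                self.bigrams[prev][nxt] += 1

    def predict(self, prev_word: str, k: int = 3) -> List[str]:
        table = self.bigrams.get(prev_word.lower())
        source = table if table else self.unigrams
        return [w for w, _ in source.most_common(k)]
```

Trained on a small sample of a user's past utterances, `predict("want")` surfaces the words most often produced after "want", reducing the number of selections the user must make per message.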

Speech Output Technologies

Speech output in speech-generating devices (SGDs) primarily utilizes text-to-speech (TTS) synthesis to convert user-formulated messages—typically from text or symbolic input—into audible verbal communication, enabling unlimited vocabulary generation beyond the constraints of pre-recorded digitized clips. Synthesized output contrasts with digitized speech, which replays stored human-recorded segments for fixed phrases, limiting spontaneity but preserving natural prosody in specific contexts; synthesized methods dominate modern SGDs for their adaptability to novel sentences. TTS systems process input text through linguistic analysis (grapheme-to-phoneme conversion), prosodic modeling (intonation, rhythm), and waveform generation, outputting via integrated speakers or amplifiers for louder projection in face-to-face interactions. Early TTS in SGDs relied on formant synthesis, which simulates vocal tract acoustics using a source-filter model: an excitation source (glottal pulses for voiced sounds or noise for fricatives) is filtered by time-varying formant frequencies to mimic vocal tract resonances, governed by parametric rules rather than recordings. This approach, exemplified by the DECtalk system developed by Dennis Klatt in the early 1980s and widely adopted in devices like those used by Stephen Hawking from 1986 onward, produced highly intelligible but robotic, monotone speech due to its rule-based abstraction from natural variability. Formant methods remain viable in resource-constrained systems for their computational efficiency and small footprint, avoiding large storage needs. Concatenative synthesis advanced naturalness by selecting and seamlessly joining pre-recorded speech units—such as diphones (phoneme transitions) or syllables—from a large database, with algorithms optimizing for prosodic fit via techniques like prosodic modification and unit blending.
Deployed in SGDs from the 1990s, this method improved perceptual quality over formant synthesis by leveraging real human utterances, though it demanded extensive corpora (often 10-40 hours of speech) and could introduce discontinuities or artifacts at join points, especially for atypical prosody. Unit selection variants, common in mid-2000s commercial TTS engines integrated into AAC software, balanced quality against database size but remained database-bound, limiting voices to available recordings. Contemporary SGDs increasingly incorporate neural TTS, employing architectures like sequence-to-sequence models (e.g., Tacotron) paired with neural vocoders (e.g., WaveNet) for end-to-end synthesis: text is mapped directly to mel-spectrograms, then inverted to waveforms, yielding human-like intonation, stress, and expressivity from models trained on vast datasets. Introduced in commercial contexts around 2017 with advances like Google's Tacotron 2, neural methods surpass prior techniques in mean opinion scores for naturalness (often exceeding 4.0 on 5-point scales) and enable voice cloning via fine-tuning on short user samples, as in VocaliD systems preserving pre-impairment voices for dysarthric users. These systems, now embedded in apps and devices like Tobii Dynavox hardware or grid-based software, support multilingual output and emotional inflection but require significant computational power, mitigated by cloud integration or optimized on-device models; empirical tests show superior intelligibility in noise compared to concatenative baselines. Challenges persist in low-resource languages and real-time latency for SGDs, prompting hybrid approaches combining neural prediction with efficient rendering for deployment.
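The source-filter formant model described above can be sketched in a few lines of code. This is a rough illustration, not DECtalk's actual rule set: the resonator follows the standard Klatt-style second-order recurrence, and the formant frequencies and bandwidths below are assumed, approximating an /a/-like vowel.

```python
import math
from typing import List, Tuple

def formant_resonator(signal: List[float], freq_hz: float,
                      bandwidth_hz: float, sample_rate: int) -> List[float]:
    """Second-order IIR resonator (Klatt-style cascade element):
    y[n] = A*x[n] + B*y[n-1] + C*y[n-2], with coefficients derived
    from the formant's center frequency and bandwidth."""
    C = -math.exp(-2 * math.pi * bandwidth_hz / sample_rate)
    B = (2 * math.exp(-math.pi * bandwidth_hz / sample_rate)
         * math.cos(2 * math.pi * freq_hz / sample_rate))
    A = 1 - B - C  # normalizes DC gain to 1
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = A * x + B * y1 + C * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def synthesize_vowel(f0: float = 120,
                     formants: Tuple[Tuple[float, float], ...] = ((730, 60), (1090, 90)),
                     duration: float = 0.2,
                     sample_rate: int = 16000) -> List[float]:
    """Crude voiced vowel: a glottal impulse train (the source) passed
    through a cascade of formant resonators (the filter)."""
    n = int(duration * sample_rate)
    period = int(sample_rate / f0)
    source = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    signal = source
    for freq, bw in formants:
        signal = formant_resonator(signal, freq, bw, sample_rate)
    return signal
```

Changing the formant trajectories over time, rather than holding them fixed as here, is what lets rule-based synthesizers produce running speech from phoneme strings.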

Vocabulary and Customization

Initial Vocabulary Selection

Initial vocabulary selection for speech-generating devices (SGDs) prioritizes a compact set of high-frequency words, known as core vocabulary, which typically comprises 50 to 400 terms such as pronouns, verbs, prepositions, and descriptors, enabling users to form a wide range of utterances with limited input. This approach contrasts with fringe vocabulary, which includes low-frequency, personalized nouns tied to specific needs or interests, and is deferred until core competence develops to maximize early generative communication potential. Core words account for the majority of words used in everyday speech, allowing non-speaking individuals, particularly children with developmental delays, to express needs, actions, and ideas efficiently from the outset. Selection methods rely on empirical frequency data derived from transcribed samples of typical speech, adapted for the user's age, language, and cultural relevance, often supplemented by observation of the individual's communicative attempts and input from parents or caregivers. Speech-language pathologists (SLPs) assess motor access, cognitive level, and motivational factors to ensure the initial set aligns with functional goals, such as requesting or protesting, using tools like inventories that catalog observed preferences for toys or activities while avoiding overemphasis on rote requesting. For early symbolic communicators, core lists are validated against developmental milestones, with studies confirming their comparability across diverse populations and settings, though customization accounts for linguistic variations, as in Zulu-language adaptations derived from child-directed speech corpora. In SGD setup, the initial vocabulary is programmed as symbols or text on grids or pages, starting with 20-100 items to match motor and visual processing constraints, with evidence from intervention studies showing that such selections foster foundational language growth when paired with modeling and aided language stimulation.
Over-reliance on parent-reported items risks limiting versatility, whereas data-driven prioritization supports causal pathways to novel sentence generation, as core terms facilitate syntactic combinations underrepresented in unaided gestures.
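The frequency-driven selection process above can be sketched as a small script. The transcripts and the fringe-word exclusion list are hypothetical, standing in for the transcribed speech samples and caregiver inventories a clinician would actually use.

```python
from collections import Counter
from typing import Iterable, List

def select_core_vocabulary(transcripts: Iterable[str], size: int = 50,
                           fringe_words: Iterable[str] = ()) -> List[str]:
    """Rank candidate core words by frequency across transcribed speech
    samples, excluding personalized fringe items, and return the top
    `size` words for initial SGD programming."""
    exclude = {w.lower() for w in fringe_words}
    counts = Counter()
    for t in transcripts:
        counts.update(w for w in t.lower().split() if w not in exclude)
    return [w for w, _ in counts.most_common(size)]
```

In practice the ranked list would then be reviewed against the user's age, language, and motor constraints before being placed on the device's grids.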

Dynamic Adaptation and Maintenance

Dynamic adaptation in speech-generating devices (SGDs) enables systems to evolve with the user's linguistic and contextual needs, primarily through dynamic interfaces that reorganize vocabulary hierarchies in real time based on selections or predictive algorithms. These interfaces, unlike static grids, generate context-specific screens—such as expanding from core words like "want" to subcategories of related items or activities—facilitating access to thousands of symbols or words without overwhelming the initial view. This adaptability supports long-term use by allowing clinicians or caregivers to integrate user-specific terms, such as personal names or evolving interests, thereby mirroring vocabulary growth in typically developing individuals. Empirical studies indicate that such dynamic features enhance communication efficiency, with interventions using high-tech SGDs showing positive outcomes in requesting skills among children with developmental disabilities, as measured by increased utterance length and accuracy over static systems. However, adaptation requires ongoing clinician input to avoid mismatches, as unmonitored changes can lead to navigational errors; for instance, word completion, while reducing keystrokes by up to 50% in proficient users, demands training to prevent errors in novices. Recent advancements incorporate AI-driven personalization, adjusting vocabulary frequency based on usage logs, though evidence remains preliminary and tied to small cohorts. Maintenance of SGDs encompasses durability, software integrity, and battery care to sustain functionality amid daily wear. Best practices include weekly cleaning of screens and keypads with soft cloths and mild disinfectants, avoiding solvents that degrade plastics, alongside monthly battery checks to prevent discharge-related failures, which affect up to 20% of devices in clinical reports. Software updates, typically pushed quarterly by manufacturers, address bugs and expand vocabularies but necessitate professional oversight to preserve custom configurations.
RESNA guidelines emphasize annual servicing by certified technicians for calibration, particularly for eye-gaze or switch-access models, to mitigate downtime that can exceed 10% without routine checks. Training and follow-up programs, often spanning 4-6 sessions, are critical for adaptation persistence, as lapses correlate with underutilization rates of 30-50% in longitudinal studies.
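Usage-log-driven adaptation of the kind described above can be sketched as follows. This is a minimal illustration assuming a flat item list and clinician review before deployment; the class and method names are invented for the example.

```python
from collections import Counter
from typing import List

class AdaptiveVocabulary:
    """Logs selections and periodically reorders the display so that
    frequently used items surface first. Unused items keep their
    original relative order (Python's sort is stable), limiting
    disruptive layout churn between clinician reviews."""

    def __init__(self, items: List[str]):
        self.items = list(items)
        self.log: List[str] = []

    def record_selection(self, item: str) -> None:
        self.log.append(item)

    def reorder(self) -> List[str]:
        freq = Counter(self.log)
        # Stable sort by descending usage count; ties preserve layout.
        self.items.sort(key=lambda it: -freq[it])
        return self.items
```

Keeping unused items in place matters because, as noted above, unmonitored layout changes can cause navigational errors for users who have learned motor patterns for item locations.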

Ethical Implications in Content Design

In the design of vocabulary and phrases for speech-generating devices, ethical considerations center on balancing user autonomy with safeguards against potential misuse or cultural imposition. Clinicians and developers often select initial content based on core vocabulary principles, prioritizing high-frequency words for efficiency, but this process can limit expressive range if not tailored to individual needs, potentially undermining the user's authentic voice. The American Speech-Language-Hearing Association emphasizes that functional, personalized vocabulary selection correlates with greater intervention success, yet decisions made by proxies—such as speech-language pathologists for non-literate users—raise questions of autonomy and representation, as these intermediaries may inadvertently prioritize institutional norms over user intent. Cultural and linguistic biases in pre-loaded content pose additional ethical challenges, as many high-tech devices inadequately accommodate minority languages or non-dominant dialects, effectively privileging English-centric designs that disadvantage users from diverse backgrounds. A 2022 study of major manufacturers found that while some offer multilingual options, implementation is inconsistent, with limited support for indigenous or minority languages, which can perpetuate inequities in communication access and reinforce majority cultural dominance in content formulation. This shortfall stems from commercial priorities rather than deliberate exclusion, but it highlights a need for designs that avoid embedding ethnocentric assumptions in symbol sets or phrase banks, ensuring content enables truthful, context-specific expression without imposed ideological filters. Privacy in content customization further complicates ethics, as personalized vocabularies—often stored locally or in cloud-linked systems—risk unauthorized access or data breaches, particularly for vulnerable users reliant on caregivers for modifications.
Ethical frameworks for AAC advocate explicit consent protocols and robust data protections to prevent misuse, such as family members altering content to suppress dissenting views, though empirical cases of such manipulation remain undocumented in peer-reviewed literature. Developers must prioritize transparent algorithms for dynamic vocabulary adaptation, avoiding opaque AI-driven suggestions that could introduce biases from training datasets skewed toward academic or media sources known for systemic left-leaning tilts, thereby preserving causal fidelity in user-generated speech over sanitized outputs.

User Applications

Target Populations and Use Cases

Speech-generating devices (SGDs) primarily serve individuals with severe speech impairments arising from neurological conditions, developmental disorders, or injuries that preclude reliable verbal communication. Key target populations include those with amyotrophic lateral sclerosis (ALS), where progressive motor neuron degeneration eliminates natural speech, as evidenced by over 90% of severely impaired ALS patients in a 2022 U.S. survey utilizing SGDs for expression. Individuals with cerebral palsy often rely on SGDs due to motor control deficits affecting articulation, enabling functional interaction despite persistent dysarthria. Children and adults with autism spectrum disorder represent another major group, particularly preschoolers and those with limited verbal abilities, where SGDs facilitate requesting and expressive language through targeted interventions. Acquired conditions such as stroke-induced aphasia or traumatic brain injury similarly drive SGD adoption, compensating for sudden or evolving speech loss in adults. Less common but documented uses extend to developmental delays under age three, where devices augment residual speech or serve as alternatives. Use cases span daily living, education, and professional environments, with SGDs enabling users to convey needs, participate in conversations, and access services independently. In clinical settings, they support medical decision-making and therapy compliance for populations like ALS patients facing respiratory decline. For autistic children, SGDs enhance social communication and reduce behavioral challenges by providing clear expressive outlets during interventions. Educational applications include classroom integration for students with physical or intellectual disabilities, promoting literacy and peer interaction via customizable interfaces. In neurodegenerative cases, early SGD implementation preserves autonomy as speech deteriorates, as seen in progressive disorders where devices interface with eye-tracking for hands-free operation.
Overall, these applications prioritize causal restoration of communicative intent over mere supplementation, grounded in empirical needs of impaired populations.

Notable Examples and Outcomes

Theoretical physicist Stephen Hawking, who developed amyotrophic lateral sclerosis (ALS) symptoms in 1963 and lost natural speech capability following a 1985 tracheostomy, utilized a customized speech-generating device (SGD) thereafter. The system employed an infrared switch activated by cheek muscle twitches to navigate a screen-based word prediction interface, composing sentences at rates of 5 to 15 words per minute before synthesis via a text-to-speech module voiced by synthesized phonemes derived from speech scientist Dennis Klatt's recordings. This configuration enabled Hawking to produce seminal works including A Brief History of Time (1988, over 25 million copies sold globally), deliver international lectures, and participate in scientific collaborations, sustaining intellectual productivity for decades amid near-total paralysis. Hawking declined upgrades to more natural-sounding voices, preserving the distinctive electronic timbre that became integral to his public persona and recognizability. Film critic Roger Ebert, silenced by complications from cancer surgeries culminating in 2006, adopted a text-to-speech SGD to resume professional output. The device facilitated scripted commentary for television segments and online reviews, allowing Ebert to host Ebert Presents: At the Movies in 2011 and author books such as Life Itself (2011), thereby extending his career influence after losing his voice. In clinical case studies, SGDs have yielded measurable communication gains; for instance, three adolescents with autism spectrum disorder demonstrated improved request initiation and response accuracy using grid-based interfaces on tablet devices, with efficacy tied to visual layout predictability rather than auditory feedback alone. A meta-analysis of 27 group and single-subject studies involving AAC interventions, including SGDs, reported moderate positive effects on speech production in children and adults with developmental disabilities, with effect sizes averaging 0.57 for verbal output increases.
Such outcomes underscore SGDs' role in bridging expressive gaps, though individual variability persists due to motor and cognitive factors.

Efficacy and Empirical Evidence

Demonstrated Benefits and Achievements

Speech-generating devices (SGDs) have demonstrated substantial benefits in enhancing communication for individuals with severe speech impairments, including those with autism spectrum disorder (ASD), amyotrophic lateral sclerosis (ALS), and cerebral palsy. Empirical reviews indicate that SGD-based augmentative and alternative communication (AAC) interventions yield positive outcomes in 86% of studies involving developmental disabilities, with 78% categorized as providing conclusive evidence for improvements in functional communication, such as requesting and social interaction. High-tech SGDs, in particular, outperform low-tech alternatives in fostering social communication skills among minimally verbal children with ASD. In ASD populations, responsiveness to SGD input has been shown to increase rates of both augmented and natural spoken communication, supporting speech development without hindering it; across reviewed cases, 94% of participants experienced gains or stability in verbal output. Studies involving 46 participants across 16 interventions highlighted SGDs' efficacy in facilitating requesting behaviors and symbolic navigation. For young children aged 1-3 years, AAC intervention including SGDs promotes early language acquisition. Notable achievements include enabling prolonged intellectual productivity for ALS patients, as evidenced by sustained global communication and scientific contributions over decades. Recent advancements achieved real-time speech synthesis for a paralyzed patient, restoring conversational fluidity using pre-recorded voice samples. Caregivers report heightened perceptions of communicative competence due to SGDs' naturalness and social acceptability. In severe cases, SGDs serve as primary tools for overcoming speech barriers, enhancing independence.

Criticisms, Limitations, and Empirical Shortcomings

Empirical evaluations of speech-generating devices (SGDs) highlight methodological limitations in the research base, including small sample sizes, short intervention durations, and inadequate assessment of long-term outcomes, which constrain the ability to draw firm conclusions about sustained efficacy across diverse populations. Systematic reviews note risks of publication bias, where positive results are overrepresented, alongside frequent omissions in reporting treatment maintenance, generalization to untrained skills, and real-world applicability beyond clinical settings. Usage patterns in naturalistic environments reveal practical shortcomings, with children relying on SGDs for only approximately 10% of communication initiations and responses during play interactions, indicating limited integration into spontaneous communication despite targeted intervention. Device abandonment rates remain high, affecting up to 50% of users in some cohorts, often stemming from discrepancies between controlled-study performance and everyday functionality, such as inadequate adaptability to evolving communicative needs. While early criticisms posited that SGDs might inhibit natural speech development—a concern rooted in fears of reduced motivation for vocalization—meta-analyses of 27 cases across studies found no decreases in speech production attributable to augmentative and alternative communication (AAC) interventions, with 89% showing gains or stability; however, individual variability persists, and robust longitudinal data on complex language emergence remains sparse, underscoring gaps in causal evidence for comprehensive developmental impacts. Technical constraints, including synthetic speech limitations in conveying prosody, emphasis, and emotional nuance, further impede perceived naturalness and social efficacy, as evidenced by user reports of disrupted conversational flow.

Operational Challenges

Technical Reliability and Usability Issues

Technical reliability of speech-generating devices (SGDs) remains a persistent challenge, with components prone to failure under regular use. A longitudinal study of newly issued SGDs over five years documented a mean time to first repair of 42.7 weeks (standard deviation 41.2 weeks), with at least 40% of devices requiring service interventions due to malfunctions such as screen failures, hardware breakdowns, or software errors. These issues often stem from mechanical wear, environmental exposure, or manufacturing variability, leading to unplanned downtime that can isolate users during critical communication moments. Although ruggedized designs have been introduced since these data were collected, recent empirical reviews continue to report similar vulnerability patterns, underscoring the need for robust backup systems. Software-related problems further compound reliability concerns, including glitches from incompatible updates, synchronization failures with external peripherals, and algorithmic errors in voice synthesis that produce garbled output. Users dependent on SGDs for daily interactions report intermittent crashes that necessitate reboots or resets, exacerbating frustration in time-sensitive scenarios such as medical consultations. Battery dependency introduces additional risks: limited runtime, often 4-8 hours under active use, forces reliance on charging infrastructure, and depletion during transport or power outages poses acute barriers for mobile users. Usability issues frequently arise from steep learning curves and suboptimal interfaces, particularly for individuals with co-occurring motor or cognitive impairments. The high cognitive demands of navigating symbol grids or dynamic displays can slow input rates to 10-20 words per minute, far below natural speech, resulting in conversational lags and reduced participation. Poor customization options for access methods, such as eye-gaze or switch scanning, contribute to abandonment, with studies indicating over 60% discontinuation within 12 months due to inadequate fit with user capabilities.
These factors highlight systemic gaps in ergonomic design and training protocols, often overlooked in deployment despite evidence that tailored interfaces improve retention.
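The arithmetic behind these reduced input rates can be illustrated with a minimal sketch of row-column scanning, one common switch-access method. The grid size, scan interval, and letters-per-word figures below are illustrative assumptions, not values from the cited studies:

```python
# Hypothetical illustration: estimating the communication rate achievable
# with row-column switch scanning, a common indirect SGD access method.

def selections_per_minute(rows: int, cols: int, scan_interval_s: float) -> float:
    """Average selections per minute for row-column scanning.

    The highlight steps through rows until the user activates the switch,
    then steps through the columns of the chosen row. On average the
    target sits halfway through each pass, so one selection takes about
    (rows + cols) / 2 scan steps.
    """
    avg_steps = (rows + cols) / 2
    seconds_per_selection = avg_steps * scan_interval_s
    return 60.0 / seconds_per_selection

def words_per_minute(rows: int, cols: int, scan_interval_s: float,
                     selections_per_word: int = 5) -> float:
    # Assume roughly 5 letter/symbol selections per word (letter-by-letter).
    return selections_per_minute(rows, cols, scan_interval_s) / selections_per_word

# A 6x6 letter grid scanned at 1 s per step yields only about 2 words per
# minute, far below the 125-185 wpm typical of natural speech.
print(f"{words_per_minute(6, 6, 1.0):.1f} words per minute")  # 2.0 words per minute
```

Word prediction and faster scan intervals raise this ceiling, which is one reason interface tuning matters so much for retention.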

Economic and Accessibility Barriers

High-end speech-generating devices (SGDs) often cost between $3,000 and $15,000 or more, depending on features such as eye-tracking integration and customization, creating substantial upfront financial hurdles for individuals and families without funding support. Basic low-tech or single-message alternatives, such as CheapTalk communicators, range from $100 to $500, but these lack the synthesized speech output and vocabulary depth of full SGDs, limiting their utility for complex communication needs. In the United States, SGDs qualify as durable medical equipment under Medicare, Medicaid, and many private insurance plans, potentially covering 80-100% of costs for eligible beneficiaries with documented medical necessity. However, barriers persist in the form of stringent documentation requirements, appeals processes, and denials based on insufficient evidence of speech impairment or inadequate trials of alternatives, disproportionately affecting low-income families who may face out-of-pocket expenses exceeding annual budgets. Globally, access disparities are acute in developing countries, where high device costs, often unsubsidized, intersect with limited funding, inadequate infrastructure, and scarce local manufacturing or repair services, restricting SGD availability to urban elites or aid-dependent programs. An estimated 97 million people worldwide who rely on non-speech communication face barriers including unaffordable pricing and limited distribution networks, perpetuating exclusion from education, employment, and social participation. Low-cost smartphone adaptations offer partial mitigation in resource-constrained settings, but persistent issues such as internet access, device durability, and language-specific software availability hinder widespread adoption.
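Even generous partial coverage leaves meaningful out-of-pocket exposure at these price points. The following sketch computes a patient's share under assumed, purely illustrative cost, coverage, and deductible figures:

```python
# Hypothetical sketch: out-of-pocket exposure for an SGD under partial
# insurance coverage. All figures are illustrative, not policy quotes.

def out_of_pocket(device_cost: float, coverage_rate: float,
                  deductible: float = 0.0) -> float:
    """Patient cost = deductible + uncovered share of the remainder."""
    covered_base = max(device_cost - deductible, 0.0)
    return deductible + covered_base * (1.0 - coverage_rate)

# A $12,000 eye-tracking SGD with 80% coverage after a $200 deductible
# still leaves the family with roughly $2,560 to pay.
cost = out_of_pocket(12_000, 0.80, deductible=200)
print(f"${cost:,.0f}")  # $2,560
```

At the lower coverage rates or denial scenarios described above, the patient share can approach the full device cost.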

Ethical and Societal Considerations

Speech-generating devices (SGDs) pose privacy risks due to their capacity to log user inputs, generate speech outputs, and store communication histories, which may include sensitive personal expressions from individuals with severe speech impairments. Manufacturers recommend encrypting logfiles by default to prevent unauthorized access, as unencrypted records could reveal private thoughts if devices are lost or shared. Organizations that own SGDs treat device loss as a potential data breach, prompting caution in enabling features such as logging for clinical monitoring or research. In AI-enhanced SGDs, extensive processing of user data for personalization or voice adaptation amplifies these concerns, as continuous monitoring of inputs could inadvertently capture intimate details without robust safeguards. Consent challenges arise particularly for users reliant on caregivers or facilitators, where intermediaries may interpret or override device inputs, potentially misrepresenting the user's intended message and undermining autonomy. Ethical guidelines emphasize explicit user consent for data collection in such systems, as violations could enable surveillance or exploitation of non-speaking individuals, with resources highlighting the need for protocols to prevent harm in intimate or medical contexts. Regulatory frameworks require informed consent for research involving users with complex communication needs, yet exclusion from studies often stems from presumptions about their capacity, raising questions about proxy validity. In healthcare settings, mismatched communication between SGD users and providers can lead to uninformed decisions, as devices may not facilitate real-time nuanced dialogue without privacy-protected interfaces. Misuse risks encompass hacking of connected SGDs to alter outputs or steal voice models, as well as broader exploitation of synthesized voices for impersonation or deepfakes. Voice cloning from SGD-recorded samples enables fraudulent applications such as scams or identity theft, with federal agencies noting heightened vulnerabilities for disabled users whose synthesized voices could be replicated without permission.
Ethical analyses categorize harms including non-consensual voice replication for malicious rumors or unauthorized content generation, underscoring the need for transparency in synthetic speech attribution. Advanced zero-shot text-to-speech techniques exacerbate these dangers by requiring minimal audio to forge high-fidelity replicas, potentially eroding trust in SGD-mediated communications.
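Two of the logfile safeguards discussed above can be sketched minimally: pseudonymizing user identifiers before storage and authenticating each record so tampering is detectable. This is an illustration only, not any manufacturer's implementation; a deployed SGD would additionally encrypt the log with a vetted cryptographic library, and the key handling and record format here are assumptions:

```python
# Hypothetical sketch of SGD logfile safeguards: pseudonymize user IDs
# and attach an HMAC to each record so tampering can be detected.
# Real deployments would also encrypt the log (e.g., AES-GCM) and store
# the key in hardware-backed storage, not a constant.
import hashlib
import hmac
import json

SECRET_KEY = b"device-unique-secret"  # assumption: provisioned per device

def pseudonymize(user_id: str) -> str:
    # One-way keyed hash so raw identities never appear in the log.
    return hashlib.sha256(SECRET_KEY + user_id.encode()).hexdigest()[:16]

def log_record(user_id: str, utterance: str) -> dict:
    body = {"user": pseudonymize(user_id), "utterance": utterance}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify(record: dict) -> bool:
    payload = json.dumps(record["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

rec = log_record("alice", "I need my medication")
assert verify(rec)           # untouched record verifies
rec["body"]["utterance"] = "tampered"
assert not verify(rec)       # any edit breaks the tag
```

Integrity checks of this kind address tampering but not confidentiality, which is why the encryption-by-default recommendation above still applies.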

Debates on Dependency and Natural Speech Development

Concerns have persisted that prolonged use of speech-generating devices (SGDs) or other augmentative and alternative communication (AAC) systems may foster dependency, potentially discouraging individuals, particularly children with developmental delays, from developing or improving natural vocal speech. This stems from early theoretical apprehensions that reliance on external aids could reduce motivation for verbal attempts, akin to how crutches might affect walking in orthopedic cases, though such analogies lack direct empirical parallels in communication disorders. Proponents of caution argue that withholding AAC until natural speech trajectories clarify might prevent over-dependence, as seen in some clinical guidelines emphasizing speech-first approaches to prioritize vocal output. Empirical meta-analyses, however, consistently refute claims of inhibition: a 2006 review of 23 studies involving 89 participants across various disabilities found speech gains in 89% of cases and declines in none, attributing positive outcomes to AAC's role in modeling language and reducing barriers to communication. Similarly, a 2021 systematic review of AAC interventions for children with autism spectrum disorder reported facilitative effects on speech development in most cohorts, linking gains to enhanced symbolic understanding rather than device dependency. The American Speech-Language-Hearing Association (ASHA) synthesizes this evidence to dismiss the "dependency myth," noting that AAC often serves as a bridge, with 94% of reviewed participants showing stable or improved verbal output post-intervention. Nuances emerge in subgroup analyses, where AAC yields variable results; for instance, a critical appraisal of SGD-specific studies in developmentally delayed children found no overall increase in natural speech production, though exceptions occurred in select motor-speech impairment cases without co-occurring intellectual deficits.
Dependency risks appear minimal when AAC is integrated with speech therapy, as multimodal approaches combining device use with vocal prompting correlate with broader language gains without supplanting natural efforts. Longitudinal data indicate that individual factors such as baseline verbal imitation skills predict outcomes more strongly than device exposure duration, challenging blanket dependency fears. Critics of expansive AAC adoption highlight potential over-reliance in low-motivation profiles, yet no peer-reviewed evidence documents causal hindrance, and the available causal inference favors AAC as a neutral-to-positive scaffold. Ongoing debates center on implementation fidelity, with some researchers advocating rigorous trials to isolate AAC's effects from concurrent therapies, given observational biases in non-randomized designs. While mainstream clinical consensus, informed by randomized and quasi-experimental data, supports AAC's compatibility with natural speech goals, pockets of skepticism persist in resource-limited settings where monitoring for dependency lapses proves challenging. Future research prioritizes predictive models of speech trajectories to tailor AAC withdrawal strategies, ensuring devices augment rather than eclipse endogenous vocal capacities.

Future Prospects

Emerging Technologies and AI Integration

Recent advancements in speech-generating devices (SGDs) incorporate artificial intelligence (AI) for enhanced speech synthesis and predictive text generation, enabling more fluid communication for users with severe speech impairments. Neural text-to-speech (TTS) systems, leveraging deep learning models, produce highly intelligible and prosodic speech that mimics natural intonation, surpassing traditional concatenative methods. For instance, in 2024, researchers demonstrated online synthesis of intelligible words from neural signals via brain-computer interfaces (BCIs) in individuals with anarthria, achieving real-time decoding rates of up to 62 words per minute. Similarly, AI-empowered voice generation for amyotrophic lateral sclerosis (ALS) patients uses neural synthesis to recreate personalized voices from limited pre-diagnosis recordings, with models trained on such recordings yielding mean opinion scores exceeding 4.0 for naturalness as of January 2025. Large language models (LLMs) are being integrated into augmentative and alternative communication (AAC) systems to support expressive output beyond predefined phrases, generating contextually relevant sentences from user inputs such as symbols or partial text. The Speak Ease system, introduced in a March 2025 preprint, combines LLMs with speech generation to amplify self-expression in non-speaking users, allowing dynamic scripting for social interactions while maintaining user control over core messages. AI-based adaptation in SGDs, including eye-gaze controlled variants, has improved response accuracy by 28% compared with prior systems, as reported in market analyses from October 2025, by predicting intents and refining vocabulary in real time. These integrations address limitations of traditional SGDs, such as rigidity in phrasing, though empirical validation remains ongoing, with studies emphasizing the need for user-centered testing to mitigate over-reliance on algorithmic predictions. Emerging hybrid approaches fuse AI software with hardware such as BCIs and advanced sensors, promising direct neural-to-speech translation without manual input.
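The predictive text generation described above can be sketched, in highly simplified form, with a bigram model that suggests likely next words from a user's prior utterances. Commercial and LLM-based systems are far more sophisticated; the class name and training sentences below are purely illustrative:

```python
# Minimal sketch (not any vendor's algorithm) of predictive text for AAC:
# a bigram model learns which words follow which in the user's history
# and suggests likely continuations, reducing the selections required.
from collections import Counter, defaultdict

class BigramPredictor:
    def __init__(self):
        self.next_words = defaultdict(Counter)

    def train(self, sentences):
        for s in sentences:
            words = s.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.next_words[prev][nxt] += 1

    def suggest(self, prev_word, k=3):
        # Most frequent continuations of prev_word in the usage history.
        return [w for w, _ in self.next_words[prev_word.lower()].most_common(k)]

model = BigramPredictor()
model.train([
    "I want water",
    "I want food",
    "I want water please",
    "I need help",
])
print(model.suggest("want"))  # ['water', 'food'] -- 'water' was seen twice
print(model.suggest("i"))     # ['want', 'need']
```

Even this toy model shows the core trade-off the surrounding text raises: predictions speed input but steer phrasing, which is why user control over suggestions matters.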
A 2022 review highlighted BCIs for speech decoding in motor-impaired individuals, with subsequent 2024 trials extending this to full-sentence synthesis from cortical activity. Updates to commercial devices, such as Tobii Dynavox's TD Pilot series in March 2025, incorporate AI-driven features including adaptive scanning and multimodal inputs that combine eye-tracking with other access methods. Despite these strides, challenges persist in the computational demands of portable devices and in ensuring AI outputs align with users' authentic intent, underscoring the importance of interdisciplinary validation from speech-language pathology and human-computer interaction.

Market Trends and Policy Landscape

The global market for speech-generating devices (SGDs) was valued at approximately USD 327 million in 2024 and is projected to reach USD 834 million by 2032, a growth trajectory driven by the increasing prevalence of speech-impairing conditions such as ALS, autism spectrum disorders, and stroke-related aphasia. Growth factors include an aging population, with over 1.5 billion people worldwide affected by communication disorders as of 2023, and advancements in portable, software-based SGDs compatible with smartphones and tablets, which lower entry barriers compared with dedicated hardware. Key market players, including Tobii Dynavox and PRC-Saltillo, have shifted toward AI-enhanced and eye-tracking integrations, expanding adoption in educational and home settings, though hardware-dominant segments still account for over 60% of revenue owing to reliability preferences in clinical environments. Emerging trends emphasize affordability and customization, with low-cost apps such as Proloquo2Go enabling SGD functionality on consumer devices, potentially capturing underserved markets in developing regions where traditional devices cost USD 5,000-15,000. However, supply-chain disruptions and high development costs for durable, battery-efficient hardware have tempered growth in hardware sales, projected at 5-7% CAGR through 2030, while software solutions grow faster at 9-11%.
North America dominates with over 40% market share, fueled by robust insurance frameworks, while emerging regions show accelerating demand due to rising healthcare investments and smartphone penetration exceeding 70% in urban areas. In the United States, SGDs are classified by the Food and Drug Administration (FDA) as Class II medical devices, subject to general controls and, in many cases, exempt from premarket notification to expedite market entry while ensuring basic safety and efficacy. Medicare covers SGDs as durable medical equipment under Part B for beneficiaries with severe, medically documented speech impairments, requiring a physician's prescription, a speech-language pathologist evaluation, and demonstration of functional benefit, with no annual cap on device costs since October 2015 revisions to coverage rules. Private insurers such as Aetna and Cigna align with Medicare criteria but often exclude non-speech-generating communication aids, mandating device trials to confirm cognitive and physical usability, which can delay access for up to 20% of applicants due to documentation hurdles. These policies enhance accessibility under frameworks such as the Americans with Disabilities Act (ADA) and the Individuals with Disabilities Education Act (IDEA), which mandate SGD provision in schools and workplaces, yet empirical gaps persist: coverage approvals average 60-70% for eligible cases, but out-of-pocket costs and rural provider shortages limit utilization, particularly for pediatric users, where only 30% of school districts report consistent SGD training. Policy implications include incentivizing innovation through reimbursable upgrades, such as AI features qualifying as "medically necessary" enhancements, but also raising concerns over dependency on subsidized technology, potentially crowding out natural speech therapies absent rigorous outcome studies. Internationally, variable coverage (e.g., limited in public systems without equivalent DME categories) underscores the need for harmonized standards to prevent market fragmentation and ensure equitable scaling of evidence-based deployments.
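As a quick arithmetic check, the valuation figures quoted above (USD 327 million in 2024 to USD 834 million in 2032) imply a compound annual growth rate of roughly 12% over the eight-year projection window:

```python
# Arithmetic check of the quoted market projection: USD 327M (2024)
# growing to USD 834M (2032) implies ~12% compound annual growth.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(327, 834, 2032 - 2024)
print(f"{growth:.1%}")  # ~12.4%
```

This sits above the 5-7% hardware and 9-11% software CAGRs cited for sub-segments, consistent with the overall market figure blending both plus new-category growth.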

References

  1. [1]
    NCD - Speech Generating Devices (50.1) - CMS
    Speech generating devices are speech aids consisting of devices or software that generate speech and are used solely by the individual who has a severe speech ...
  2. [2]
    Speech Generating Devices (SGDs) - Center for Medicare Advocacy
    What Are Speech Generating Devices? SGDs are typically tablet-like units that allow a person to communicate thoughts by electronic voice generation when he ...
  3. [3]
  4. [4]
    Speech-generating devices: effectiveness of interface design ... - NIH
    Sep 29, 2016 · We analyzed the efficacy of the interface design of speech generating devices on three non-verbal adolescents with autism spectrum disorder (ASD).
  5. [5]
    Clinical Effectiveness of AAC Intervention in Minimally Verbal ...
    Dec 19, 2023 · AAC aids are effective tools for increasing communication in ASD children, but high-tech aids were more effective in increasing social communication, ...
  6. [6]
    How Intel Gave Stephen Hawking a Voice - WIRED
    Jan 13, 2015 · It's a CallText 5010, a model given to Hawking in 1988 when he visited the company that manufactured it, Speech Plus. The card inside the ...
  7. [7]
    The technology that gave Stephen Hawking a voice should be ...
    Mar 16, 2018 · Stephen Hawking was one of the most prominent people in history to use a high-tech communication aid known as augmentative and alternative communication (AAC).
  8. [8]
    Past, Present, and Future of Augmentative and Alternative ...
    Aug 4, 2020 · According to a great timeline by NDi Media, one of the earliest communication devices developed was the Patient Operated Selector Mechanism ( ...
  9. [9]
    Telling tales: unlocking the potential of AAC technologies - PMC - NIH
    Dec 30, 2018 · The cumbersome dedicated devices of the 1970s have evolved into a burgeoning AAC app industry. However, the limited use and abandonment of AAC ...<|separator|>
  10. [10]
    Speech-generating Devices - AAC & Speech Devices from PRC
    Speech-generating devices (SGDs) are durable medical equipment that help individuals with severe speech impairments meet their functional speaking needs.
  11. [11]
    The History of Augmentative and Alternative Communication (AAC)
    Jul 15, 2020 · AAC started with sign language, then the first device was the F. Hall Roe board, followed by the POSSUM, and later portable devices like the " ...Missing: key examples
  12. [12]
    Our History - PRC-Saltillo
    1969: PRC produces its first communication device, a typing system based on a discarded Teletype machine. 1970s. photo of PRC's first laser headmouse 1973: PRC ...
  13. [13]
    5 Things You May Not Know About the Early Days of AAC
    Oct 7, 2019 · One of the first wearable AAC devices came out in the early 1970's. The Talking Brooch (Newell, 1973) was an alphabetic display that was ...
  14. [14]
    DynaVox - Wikipedia
    The company was formed in 1983 and produces speech communication devices and special education software used to assist individuals in overcoming speech, ...History · Product scope · Speech generating devices
  15. [15]
    1.5 History and Origins of AAC - AAC4ALL
    The history of AAC can be traced back to the late 19th century when sign language and manual communication were primary methods for individuals with speech and ...
  16. [16]
    Words We Would Want: Comparison of Three Pre-programmed ...
    A quick survey reveals that the Vantage/Vanguard (Prentke-Romich Company) and the DynaVox V/Vmax (DynaVox Technologies) offer approximately 40 different ...
  17. [17]
    Augmentative and Alternative Communication and Voice Products ...
    Speech-generating devices—Designed specifically for communication, SGDs may provide the most effective means of meeting communication needs through highly ...
  18. [18]
    About Us - Tobii Dynavox Global
    Tobii Dynavox history​​ It began with two assistive technology pioneers: Sweden-based Tobii Technology and U.S.-based DynaVox. In 2014, the companies merged to ...Missing: expansion 1990s- 2010s
  19. [19]
    Augmentative and Alternative Communication (AAC) Advances
    A common attribute of modern day AAC solutions tends to rely on the translation of a user's intended meanings into speech via speech generating devices (SGDs) ...
  20. [20]
    Our History - AAC & Speech Devices from PRC - Prentrom
    Since 1966, PRC has led in developing speech-generating devices and language and vocabulary, allowing those with communication challenges to participate in ...
  21. [21]
  22. [22]
    "Speech Generating Devices Market: Innovations in Assistive ...
    Mar 28, 2025 · Advanced SGDs incorporate AI to improve speech recognition and produce more natural-sounding synthesized speech, enhancing the user experience ...Missing: 2020s | Show results with:2020s
  23. [23]
    March 2025 AAC Device Updates & New Speech Generating ...
    Mar 26, 2025 · This update includes expanded details on device specifications, warranty updates, accessibility improvements, and new product releases across ...<|separator|>
  24. [24]
    Dismantling societal barriers that limit people who need or use AAC
    The recent development of the INSTRUCT app is intended to overcome technology barriers and provide accessible tools to support individuals who need or use AAC ...
  25. [25]
    Transforming lives: the remarkable impact of assistive technology
    May 24, 2024 · According to the American Speech-Language-Hearing Association, an estimated 5 million Americans may benefit from using AAC.Missing: 2020s | Show results with:2020s
  26. [26]
    Ethical Conversations: What are the Implications of AI for AAC ...
    AI in AAC raises ethical questions about privacy, user consent, and the representation of the user’s authentic voice.
  27. [27]
    Trends & New Tools for Augmentative & Communication in 2025
    May 12, 2025 · From eye-tracking to AI-driven speech prediction, the tools now available offer a new level of personalisation, speed, and responsiveness.Missing: 2020-2025 | Show results with:2020-2025
  28. [28]
  29. [29]
    AAC Direct Selection (Access Methods)
    Feb 4, 2021 · Direct selection access methods for AAC include touch, laser, head tracking, and eye gaze devices. They can be used for low, mid, and high tech AAC devices.Missing: input | Show results with:input
  30. [30]
    How does eye tracking work for AAC? - Tobii Dynavox Global
    Eye tracking uses infrared light reflected in eyes, picked up by cameras, and filtered to determine where the user is looking. Calibration measures eye ...
  31. [31]
  32. [32]
  33. [33]
    Speech Generating Devices - Medical Clinical Policy Bulletins - Aetna
    Speech generating devices are speech aids consisting of devices or software that generate speech and are used solely by the individual who has a severe speech ...
  34. [34]
    [PDF] Speech Generating Devices - Cigna Healthcare
    Nov 12, 2019 · Synthesized speech devices permit the user multiple methods of message formulation and multiple methods of access. Multiple methods of message ...<|separator|>
  35. [35]
    Speech Generating Devices (SGD) - Policy Article (A52469) - CMS
    E2513 is only for use with code E2510 (SPEECH GENERATING DEVICE, SYNTHESIZED SPEECH, PERMITTING MULTIPLE METHODS OF MESSAGE FORMULATION AND MULTIPLE METHODS OF ...
  36. [36]
    Designing Effective AAC Displays for Individuals with ... - NIH
    This paper reviews research on the impact of AAC display variables on visual attention and performance of children with developmental disabilities and adults ...
  37. [37]
    Types of AAC - Communication Matters
    These displays are dynamic – they change according to what the AAC speaker selects. For example, if they select the “food” symbol, the display may change to a ...
  38. [38]
    Improving the Design of AAC Systems - AAC at Penn State
    For individuals with complex communication needs, the visual features of an AAC display play a key role in supporting successful use. At present, many AAC ...
  39. [39]
    [PDF] Utterance-Based Systems: Organization and Design of AAC Interfaces
    A typical AAC device consists of an interface (AAC interface) and a language set: the interface defines the way the user interacts with the device, while the ...
  40. [40]
    Speech Generating Devices | Providers - Blue Cross NC
    These devices or aids are electronic, and computer based and can generate synthesized (computer-generated) and/or digitized (natural human) speech output.<|separator|>
  41. [41]
    Speech Generating Devices
    Digitized audible/verbal speech output, using prerecorded messages; Synthesized audible/verbal speech output that requires message formulation by spelling and ...
  42. [42]
    Speech-Generating Devices | voice preservation | programs
    A Bluetooth amplifier may be used to make your speech output louder for face to face communication when using a speech generating device with Bluetooth ...
  43. [43]
    From Hawking to Siri: The Evolution of Speech Synthesis - Deepgram
    Apr 8, 2025 · In 1981, Dennis Klatt introduced his KlattTalk System TTS (text-to-speech) which forms the basis for many synthesis systems today.
  44. [44]
    Speech Synthesis - an overview | ScienceDirect Topics
    Methods for concatenation include formant, concatenative, and articulatory. In formant synthesis, periodic and nonperiodic source signals are generated and ...Missing: SGDs | Show results with:SGDs
  45. [45]
    Synthesized Speech - an overview | ScienceDirect Topics
    Concatenative synthesis creates speech by selecting and concatenating recorded speech units such as phonemes, diphones, and syllables, with unit selection ...
  46. [46]
    9.1. Concatenative speech synthesis
    Concantenative speech synthesis (CSS), also known as unit selection speech synthesis, is one of the two primary modern speech synthesis techniques together with ...<|separator|>
  47. [47]
    [PDF] A Survey on Neural Speech Synthesis - arXiv
    Jul 23, 2021 · The early computer-based speech synthesis methods include articulatory synthesis [53, 300], formant synthesis [299, 5, 171, 172], and.
  48. [48]
    Towards personalized speech synthesis for augmentative ... - PubMed
    The paper describes a VocaliD approach to generate personalized speech synthesis by extracting prosodic properties and applying them to a surrogate talker's ...
  49. [49]
    [PDF] Perception of Concatenative vs. Neural Text-To-Speech (TTS)
    This study tests speech-in-noise perception and social ratings of speech produced by different text-to-speech (TTS) synthesis methods.Missing: SGDs | Show results with:SGDs
  50. [50]
    [PDF] Enhancing AAC with Generative Imagery and Zero-Shot TTS
    Aug 17, 2025 · This paper presents an Augmentative and Alternative Com- munication (AAC) approach for minimally verbal children with.
  51. [51]
    Teaching with core words: building blocks for communication
    Core words are 50-400 words that make up the majority of everything we say. Augmentative and Alternative Communication (AAC) systems use these core words to ...
  52. [52]
    AAC: Core Vocabulary
    Core Vocabulary consists of high frequency, everyday words. These are the words that combine to create phrases and sentences. These words form the DNA of ...
  53. [53]
    Vocabulary Selection - Virginia's Assistive Technology Priority Project
    Core Vocabulary refers to the most frequently used words and primarily consists of verbs, pronouns, descriptors, and prepositions, with very few nouns. These ...
  54. [54]
    Vocabulary Selection and Implementation in ... - ASHA Journals
    The purpose of this scoping review was to examine the vocabulary selection techniques and other aspects of intervention studies focused on vocabulary ...
  55. [55]
    Step 3: Select Appropriate Vocabulary
    Feb 10, 2019 · Select vocabulary by watching your child, focusing on what they want to communicate, and choosing motivating, functional, and appropriate  ...
  56. [56]
    PrAACtical Resources: Vocabulary Selection Questionnaire
    Jul 2, 2014 · Core words are central, for most people who use AAC, but we always want to include the words that help them express very specific and personal ...<|separator|>
  57. [57]
    AAC Vocabulary Selection: Communicating Beyond Requesting
    Sep 1, 2020 · On a vocabulary selection inventory, parents may provide an extensive list of their child's favorite toys and snacks as words to include on ...
  58. [58]
    Examining core vocabulary with language development for early ...
    Jan 17, 2023 · Core vocabulary lists are frequently used to select vocabulary for early symbolic communicators who require augmentative and alternative communication (AAC).Missing: initial | Show results with:initial
  59. [59]
    Determining a Zulu core vocabulary for children who use ... - PubMed
    Dec 13, 2019 · Vocabulary selection is an important aspect to consider when designing augmentative and alternative communication (AAC) systems for children ...
  60. [60]
    Vocabulary Selection in AAC: Application of Core ... - ASHA Journals
    It was concluded that core vocabulary is comparable for both populations in various contexts, with various communication partners, over various topics, and in ...
  61. [61]
    Full article: Speech-language pathologists' vocabulary selection and ...
    Vocabulary selection is an integral aspect of early AAC intervention as the first words that children who use AAC interact with are the foundation for later ...
  62. [62]
    SECTION 5: TYPES OF AAC DEVICES
    Synthesized speech AAC devices use a technology that translates the user's input into machine-generated speech using algorithms representing linguistic rules, ...
  63. [63]
    How High-Tech AAC Devices Transform Lives - QuickTalker Freestyle
    Discover the transformative benefits of high-tech AAC devices in enhancing communication for those with communication disorders.Missing: 2020s | Show results with:2020s
  64. [64]
    Review Facilitating requesting skills using high-tech augmentative ...
    Overall, the intervention results were largely positive, suggesting that high-tech devices can be successfully implemented as augmentative and alternative ...
  65. [65]
    [PDF] State of the Art in AAC: A Systematic Review and Taxonomy
    Accessibility researchers have considered leveraging AI systems to improve the adaptiveness (N=42, 7.5%) of the AAC system – to improve feature recognition i.e. ...
  66. [66]
    [PDF] ADAPTIVE EQUIPMENT MAINTENANCE PROTOCOLS
    Generally these devices are best cleaned using a microfiber cloth for the display screen or keyboard. Do not use any solvents, especially on the display. Keep.
  67. [67]
    [PDF] 1 CTCA Whitepaper Version 11 RESNA Suggested Procedures and ...
    It was developed as guidance to help improve outcomes for individuals needing Speech. Generating Devices (SGD) and incorporates input from many professionals, ...
  68. [68]
    Speech-Language Pathologists' Practices in Augmentative and ...
    This survey study examined augmentative and alternative communication (AAC) practices reported by early intervention speech-language pathologists (SLPs) ...
  69. [69]
    Full article: Multicultural considerations in augmentative and ...
    The purpose of this study was to investigate how high-tech AAC device manufacturers consider language variation and multilingualism in device design and ...
  70. [70]
    Ethical Considerations in AAC - The AAC Academy
    This session will explore key ethical considerations related to autonomy, consent, privacy, access, culture, and conflict of interest.
  71. [71]
    ethics : PrAACtical AAC
    Feb 2, 2017 · In this post, we explore some of those issues. Vocabulary and Message Selection Until our clients are fully literate or competent with a ...Missing: considerations | Show results with:considerations
  72. [72]
    [PDF] A recent survey of augmentative and alternative communication use ...
    Nov 30, 2022 · Among respondents with severe speech impairment, over 90% reported using speech-generating devices, and just over half reported ... population.
  73. [73]
    Tobii Dynavox US: Assistive communication solutions
    Discover Tobii Dynavox AAC solutions for people with conditions such as autism, ALS and cerebral palsy. We make communication apps and AAC devices, speech ...
  74. [74]
    A Systematic review of AAC interventions using speech generating ...
    Mar 31, 2025 · This systematic review investigated intervention studies using speech generating devices to enhance the expressive language of autistic preschoolers.
  75. [75]
    Wego 5A - Talk To Me Technologies
    The Wego 5A provides an ideal communication solution for children and adults with significant communication needs resulting from autism, stroke, brain injury, ...
  76. [76]
    What Is a Speech Generating Device - Advanced Therapy Clinic
    May 14, 2025 · The 1990s and early 2000s marked a period of rapid technological progress. ... Speech Generating Devices for Communication - Avaz Inc. [PDF] ...
  77. [77]
    Augmentative and Alternative Communication | The ALS Association
    speech-generating devices. The ...
  78. [78]
    Speech-Generating Devices for Communication Skills in Autism
    Jan 17, 2024 · Tablet-based speech-generating devices (SGDs) have been shown to be highly effective at improving communication skills in individuals with autism spectrum ...
  79. [79]
    AAC Devices & Speech Solutions - Numotion
    We support clients with a wide range of diagnoses, including ALS, Autism, Cerebral Palsy, Stroke, Traumatic Brain Injury, Apraxia and more. Browse products.
  80. [80]
    The Benefits of Speech Devices in Supporting Communication
    Aug 27, 2025 · Speech-generating devices serve individuals at all ages, from young children learning to communicate to older adults experiencing progressive ...
  81. [81]
    Bringing A New Voice to Genius—MITalk, the CallText 5010, and ...
    Mar 26, 2018 · It was the CallText voice, Dennis Klatt's synthesized voice, that brought sound back to Stephen Hawking's genius.
  82. [82]
    Stephen Hawking's famous voice belonged to an MIT scientist
    Mar 29, 2024 · Stephen Hawking's synthetic voice was modeled after the real-live voice of Dennis Klatt, a pioneer in text-to-speech systems.
  83. [83]
    An Engineer's Quest To Save Stephen Hawking's Voice - NPR
    Mar 27, 2018 · That machine voice became Hawking's voice after a debilitating neurological disease took away his own ability to talk. Hawking used to joke ...
  84. [84]
    Stephen Hawking's Eternal Voice | The MIT Press Reader
    Dec 24, 2024 · Of his version of Perfect Paul, Hawking stated, “I use a separate synthesizer, made by Speech Plus. It is the best I have heard, though it gives ...
  85. [85]
    What is a Speech Generating Device? - Kutest Kids
    Jul 12, 2024 · Speech-Generating Devices (SGDs) are personalized devices, often tablet-like, that help those who can't speak communicate by outputting ...
  86. [86]
    Speech-generating devices: effectiveness of interface design—a ...
    Sep 29, 2016 · We analyzed the efficacy of the interface design of speech generating devices on three non-verbal adolescents with autism spectrum disorder (ASD).
  87. [87]
    The Impact of Augmentative and Alternative Communication ...
    The best evidence indicates that AAC interventions do not have a negative impact on speech production. All but 1 of the 17 participants (94%) in our review ...
  88. [88]
    The Effect of Responsiveness to Speech-Generating Device Input ...
    The use of speech-generating devices (SGD) in early interventions for children with autism spectrum disorder (ASD) can improve communication and spoken ...
  89. [89]
    Communication interventions involving speech-generating devices ...
    Positive outcomes were reported for 86% of the studies and 78% of the studies were categorized as providing conclusive evidence.
  90. [90]
    Augmentative and Alternative Communication
    Over the years, Stephen Hawking utilized many different speech-generating devices. During the 1980s, he used a joystick that effectively allowed him to type out ...
  91. [91]
    First-of-its-kind technology helps man with ALS 'speak' in real time
    Jun 11, 2025 · We showed how a paralyzed man was empowered to speak with a synthesized version of his voice. This kind of technology could be transformative ...
  92. [92]
    Communication Matters—Pitfalls and Promise of Hightech ... - NIH
    Jul 27, 2018 · An interview-study of 34 family caregivers of ALS-patients reports a very positive attitude toward HT-AAC devices, an increased perception of ...
  93. [93]
    Communication Devices for Cerebral Palsy | Learn How They Can ...
    Apr 21, 2023 · AAC devices are some of the primary ways that many children with cerebral palsy communicate if they have speech or hearing impairments. ...Communication Boards · Speech-Generating Devices · Hearing Aids And Implants
  94. [94]
    Augmentative and Alternative Communication (AAC) - ASHA
    This systematic review explores the characteristics and impact of augmentative and alternative communication (AAC) interventions utilizing ...
  95. [95]
    Rethinking device abandonment: a capability approach focused model
    Although AAC is considered an evidenced-based intervention, device abandonment remains common, and researchers have attempted to analyze the causes of people ...
  96. [96]
    The impact of augmentative and alternative ... - PubMed
    None of the 27 cases demonstrated decreases in speech production as a result of AAC intervention, 11% showed no change, and the majority (89%) demonstrated ...
  97. [97]
    A Woman's Discovery of Shortcomings of AAC Devices - USSAAC
    The more I use AAC, the more aware I've become of its faults. Several of these include lack of volume, timing, lack of emphasis & intonation and word ...
  98. [98]
    Reliability of Speech Generating Devices: A 5-Year Review
    Sep 9, 2009 · This study examined the reliability of new SGDs and found that mean time to first failure was 42.7 (SD = 41.2) weeks and at least 40% required ...
  99. [99]
    Augmentative and Alternative Communication for Children with ...
    Mar 31, 2021 · AAC includes unaided systems such as manual sign and gestures as well as aided systems such as the Picture Exchange Communication System (PECS; ...
  100. [100]
    Augmentative and alternative communication (AAC) - Simple Practice
    Sep 22, 2025 · Augmentative and alternative communication encompasses numerous tools and strategies that help individuals with speech and language impairments ...
  101. [101]
    Performance and Usability - ASHA Journals
    Problem: Current research provides little evidence of optimal rates for effective communication, or how recent AAC innovations affect interaction performance.
  102. [102]
    Comparing and contrasting barriers in augmentative alternative ...
    Jun 10, 2024 · Research on AAC use shows improved communication skills in autistic children (9) – including those with intellectual disabilities and CCN (10, ...
  103. [103]
    TTMT Pricing - Talk To Me Technologies
    Talk To Me Technologies 2025 Pricing ; Z.12.MP. Zuvo 12 Mount Plate (without eye gaze camera). $130 ; Z.12.CH. Zuvo 12 Charger. $59 ; Z.12.ASP. Zuvo 12 Auditory ...
  104. [104]
  105. [105]
  106. [106]
  107. [107]
  108. [108]
    Insurance Tips - AAC Funding
    STEP 1: GATHER INFORMATION · STEP 2: CALL YOUR INSURANCE COMPANY · STEP 3: CHECK BENEFITS FOR A SPEECH-GENERATING DEVICE · STEP 4: ASK ADDITIONAL QUESTIONS · STEP 5 ...
  109. [109]
    [PDF] Effects of Low-Income Communities on the Accessibility of AAC ...
    The purpose of the study is to investigate the effects of low-income communities on the accessibility of AAC devices for children with ASD. Purpose. • Insurance.
  110. [110]
  111. [111]
    Social Economic Barriers to Information Communication Technology ...
    Sep 25, 2024 · This article explores the multifaceted challenges PWDs face in accessing ICT in Africa, emphasizing the intersections of disability with poverty, education, ...
  112. [112]
    Development and evaluation of a speech-generating AAC mobile ...
    Oct 3, 2017 · Mobile touchscreen devices are currently being used as speech-generating devices (SGDs) and have been shown to promote the communication ...
  113. [113]
    [PDF] Global Report on Assistive Technology - Unicef
    May 17, 2022 · Access to Assistive Technology deserves greater attention now than ever before. In fact, access to appropriate, quality assistive technology can ...
  114. [114]
    [PDF] Privacy and AAC - Augmentative Communication, Inc.
    AAC devices should be equipped with a CLEAR button that erases the screen as well as the buffer. Logfiles should be encrypted by default to prevent unintended ...
  115. [115]
    [PDF] Data Privacy and Security for AAC | ISAAC
    Mar 9, 2021 · If the device is owned by an organization, this loss is considered a data breach. Organizations may therefore be cautious about adding such ...
  116. [116]
    [PDF] Ethical Implications of AI-Driven AAC Systems: Ensuring Inclusivity ...
    Oct 27, 2024 · The analysis of AI-driven AAC systems reveals substantial privacy challenges due to the extensive collection and processing of user data ...
  117. [117]
    AAC data collection and privacy - AssistiveWare
    Oct 21, 2024 · We believe that AAC users have an absolute right to privacy regarding what they say, when they say it, and to whom they say it.
  118. [118]
    AAC and Consent, Safety, & Dignity - NWACS
    A curated collection of information and resources related to prevention of abuse, harm, and infantilization of complex communicators and AAC users.
  119. [119]
    The Ethics of Inclusion in AAC Research of Participants with ...
    Apr 10, 2020 · “Ethical Issues in AAC Research.” Paper presented at the Research Symposia Proceedings—International Society for Augmentative & Alternative ...
  120. [120]
    Privacy in the Information Age: Unique Issues for AAC Users
    Unique privacy concerns, both ethical and regulatory, confront individuals who rely on augmentative and alternative communication (AAC) technologies.
  121. [121]
    [PDF] Robust and Universal Voice Protection Against Malicious Speech ...
    Aug 15, 2025 · Speech synthesis technology has brought great convenience, while the widespread usage of realistic deepfake audio has triggered hazards.
  122. [122]
    Preventing the Harms of AI-enabled Voice Cloning
    Nov 16, 2023 · It also poses significant risk—families and small businesses can be targeted with fraudulent extortion scams; and creative professionals, such ...
  123. [123]
    Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech ...
    Jan 25, 2024 · The rapid and wide-scale adoption of AI to generate human speech poses a range of significant ethical and safety risks to society that need ...
  124. [124]
    Speaker Identity Unlearning for Zero-Shot Text-to-Speech - ICML 2025
    The rapid advancement of Zero-Shot Text-to-Speech (ZS-TTS) technology has enabled highfidelity voice synthesis from minimal audio cues, raising significant ...
  125. [125]
    [PDF] AAC and natural speech in individuals with developmental disabilities
    ... meta- analysis, provides a gross estimate of speech use but does not address any changes in quality or accuracy of productions. These are important ...
  126. [126]
    Augmentative and Alternative Communication and Speech ...
    Jan 28, 2021 · This review evaluated the effects of augmentative and alternative communication (AAC) on speech development in children with autism spectrum disorders (ASD).
  127. [127]
    [PDF] Critical Review: Do Speech-Generating Devices (SGDs) Increase ...
    There are a number of social, communicative benefits to using SGDs, including greater naturalness for listeners, greater social acceptability among peers, and ...
  128. [128]
    Online speech synthesis using a chronically implanted brain ...
    Apr 26, 2024 · Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a man with impaired ...
  129. [129]
    Artificial intelligence empowered voice generation for amyotrophic ...
    Jan 8, 2025 · More recently, neural speech synthesis has emerged as the state of the art, largely supplanting concatenative and parametric TTS techniques.
  130. [130]
    [PDF] Supporting Self-expression through Speech Generation and LLMs ...
    Mar 21, 2025 · In this paper, we present Speak Ease: an augmentative and alternative communication (AAC) system to support users' expressivity.
  131. [131]
  132. [132]
    Enhancing AAC Systems Through Artificial Intelligence: Parent and ...
    Jun 19, 2025 · Despite their potential, AAC devices face significant challenges in usability, linguistic competence, and social interaction, including limited ...
  133. [133]
    Speech synthesis from neural decoding of spoken sentences - NIH
    Technology that translates neural activity into speech would be transformative for people unable to communicate as a result of neurological impairment.
  134. [134]
    Artificial Intelligence in Communication Sciences and Disorders
    Oct 17, 2024 · In speech-language pathology, a wide range of AI-driven tools are being developed to enhance the efficiency, accessibility, and effectiveness of ...
  135. [135]
  136. [136]
    Speech Generating Devices Market Report - Dataintelo
    The global speech generating devices market size was estimated to be approximately USD 256 million in 2023, with an anticipated growth at a robust CAGR of 8.6% ...
  137. [137]
    [PDF] Molina Clinical Policy Speech Generating Devices: Policy No. 445
    Dec 11, 2024 · Speech generating devices (SGDs) are covered if a speech evaluation is done, including a trial, and the individual has the ability to use the ...
  138. [138]