
Human-centered computing

Human-centered computing (HCC) is an interdisciplinary field that applies methodologies from computer science, human-computer interaction, and the social sciences to design, develop, and evaluate technologies centered on human needs, behaviors, and contexts. It emphasizes creating intuitive, accessible systems that augment human capabilities—physical, cognitive, and social—while assessing the benefits, risks, and societal impacts of computing. By integrating human sciences such as psychology and cognitive studies with technical disciplines like software engineering and interface design, HCC ensures technologies adapt to diverse users and promote ethical, culturally aware interactions.

The origins of HCC trace back to mid-20th-century visions of human-technology symbiosis, notably J.C.R. Licklider's 1960s conceptualization of computing as a collaborative partner that enhances human thinking and communication. The field coalesced in 1997 through a pivotal U.S. workshop that convened 51 researchers from computing, the human sciences, and other areas to outline HCC as a philosophical and practical framework for human-enhancing systems. This event marked HCC's emergence as a distinct subdiscipline of human-computer interaction, evolving from earlier HCI efforts to address broader socio-technical dynamics. By the early 2000s, the U.S. National Science Foundation (NSF) had established dedicated funding programs, fostering growth into academic degrees at universities including Georgia Tech, Clemson, and the University of Florida.

Key principles of HCC include user-centered design, where iterative prototyping and evaluation prioritize end-user feedback to make technology "invisible" and seamless in supporting human endeavors. It also stresses interdisciplinary integration, combining behavioral analysis with computational innovation to mitigate risks like usability barriers or unintended social effects. Applications span multimodal interfaces for natural interaction, assistive tools for accessibility, systems for creative expression, and domain-specific solutions in areas such as health and education.

Recent developments increasingly focus on human-centered artificial intelligence (AI), emphasizing trustworthy, value-aligned systems that enhance rather than replace human judgment.

Introduction

Definition and Scope

Human-centered computing (HCC) is an interdisciplinary paradigm that prioritizes the design, development, and deployment of computing systems to enhance human capabilities while accounting for human limitations, contexts, and needs. It integrates principles from human-computer interaction (HCI), the social and behavioral sciences, and computer science to create technologies that support rather than replace human skills and judgment. This approach emerged from a 1997 National Science Foundation (NSF) workshop on human-centered systems, which aimed to bridge HCI with broader computing disciplines by merging related NSF programs into a cohesive framework.

The scope of HCC encompasses the full lifecycle of interactive systems, including their design, evaluation, and deployment in real-world settings, with a focus on socio-technical environments where humans and technologies co-evolve. Unlike traditional computer science, which often emphasizes algorithmic efficiency and technical innovation in isolation, HCC distinguishes itself by centering on the interplay among human behaviors, societal impacts, and technological artifacts, ensuring systems are evaluated not just for performance but for their effects on users and communities. This includes research into human-technology interfaces, collaborative tools, and domain-specific applications such as healthcare systems or educational platforms.

Central to HCC are key concepts like usability, accessibility, and inclusivity, which guide the creation of systems that accommodate diverse populations, including those with disabilities or varying cultural backgrounds. For instance, adaptive interfaces that adjust to individual cognitive or physical needs—such as voice-activated controls for elderly users or customizable displays for visually impaired individuals—exemplify how HCC promotes equitable adoption by emphasizing the "fit" between people and systems. These principles ensure that computing solutions amplify human abilities without exacerbating inequalities.

HCC has evolved to incorporate emerging technologies like artificial intelligence (AI) and the Internet of Things (IoT), adapting them to human contexts through mixed-initiative systems where humans and machines collaborate seamlessly. This expansion maintains the field's human focus, addressing challenges such as ethical AI deployment and privacy in connected environments, thereby extending HCC's relevance to contemporary socio-technical challenges.

Historical Development

The conceptual origins of human-centered computing (HCC) trace back to mid-20th-century visions of human-technology symbiosis, such as J.C.R. Licklider's 1960s ideas of computing as a partner that augments human thinking. Building on these foundations, HCC traces its practical origins to the human-computer interaction (HCI) research established in the 1980s, particularly pioneering work at Xerox PARC, where researchers developed the first graphical user interfaces (GUIs) with the Alto system in 1973 and commercialized aspects of them in the Xerox Star in 1981. These innovations introduced key concepts like windows, icons, menus, and pointers (WIMP interfaces), emphasizing intuitive human interaction with computers over command-line paradigms.

Building on this HCI legacy, HCC emerged as a distinct field around the turn of the millennium, formalized through the U.S. National Science Foundation's (NSF) support for interdisciplinary research integrating computing with the human sciences. Key milestones in HCC's development include the NSF's establishment of dedicated funding for human-centered computing in the early 2000s, incorporating the social and behavioral sciences to address human behaviors and societal impacts beyond traditional HCI. During the 2000s and 2010s, HCC evolved with the proliferation of mobile and social media technologies, enabling research on ubiquitous interfaces and social media's role in human computation.

Influential pioneers from HCI laid critical groundwork for HCC. Donald Norman advanced user-centered design principles in his 1988 book The Design of Everyday Things (first published as The Psychology of Everyday Things), stressing the importance of usability and human needs in system design. Similarly, Ben Shneiderman introduced the concept of direct manipulation interfaces in a 1983 paper, promoting visual, immediate feedback to enhance user control and satisfaction.

Post-2020 developments in HCC have increasingly focused on ethical considerations, particularly the integration of human-centered artificial intelligence (HCAI) frameworks amid rising bias and privacy concerns in AI systems. This shift is exemplified by NSF-funded initiatives emphasizing responsible design, such as the National AI Research Institutes launched in 2020 to promote human-AI collaboration. The European Union's AI Act, which entered into force in August 2024, further drives this evolution by mandating human oversight for high-risk systems to mitigate harms to health, safety, and fundamental rights.

Core Principles

User-Centered Design Fundamentals

User-centered design (UCD) in human-centered computing prioritizes understanding and addressing user needs through core principles that foster empathy and continuous improvement. Empathy mapping serves as a foundational tool, enabling designers to visualize and synthesize user insights by categorizing what users say, think, do, and feel, thereby creating a shared team understanding of user perspectives early in the process. Iterative prototyping complements this through repeated cycles of design, testing, and refinement, where low-fidelity prototypes are built and evaluated with users to identify issues and enhance usability, with early studies reporting improvements of roughly 38% in usability per iteration. Feedback loops are integral to this approach, closing the cycle by incorporating user input at each stage to ensure designs evolve in alignment with real-world needs and behaviors.

A key philosophy in UCD is the "human-in-the-loop" concept, which integrates human oversight and input into computational systems to augment rather than supplant human judgment, particularly in AI-driven applications. This approach leverages collaborative human-AI interactions, such as reciprocal learning frameworks, to improve accuracy and system reliability while maintaining user agency. By keeping humans actively involved in training, evaluation, and refinement processes, it mitigates risks of over-automation and ensures technologies support human capabilities effectively.

Central to UCD frameworks is Donald Norman's seven-stage model of action, which describes the psychological process of interacting with a system: forming a goal, formulating an intention, specifying an action, executing the action, perceiving the state of the world, interpreting that state, and evaluating the outcome. This model highlights two core challenges: the Gulf of Execution—the gap between a user's intentions and the actions the system makes available—and the Gulf of Evaluation—the difficulty of interpreting system feedback to assess outcomes. Addressing these gulfs through intuitive interfaces reduces cognitive effort and enhances interaction effectiveness.

Inclusivity in UCD draws from universal design principles, which aim to create systems usable by the widest range of people without adaptation, exemplified by the curb-cut effect, where ramps installed for wheelchair users also benefit parents with strollers, delivery workers, and others. These principles—equitable use, flexibility, simplicity, perceptible information, and error tolerance—ensure broad accessibility in computing interfaces. Human factors metrics unique to this focus include error rates, which quantify user mistakes during tasks to identify design flaws, and satisfaction scores, often measured via instruments like the System Usability Scale to gauge subjective satisfaction.

Ethical considerations in UCD emphasize inclusivity from the outset, achieved through diverse personas that represent varied demographics, abilities, and experiences to uncover and counteract potential inequalities in system design. For instance, tools like GenderMag facilitate the creation of such personas to evaluate interfaces for gender biases, promoting equitable outcomes by simulating interactions across user groups during cognitive walkthroughs. This proactive inclusion helps prevent discriminatory impacts in human-centered computing applications.

Interdisciplinary Integration

Human-centered computing (HCC) integrates insights from diverse disciplines to ensure that computational systems align with human needs, behaviors, and contexts. Core contributing fields include psychology, which provides cognitive models for understanding mental processes such as attention and memory during interaction; sociology, which examines the social impacts and dynamics of technology adoption within groups and communities; anthropology, which informs cultural contexts to avoid ethnocentric designs; and computer science, which focuses on practical system implementation to make interfaces reliable and scalable.

A prominent example of this integration is the application of ethnographic methods from anthropology to interface design, where researchers immerse themselves in user environments to uncover tacit practices and cultural nuances that inform more intuitive digital tools. Similarly, models like Fitts' law, derived from psychological research on human motor control, optimize pointing efficiency by predicting movement times based on target distance and size. The law is given by

T = a + b \log_2 \left( \frac{D}{W} + 1 \right)

where T represents the average time to acquire a target, D is the distance to the target, W is the target width, and a and b are empirically determined constants reflecting device and user factors.

This interdisciplinary approach yields benefits such as context-aware systems, including culturally adaptive user interfaces that adjust to regional norms and preferences for broader accessibility. However, challenges like siloed knowledge across fields can hinder collaboration, which cross-disciplinary teams often address by fostering shared methodologies and iterative feedback. In 2025, trends highlight the expanding role of neuroscience in HCC, particularly through brain-computer interfaces (BCIs) that enable direct neural interaction with devices, enhancing personalization for users with disabilities while raising ethical considerations in human augmentation.
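As an illustration, Fitts' law can be turned into a small predictive function. The constants a and b below are assumed placeholder values; in practice they are estimated by regression against observed pointing times for a given device and user population.

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Predicted average time (seconds) to acquire a target under Fitts' law.

    a and b are illustrative placeholders, not measured constants; real
    values are fit empirically per device and user population.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A far, small target has a higher index of difficulty than a near, large one,
# so the model predicts a longer acquisition time.
near_large = fitts_time(distance=100, width=50)  # ID = log2(3)  ~ 1.58 bits
far_small = fitts_time(distance=800, width=20)   # ID = log2(41) ~ 5.36 bits
```

Designers use such predictions to compare layouts, for example by enlarging frequently used targets or placing them closer to the expected cursor position.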

Key Areas

Human-Computer Interaction

Human-computer interaction (HCI) forms the foundational layer of human-centered computing, emphasizing intuitive and efficient communication between users and digital systems. In HCI, interaction techniques prioritize natural human capabilities, enabling seamless engagement with computational tools while minimizing cognitive load.

Core input methods include gestures, voice, and touch, which allow users to convey intentions through physical or verbal actions rather than abstract commands. Gesture recognition, for instance, interprets hand movements or body poses via sensors, facilitating expressive control in environments like virtual reality or on mobile devices. Voice input processes spoken commands using speech recognition algorithms, enabling hands-free operation and supporting applications ranging from dictation to virtual assistants. Touch interfaces, popularized by capacitive screens in the late 2000s, enable direct manipulation on surfaces through gestures such as pinching or swiping.

Output modalities complement these inputs by delivering feedback through visual, auditory, and haptic channels, ensuring information is perceivable across sensory preferences. Visual outputs, such as graphical displays and animations, provide immediate spatial and contextual cues, forming the basis of most interfaces. Auditory feedback uses non-speech sounds like beeps or tones for alerts, enhancing awareness without visual distraction. Haptic outputs transmit tactile sensations via vibrations or force feedback, improving precision in tasks like teleoperation or surgical simulation and aiding users with visual impairments.

Together, these modalities support robust interaction paradigms, including direct manipulation and conversational interfaces. Direct manipulation, introduced as a paradigm in which users act on visible representations of objects with rapid, reversible actions, reduces syntactic complexity and fosters a sense of direct engagement, as seen in drag-and-drop operations in graphical editors. Conversational interfaces simulate dialogue through natural language processing, allowing users to query systems via speech or text much as they would in human conversation, and have evolved to include embodied agents for more engaging exchanges.

Evaluating HCI effectiveness relies on quantitative metrics like task completion time and error rates, which measure efficiency and reliability in user-system exchanges. Task completion time assesses the duration required to achieve goals, with shorter times indicating streamlined interactions; for example, studies report average completion rates of around 78% across tasks, highlighting room for design improvements. Error rates quantify mistakes per task, such as incorrect selections, helping identify friction points; lower rates correlate with intuitive designs. Complementing these, Jakob Nielsen's 10 usability heuristics provide expert-based guidelines for inspection. Key among them is visibility of system status, which requires timely feedback on actions, like progress bars during file uploads, to keep users informed. User control and freedom emphasizes undo/redo functions and clearly marked exits, empowering users to recover from errors without frustration. Other heuristics include consistency with standards and error prevention, ensuring interfaces align with user expectations.

Advancements in HCI have expanded toward multimodal, accessible, and context-sensitive designs, aligning with human-centered computing's adaptive ethos. Multimodal interaction integrates multiple sensory channels, such as combining speech and gesture, to resolve ambiguities and boost efficiency; research demonstrates 19-41% error reduction through mutual disambiguation and 10% faster task completion in spatial applications. Accessibility standards like WCAG 2.2, updated in 2023, address cognitive disabilities with criteria such as accessible authentication (avoiding memory tests or puzzles) and redundant-entry prevention (auto-filling prior inputs), reducing cognitive burden for users with impairments. In human-centered contexts, context-sensitive interactions adapt interfaces based on user behavior, exemplified by adaptive menus that reorder or highlight items according to usage patterns; such systems, using predictive models, can improve selection speed by up to 20% while maintaining stability through elective adaptations like split menus. AI enhancements augment these by personalizing adaptations in real time, though core HCI techniques remain modality-agnostic.
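A minimal sketch of the split-menu idea mentioned above: the most frequently used items are promoted to a highlighted top section, while the remaining items keep their original, stable order so spatial memory is preserved. The promotion count and the usage log are illustrative assumptions, not values from any particular study.

```python
def split_menu(items, usage_counts, promote=3):
    """Partition a menu into (promoted, rest): the `promote` most-used items
    move to a highlighted top section; everything else keeps its original
    order so users' spatial memory of the menu is preserved."""
    by_usage = sorted(items, key=lambda item: usage_counts.get(item, 0), reverse=True)
    promoted = by_usage[:promote]
    rest = [item for item in items if item not in promoted]
    return promoted, rest

# Hypothetical usage log for a file menu.
items = ["New", "Open", "Save", "Print", "Export", "Close"]
usage = {"Save": 42, "Open": 25, "Close": 11, "New": 7, "Print": 2}
top, rest = split_menu(items, usage)
# top  -> ["Save", "Open", "Close"]
# rest -> ["New", "Print", "Export"]
```

Keeping the unpromoted section in its original order is the "stability" property the text describes: the adaptation adds a shortcut without invalidating what users already learned.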

Human-Centered Artificial Intelligence

Human-centered artificial intelligence (HCAI) refers to the design and development of AI systems that prioritize human values, needs, and capabilities, aiming to augment rather than replace human abilities while ensuring reliability, safety, and trustworthiness. This approach emphasizes principles such as transparency, which involves clear disclosure of AI operations; accountability, ensuring responsibility for AI outcomes; and explainability, enabling users to understand AI decision-making processes. These principles guide AI to serve human benefit, as outlined in early national strategies promoting human-AI collaboration to address societal challenges.

Key concepts in HCAI include human-AI teaming, where AI systems work collaboratively with humans to enhance performance, such as collaborative robots (cobots) that assist in tasks by adapting to human movements without direct programming. Bias detection methods are also central, employing fairness metrics to mitigate discriminatory outcomes; for example, demographic parity requires that the prediction probability be independent of a protected attribute, formally expressed as

P(\hat{Y}=1 \mid A=0) = P(\hat{Y}=1 \mid A=1)

where A is the sensitive attribute (e.g., gender) and \hat{Y} is the model's prediction. These techniques draw from human-computer interaction foundations to create intuitive AI interfaces that support human oversight.

By 2025, ethical AI frameworks have advanced significantly; the EU AI Act, which entered into force on 1 August 2024 with phased implementation (full applicability by 2 August 2026 and ongoing discussions of potential delays as of late 2025), establishes risk-based regulation that mandates transparency, human oversight, and risk management for high-risk AI systems to foster human-centered deployment across sectors. Applications in digital well-being, such as AI-driven monitoring tools, incorporate privacy safeguards like federated learning to analyze user data without centralizing sensitive information, thereby supporting early intervention while protecting confidentiality.
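The demographic parity criterion above can be checked empirically on a batch of model outputs. The following sketch, using made-up predictions, measures the gap between the positive-prediction rates of two groups; a gap of zero means parity holds exactly on that sample.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in the positive-prediction rate P(Y_hat = 1)
    between the two values of a binary protected attribute.
    A result of 0.0 indicates demographic parity on this sample."""
    rates = {}
    for g in (0, 1):
        group_preds = [p for p, a in zip(predictions, groups) if a == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return abs(rates[0] - rates[1])

# Toy batch: group 0 receives positive predictions at 3/4, group 1 at 1/4.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
attrs = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, attrs)  # 0.5, far from parity
```

In practice such a sample estimate would be complemented by other fairness metrics (e.g., equalized odds), since demographic parity alone ignores differences in ground-truth base rates.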
Challenges in HCAI include the risk of over-reliance on AI, where users defer excessively to automated decisions, potentially leading to errors in critical scenarios like medical diagnostics. Solutions involve hybrid intelligence models that integrate human judgment with AI capabilities, promoting adaptive collaboration to balance automation benefits with human control and reduce dependency.

Applications

Multimedia Systems

Human-centered approaches in multimedia systems prioritize user needs, preferences, and perceptual capabilities throughout the creation, analysis, and consumption of media such as video, audio, and images. These systems aim to make media technology accessible and intuitive by incorporating ergonomic design principles that reduce cognitive effort and enhance creative expression. For instance, content-creation tools emphasize seamless workflows that align with human cognition, ensuring that technical complexities do not hinder artistic or informational goals.

In multimedia production, human-centered design focuses on intuitive authoring tools that democratize content creation. Drag-and-drop editors, such as those used in digital storytelling platforms, enable users without advanced technical skills to assemble multimedia elements like videos, animations, and text through visual interfaces that mimic natural manipulation. Human factors in production also extend to collaborative editing platforms, where real-time synchronization and role-based permissions support team-based workflows, minimizing conflicts and fostering creativity among distributed users. For example, systems designed through workshops with professional video editors incorporate features like shared timelines and conflict-resolution aids to align with users' collaborative practices.

For analysis, human-centered multimedia systems leverage user-driven content tagging and recommendation mechanisms to organize and retrieve information effectively. These systems allow users to apply tags based on personal interpretations, which feed into recommendation algorithms that suggest relevant items by matching user profiles derived from tagging behaviors. Perceptual models assess quality by incorporating human subjective judgments, such as the Mean Opinion Score (MOS), a standardized 5-point scale developed by the International Telecommunication Union (ITU) for evaluating audio, video, and audiovisual quality through listener or viewer ratings. MOS scores, obtained via methods like Absolute Category Rating, provide a benchmark for aligning objective metrics with human perception, ensuring content meets experiential standards.

Interaction in human-centered multimedia emphasizes natural and immersive interfaces to enhance engagement. Immersive experiences through augmented reality (AR) and virtual reality (VR) enable users to interact with media in three-dimensional spaces, guided by principles of presence and spatial awareness to avoid disorientation. Gesture-based navigation of video streams, using computer vision to detect hand movements, allows intuitive scrubbing, zooming, or selection without traditional controls, improving accessibility for diverse users, including those with motor impairments.

A core emphasis in human-centered computing for multimedia is personalization, tailoring delivery to individual preferences and contexts. Adaptive streaming algorithms dynamically adjust video bitrate and resolution based on user profiles, network conditions, and device capabilities to optimize quality of experience (QoE) while conserving resources. Personalization extends to managing cognitive load, where systems modulate playback speed or segment complexity to match user attention levels, as seen in educational videos that segment content to prevent overload and improve comprehension. Such approaches ensure multimedia consumption is efficient and user-aligned, supporting broader integration into everyday digital environments.

Human-centered computing applications extend beyond multimedia and ubiquitous systems to include multimodal interfaces for natural interaction, assistive technologies for accessibility, and tailored solutions in domains like health (e.g., personalized patient monitoring tools) and education (e.g., adaptive learning platforms). These applications emphasize ethical design to address diverse user needs without overlapping with core interaction or AI-focused areas.
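As a simple illustration of MOS aggregation, ratings collected on a 5-point Absolute Category Rating scale are averaged, often together with a confidence interval that conveys rater disagreement. The ratings below are invented; formal MOS collection follows ITU-T test procedures.

```python
from math import sqrt
from statistics import mean, stdev

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score from 1-5 ACR ratings, plus an approximate 95%
    confidence half-width based on the sample standard deviation."""
    m = mean(ratings)
    half_width = z * stdev(ratings) / sqrt(len(ratings))
    return m, half_width

# Eight hypothetical viewer ratings of one video clip (1 = bad ... 5 = excellent).
score, ci = mos_with_ci([4, 5, 3, 4, 4, 5, 4, 3])  # score = 4.0
```

Reporting the interval alongside the mean matters in perceptual testing: two conditions with the same MOS can differ substantially in how consistently raters judged them.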

Ubiquitous and Ambient Computing

Ubiquitous computing, as envisioned by Mark Weiser in his seminal 1991 article, refers to an environment where computing capabilities are embedded seamlessly into everyday objects and spaces, making technology invisible and integrated into human activities rather than requiring explicit attention. This vision imagines hundreds of interconnected devices per room that anticipate user needs through context-aware processing, shifting from desktop-centric paradigms to pervasive, calm computing that supports human cognition without disruption. Building on this foundation, ambient intelligence extends ubiquitous computing by incorporating proactive, adaptive systems that use embedded sensors and AI to create environments sensitive to human presence and preferences, enabling intuitive interactions without manual intervention. These systems draw on multidisciplinary advances in sensing, pervasive computing, and AI to foster environments that respond dynamically to users, prioritizing human well-being over technological dominance.

In human-centered computing, ubiquitous and ambient systems prioritize user privacy and seamless integration to enhance daily experiences. Privacy-preserving designs, such as federated learning, allow models to be trained across distributed devices while keeping sensitive data localized, mitigating risks in resource-constrained ambient environments like smart homes. For instance, federated learning enables collaborative intrusion detection in IoT networks without centralizing user data, thus balancing intelligence with ethical data handling. Seamless interaction is exemplified in smart-home ecosystems, where human-centered design principles guide the creation of interconnected devices that adapt to user routines, such as automated lighting and climate control based on occupancy and preferences, reducing cognitive effort through intuitive voice- or gesture-based controls. These designs emphasize user-driven research to ensure technology supports rather than overwhelms household dynamics.
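The federated learning pattern described above can be sketched in a few lines: each device takes gradient steps on its own private data, and a server averages only the resulting model parameters, never the data. This is a toy one-parameter linear model with invented sensor readings; production systems use dedicated frameworks and add protections such as secure aggregation.

```python
def local_step(w, data, lr=0.1):
    """One gradient-descent step for the model y ~ w * x on a client's
    private (x, y) pairs. Only the updated parameter leaves the device."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(weights, sizes):
    """Server-side FedAvg: average client parameters weighted by dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

# Two smart-home devices, each holding private sensor readings (x, y).
clients = [[(1.0, 2.0), (2.0, 4.1)], [(3.0, 5.9)]]
w_global = 0.0
for _ in range(50):  # communication rounds
    local_weights = [local_step(w_global, data) for data in clients]
    w_global = fed_avg(local_weights, [len(d) for d in clients])
# w_global converges to roughly 1.99, close to the underlying slope of ~2,
# even though the server never saw any raw readings.
```

The key privacy property is visible in the code: the server-side function receives only scalars (`weights`, `sizes`), so centralizing raw user data is structurally impossible in this design.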
Applications of ubiquitous and ambient computing in human-centered contexts include wearables for health monitoring, which collect real-time physiological data to provide personalized insights while respecting autonomy. Devices like smartwatches track metrics such as heart rate and activity levels, enabling early detection of health issues through on-body technologies that integrate seamlessly into daily wear. However, challenges such as notification overload arise from the constant stream of alerts in these environments, potentially leading to user fatigue and reduced effectiveness. Mitigation strategies employ context-aware notification systems with priority algorithms that assess context, user state, and alert urgency to filter and defer non-essential notifications, ensuring only relevant information reaches the user at optimal times. For example, algorithms in ubiquitous settings predict interruptibility from behavioral cues, suppressing low-priority alerts to preserve cognitive resources.

As of 2025, trends in ubiquitous and ambient computing highlight edge computing's role in IoT networks, delivering low-latency responses tailored to user needs by processing data locally to minimize delays in time-sensitive applications like health alerts or environmental adaptations. This approach reduces strain on central clouds while enhancing responsiveness in human-centered scenarios, such as immediate feedback from wearables during physical activity. Edge computing's integration with ambient intelligence supports scalable, secure environments that prioritize user-centric performance by enabling faster, privacy-focused decision-making at the network periphery, with projections indicating roughly 20 billion connected IoT devices globally as of 2025.
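The priority-based filtering described above can be sketched as a simple scoring rule. The weights, threshold, and quiet-hours cutoff below are illustrative assumptions, not values from any deployed system; real implementations learn interruptibility from behavioral cues.

```python
def alert_priority(urgency, interruptibility, quiet_hours=False):
    """Score an alert by combining its urgency (0-1) with an estimated
    user interruptibility (0-1, e.g. inferred from activity and context).
    During quiet hours, everything below near-critical urgency is suppressed."""
    if quiet_hours and urgency < 0.9:
        return 0.0
    return 0.7 * urgency + 0.3 * interruptibility

def triage(alerts, interruptibility, quiet_hours=False, threshold=0.5):
    """Split (name, urgency) alerts into (deliver_now, defer_for_later)."""
    deliver, defer = [], []
    for name, urgency in alerts:
        if alert_priority(urgency, interruptibility, quiet_hours) >= threshold:
            deliver.append(name)
        else:
            defer.append(name)
    return deliver, defer

# At night, only the critical health alert gets through immediately.
alerts = [("fall detected", 0.95), ("step goal reached", 0.2), ("new email", 0.4)]
now, later = triage(alerts, interruptibility=0.5, quiet_hours=True)
# now -> ["fall detected"]; the rest are deferred until a better moment.
```

Deferring rather than discarding low-priority alerts preserves information while protecting attention, which is the human-centered goal of these systems.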

Design and Development

Human-Centered Design Process

The human-centered design (HCD) process in human-centered computing (HCC) provides a structured framework for incorporating user needs, behaviors, and contexts into the development of interactive systems, ensuring that technology serves human goals effectively and ethically. This iterative methodology emphasizes empathy, user involvement, and continuous refinement to mitigate risks associated with poor usability, exclusion, and unintended societal impact. Unlike traditional approaches that prioritize technical specifications, HCD integrates human factors from conception to deployment, fostering systems that are intuitive, inclusive, and adaptable to diverse user populations.

The foundational stages of the HCD process are outlined in the ISO 9241-210 standard (2019), which defines a cyclical workflow to guide designers in creating user-centered products. The process begins with planning the human-centered design activities, where stakeholders define objectives, resources, and timelines to align the project with organizational goals while identifying key human factors. Next, the context of use is specified by analyzing the users, their tasks, environments, and organizational constraints through methods like ethnographic studies and stakeholder interviews. This is followed by specifying user and organizational requirements, translating contextual insights into measurable needs, such as accessibility standards or performance criteria. Designers then produce design solutions, generating conceptual models and prototypes that address these requirements. Finally, the prototypes are evaluated against the requirements in real or simulated settings, with results feeding back into earlier stages for refinement. This loop ensures that designs evolve based on evidence rather than assumptions.

Key tools and techniques support these stages to make the process tangible and user-focused. Personas are fictional yet data-driven representations of user archetypes, derived from user research to guide design decisions by embodying typical goals, frustrations, and behaviors. Scenarios outline narrative sequences of user interactions with the system, helping to anticipate real-world applications and edge cases. Wireframing involves creating low-fidelity sketches or digital layouts of interfaces to visualize structure and flow without committing to aesthetics prematurely. In agile development environments, HCC principles are integrated through sprints that prioritize user feedback, such as incorporating usability testing midway through iterations to adjust backlogs dynamically and ensure rapid incorporation of human insights. These tools bridge abstract requirements with concrete artifacts, promoting efficiency in collaborative teams.

Iteration lies at the core of the HCD process, emphasizing divergent and convergent thinking to explore possibilities before refining solutions. The Double Diamond model, developed by the Design Council, structures this as four phases: discover (researching user needs through immersion), define (synthesizing insights to frame problems), develop (ideating and prototyping solutions), and deliver (testing and implementing the refined design). This model encourages broad exploration to avoid premature convergence on suboptimal ideas, while incorporating risk assessments for human impacts, such as evaluating potential biases in data-driven features or long-term effects on user well-being. By cycling through these phases multiple times, designers can address complexities like evolving user contexts or technological constraints, leading to more robust outcomes.

In HCC, the HCD process is adapted to address challenges posed by advanced technologies, particularly through the inclusion of ethical reviews at each stage. For instance, during prototyping of AI systems, designers conduct impact assessments to evaluate privacy implications, algorithmic fairness, and societal consequences, using frameworks like those from the IEEE to ensure alignment with human values.
These adaptations extend the ISO stages by embedding multidisciplinary input—such as from ethicists and social scientists—preventing harm and promoting trust in computational systems. Such integrations have been shown to enhance adoption rates in sensitive domains like healthcare interfaces.

Evaluation and Usability Methods

Evaluation and usability methods in human-centered computing (HCC) focus on systematically assessing how well systems support human users in terms of efficiency, effectiveness, and satisfaction, enabling iterative refinements that align technology with human needs. These methods draw on human-computer interaction (HCI) principles to identify barriers and opportunities in system design, ensuring that evaluations are user-centric rather than purely technical. Common approaches include both qualitative and quantitative techniques to capture diverse aspects of the user experience.

Usability testing involves observing users as they interact with a system to uncover real-world challenges, often employing think-aloud protocols in which participants verbalize their thoughts during tasks, revealing cognitive processes and points of confusion. This method, widely adopted in HCI since the 1980s, provides direct insight into user mental models and pain points without relying on post-hoc rationalizations. Heuristic evaluation complements this by having experts review interfaces against established guidelines, such as Jakob Nielsen's 10 usability heuristics, to efficiently detect potential issues early in development. A/B testing, a comparative quantitative method, exposes different user groups to interface variants and measures performance metrics like completion rates to determine superior designs empirically. Eye-tracking technology further enhances these evaluations by mapping visual attention patterns, revealing where users focus, fixate, or overlook elements, which is particularly useful for optimizing layouts in complex interfaces.

Key metrics quantify usability outcomes, with the System Usability Scale (SUS) being a standardized 10-item questionnaire that yields a score from 0 to 100 through an adjusted average: odd-numbered items are scored as (response - 1) and even-numbered items as (5 - response); the values are then summed and multiplied by 2.5 for the final score, providing a benchmark for perceived ease of use across systems.
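The SUS scoring rule just described is mechanical and easy to get wrong by hand, so it is often implemented directly; a minimal sketch:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses, given in
    questionnaire order (odd items positively worded, even items negatively
    worded). Returns a value on the 0-100 scale."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Most favorable possible answers (5 on odd items, 1 on even items):
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # 100.0
# All-neutral answers land exactly in the middle of the scale:
neutral = sus_score([3] * 10)                      # 50.0
```

Note that the 0-100 result is not a percentage; in common practice, scores above roughly 68 are considered better than average, though interpretation should account for context.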
Accessibility audits, essential for inclusive HCC, employ tools like WAVE from WebAIM to scan for compliance with standards such as WCAG, flagging issues like missing alt text or insufficient color contrast.

Advanced approaches extend beyond single-session tests to capture evolving user behaviors. Longitudinal studies track participants over extended periods to assess long-term adoption and adaptation, addressing how initial impressions influence sustained engagement in dynamic environments. Mixed-methods evaluations integrate quantitative data, such as task success rates (e.g., the percentage of goals achieved without errors), with qualitative insights from interviews, offering a holistic view of both measurable performance and subjective experience. As of 2025, AI-assisted methods are gaining prominence in usability evaluation, particularly automated sentiment analysis of user feedback, which processes free-text responses from surveys or transcripts of session recordings to detect emotional tones and prioritize issues without manual coding. These AI tools enable rapid analysis of large datasets, supporting continuous monitoring in agile development cycles while maintaining human oversight for nuanced interpretation.

Careers and Education

Academic Programs

Academic programs in human-centered computing (HCC) encompass degree offerings at the bachelor's, master's, and doctoral levels, often interdisciplinary in nature and blending computing with design, psychology, and social science principles. These programs train students to create technology that prioritizes human needs, capabilities, and contexts. For instance, the Bachelor of Science in Human-Centered Computing at Rochester Institute of Technology (RIT), introduced in 2015, exemplifies an undergraduate degree focused on intuitive system design. Similarly, master's and doctoral programs in human-computer interaction (HCI) or HCC prepare graduates for advanced research and application development.

Pioneering institutions have shaped HCC education since the 1990s. Carnegie Mellon University's Human-Computer Interaction Institute (HCII), founded in 1993, offers one of the earliest comprehensive programs, including the world's first Master of Human-Computer Interaction degree, alongside BS and PhD options in HCI. By 2025, the field has seen significant growth in AI-integrated tracks: Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) supports AI-focused education through courses, fellowships, and summits emphasizing ethical AI design, and Arizona State University (ASU) has expanded with its Master of Science in Engineering (Human-Centered Artificial Intelligence), which integrates human factors engineering with artificial intelligence and machine learning.

Core curricula in these programs typically include foundational courses in HCI, design, and programming, alongside interdisciplinary elements such as psychology electives to build an understanding of user behavior. At RIT, for example, students take courses such as Foundations of Human-Centered Computing together with programming and problem-solving coursework. Hands-on learning is emphasized through projects, such as senior capstone developments, and required internships or cooperative education experiences, often spanning multiple terms to build practical skills in real-world settings.

Recent trends in HCC education highlight the rise of online certifications and a stronger focus on ethical training amid AI advancements.
The Google UX Design Professional Certificate, offered on Coursera, provides accessible training in UX design processes, including empathy mapping and prototyping, suitable for entry-level preparation. By 2025, programs increasingly incorporate ethics modules to address AI biases and societal impacts, as seen in frameworks guiding higher education's responsible implementation of AI. These elements equip students for professional roles in technology design and development.

Professional Roles

Human-centered computing (HCC) encompasses a range of professional roles that prioritize user needs in the design and development of interactive systems. These roles bridge technical implementation with human factors, ensuring technologies are intuitive, accessible, and ethically sound. Key positions include UX/UI designers, interaction designers, and information architects, each contributing distinct expertise to create user-focused experiences.

UX/UI designers focus on crafting prototypes and conducting user research to enhance product usability and accessibility. They emphasize user satisfaction by iterating on interfaces that are enjoyable and inclusive, often through wireframing and visual design. In contrast, interaction designers specialize in defining behavioral flows, mapping user journeys to ensure seamless functionality and engagement across digital products. Information architects, meanwhile, organize content structures to facilitate intuitive navigation, integrating UX principles with logical information hierarchies at the core of system design.

Essential skills for these roles include proficiency in prototyping tools that enable rapid iteration and visualization of user interfaces. Soft skills such as empathy and user interviewing are crucial for understanding user perspectives and motivations during research phases. Certifications from the Nielsen Norman Group, such as those in UX fundamentals or interaction design, validate expertise and are widely recognized in the field.

Career paths in HCC often begin with entry-level positions supported by a degree in UX design or computer science, which provides the technical foundation for understanding system constraints. Professionals advance to lead roles, such as senior UX managers or directors, by gaining experience in cross-functional teams and demonstrating impact on product outcomes. By 2025, demand has surged for specialized paths like AI ethics consulting within HCC, where experts ensure human-centered principles guide AI deployments to mitigate biases and promote equitable outcomes.
In industry settings, tech giants like Google employ dedicated teams for initiatives such as Material Design, where UX/UI and interaction designers collaborate to standardize user-centered interfaces across platforms. Conversely, startups in wearable tech, such as those developing fitness trackers and AR devices, rely on versatile professionals to prototype adaptive interfaces that respond to real-time user contexts, often in agile, resource-constrained environments.

Notable Projects and Initiatives

Government and Industry Projects

The National Science Foundation (NSF) has supported human-centered computing (HCC) through its dedicated HCC program, initiated in the early 2000s as part of the Directorate for Computer and Information Science and Engineering (CISE), to advance research in human-computer interaction and related technologies. This program has funded interdisciplinary projects exploring user interfaces, accessibility, and the cognitive aspects of computing systems, with solicitations dating back to at least 2003 emphasizing fundamental HCI research.

NASA, via its Human Systems Integration Division, applies HCC principles to aviation interfaces, including the design of intuitive displays and automation systems that enhance pilot performance and reduce errors in high-stakes environments. For instance, projects at NASA Ames have developed human-centered tools for low-visibility taxi operations and multifunction displays, integrating user feedback to improve aviation safety.

In the defense sector, DARPA has pursued HCC in autonomy-focused initiatives such as the Assured Autonomy program, which develops technologies for verifiable human-AI interactions in cyber-physical systems like unmanned vehicles. Similarly, its ASIMOV program establishes benchmarks for ethical human oversight of autonomous systems, ensuring alignment with operational values through human-centered evaluation metrics. On the international front, the European Union's Horizon Europe framework (2021-2027) includes funding calls for human-centered and ethical AI development, promoting trustworthy technologies that prioritize user needs, transparency, and societal impact. These efforts, under Cluster 4 on Digital, Industry, and Space, support projects addressing AI ethics and human-AI collaboration.

Industry leaders have also advanced HCC through proprietary guidelines and toolkits.
Apple's Human Interface Guidelines, first published in the 1980s during the Macintosh era and continuously updated since, provide principles for intuitive, user-focused design across platforms, including a 2024 revision for visionOS on the Vision Pro headset to support spatial interactions. Microsoft's Inclusive Design toolkit, released in 2017, offers methodologies for creating accessible products by considering diverse abilities from the outset, emphasizing principles like recognizing exclusion and solving for one to extend to many. This toolkit has influenced broader industry practices for inclusivity.

These government and industry initiatives have yielded tangible impacts, particularly in enhancing safety for autonomous vehicles through HCC-driven interfaces that improve trust and explainability. For example, human-centered designs in automated driving systems, informed by such programs, aim to reduce collision risks by enabling better communication of vehicle intentions to human drivers in mixed-traffic scenarios. Academic collaborations, such as those in DARPA's autonomy projects, have further refined these outcomes by integrating research expertise.

Academic Research Centers

The Center for Cognitive Ubiquitous Computing (CUbiC) at Arizona State University, established in the early 2000s, serves as a key interdisciplinary hub for advancing human-centered computing through cognitive ubiquitous systems that integrate sensing, processing, recognition, learning, interaction, and multimedia delivery. Focused on assistive, rehabilitative, and healthcare applications, CUbiC emphasizes technologies tailored for individuals with disabilities, including visually impaired users, with flagship prototypes like the iCARE system, developed since 2004 to enhance independence via wearable computing. The center's research extends to eldercare technologies by developing human-centered multimedia tools that support daily activities and rehabilitation, promoting accessibility across the ability spectrum. CUbiC has produced over 440 refereed publications and prototypes, including ambient interaction systems explored in the 2010s to facilitate intuitive environmental awareness for users with cognitive challenges. Funded by grants such as a $3 million NSF IGERT award for training in interactive intelligence, CUbiC collaborates with entities like Arizona's Rehabilitation Services Administration to translate research into practical tools.

At the MIT Media Lab, the Fluid Interfaces group conducts ongoing research in human-centered computing by designing systems that augment human cognition through human-computer interaction, machine learning, and neuroscience-inspired interfaces. The group develops prototypes supporting learning, attention, memory, and emotion regulation, often incorporating wearable sensors and brain-computer interfaces to create fluid, user-adaptive experiences that prioritize psychological well-being. Complementary efforts within the Media Lab, such as the Tangible Media Group's work on tangible user interfaces, bridge the digital and physical realms to enable direct manipulation of digital information, fostering intuitive interactions in human-centered designs like dynamic shape displays.
These contributions include high-impact publications and systems tested in real-world settings, advancing HCC by emphasizing motivation and attention in mixed-initiative environments. The group's initiatives receive support from MIT's broader funding ecosystem, including NSF grants for sustainable and AI-driven technologies.

Carnegie Mellon University's CREATE Lab exemplifies community-driven human-centered computing by partnering with local groups to co-design technologies that empower education, environmental awareness, and social expression. Established as a technology breeding ground, the lab focuses on scalable tools like the Smell MyCity app, which enables citizens to map pollution odors, and open-source environmental sensors for real-time data analysis in underserved communities. Its research prioritizes inclusive prototypes that address societal needs, such as integrating robotics into curricula to foster technology literacy through co-design processes. The CREATE Lab's contributions include publications on community technology and empowerment, with NSF-funded projects such as robotic kits for schools demonstrating impact on more than 140 students. Collaborations with universities and foundations sustain these efforts, aligning with NSF priorities for integrative partnerships in HCC.

Recent expansions in human-centered AI (HCAI) for education, influenced by broader NSF initiatives, have seen centers like CUbiC and the CREATE Lab incorporate AI-driven tools to enhance learning outcomes, such as adaptive interfaces for diverse learners. These developments, supported by NSF's $100 million investment in AI research institutes partnering with industry leaders, underscore academic centers' role in fostering ethical, user-focused AI applications.

References

  1. [1]
    Human-Centered Computing (HCC) - National Science Foundation
    Aug 21, 2013 · Human-Centered Computing (HCC) supports research in human-computer interaction (HCI), taken broadly, integrating knowledge across disciplines— ...Missing: definition key
  2. [2]
    [PDF] Human-Centered Computing: A Multimedia Perspective
    Human-Centered Computing (HCC) is a set of methodolo- gies that apply to any field that uses computers, in any form, in applications in which humans directly ...
  3. [3]
    Human-Centered Computing: A New Degree For Licklider's World
    May 1, 2013 · In the 1960s, J.C.R. Licklider described his vision for the future of computing, which is remarkably like today's world. He saw computing as ...Missing: origins | Show results with:origins
  4. [4]
    Human-centered Computing
    HCC is “a philosophical-humanistic position regarding the ethics and aesthetics of the workplace”;. • a HCC system is “any system that enhances human ...
  5. [5]
    Major in Computer Science, Human-Centered Computing ...
    Human-centered computing (HCC) focuses on improving the relationship between people and technology, making the computer invisible. It involves human-centric ...Missing: aspects | Show results with:aspects
  6. [6]
    Human-Centered AI Design in Reality: A Study of Developer ...
    Oct 8, 2022 · Human-Centered AI (HCAI) advocates the development of AI applications that are trustworthy, usable, and based on human needs.Missing: key | Show results with:key<|control11|><|separator|>
  7. [7]
  8. [8]
    None
    ### Summary of Human-Centered Computing (HCC) from the Introduction
  9. [9]
    A Smalltalk-80 graphical user interface (GUI) - CHM Revolution
    Smalltalk promoted an “object-oriented” style of programming. The overlapping windows, with titles in the upper left, remain graphical standards today.<|control11|><|separator|>
  10. [10]
    NSF 25-542: Smart Health and Biomedical Research in the Era of ...
    Jul 24, 2025 · The complexity of biomedical and health systems requires deeper understanding of causality in AI/ML models; new ways of integrating social and ...
  11. [11]
    Empathy Mapping: The First Step in Design Thinking - NN/G
    Jan 14, 2018 · Empathy mapping can be driven by any method of qualitative research (and can be sketched even if research is lacking). They can help UX ...Format of an Empathy Map · One User vs. Multiple-Users...
  12. [12]
    Parallel & Iterative Design + Competitive Testing = High Usability
    Dec 3, 2024 · 3 Design Processes for High Usability: Iterative Design, Parallel Design, and Competitive Testing · There's no one perfect user-interface design, ...Iterative Design · Parallel Design · Competitive TestingMissing: seven | Show results with:seven
  13. [13]
    What is Human-Centered about Human-Centered AI? A Map of the ...
    Apr 19, 2023 · 2 Humans in the Loop. Human-in-the-loop (HITL) is an area of AI that is aimed at leveraging both AI and the human in the creation ...
  14. [14]
    The Two UX Gulfs: Evaluation and Execution - NN/G
    Mar 11, 2018 · Evaluation: Understanding the state of the system; Execution: Taking action to accomplish a specific goal. These challenges are described as the ...Missing: seven | Show results with:seven
  15. [15]
    The Curb-Cut Effect - Stanford Social Innovation Review
    The Architectural Barriers Act of 1968 required government buildings to make themselves universally accessible, but traversing the streets in a wheelchair ...
  16. [16]
    Beyond the NPS: Measuring Perceived Usability with the SUS ...
    Feb 11, 2018 · These metrics tell you what the user's satisfaction level was, but do not pinpoint any weaknesses or strengths of the experience (or what you ...
  17. [17]
    Empower Diversity in AI Development | Communications of the ACM
    Nov 7, 2024 · GenderMag encompasses several practices including evaluating software features for potential gender biases, creating diverse user personas to ...
  18. [18]
  19. [19]
    Master's in HCI/UX Program & Courses | Drexel CCI
    These interdisciplinary fields draw on human-centered disciplines like psychology, sociology ... technology-centered disciplines like software engineering ...
  20. [20]
    [PDF] an ethnographic approach to design
    The Interview As a Communicative Event. .970 and Perspectives. 980. Rules of Thumb When Interviewing … 971. Ethnography in Action.
  21. [21]
    Fitts' Law as a Research and Design Tool in Human-Computer ...
    Nov 11, 2009 · According to Fitts' law, human movement can be modeled by analogy to the transmission of information. Fitts' popular model has been widely ...
  22. [22]
    Integrating Human-Centered Design and Social Science Research ...
    Human-centered design is pivotal in forming actionable research questions, integrating disciplines, and supporting iterative data analysis and synthesis.
  23. [23]
    Brain-computer interfaces face a critical test | MIT Technology Review
    Apr 1, 2025 · Neuralink, Synchron, and Neuracle are expanding clinical trials and trying to zero in on an actual product.
  24. [24]
    Gestural Interfaces in Human–Computer Interaction (Chapter 26)
    May 1, 2024 · This chapter concerns the use of manual gestures in human–computer interaction (HCI) and user experience research (UX research).
  25. [25]
    The role of voice input for human-machine communication. - PNAS
    This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines.
  26. [26]
    High-Precision Touchscreens 1988-1991 HCIL Research
    The University of Maryland's Human-Computer Interaction Laboratory worked on a series of projects which shared a common aspect: the use of touchscreens.Missing: history | Show results with:history
  27. [27]
    [PDF] Haptics for Human-Computer Interaction: From the Skin to the Brain
    “To- wards multimodal affective feedback: Interaction between visual and haptic modalities”. ... “Action enhances auditory but not visual temporal sensitivity”.
  28. [28]
    [PDF] in HCI: Haptics, Non-Speech Audio, and Their Applications
    This chapter provides an overview of research in the use of these non-visual modalities for interaction, showing how new output modalities can be used in the ...
  29. [29]
    Direct manipulation: A step beyond programming languages ...
    Direct manipulation involves three interrelated techniques:1. Provide a physically direct way of moving a cursor or manipulating the objects of interest.2.
  30. [30]
    Conversational interfaces | Communications of the ACM
    In this paper, we argue for embodied corrversational characters as the logical extension of the metaphor of human - computer interaction as a conversation.
  31. [31]
    [PDF] Making Sense of Usability Metrics: Usability and Six Sigma
    Examples of these metrics are: task completion rates, average time to task completion, average task error counts and average task satisfaction scores (see [1] ...<|separator|>
  32. [32]
    Usability Metrics - A Guide To Quantify The Usability Of Any System
    Usability metrics quantify the usability of a system in terms of the effectiveness, efficiency and satisfaction with which users perform tasks when using ...
  33. [33]
    10 Usability Heuristics for User Interface Design
    ### Nielsen's 10 Usability Heuristics
  34. [34]
    None
    Summary of each segment:
  35. [35]
    Web Content Accessibility Guidelines (WCAG) 2.2 - W3C
    Dec 12, 2024 · Abstract. Web Content Accessibility Guidelines (WCAG) 2.2 covers a wide range of recommendations for making web content more accessible.How to Meet WCAG (Quickref... · Success Criterion 1.3.1 · WCAG22 history
  36. [36]
    [PDF] Design Space and Evaluation Challenges of Adaptive Graphical ...
    To guide researchers and designers in developing effective adaptive GUIs, particularly adaptive control struc- tures such as menus and toolbars, we first ...Missing: paper | Show results with:paper
  37. [37]
    Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy
    Feb 10, 2020 · The Human-Centered Artificial Intelligence (HCAI) framework clarifies how to (1) design for high levels of human control and high levels of computer automation.Missing: seminal | Show results with:seminal
  38. [38]
    [PDF] The National Artificial Intelligence Research and Development ...
    Jun 21, 2019 · R&D Strategic Plan, national interest has grown in human-AI collaboration. When AI systems complement and augment human capabilities, humans ...
  39. [39]
    Human–Autonomy Teaming: Definitions, Debates, and Directions
    May 27, 2021 · In this review, we do a deep dive into human–autonomy teams (HATs) by explaining the differences between automation and autonomy and by reviewing the domain of ...
  40. [40]
    Enhancing mental health with Artificial Intelligence: Current trends ...
    This review explores the integration of AI into mental healthcare, elucidating current trends, ethical considerations, and future directions in this dynamic ...
  41. [41]
    [PDF] Explanations Can Reduce Overreliance on AI Systems During ...
    Overreliance is when people agree with an AI even when incorrect. This paper argues that people strategically choose to engage with AI explanations, which can ...
  42. [42]
    Why Hybrid Intelligence Is the Future of Human-AI Collaboration
    Mar 11, 2025 · Hybrid intelligence combines the best of AI and humans, leading to more sustainable, creative, and trustworthy results.
  43. [43]
    Modelling Human Factors in Perceptual Multimedia Quality
    Perception of multimedia quality is shaped by a rich interplay between system, context and human factors. While system and context factors are widely ...Missing: MOS centered
  44. [44]
    How to Design a Digital Storytelling Authoring Tool for Developing ...
    How to Design a Digital Storytelling Authoring Tool for Developing Pre-Reading and Pre-Writing Skills ... Human Factors in Computing Systems. Paper No.: 395, ...<|separator|>
  45. [45]
    Designing for Collaborative Video Editing - ACM Digital Library
    Oct 8, 2022 · This paper explores the design space of collaborative video editing through a series of design workshops with video editors.
  46. [46]
    [PDF] Automatic Tag Recommendation Algorithms for Social ... - Yang Song
    Tag recommendation suggests useful tags to new objects. Approaches include user-centered (based on user behavior) and document-centered (based on document ...<|control11|><|separator|>
  47. [47]
    The VR Book: Human-Centered Design for Virtual Reality
    The VR principles discussed in this book will enable readers to intelligently experiment with the rules and iteratively design towards innovative experiences.
  48. [48]
    User Preference-Based Dynamic Optimization of Quality of ... - MDPI
    Section 2 reviews existing adaptive bitrate (ABR) streaming strategies, reinforcement learning (RL) methodologies, and relevant research in preference learning.Missing: centered | Show results with:centered
  49. [49]
    Effective Educational Videos: Principles and Guidelines for ...
    Oct 13, 2017 · Effective use of video as an educational tool is enhanced when instructors consider three elements: how to manage cognitive load of the video; ...
  50. [50]
    [PDF] Ubiquitous computing
    MARK WEISER is head of the Comput- er Science Laboratory at the Xerox Palo. Alto Research Center. He is working on the next revolution of computing after.
  51. [51]
    The computer for the 21st century - ACM Digital Library
    Specialized elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence.
  52. [52]
    Ambient intelligence: Technologies, applications, and opportunities
    Ambient intelligence is an emerging discipline that brings intelligence to our everyday environments and makes those environments sensitive to us.
  53. [53]
    Ambient Intelligence: A New Multidisciplinary Paradigm - IEEE Xplore
    Jan 31, 2005 · Abstract: Ambient intelligence (AmI) is a new multidisciplinary paradigm rooted in the ideas of NormanAuthor of the Invisible Computer [32].Missing: definition centered
  54. [54]
    Federated Learning for Network Intrusion Detection in Ambient ...
    Jul 1, 2023 · This article investigates the performance of FL in comparison to deep learning (DL) with respect to network intrusion detection in ambient ...
  55. [55]
    Federated Learning: A Survey on Privacy-Preserving Collaborative ...
    Aug 12, 2025 · Green FL: As FL becomes more ubiquitous, its energy consumption and environmental impact need to be considered. Energy-efficient FL ...Missing: ambient | Show results with:ambient
  56. [56]
    (PDF) Human-Centered Design and Smart Homes: How to Study ...
    The focus of this chapter is on designing for smart homes. The perspective will be user-driven design research.
  57. [57]
    Connecting the dots: How users understand and diagnose smart ...
    Our research focuses on users' encounters with smart homes as interconnected spaces. We build on Woźniak et al.'s ecosystem-centred approach (Woźniak et al., ...
  58. [58]
    The Impact of Wearable Technologies in Health Research: Scoping ...
    Wearables showed an increasingly diverse field of application such as COVID-19 prediction, fertility tracking, heat-related illness, drug effects, and ...
  59. [59]
    A Survey of Attention Management Systems in Ubiquitous ...
    In this article, we review attention management system research with a particular focus on ubiquitous computing environments.
  60. [60]
    Alert Now or Never: Understanding and Predicting Notification ...
    Jan 6, 2023 · We found that users prefer mitigating undesired interruptions by suppressing alerts over deferring them and referred to notification content ...
  61. [61]
    Edge Computing: The Backbone of Scalable, Low-Latency IoT
    May 22, 2025 · Edge computing enables fast, secure, and scalable IoT by processing data locally—reducing latency and boosting real-time decisions.Missing: human- | Show results with:human-
  62. [62]
    10 IoT Trends Shaping the Future in 2025 - LORIOT
    Edge computing processes data at or near its source, minimizing latency, bandwidth usage, and reducing the load on centralized servers. This approach is ...What's New In 2025 · Benefits Of Iiot · Applications In 2025
  63. [63]
    RIT to offer human-centered computing degree
    Nov 18, 2015 · A new bachelor's degree in human-centered computing at RIT will combine principles from computing, design and psychology to help students ...
  64. [64]
    About the HCII - Carnegie Mellon University
    Our Beginnings. In a 1967 letter to Science entitled "What is Computer Science?," Allen Newell, Herbert A. Simon, and Alan J. Perlis wrote that the new ...Missing: history | Show results with:history
  65. [65]
    Education | Stanford HAI
    No readable text found in the HTML.<|separator|>
  66. [66]
    ASU Programs - Artificial Intelligence - Arizona State University
    Learn the advanced skills needed to apply artificial intelligence and machine learning in human-centered systems, emphasizing the application of human factors ...
  67. [67]
    Human-Centered Computing BS | RIT
    Human-centered computing degrees are about leveraging technology, and exploring and adapting how people access and interact with it. Finding ways to integrate ...Missing: definition aspects
  68. [68]
    Google UX Design Professional Certificate - Coursera
    Get on the fast track to a career in UX design. In this certificate program, you'll learn in-demand skills, and get AI training from Google experts.Foundations of User · Product Design · What Degree Do I Need
  69. [69]
    Ethics Is the Edge: The Future of AI in Higher Education
    Jun 24, 2025 · A new framework outlines eight ethical principles to guide higher education's implementation of artificial intelligence.
  70. [70]
    What Does a UX Designer Do? Key Responsibilities and Skills
    Jul 28, 2025 · A UX designer makes products usable, enjoyable, and accessible, focusing on user satisfaction and improving the customer experience.
  71. [71]
    UX Design Job Titles and Job Descriptions | Learner Help Center
    Interaction Designer. Interaction Designers focus on designing the experience of a product and how it functions. They strive to understand the user flow, or ...
  72. [72]
    What Is a UX Architect? Responsibilities, Skills, and Career
    Oct 1, 2021 · UX architects are responsible for the structure and flow of products at the intersection of UX and Information Architecture.
  73. [73]
  74. [74]
    Most Important Skill Required for UX Professionals (Video) - NN/G
    Apr 20, 2018 · What do successful user experience professionals have in common? Empathy. It's the ability to understand the perspective and motivations of ...
  75. [75]
    Certification of UX Training Achievement with Nielsen Norman Group
    Exam-based UX Certificate proves user experience expertise to management, colleagues, clients. Optional specialty certification: Interaction Design, ...UX Certified People · Exams · Specialties · Emerging Patterns in Interface...
  76. [76]
    10 Careers in User Experience Design
    Aug 1, 2024 · A bachelor's degree in UX design or computer science accompanied by three to five years of experience is typically required for this role.Usability Analyst · Front-End Developer · Information Architect
  77. [77]
    5 UX Designer Career Paths: Stepping Up Your Design ... - Coursera
    Oct 6, 2025 · Advancing your UX design career can mean becoming a manager, advancing within design, freelancing, consulting, or switching to a related UX role.
  78. [78]
    Growth in AI Job Postings Over Time: 2025 Statistics and Data
    The rise of interdisciplinary roles, such as AI ethics consultants and human-AI interaction designers, has expanded career opportunities beyond traditional ...<|separator|>
  79. [79]
    Introduction - Material Design
    Material is a design system created by Google to help teams build high-quality digital experiences for Android, iOS, Flutter, and the web.Principles Link · Components Link · Theming LinkMissing: responsibilities | Show results with:responsibilities
  80. [80]
  81. [81]
    [PDF] Human Factors Design Guidelines for Multifunction Displays
    The guidelines cover general design, air traffic, weather, navigation displays, MFD menu, automation, individual displays, and general design principles for ...
  82. [82]
    Assured Autonomy - DARPA
    The goal of the Assured Autonomy program is to create technology for continual assurance of Learning-Enabled, Cyber Physical Systems (LE-CPSs).
  83. [83]
    ASIMOV: Autonomy Standards and Ideals with Military Operational ...
    The Autonomy Standards and Ideals with Military Operational Values (ASIMOV) program aims to develop benchmarks to objectively and quantitatively measure the ...
  84. [84]
    European approach to artificial intelligence
    The European AI Strategy aims at making the EU a world-class hub for AI and ensuring that AI is human-centric and trustworthy. Such an objective translates into ...AI Act · AI Continent Action Plan · AI innovation package
  85. [85]
  86. [86]
    Human Interface Guidelines | Apple Developer Documentation
    Human Interface Guidelines. The HIG contains guidance and best practices that can help you design a great experience for any Apple platform.Layout · Components · Typography · App iconsMissing: 1980s | Show results with:1980s
  87. [87]
    [PDF] Apple human interface guidelines - Andy Matuschak
    At Apple in the late 1970s and early 1980s, the development of the Lisa computer carried the work still further. A range of features now familiar in the ...<|control11|><|separator|>
  88. [88]
    Microsoft Inclusive Design
    Microsoft Inclusive Design is a practice that anyone who creates and manages products and services can use to build more inclusive experiences for everyone.Missing: 2017 | Show results with:2017
  89. [89]
    [PDF] Inclusive - Microsoft Download Center
    Most design processes are iterative and heuristic. The inclusive design toolkit aims to complement, not replace, the many existing types of design process.
  90. [90]
    Toward Human-Centered Design of Automated Vehicles - Frontiers
    This study investigates CAV safety in mixed traffic environments with both human-driven and automated vehicles, focusing particularly on rear-end collisions at ...
  91. [91]
    About CUbiC - Center for Cognitive Ubiquitous Computing
    The Center for Cognitive Ubiquitous Computing (CUbiC) at Arizona State University is an inter-disciplinary research center focused on cutting edge research ...
  92. [92]
    CUbiC designs assistive tech for full spectrum of ability - Vimeo
    Jun 24, 2015 · ASU Research. Researchers at the Center for Cognitive Ubiquitous Computing design assistive technology for the full spectrum of ability.
  93. [93]
    Publications | Cubic - Center for Cognitive Ubiquitous Computing
    Panchanathan, "Enhancing Social Interactions of Individuals with Visual Impairments: A Case Study for Assistive Machine Learning", Workshop on Machine Learning ...
  94. [94]
    CUbiC hosts new IGERT program
    The National Science Foundation awarded a grant of approximately $3 million to support the students, and a team of more than 20 faculty and staff will co-advise ...
  95. [95]
    Overview ‹ Fluid Interfaces - MIT Media Lab
    The Fluid Interfaces group designs, develops and tests systems and experiences for supporting motivation, attention, memory, learning, creativity, critical ...
  96. [96]
    Tangible Media Group - MIT
    A Tangible User Interface is like an iceberg: there is a portion of the digital that emerges beyond the surface of the water - into the physical realm - so that ...
  97. [97]
    Projects ‹ Fluid Interfaces - MIT Media Lab
    Fact-O-Meter is an AI-powered conversational system designed to enhance critical discernment of visual misinformation. It encourages you to… in Fluid Interfaces ...
  98. [98]
    Two MIT teams selected for NSF sustainable materials cooperative ...
    Apr 25, 2024 · Two MIT-led teams received funding from the National Science Foundation to investigate quantum topological materials and sustainable microchip production.
  99. [99]
    CMU CREATE Lab
    The CREATE Lab is both a technology breeding ground and a community partner. It is this unique combination that enables a new form of local change.
  100. [100]
    CREATE Lab - Robotics Institute Carnegie Mellon University
    The CREATE lab aims to scale up our efforts in order to create self-sustaining communities of learning, expression and technology empowerment.
  101. [101]
    PROJECTS - CMU CREATE Lab
    The projects of the CMU CREATE Lab exemplify a dedication to using technology and collaboration to tackle societal concerns and promote various initiatives.
  102. [102]
    [PDF] Research Findings - BirdBrain Technologies
    2017: With National Science Foundation funding the BirdBrain team, Carnegie Mellon CREATE Lab researchers, 10 teachers, and 140+ students experimented with ...
  103. [103]
  104. [104]
    Annual AI+Education Summit 2025: Human-Centered ... - Stanford HAI
    The summit aims to ignite a global conversation on how to shape a thriving learning ecosystem with human-centered AI technologies.
  105. [105]
    NSF announces $100 million investment in National Artificial ...
    Jul 29, 2025 · The U.S. National Science Foundation, in partnership with Capital One and Intel, today announced a $100 million investment to support five ...