Ontology engineering

Ontology engineering is the discipline of systematically designing, developing, maintaining, and evaluating ontologies: formal, explicit specifications of shared conceptualizations that represent the entities, properties, relationships, and constraints within a specific domain of knowledge. These ontologies serve as foundational artifacts for enabling semantic interoperability, knowledge sharing, and automated reasoning in fields such as the Semantic Web, artificial intelligence, and data integration. Unlike traditional database schemas or conceptual models like UML, ontologies emphasize logical consistency, reusability across applications, and machine-readable semantics, often implemented using standardized languages such as OWL (Web Ontology Language).

The field emerged in the 1990s, drawing from philosophical ontology and knowledge representation traditions, with significant momentum gained through the Semantic Web vision articulated by Tim Berners-Lee and colleagues in 2001. Early milestones include the development of Description Logics (DLs) in the mid-1980s, which provided the formal basis for ontology languages, and the W3C's standardization of OWL in 2004, followed by OWL 2 in 2009 to enhance expressivity and profiling options. Over the past decade, ontology engineering has evolved with increased adoption in domains like biomedicine—exemplified by the Gene Ontology project launched in 1998 for gene product annotation—and finance, such as the Financial Industry Business Ontology (FIBO). Community efforts, including the Open Biomedical Ontologies (OBO) Foundry established in 2007, have promoted best practices for interoperability and reusability.

At its core, ontology engineering encompasses methodologies ranging from top-down approaches that start with foundational ontologies like DOLCE or BFO to establish high-level categories, to bottom-up methods that derive concepts from existing data sources or thesauri, to middle-out strategies that balance both for iterative refinement. Key processes include competency questions to define scope, modularization for manageability, and validation techniques like OntoClean to ensure meta-property constraints such as rigidity and unity. Tools such as Protégé, with over 360,000 users, facilitate editing, reasoning, and visualization, while ontology design patterns (ODPs) promote reuse and quality assurance. Reasoning services, powered by DL reasoners like HermiT or ELK, support tasks such as class subsumption, consistency checking, and instance classification under the Open World Assumption.

Notable applications span knowledge graphs like DBpedia (with approximately 6 million entities) and Wikidata (over 119 million entities as of 2025), cultural heritage projects such as Europeana, and clinical standards like the International Classification of Diseases (ICD-11). Despite advancements, challenges persist, including the steep learning curve of OWL, low ontology reuse rates (under 9% in biomedicine), and difficulties in alignment, evolution, and integration with emerging technologies like knowledge graphs and explainable AI. Future directions emphasize user-friendly interfaces through human-computer interaction (HCI) collaboration, automated ontology learning via natural language processing and large language models, and updated standards such as ISO 21127:2023, the reference ontology (CIDOC CRM) for the interchange of cultural heritage information.

Fundamentals

Definition and Scope

Ontology engineering is the systematic process of developing, maintaining, and evolving ontologies to explicitly represent domain knowledge for computational use. An ontology itself is defined as a formal, explicit specification of a conceptualization, encompassing the objects, concepts, and entities presumed to exist in some area of interest, along with their properties and interrelations. This engineering discipline focuses on creating structured, logic-based knowledge representations that are machine-readable and reusable across applications.

The primary objectives of ontology engineering include enhancing interoperability between heterogeneous information systems, facilitating semantic integration of data from diverse sources, and enabling automated reasoning to infer new knowledge or validate consistency. By providing a shared vocabulary and formal constraints, it supports precise knowledge sharing and reduces ambiguity in data exchange, particularly in distributed environments like the Semantic Web. Ontologies thus enable the modeling of domain-specific concepts, supporting accurate data retrieval and analysis across fields.

Ontology engineering differs from knowledge engineering in its emphasis on formal, logic-based structures optimized for computational inference, whereas knowledge engineering encompasses broader activities such as informal knowledge acquisition from experts and diverse representation techniques without strict formalization. While both aim to capture expertise for intelligent systems, ontology engineering prioritizes explicit axioms and machine-interpretable semantics to support automated processes.

Core components of an ontology include classes (representing concepts or categories), properties (object properties for relations between classes and data properties for attributes), instances (specific examples of classes), and axioms (logical rules or constraints that govern the ontology). These elements form a knowledge base typically divided into a TBox (terminological knowledge about classes and properties) and an ABox (assertional knowledge about instances).

Within computer science, information science, and systems engineering, the scope of ontology engineering extends to applications such as knowledge representation, data integration, and semantic query answering, yielding benefits like improved data sharing across silos and enhanced decision-making through inferential capabilities. It plays a crucial role in fields requiring precise semantic modeling, including artificial intelligence and database systems, where ontologies promote consistency and scalability in knowledge management.
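
To make the TBox/ABox split concrete, the sketch below builds a toy ontology with the Python rdflib library; the employee domain, the ex: namespace, and all names are hypothetical, and any RDF toolkit could play the same role.

```python
# A minimal sketch of the TBox/ABox split, using the Python rdflib library.
# The employee mini-domain and the EX namespace are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/onto#")
g = Graph()
g.bind("ex", EX)

# TBox: terminological knowledge (classes, properties, axioms)
g.add((EX.Employee, RDF.type, OWL.Class))
g.add((EX.Manager, RDF.type, OWL.Class))
g.add((EX.Manager, RDFS.subClassOf, EX.Employee))  # axiom: every Manager is an Employee
g.add((EX.worksFor, RDF.type, OWL.ObjectProperty))
g.add((EX.worksFor, RDFS.domain, EX.Employee))

# ABox: assertional knowledge (concrete instances)
g.add((EX.alice, RDF.type, EX.Manager))
g.add((EX.alice, EX.worksFor, EX.acme))

print(g.serialize(format="turtle"))
```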

Historical Development

The roots of ontology engineering trace back to ancient philosophy, particularly Aristotle's work in the Categories, where he outlined a foundational classification of entities into ten fundamental types, such as substance, quantity, and quality, to systematically describe what exists in the world. This approach laid the groundwork for ontological inquiry as a means of categorizing reality. In the 20th century, analytic philosophy further advanced these ideas: Willard Van Orman Quine's 1948 paper "On What There Is" introduced the notion of ontological commitment, which holds that a theory's commitments to entities are determined by the variables it quantifies over in first-order logic, influencing later computational interpretations of existence and representation.

The emergence of ontology engineering in artificial intelligence began in the 1970s and 1980s, driven by the need for structured knowledge representation in expert systems. During this period, frame-based systems, proposed by Marvin Minsky in his 1974 paper "A Framework for Representing Knowledge," provided a method to organize knowledge into reusable structures that capture stereotyped situations and their attributes, facilitating reasoning in early AI applications. A seminal project of this era was the Cyc initiative, launched in 1984 by Douglas Lenat at the Microelectronics and Computer Technology Corporation, which aimed to encode a vast common-sense knowledge base to enable machine understanding of everyday concepts, marking an early large-scale effort in manual ontology construction.

The 1990s marked a pivotal shift toward ontology engineering in the context of the Semantic Web, with Tom Gruber's 1993 paper "A Translation Approach to Portable Ontology Specifications" defining an ontology as "an explicit specification of a conceptualization" and emphasizing its role in enabling shared understanding across systems. U.S. Defense Advanced Research Projects Agency (DARPA) initiatives, such as the 1990 Summer Ontology Project and the later DARPA Agent Markup Language (DAML) program launched around 2000, promoted reusable ontologies for knowledge sharing and integration in distributed AI systems. This momentum culminated in the World Wide Web Consortium's (W3C) standardization of the Web Ontology Language (OWL) in 2004, providing a formal framework for web-based ontologies that built on RDF and supported advanced reasoning.

Post-2000, the field saw significant growth, particularly in biomedical domains, exemplified by the Gene Ontology project, initiated in 1998 and expanding rapidly after 2000 to standardize gene function annotations across species. The early 2000s also witnessed the formal establishment of the ontology engineering community, with dedicated surveys and methodologies emerging around 2006 to address systematic development processes.

Formal Foundations

Ontology Languages and Standards

Ontology engineering relies on standardized languages to formally represent knowledge structures, ensuring interoperability and machine readability across systems. The Resource Description Framework (RDF) serves as a foundational model for this purpose, defining data as directed graphs composed of triples in the form of subject-predicate-object statements. This graph-based approach allows for flexible representation of relationships between resources, identified by URIs, and supports the integration of heterogeneous data sources without a fixed schema. RDF 1.1, published in 2014, provides the current specification, with enhancements such as support for named graphs and improved internationalization.

Building upon RDF, the Web Ontology Language (OWL) provides a more expressive framework for defining ontologies, enabling the specification of classes, properties, and axioms with precise semantics. OWL was standardized by the W3C in 2004, with OWL 2 published in 2009 to address limitations in expressivity and performance. OWL 2 defines two main semantics: OWL 2 DL, which supports description logic constructs while maintaining computational decidability for automated reasoning, and OWL 2 Full, which allows unrestricted use of RDF vocabulary but at the cost of potential undecidability. Additionally, OWL 2 includes three tractable profiles—OWL 2 EL for existential expressivity in large ontologies, OWL 2 QL for query answering via database technologies, and OWL 2 RL for rule-based implementations—each optimized for efficiency in specific applications.

Beyond OWL, several other languages facilitate ontology representation, particularly for rule-based and logic-intensive extensions. Common Logic (CL), defined in ISO/IEC 24707, offers a family of first-order logic dialects for interchanging knowledge across systems, emphasizing modular and extensible syntax. The Knowledge Interchange Format (KIF) provides a predicate calculus-based language for sharing knowledge among disparate programs, with declarative semantics that avoid procedural implications. F-Logic extends frame-based systems with logic programming features, supporting rule-based ontology definitions through stratified negation and inheritance mechanisms.

Standards bodies play a crucial role in governing these languages and their implementations. The W3C oversees the RDF and OWL specifications, ensuring web compatibility, while ISO standardizes broader logic frameworks like Common Logic. The OBO Foundry coordinates domain-specific ontologies, particularly in biomedicine, by enforcing principles such as adherence to shared syntax and serialization standards. Serialization formats like Turtle, a compact textual syntax for RDF graphs, and N-Triples, a line-based format for simple triple encoding, are defined by the W3C to promote data exchange.

A key distinction among ontology languages lies in their expressivity trade-offs. OWL 2 DL achieves decidability through syntactic restrictions aligned with description logics, enabling sound and complete automated reasoning. In contrast, OWL 2 Full's unrestricted RDF integration leads to undecidability, as it permits constructs that exceed the decidable fragment of first-order logic.
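
As an illustration of the triple model and the serialization formats discussed above, the following hedged sketch (using the Python rdflib library; the ex: vocabulary is made up) parses a small Turtle document and re-emits it as line-based N-Triples.

```python
# Parse a small Turtle document and re-serialize it as N-Triples,
# two of the W3C serialization formats described above.
# Sketch using the Python rdflib library; the ex: vocabulary is hypothetical.
from rdflib import Graph

turtle_doc = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Dog rdfs:subClassOf ex:Animal ;
       rdfs:label "Dog" .
"""

g = Graph()
g.parse(data=turtle_doc, format="turtle")

# Each statement is a subject-predicate-object triple.
for s, p, o in g:
    print(s, p, o)

print(g.serialize(format="nt"))  # the same graph in line-based N-Triples form
```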

Representation Formalisms

Description logics (DLs) provide the mathematical and logical foundations for representing ontologies in ontology engineering, offering a decidable fragment of first-order logic tailored for knowledge representation. DLs model domain knowledge through concepts (interpreted as unary predicates or sets of individuals), roles (binary predicates or relations between individuals), and axioms that define relationships such as subsumption. The family of DLs, particularly the ALC (Attributive Language with Complements) subfamily, serves as the cornerstone for expressive ontology languages. ALC includes basic constructors such as intersection (\sqcap), which combines concepts into their logical conjunction; union (\sqcup), representing disjunction; negation (\neg), for complementation; universal quantification (\forall R.C), restricting all role fillers to concept C; and existential quantification (\exists R.C), requiring at least one role filler in C. These constructors enable precise definitions of complex concepts while maintaining computational tractability.

Formally, an ontology in DLs is often defined as a tuple O = (C, R, A, I), where C denotes the set of concepts, R the set of roles, A the set of axioms (including general concept inclusions C \sqsubseteq D and role inclusions), and I the set of individuals with assertions linking them to concepts and roles. Semantics are provided by interpretations \mathcal{I} = (\Delta^\mathcal{I}, \cdot^\mathcal{I}), where \Delta^\mathcal{I} is a non-empty domain, concepts map to subsets of \Delta^\mathcal{I}, and roles map to binary relations over \Delta^\mathcal{I}, i.e., subsets of \Delta^\mathcal{I} \times \Delta^\mathcal{I}. For instance, the intersection constructor satisfies (C \sqcap D)^\mathcal{I} = C^\mathcal{I} \cap D^\mathcal{I}, ensuring set-theoretic consistency. This structure supports the separation of terminological knowledge (TBox axioms in A) from assertional knowledge (the ABox involving I). DLs like ALC form the basis for standards such as OWL 2 DL, which corresponds to the more expressive SROIQ(D) logic.

Reasoning in DLs relies on tableau algorithms, which construct models (tableaux) by systematically expanding an initial ABox through non-deterministic rules that enforce concept and role constraints. These algorithms apply expansion rules for constructors, such as the \sqcap-rule (branching to satisfy both subconcepts) and the \exists-rule (introducing successors), while clash detection identifies contradictions such as an individual asserted to belong to both a concept and its negation. Tableau methods are sound, as the expansion rules preserve satisfiability, and complete, with blocking techniques (e.g., pair-wise or subset blocking) ensuring termination by preventing infinite expansions in cyclic models. For expressive DLs like SROIQ(D), underlying OWL 2 DL, reasoning tasks such as concept satisfiability and subsumption are N2EXPTIME-complete, reflecting the added complexity of features like transitive roles (S), role hierarchies (H), nominals (O), inverse roles (I), and datatypes (D). Despite this worst-case complexity, optimizations like caching and rule ordering enhance practical efficiency.

Extensions to basic DLs address limitations in modeling dynamic or uncertain domains. Hybrid logics augment DLs with nominals (singleton concepts) and binders (state variables for referencing individuals), enabling more flexible ontology representations while preserving decidability in restricted forms; for example, adding nominals to ALC yields ALCO, which supports extensional class definitions common in ontologies. Temporal DLs incorporate linear-time operators (e.g., "always" \Box or "eventually" \Diamond) to capture evolving knowledge in dynamic ontologies, such as business processes, with decidable fragments like \mathcal{T}_\text{U}\text{ALC} achieving EXPTIME complexity for satisfiability over bounded traces. Probabilistic extensions, such as ALC^p, integrate probability bounds (e.g., C_1 [p_1, p_2] C_2 for conditional probabilities) to handle uncertainty, ensuring consistency through non-empty probabilistic models and enabling inferences like range refinement under statistical assumptions.

In contrast to full first-order logic (FOL), whose satisfiability problem is undecidable despite its unrestricted expressivity, DLs are carefully restricted fragments that guarantee decidability. ALC, for instance, embeds into the two-variable fragment of FOL (FO^2), which is NEXPTIME-complete, but avoids features like arbitrary quantifier nesting or function symbols that lead to undecidability. While FOL supports full predicate expressivity, DLs prioritize tailored constructors and optimized reasoning, trading some generality for computational feasibility in ontology applications.
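
The set-theoretic semantics above can be made tangible with a toy evaluator over a finite interpretation. The sketch below is illustrative only (the domain, concept names, and the encoding of ALC concepts as nested tuples are all assumptions): production reasoners use the tableau algorithms described above rather than enumerating a fixed domain.

```python
# A toy evaluator for ALC concept semantics over a finite interpretation
# I = (Delta, .^I), directly mirroring the definitions above, e.g.
# (C ⊓ D)^I = C^I ∩ D^I and (∃R.C)^I = {x | ∃y: (x,y) ∈ R^I and y ∈ C^I}.
# The domain and the Person/Happy/hasChild names are hypothetical.

delta = {"ann", "bob", "eve"}                                   # domain Delta
concepts = {"Person": {"ann", "bob", "eve"}, "Happy": {"ann", "eve"}}
roles = {"hasChild": {("ann", "bob"), ("bob", "eve")}}

def interp(c):
    """Map an ALC concept (encoded as nested tuples) to its extension."""
    op = c[0]
    if op == "atom":
        return concepts[c[1]]
    if op == "not":
        return delta - interp(c[1])
    if op == "and":
        return interp(c[1]) & interp(c[2])
    if op == "or":
        return interp(c[1]) | interp(c[2])
    if op == "exists":                       # (∃R.C)^I
        r, ext = roles[c[1]], interp(c[2])
        return {x for x in delta if any((x, y) in r for y in ext)}
    if op == "forall":                       # (∀R.C)^I
        r, ext = roles[c[1]], interp(c[2])
        return {x for x in delta if all(y in ext for y in delta if (x, y) in r)}
    raise ValueError(f"unknown constructor: {op}")

# Persons all of whose children are happy: (Person ⊓ ∀hasChild.Happy)^I
print(interp(("and", ("atom", "Person"),
              ("forall", "hasChild", ("atom", "Happy")))))  # {'bob', 'eve'}
```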

Development Methodologies

Engineering Processes

Ontology engineering processes provide structured approaches to the systematic development, refinement, and maintenance of ontologies, ensuring they meet domain-specific requirements while supporting interoperability and scalability. These processes typically follow lifecycle models that emphasize iteration, collaboration, and validation to address the complexity of knowledge representation. Seminal methodologies have emerged to guide practitioners in transforming informal domain knowledge into formal, computable structures.

One foundational methodology is METHONTOLOGY, introduced in 1997, which outlines a lifecycle based on evolving prototypes divided into five main phases: specification, where the ontology's purpose, scope, and explicit assumptions are defined; conceptualization, involving the organization of knowledge into abstract models such as hierarchies and taxonomies; formalization, where these models are expressed using semi-formal or formal representations; implementation, translating the formal model into a specific ontology language; and maintenance, encompassing updates and corrections to ensure ongoing relevance. This approach draws on software engineering principles to treat ontology development as an engineering discipline rather than an ad hoc art.

Building on such foundations, the NeOn Methodology, developed through an EU-funded project from 2006 to 2010, extends lifecycle support to networked and collaborative ontology engineering. It adopts a scenario-based framework that tailors processes to specific contexts, such as reusing existing ontologies or integrating distributed knowledge sources in team environments, thereby facilitating the construction of ontology networks. The methodology includes guidelines for scheduling activities across the lifecycle, emphasizing adaptability for large-scale, interdisciplinary projects.

A widely referenced practical guide is Ontology Development 101, published in 2001, which proposes a seven-step process for beginners: determining the domain and scope via competency questions; considering reuse of existing ontologies; enumerating important terms; defining classes and the class hierarchy; defining properties of classes; defining facets of slots (properties); and creating instances. This iterative process encourages incremental building and testing, starting from core concepts and expanding outward.

Across these methodologies, ontology lifecycles incorporate iterative refinement to accommodate evolving domain knowledge, version control mechanisms to track changes and manage dependencies, and evaluation criteria focused on consistency (ensuring no logical contradictions, typically checked by automated reasoning) and coherence (verifying the ontology's alignment with domain knowledge, often assessed against competency questions).

Key principles guiding these processes include the use of competency questions to elicit precise requirements—what specific queries the ontology must support—to define scope and validate coverage from the outset; a sketch of such a check appears below. Additionally, a middle-out development strategy balances top-down (starting from broad categories) and bottom-up (starting from specific instances) approaches by prioritizing central, domain-relevant concepts first, then extending to abstractions and details, which enhances modularity and eases integration with reuse techniques like modular ontology design. Recent developments (as of 2025) include LLM-based approaches for automated ontology drafting and collaborative engineering, enhancing efficiency in knowledge elicitation and alignment.
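
Competency questions are commonly operationalized as test queries that the ontology must answer. A minimal sketch, assuming the Python rdflib library and a hypothetical mini-ontology, runs one such question as a SPARQL query:

```python
# Competency-question check as a SPARQL test query, sketched with rdflib.
# The CQ "Which managers work for ACME?" and the ex: terms are hypothetical.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/onto#> .
ex:alice a ex:Manager ;  ex:worksFor ex:ACME .
ex:bob   a ex:Engineer ; ex:worksFor ex:ACME .
""", format="turtle")

cq = """
PREFIX ex: <http://example.org/onto#>
SELECT ?m WHERE { ?m a ex:Manager ; ex:worksFor ex:ACME . }
"""

results = list(g.query(cq))
# An empty answer set signals a coverage gap in scope or modeling.
assert results, "competency question not answerable -> scope gap"
for (m,) in results:
    print(m)  # http://example.org/onto#alice
```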

Reuse and Integration Techniques

Ontology reuse strategies enable the efficient construction of new ontologies by leveraging existing ones, particularly upper-level or foundational ontologies such as DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering) and BFO (Basic Formal Ontology), which provide generic conceptual structures applicable across domains. Extension involves adding domain-specific concepts and axioms to an upper ontology while preserving its core structure, allowing for specialization without altering foundational elements. Subsetting, conversely, extracts relevant portions of an upper ontology to focus on a narrower scope, reducing complexity and ensuring alignment with specific requirements. Merging combines multiple upper ontologies or fragments to create a unified representation, often requiring resolution of overlapping concepts to avoid redundancy. These strategies are integral to methodologies like NeOn, which outline scenarios for reusing ontological resources in network-based engineering.

Alignment techniques address semantic heterogeneity between ontologies by establishing correspondences between entities, facilitating integration. Entity matching commonly employs string similarity measures, such as Levenshtein distance or Jaro-Winkler, to compare lexical labels and identifiers. Structural analysis extends this by examining relational patterns, like subclass hierarchies or property connections, to infer alignments based on graph isomorphism or subgraph matching. Semantic techniques incorporate external knowledge sources, such as WordNet, to capture contextual meanings through synset mappings and hypernym relations, improving accuracy for ambiguous terms. These methods are often combined in hybrid approaches to balance precision and recall, as surveyed in comprehensive ontology matching frameworks.

Modularization techniques partition large ontologies into manageable modules to support reuse and maintenance, promoting scalability in engineering processes. Extraction creates modules by selecting subgraphs centered on specific concepts or axioms, ensuring logical consistency through criteria like locality preservation. Pruning removes irrelevant elements from an ontology while retaining core semantics, often guided by relevance metrics to minimize information loss. Merging of modules assembles them into a cohesive whole, adhering to principles like ontology double articulation, which ensures bidirectional logical connections between modules without introducing inconsistencies. These partitioning approaches, rooted in formal semantics, enable distributed development and targeted reuse.

Tools for ontology integration, such as AgreementMaker, automate mapping tasks by integrating multiple alignment techniques into a unified workflow. AgreementMaker employs a weighted combination of lexical, structural, and semantic matchers to generate entity correspondences, supporting large-scale ontologies through efficient algorithms. Its effectiveness is evaluated using precision (the proportion of proposed mappings that are correct) and recall (the proportion of true mappings that are proposed) against gold-standard references from benchmarks like the Ontology Alignment Evaluation Initiative (OAEI). For instance, AgreementMaker has demonstrated strong performance in OAEI tracks, such as ranking highly in the anatomy task.

Challenges in ontology integration arise from heterogeneity in representational choices and axiom structures, complicating seamless merging. Heterogeneity resolution requires reconciling differences in naming conventions, granularity, and modeling paradigms across source ontologies, often demanding manual intervention for ambiguous cases. Conflict detection focuses on identifying inconsistent axioms post-integration, such as contradictory subclass relations or property constraints, using reasoning tools to verify coherence. These issues can propagate errors into downstream applications, underscoring the need for systematic frameworks that prioritize logical consistency during reuse.
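
As a minimal illustration of the string-similarity matchers mentioned above, the sketch below computes normalized Levenshtein similarity between class labels from two hypothetical ontologies; real systems such as AgreementMaker combine such lexical scores with structural and semantic evidence.

```python
# Lexical entity matching for ontology alignment: normalized Levenshtein
# similarity over class labels. The labels and the 0.7 threshold are
# hypothetical; thresholds are normally tuned against a gold standard.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    a, b = a.lower(), b.lower()
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

onto1 = ["Myocardium", "HeartValve", "Aorta"]
onto2 = ["myocardium", "heart_valve", "pulmonary artery"]

for x in onto1:
    best = max(onto2, key=lambda y: similarity(x, y))
    if similarity(x, best) > 0.7:
        print(f"{x} ~ {best}  ({similarity(x, best):.2f})")
```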

Tools and Technologies

Editing and Building Tools

Ontology engineering relies on specialized software environments that facilitate the creation, modification, and maintenance of ontologies, enabling users to define classes, properties, and relationships in a structured manner. These tools typically provide graphical user interfaces (GUIs) to abstract the underlying formalisms, making ontology development accessible to domain experts without deep programming knowledge. Prominent examples include both open-source and commercial platforms that support standards like OWL and RDF, with varying emphases on collaboration, scalability, and integration.

Protégé, developed at Stanford University, is a widely adopted open-source ontology editor that supports OWL 2 and provides an extensive plugin architecture for tasks such as visualization of class hierarchies and collaborative editing through WebProtégé. The tool was updated to version 5.6.8 in September 2025, enhancing stability and macOS compatibility. It allows users to build ontologies via intuitive forms for defining axioms, individuals, and annotations, while supporting import and export in formats like OWL/XML and RDF/XML. The plugin ecosystem enables extensions for specific needs, such as diagrammatic representations using tools like OWLViz, enhancing usability for complex ontology structures.

TopBraid Composer, an enterprise-grade tool from TopQuadrant, offers advanced editing capabilities for RDF and OWL ontologies, including integrated SPARQL querying and SHACL-based validation to ensure data quality during development. It features a robust GUI for managing large-scale semantic models, with support for modular ontology design and automation through scripting in SPARQL and Java. As part of the TopBraid EDG platform, it emphasizes enterprise deployment, handling interconnected knowledge graphs with high performance.

Open-source alternatives include the NeOn Toolkit, a legacy tool from the EU-funded NeOn project (2006–2010) designed for networked ontology development, which supported collaborative workflows across distributed teams and integrated multiple ontology languages, such as OWL and F-Logic, for reuse in modular environments. Another option is VocBench, a web-based editor focused on SKOS vocabularies and OWL ontologies, providing multilingual support for thesaurus management and concept-scheme editing in collaborative settings. These tools prioritize accessibility for vocabulary-centric tasks, such as defining skos:Concept hierarchies and broader/narrower relations.

Common features across these tools include graphical interfaces for visualizing and editing class hierarchies, asserting properties on instances, and handling relationships like subClassOf or object-property domains. They support import/export in standard formats such as OWL/XML, Turtle, and RDF/XML, facilitating interoperability with other semantic technologies. Many incorporate version control and diff tools to track changes in ontology evolution; a sketch of a triple-level diff appears below.

In comparisons, tools like Protégé excel in usability for domain experts due to their intuitive interfaces and extensive tutorials. Conversely, enterprise tools such as TopBraid Composer offer superior scalability for large ontologies, supporting efficient querying and editing over massive datasets in production environments, though at the cost of a steeper learning curve for non-technical users. Open-source options like the NeOn Toolkit and VocBench balance usability and scalability for collaborative, mid-sized projects, particularly in networked or vocabulary-focused scenarios. These editing tools often serve as front-ends in broader reasoning workflows, where ontologies are loaded into inference engines for consistency checking. Recent advancements include tools leveraging large language models (LLMs) for ontology engineering tasks, such as DeepOnto, a Python package released in 2023 that supports ontology alignment, completion, and other deep learning-based operations.
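
As one concrete example of change tracking, the hedged sketch below computes a triple-level diff between two ontology versions using rdflib's graph set operations; the file names are hypothetical, and blank nodes would need smarter handling than this naive approach provides.

```python
# Triple-level diff between two ontology versions, sketched with rdflib.
# The file names are hypothetical. Note: blank nodes get fresh identifiers
# on each parse, so a naive diff reports them as spurious changes.
from rdflib import Graph

old = Graph().parse("myonto-v1.ttl", format="turtle")
new = Graph().parse("myonto-v2.ttl", format="turtle")

added = new - old     # triples only in the new version
removed = old - new   # triples only in the old version

print(f"+{len(added)} / -{len(removed)} triples")
for t in added:
    print("+", t)
for t in removed:
    print("-", t)
```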

Reasoning and Validation Tools

Reasoning and validation tools in ontology engineering enable the inference of implicit knowledge from explicit axioms and the assessment of ontology quality through automated analysis. These tools leverage the description logics (DLs) underlying languages like OWL to perform tasks such as consistency checking, subsumption computation, and classification, ensuring ontologies are logically sound and free from structural errors.

HermiT is a Java-based OWL 2 reasoner employing a hypertableau calculus, a variant of tableau-based algorithms that limits nondeterministic expansion and caches models, enabling efficient OWL DL inference. It supports key reasoning services including ontology consistency checking, entailment verification, and class/property classification, often outperforming traditional tableau reasoners on complex ontologies with thousands of axioms. FaCT++ is an open-source C++ reasoner implementing tableau-based procedures for OWL 2 DL, relying on absorption and other optimization techniques to compute subsumption hierarchies and detect unsatisfiable concepts. It excels in classification tasks for large-scale ontologies, providing entailment support through modular decomposition of axioms to reduce computational overhead. Pellet serves as a comprehensive OWL 2 DL reasoner with tableau expansion for consistency checking and incremental reasoning capabilities, extending to rule-based inference via integration with SWRL for deriving new facts from Horn-like rules alongside DL axioms. This allows validation of ontology coherence while incorporating procedural knowledge, such as deriving class memberships from property assertions.

For structural validation, OOPS! (OntOlogy Pitfall Scanner!) automates the detection of common modeling errors in OWL ontologies, including circular hierarchies, where cyclic subclass relationships violate acyclic assumptions, as well as inconsistencies such as conflicting multiple-inheritance paths. It evaluates over 40 pitfalls categorized by severity, providing remediation suggestions based on best practices. OntoMetric facilitates quality assessment through metric computation, measuring aspects like coverage (the ratio of defined concepts to total entities) and relationship density to gauge completeness relative to domain requirements. This tool supports comparative analysis across ontology versions or peers, emphasizing semantic richness without exhaustive enumeration.

Performance optimizations in these tools often target tractable DL fragments, such as the OWL 2 EL profile based on EL++, which admits polynomial-time reasoning for subsumption and consistency via completion rules that avoid the exponential blowup associated with existential restrictions and conjunctions. Integration of reasoners into development environments occurs via standardized APIs, such as the OWL API in Protégé, allowing seamless embedding of HermiT or Pellet for on-the-fly inference during ontology authoring without separate invocation. Newer reasoners, such as Whelk (introduced around 2024), support combined OWL EL+RL reasoning, enabling efficient inference for biological and other large-scale ontologies.
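
Programmatic reasoner integration can be illustrated with the owlready2 Python library, which bundles HermiT (and can invoke Pellet). The sketch below, with a hypothetical ontology IRI and class names, builds a deliberately inconsistent ABox and lets the reasoner detect the clash; a Java runtime is required for the bundled reasoner.

```python
# Consistency checking via a bundled DL reasoner, sketched with owlready2.
# The ontology IRI and the Cat/Dog classes are hypothetical.
from owlready2 import (get_ontology, Thing, AllDisjoint,
                       sync_reasoner, OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/demo.owl")

with onto:
    class Cat(Thing):
        pass
    class Dog(Thing):
        pass
    AllDisjoint([Cat, Dog])        # axiom: Cat and Dog share no instances

    felix = Cat("felix")
    felix.is_a.append(Dog)         # assert felix is also a Dog -> clash

try:
    sync_reasoner()                # runs HermiT over the loaded world
    print("ontology is consistent")
except OwlReadyInconsistentOntologyError:
    print("inconsistency detected: felix cannot be both Cat and Dog")
```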

Applications

In Life Sciences

Ontology engineering in the life sciences involves developing structured knowledge representations tailored to biomedical and biological domains, facilitating data integration, annotation, and analysis across diverse datasets. These ontologies address the complexity of biological systems by standardizing terminology for genes, proteins, diseases, and clinical concepts, enabling precise querying and inference in research and healthcare applications. Key examples include the Gene Ontology (GO) and SNOMED CT, which exemplify domain-specific adaptations through hierarchical structures and formal axioms.

The Gene Ontology (GO), initiated in 1998 by the Gene Ontology Consortium, provides a controlled vocabulary to describe gene and gene product attributes across organisms. It is organized into three independent branches: molecular function, which captures activities such as catalytic or binding activities at the molecular level; biological process, encompassing series of molecular events like signaling or metabolic pathways; and cellular component, denoting locations such as organelles or supramolecular complexes. This structure supports functional annotation of over 1.6 million gene products from 5,495 species (as of October 2025), promoting interoperability in genomic databases.

SNOMED CT (Systematized Nomenclature of Medicine—Clinical Terms) serves as a comprehensive clinical terminology designed for healthcare interoperability, representing clinical information such as diagnoses, procedures, and observations. It encompasses over 375,000 unique concepts, organized hierarchically with formal definitions based on description logics, allowing for consistent encoding in electronic health records across more than 80 countries. This scale enables detailed clinical documentation and supports automated decision-making in patient care systems.

Engineering approaches in life sciences ontologies emphasize automation and rigor to handle vast biomedical literature and ensure conceptual accuracy. Automated term extraction from scientific texts using natural language processing (NLP) identifies candidate concepts and relations, as seen in pipelines that process PubMed abstracts to populate ontology hierarchies semi-automatically. Quality assurance incorporates logical axioms, such as disjointness constraints, to prevent overlaps between classes—for instance, ensuring that certain cellular components remain mutually exclusive—and to detect inconsistencies during ontology maintenance. These methods, including description logic-based checks, enhance the formal validity of ontologies like GO. Recent developments include the integration of ontologies with large language models for automated ontology learning and expansion in biomedicine.

A prominent case study is the OBO Foundry, a collaborative initiative establishing principles for building orthogonal ontologies in biology that cover distinct domains without redundancy. Core principles include openness for community contributions, orthogonality to minimize overlap (e.g., separating anatomy from phenotype ontologies), and adherence to formal syntax like OWL for interoperability. This framework has coordinated over 100 ontologies, fostering reuse and integration in model organism databases.

The impact of these ontologies is evident in enabling semantic queries across large-scale resources such as UniProt and PubMed. In UniProt, GO annotations allow federated queries to retrieve proteins by function or process, integrating data from multiple species for comparative genomics. Similarly, tools like GO2PUB expand PubMed searches using GO hierarchies, improving retrieval of relevant literature on gene functions through inheritance-based term expansion, as sketched below. Methodologies like METHONTOLOGY have been adapted for bio-ontologies to guide specification and evaluation phases.
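
The following sketch illustrates inheritance-based term expansion in the spirit of GO2PUB, using a tiny hypothetical is-a fragment rather than real GO data: a query term is expanded to all of its descendants before retrieval.

```python
# Inheritance-based term expansion over an is-a hierarchy: a query for a
# term also retrieves annotations to its descendants. The fragment below
# is illustrative, not real GO structure.

# child -> parents ("is_a" edges)
is_a = {
    "GO:B": ["GO:A"],            # B is_a A
    "GO:C": ["GO:A"],
    "GO:D": ["GO:B", "GO:C"],
}

def descendants(term: str) -> set[str]:
    """All terms that reach `term` by following is_a edges upward."""
    out: set[str] = set()
    frontier = [c for c, parents in is_a.items() if term in parents]
    while frontier:
        t = frontier.pop()
        if t not in out:
            out.add(t)
            frontier.extend(c for c, parents in is_a.items() if t in parents)
    return out

query = "GO:A"
expanded = {query} | descendants(query)
print(expanded)  # {'GO:A', 'GO:B', 'GO:C', 'GO:D'} -> search with all terms
```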

In Semantic Web and Knowledge Graphs

Ontology engineering plays a pivotal role in the Semantic Web by providing the schema layer that structures RDF data, enabling the principles of Linked Data through standardized vocabularies like RDFS and OWL. These ontologies define classes, properties, and relationships, allowing machines to interpret and infer new knowledge from distributed datasets represented as triples (subject-predicate-object). For example, RDFS extends RDF with basic schema elements such as rdfs:subClassOf for hierarchical organization, while OWL adds advanced constructs like owl:TransitiveProperty to support complex reasoning and entailment over web-scale data. This formalization facilitates the interlinking of datasets, promoting a web of machine-readable information where inference engines can derive implicit facts, such as subclass relationships or property domains, from explicit RDF statements.

In knowledge graphs, ontology engineering involves designing schemas to integrate and populate vast, heterogeneous data sources, often drawing from unstructured text. DBpedia exemplifies this approach: a crowd-sourced ontology with 768 classes and over 3,000 properties serves as the schema for extracting structured entities and relations from Wikipedia articles and infoboxes using natural language processing techniques. The extraction process populates the graph with approximately 9.5 billion RDF triples, enabling queries via SPARQL and linking to other Linked Open Data resources. Similarly, the Google Knowledge Graph leverages ontology-inspired schemas, incorporating types from schema.org to organize entities like people, places, and events extracted from web sources, enhancing search results with contextual inferences.

Key techniques in this domain include lightweight ontologies for broad applicability and extensions for specialized dimensions. Schema.org, developed collaboratively by major search engines, functions as a lightweight ontology with over 800 types and 1,500 properties, allowing web publishers to annotate content using simple markup formats like JSON-LD or RDFa without heavy formal semantics. This approach supports rapid schema design for everyday web data, focusing on core domains such as CreativeWork and Organization to improve discoverability. For more advanced needs, YAGO incorporates temporal and spatial extensions into its ontology, anchoring facts to time intervals and geographic coordinates extracted from Wikipedia and GeoNames, resulting in a knowledge base with approximately 49 million entities and 109 million facts (as of 2024).

A prominent case study is Wikidata, which employs an upper-level ontology to integrate structured data across Wikipedia's multilingual articles, representing knowledge as semantic triples with items, properties, and values. This ontology enables the reconciliation of diverse sources, supporting complex queries like lineage tracing or population statistics, and serves over 119 million entities (as of August 2025) through APIs compatible with Semantic Web tools. By centralizing structured data from Wikimedia projects, Wikidata facilitates cross-language data federation, allowing inferences that bridge linguistic and cultural gaps in global knowledge representation. Recent trends include enhanced use of Wikidata in AI-driven knowledge graphs for real-time data integration.

The application of ontologies in knowledge graphs yields significant benefits, particularly in enhancing search precision, recommendation systems, and data federation. In search engines, ontological schemas enable semantic matching beyond keywords, delivering contextually relevant results—such as disambiguating entities via types and relations—which improves user experience in platforms like Google Search. For recommendations, the structured relationships in graphs power personalized suggestions in e-commerce, like Amazon's product graphs, by inferring user preferences through entity linkages. Data federation across silos is streamlined, as ontologies provide a common schema for integrating disparate sources in social platforms, reducing redundancy and enabling scalable analytics without proprietary data movement.
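
The rdfs:subClassOf entailment described above can be sketched by materializing inferred rdf:type triples with rdflib; the ex: vocabulary is hypothetical, and a production system would delegate this to a rule engine or reasoner.

```python
# Materializing rdfs:subClassOf entailments: add rdf:type triples for every
# superclass of each instance's asserted class. Sketch with rdflib; the
# ex: vocabulary is hypothetical.
from rdflib import Graph, RDF, RDFS

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Poodle rdfs:subClassOf ex:Dog .
ex:Dog    rdfs:subClassOf ex:Animal .
ex:fido   a ex:Poodle .
""", format="turtle")

# For every typed instance, walk the subclass chain and assert all types.
for inst, cls in list(g.subject_objects(RDF.type)):
    for sup in g.transitive_objects(cls, RDFS.subClassOf):
        g.add((inst, RDF.type, sup))

print(list(g.objects(None, RDF.type)))
# fido is now typed as Poodle, Dog, and Animal
```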

Key Challenges

Ontology engineering faces several persistent technical and practical obstacles that hinder the development and deployment of effective knowledge representations. These challenges range from computational limitations to socio-technical issues, impacting the reliability and adoption of ontologies in diverse domains. Addressing them requires a balance between theoretical rigor and practical implementation, often leveraging automated tools for partial mitigation.

Scalability remains a core issue, particularly when handling large-scale ontologies comprising millions of concepts and relations. For instance, reasoning over ontologies expressed in expressive description logics (DLs), such as SHOIN(D), incurs worst-case exponential time complexity due to the inherent computational demands of satisfiability checking and inference, which can become prohibitive for real-world applications with vast axiom sets. In domains like healthcare, where ontologies must integrate massive datasets from genomics and electronic health records, static structures struggle to scale efficiently, necessitating modular designs to manage growing data volumes while preserving performance. Biomedical ontologies, such as the Gene Ontology with 39,354 terms (as of October 2025), exemplify this: continuous integration pipelines are employed to handle interdependencies, yet full reasoning remains resource-intensive.

Interoperability barriers arise from semantic drift and vocabulary mismatches, complicating the alignment and integration of ontologies across domains. Semantic drift occurs as concepts evolve over ontology versions, leading to gradual shifts in meaning that undermine mappings and reuse; for example, changes in concept extensions or intensions can yield 25-31% term overlap between biomedical ontologies but less than 9% actual reuse. These mismatches manifest in hierarchical misalignments and terminological heterogeneity, as seen in standards like SNOMED CT and HL7 FHIR, where disparate representations of the same domain knowledge impede seamless data exchange without extensive manual reconciliation. Automated alignment techniques, while helpful, often fail to fully resolve these drifts, exacerbating silos in multi-ontology environments.

Maintenance overhead poses significant challenges due to the dynamic nature of domains, requiring ontologies to evolve in response to new knowledge or requirements while minimizing disruption. Ontology evolution involves detecting changes from sources like usage logs or external corpora, suggesting modifications, validating them for consistency, and assessing impacts on dependent systems—a process that is resource-intensive and lacks standardized change languages, leading to high manual effort. In practice, tools like Protégé facilitate versioning, but propagating updates across modular or distributed ontologies incurs substantial storage and communication costs, particularly when ensuring backward compatibility with applications and queries. This overhead deters widespread adoption, as seen in Semantic Web projects where fragmented evolution strategies increase long-term costs in the absence of automated propagation mechanisms.

Quality assurance is complicated by the need to detect redundancies, inconsistencies, and incompleteness without relying solely on exhaustive manual reviews. Redundant hierarchical relations, such as multiple paths between concepts in is-a structures, can inflate ontology size and degrade inference accuracy; for example, SNOMED CT contains hundreds of such redundancies, which automated tools like FEDRR can identify in seconds using dynamic programming (a simplified version of this check is sketched below). Inconsistencies, including logical contradictions introduced by evolving axioms, further compromise trustworthiness; reasoners provide partial automated detection but struggle at scale due to DL complexity. Incompleteness assessments remain subjective, often requiring domain-specific metrics to evaluate coverage, yet current methods fall short of comprehensive validation, leaving persistent quality gaps in large ontologies like the Gene Ontology.

Human factors introduce additional hurdles, particularly in balancing ontology expressivity with usability for non-experts in collaborative settings. The steep learning curve of formalisms like OWL and tools like Protégé alienates domain specialists without technical backgrounds, resulting in modeling errors and low adoption in team-based development. Collaborative ontology engineering demands intuitive interfaces that let non-experts contribute without deep semantic knowledge, yet existing methodologies often prioritize expressivity over accessibility, leading to usability issues such as cumbersome annotation processes and poor support for iterative feedback. In distributed environments, these factors amplify coordination challenges, where mismatched expertise levels hinder consensus on concept definitions and alignments. Reasoners and simpler design patterns offer some mitigation by automating consistency checks, allowing contributors to focus on conceptual work.
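
The redundancy check mentioned above can be illustrated with a small sketch: an is-a edge is redundant when its endpoints remain connected after removing it. This toy version ignores the scale and the dynamic-programming optimizations of tools like FEDRR.

```python
# Detecting redundant is-a edges: a direct link A -> C is redundant if a
# longer path A -> ... -> C already implies it. The toy hierarchy is
# hypothetical; real systems apply this at SNOMED CT scale.

edges = {("A", "B"), ("B", "C"), ("A", "C")}   # A -> C is redundant

def reachable(graph: set, src: str, dst: str) -> bool:
    """Is dst reachable from src via one or more edges in graph?"""
    frontier, seen = [src], set()
    while frontier:
        node = frontier.pop()
        for (u, v) in graph:
            if u == node and v not in seen:
                if v == dst:
                    return True
                seen.add(v)
                frontier.append(v)
    return False

redundant = {(u, v) for (u, v) in edges
             if reachable(edges - {(u, v)}, u, v)}
print(redundant)  # {('A', 'C')}
```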

Emerging Developments

Recent advancements in ontology engineering have increasingly incorporated large language models (LLMs) for automated ontology learning, enabling the extraction of concepts, relations, and axioms from unstructured text sources. A notable example is the NeOn-GPT pipeline, introduced in 2024, which integrates the structured NeOn methodology with LLMs to translate natural language descriptions into formal ontology representations, facilitating semi-automated schema generation and reducing manual effort in domain-specific ontology development. This approach was highlighted in the LLMs4OL 2024 challenge, the first dedicated evaluation of LLMs for ontology learning tasks, demonstrating improved accuracy in concept extraction from diverse datasets compared to traditional rule-based methods. This progress continued with the second LLMs4OL challenge at ISWC 2025, which further evaluated LLMs for ontology learning tasks.

Hybrid approaches combining ontologies with graph neural networks (GNNs) have emerged to support dynamic knowledge graphs, allowing real-time updates and inference over evolving data structures. For instance, neuro-symbolic frameworks leverage GNNs for embedding learning while preserving ontological constraints, enabling scalable reasoning in enterprise knowledge graphs without sacrificing interpretability. These methods address scalability challenges by iteratively refining graph representations through symbolic reasoning, as seen in approaches like ReasonKGE, which corrects inconsistent predictions in knowledge graph embeddings.

Advances in ontology modularization have benefited from AI-assisted alignment techniques using embeddings, particularly post-2020 developments with models like BERT. The BERTMap system, for example, fine-tunes BERT on textual ontology elements to predict entity matches in both unsupervised and semi-supervised settings, achieving higher precision in aligning large-scale ontologies than classical string-based matchers. This embedding-driven approach supports the decomposition and reuse of ontology components, enhancing interoperability in distributed systems.

Emerging trends include ontology-based explainable AI (XAI), where ontologies provide structured vocabularies to generate human-interpretable explanations for AI decisions. Ontologies serve multiple roles in XAI, such as defining explanation scopes and anchoring post-hoc interpretations to domain knowledge, as explored in surveys on semantic-based XAI applications in manufacturing. Additionally, federated learning integrated with ontologies enables privacy-preserving knowledge sharing by aligning distributed models through shared ontological schemas without exchanging raw data. Frameworks like ontology-guided federated unlearning use knowledge distillation to remove sensitive information while maintaining model utility across institutions.

Recent milestones from the ESWC 2024 conference underscore LLM pipelines for ontology engineering, including end-to-end workflows for automated term extraction and axiom generation. Furthermore, integrations with vector databases enhance semantic search over ontological knowledge graphs by combining graph traversals with vector similarity queries, as in hybrid RAG systems that boost retrieval accuracy for complex queries.
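
In the spirit of BERTMap's embedding-based matching, the hedged sketch below encodes class labels with a pretrained sentence-embedding model and aligns them by cosine similarity; the model name, labels, and best-match policy are assumptions, and BERTMap itself additionally fine-tunes BERT and repairs mappings.

```python
# Embedding-based label matching for ontology alignment, sketched with the
# sentence-transformers library. The model name and labels are assumptions;
# this is not the BERTMap pipeline itself.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

labels1 = ["myocardial infarction", "cardiac valve"]
labels2 = ["heart attack", "valve of heart", "pulmonary artery"]

emb1 = model.encode(labels1, convert_to_tensor=True)
emb2 = model.encode(labels2, convert_to_tensor=True)

scores = util.cos_sim(emb1, emb2)   # pairwise cosine similarities
for i, row in enumerate(scores):
    j = int(row.argmax())           # greedy best match per source label
    print(f"{labels1[i]} ~ {labels2[j]} ({float(row[j]):.2f})")
```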

    For this we developed a system which converts the two matched ontologies into a graph by mapping their concepts. The evaluation is done using precision, recall ...
  51. [51]
    [PDF] Ontology Integration: Approaches and Challenging Issues
    Feb 9, 2021 · Ontology integration tackles overlapping knowledge by integrating multiple ontologies to build a single coherent one, generating a coherent  ...
  52. [52]
    Toward a systematic conflict resolution framework for ontologies - PMC
    Aug 9, 2021 · In practice, conflict resolution often starts with some issue raised by the ODE, and specifically when an axiom is added or an ontology is ...
  53. [53]
    Toward a systematic conflict resolution framework for ontologies
    Aug 9, 2021 · In practice, conflict resolution often starts with some issue raised by the ODE, and specifically when an axiom is added or an ontology is ...
  54. [54]
    protégé - Stanford University
    A free, open-source ontology editor and framework for building intelligent systems. Protégé is supported by a strong community of academic, government, and ...Software · About · Protege Wiki · Support
  55. [55]
    NeOn Toolkit - Research Archive - The Open University
    The NeOn Toolkit was an ontology engineering environment originally developed as part of the NeOn Project and later supported by the NeOn Foundation, ...
  56. [56]
    Software - protégé - Stanford University
    WebProtégé is an ontology development environment for the Web that makes it easy to create, upload, modify, and share ontologies for collaborative viewing ...
  57. [57]
    What is TopBraid Composer?
    TopBraid Composer, a component of TopBraid Suite, is a professional development tool for semantic models (ontologies).
  58. [58]
    TopQuadrant: Build a Trusted, AI-Ready Data Foundation
    Built entirely on knowledge graph technology, TopBraid EDG creates a connected, AI-ready data foundation across structured and unstructured information.TopBraid EDG · Careers · About · AI Data Platform
  59. [59]
    VocBench: A Collaborative Management System for OWL ontologies ...
    VocBench is a web-based, multilingual, collaborative development platform for managing OWL ontologies, SKOS(/XL) thesauri, Ontolex-lemon lexicons and generic ...VocBench Downloads · Support · About us · Community of usersMissing: Vocabularies | Show results with:Vocabularies
  60. [60]
    Documentation - TopBraid Composer
    Welcome to TopBraid Composer, a standards-compliant tool for the development of Semantic Web applications and domain models.
  61. [61]
    [PDF] Comparison of Ontology Editors
    in this paper some software tools related to Semantic web are considered and compared. In fact, five ontology-editors are described and compared.Missing: NeOn | Show results with:NeOn
  62. [62]
    Comparison of Ontology Editors | Request PDF - ResearchGate
    Alatrish performed a comparison of five ontology editors Apollo, Onto Edit, Protégé, Swoop and TopBraid Composer [1] . The evaluation comprises qualitative ...
  63. [63]
    [PDF] The HermiT OWL Reasoner
    HermiT is an OWL reasoning system based on a novel hypertableau calculus. [12]. Like existing tableau based systems, HermiT reduces all reasoning tasks to.
  64. [64]
    [PDF] HermiT: A Highly-Efficient OWL Reasoner - CEUR-WS
    HermiT is a new OWL reasoner based on a novel “hyper- tableau” calculus. The new calculus addresses performance problems due to nondeterminism and model ...
  65. [65]
    List of Reasoners | - OWL @ Manchester
    Jun 19, 2018 · FaCT++ is a free (LGPL) highly optimised open-source C++-based tableaux reasoner for OWL 2 DL. Supported interfaces: Protege, Command Line, OWL ...
  66. [66]
    [PDF] Pellet: A Practical OWL-DL Reasoner
    For example, while classification requires a degree of entailment support (i.e., certain subclass relations are entailed by the ontology and classification is ...
  67. [67]
    stardog-union/pellet: Pellet is an OWL 2 reasoner in Java - GitHub
    Pellet is the OWL 2 DL reasoner: Pellet can be used with Jena or OWL-API libraries. Pellet provides functionality to check consistency of ontologies.Missing: integration HermiT
  68. [68]
    [PDF] OOPS! (OntOlogy Pitfall Scanner!) - Semantic Web Journal
    OOPS! is a tool for detecting pitfalls in ontologies, extending existing approaches and providing an indicator for each pitfall.Missing: circular hierarchies
  69. [69]
    [PDF] OntoMetrics: Application of on-line Ontology Metric Calculation
    Practical ontology quality assessment ... At its present state, OntoMetrics is a lightweight, handy tool for comparable, metric based ontology evaluation.
  70. [70]
    [PDF] Polynomial Time Reasoning in a Description Logic with Existential ...
    In this paper, we show that even admitting general con- cept inclusion (GCI) axioms and role hierarchies in EL terminologies preserves the polynomial time upper ...
  71. [71]
    ProtegeReasonerAPI - Protege Wiki
    Nov 30, 2009 · This page describes the Protege-OWL Reasoner API that provides programmatic access to a direct or a DIG-compliant reasoner.Missing: HermiT | Show results with:HermiT
  72. [72]
    Gene Ontology: tool for the unification of biology | Nature Genetics
    Ultimately, an ontology can be a vital tool enabling researchers to turn data into knowledge. Computer scientists have made significant contributions to ...Acknowledgements · Author Information · Author Notes
  73. [73]
    What is SNOMED CT
    SNOMED CT Is the most comprehensive, multilingual clinical global healthcare terminology. A resource with comprehensive, scientifically validated clinical ...
  74. [74]
    SNOMED CT - NCBO BioPortal - Biomedical Ontology
    SNOMED CT provides core terminology for EHRs, with over 300,000 unique concepts, organized into hierarchies with formal definitions.
  75. [75]
    Natural Language Processing Methods and Systems for Biomedical ...
    These tasks include term extraction, synonym extraction, concept extraction (both taxonomic and non-taxonomic), relationship extraction and axiom extraction (an ...
  76. [76]
    A Family-Based Framework for Supporting Quality Assurance of ...
    BioPortal contains over 300 ontologies, for which quality assurance (QA) is critical. Abstraction networks (ANs), compact summarizations of ontology ...
  77. [77]
    Commitment To Collaboration (principle 10) - OBO Foundry
    It is expected that Foundry ontologies will collaborate with other Foundry ontologies, particularly in ensuring orthogonality of distinct ontologies, in re- ...<|separator|>
  78. [78]
    Update on activities at the Universal Protein Resource (UniProt) in ...
    The aim of this article is to provide a status report on UniProt activities and some of our plans for the near future.New And Ongoing Developments · Uniprot Biocuration · Gene Ontology Annotation
  79. [79]
    GO2PUB: Querying PubMed with semantic expansion of gene ...
    Sep 7, 2012 · As GoPubMed only considers the GO term(s) provided by the user and ignores the inheritance rules of Gene Ontology, we also expanded queries ...Qualitative Study · Generalization Study · Go2pub Query BuildingMissing: enabling | Show results with:enabling
  80. [80]
  81. [81]
    [PDF] Linked Data and the Semantic Web Standards - Aidan Hogan
    The core languages offered as part of the current Semantic. Web standards are the RDF Schema (RDFS) and Web Ontology Lan- guage (OWL) standards.
  82. [82]
    RDF and the Semantic Web Stack - ScienceDirect.com
    Then we focus on the different ontology languages that serve as a schema solution to RDF facts. We provide an introduction to the reasoning facilities that can ...
  83. [83]
    Home - DBpedia Association
    ### Summary of DBpedia's Use of Ontology Engineering, Schema Design, and Population from Wikipedia Text
  84. [84]
    DBpedia: A Nucleus for a Web of Open Data | SpringerLink
    DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data.
  85. [85]
    Knowledge Graph Search API - Google for Developers
    Apr 26, 2024 · The Knowledge Graph Search API lets you find entities in the Google Knowledge Graph. The API uses standard schema.org types and is compliant with the JSON-LD ...
  86. [86]
    One schema to rule them all: How Schema.org models the world of ...
    Feb 24, 2023 · We provide a semantic network visualization of Schema.org, including an analysis of its modularity and domains, and discuss its global significance concerning ...
  87. [87]
    YAGO2: exploring and querying world knowledge in time, space ...
    We present YAGO2, an extension of the YAGO knowledge base with focus on temporal and spatial knowledge. It is automatically built from Wikipedia, GeoNames, and ...
  88. [88]
    Wikidata as Semantic Infrastructure: Knowledge Representation ...
    It is unique because it is a semantic infrastructure that produces facts using an ontological classification system for structured data, which then serves these ...
  89. [89]
    Knowledge Graphs 101: The Story (and Benefits) Behind the Hype
    May 24, 2024 · Knowledge graphs help businesses make critical decisions based on harmonized knowledge models and data derived from siloed source systems.
  90. [90]
    [PDF] Part 2: Description Logics
    Def.: ExpTime. Set of problems solvable in exponential time by a deterministic TM. This is the first provably intractable complexity class. These problems are ...
  91. [91]
  92. [92]
    [PDF] Ontology Engineering: Current State, Challenges, and Future ...
    This paper is meant to give a retrospective overview of how the ontology landscape and ontology engineer- ing have evolved in the last decade, current ...Missing: 2023-2025 | Show results with:2023-2025
  93. [93]
    A hybrid method and visual tools to measure semantic drift in ...
    Semantic drift is an active field of research, aiming to identify and measure changes in ontologies across versions in time, closely related to several ...
  94. [94]
    (PDF) Ontology evolution: A process-centric survey - ResearchGate
    Aug 10, 2025 · Ontology evolution aims at maintaining an ontology up to date with respect to changes in the domain that it models or novel requirements of ...
  95. [95]
    FEDRR: fast, exhaustive detection of redundant hierarchical ...
    Oct 10, 2016 · FEDRR provides a generally applicable, effective tool for systematic detecting redundant relations in large ontological systems for quality improvement.
  96. [96]
    (PDF) Collaborative Ontology Engineering: A Survey - ResearchGate
    Aug 10, 2025 · We will survey several of the most outstanding methodologies, methods and techniques that have emerged in the last years, and present the most ...
  97. [97]
    (PDF) Semantic Enrichment by Non-Experts: Usability of Manual ...
    Aug 7, 2025 · ... A one-click annotator interface is provided for non-expert users to bridge the gap between objective knowledge (as encoded in an RDF data ...
  98. [98]
    [PDF] A Large Language Model-Powered Pipeline for Ontology Learning
    We address the task of ontology learning by combining the structured NeOn methodology framework with Large Language Models. (LLMs) for translating natural ...
  99. [99]
    The 1st Large Language Models for Ontology Learning Challenge
    Sep 16, 2024 · This paper outlines the LLMs4OL 2024, the first edition of the Large Language Models for Ontology Learning Challenge.
  100. [100]
    [PDF] Hybrid AI Approach for Knowledge Graph Construction - CEUR-WS
    Hybrid AI approaches with neural networks and OWL-DL 2 reasoning like proposed in this work are rare. This and the next paragraphs establish a ...
  101. [101]
    Neuro-Symbolic Reasoning for Enterprise Knowledge Graphs
    Jul 20, 2025 · Our approach introduces a hybrid architecture that leverages graph neural networks for representation learning while maintaining symbolic ...
  102. [102]
    Improving Knowledge Graph Embeddings with Ontological Reasoning
    We present a novel iterative approach ReasonKGE that identifies dynamically via symbolic reasoning inconsistent predictions produced by a given embedding model.
  103. [103]
    (PDF) BERTMap: A BERT-Based Ontology Alignment System
    Aug 6, 2025 · In this paper, we propose a novel OM system named BERTMap which can support both unsupervised and semi-supervised settings. It first predicts ...
  104. [104]
    On the Multiple Roles of Ontologies in Explainable AI - ResearchGate
    Dec 6, 2023 · This paper discusses the different roles that explicit knowledge, in particular ontologies, can play in Explainable AI and in the ...
  105. [105]
    Survey on ontology-based explainable AI in manufacturing
    Feb 1, 2024 · In this survey, we focus on two of the most exciting areas of XAI: ontology-based and semantic-based XAI (O-XAI, S-XAI, respectively), which ...
  106. [106]
    Ontology-Guided Data Sharing and Federated Quality Control With ...
    The proposed method integrates a differential privacy model with federated learning to improve knowledge sharing with privacy protection, thereby using the ...
  107. [107]
    Privacy-Preserving Federated Unlearning with Ontology-Guided ...
    Federated Learning (FL) is a privacy-focused technique for training models; however, most existing unlearning techniques in FL fall significantly short of ...
  108. [108]
    HybridRAG: Integrating Knowledge Graphs and Vector Retrieval ...
    Aug 9, 2024 · We introduce a novel approach based on a combination, called HybridRAG, of the Knowledge Graphs (KGs) based RAG techniques (called GraphRAG) and VectorRAG ...