Omics refers to a family of scientific disciplines in biology that involve the comprehensive, high-throughput characterization and quantification of pools of biological molecules, such as genes, transcripts, proteins, and metabolites, to understand their roles and interactions within living systems.[1] These fields, which include genomics (study of the complete set of genes), transcriptomics (analysis of all RNA transcripts), proteomics (examination of the entire proteome), and metabolomics (profiling of all metabolites), among others like epigenomics and lipidomics, enable global assessments of biological processes rather than targeted analyses of individual components.[1][2]

The suffix "-omics" emerged from the need to describe large-scale studies, building on the earlier term "genome," coined in 1920 by botanist Hans Winkler to denote the complete haploid set of chromosomes, with "-ome" implying wholeness or totality.[3] The first use of "genomics" occurred in 1986, proposed by geneticist Thomas H. Roderick at a conference in Bethesda, Maryland, to name the emerging field of mapping and sequencing entire genomes, inspired by the ambitions that would become the Human Genome Project.[3] This was followed by "proteomics" in 1994, introduced by biochemist Marc Wilkins to describe the systematic study of proteins expressed by a genome, marking the expansion of the "-omics" nomenclature as high-throughput technologies like DNA microarrays and mass spectrometry became available in the 1990s.[3]

Omics approaches have transformed biomedical research by facilitating systems-level insights into health and disease, particularly through multi-omics integration, which combines data from multiple layers (e.g., genomic, transcriptomic, and proteomic) to model complex interactions and identify biomarkers.[2][4] Key applications include advancing personalized medicine, where omics data guide tailored treatments, and elucidating disease mechanisms in areas like cancer and neurodegeneration via projects such as The Cancer Genome Atlas.[1] Emerging technologies, including single-cell omics and spatial transcriptomics, further enhance resolution to study cellular heterogeneity and tissue organization.[4]
History and Etymology
Origin of the Term
The suffix "-omics" originated with the term "genomics," coined by geneticist Thomas H. Roderick in 1986 during a meeting in Bethesda, Maryland, to denote the comprehensive study of an organism's entire genome, evolving from the earlier concept of "genome" as a blend of "gene" and "chromosome." This neologism combined the suffix "-ome," implying a totality or collective mass derived from the Greek "-ωμα" (indicating a group or aggregate), with "-ics" to signify a systematic scientific discipline focused on large-scale analysis of biological entities. The construction emphasized wholeness in biological data, drawing parallels to fields like economics, where "-ics" denotes the study of complex systems. In the late 1980s and 1990s, the suffix gained traction with the introduction of "proteomics" in 1994 by Marc Wilkins, referring to the large-scale study of proteins, and "metabolomics" in 1998 by Stephen G. Oliver and colleagues, describing the comprehensive analysis of metabolites. These terms reflected the growing emphasis on high-throughput technologies for holistic biological profiling, inspired by the need to move beyond reductionist approaches to capture systemic interactions. The broader application of "omics" as a descriptor for integrative, data-intensive biology first appeared in a major publication in 1998, a Science commentary by John N. Weinstein that framed "-omics" as the aggregate study of biomolecules in high-throughput contexts. By the 2000s, the suffix had expanded to interdisciplinary areas, such as foodomics for the study of food-related molecular profiles.
Historical Development
The foundations of omics studies were laid in the 1970s and 1980s with the development of DNA sequencing technologies that first enabled analysis at the genome scale. In 1977, Frederick Sanger and colleagues introduced the chain-termination method, which allowed for the sequencing of the 5,386-base-pair genome of the bacteriophage phiX174, marking the first complete DNA sequence determination and setting the stage for large-scale genomic investigations.[5] This breakthrough, along with refinements in the 1980s, shifted biology from gene-by-gene analysis to comprehensive genomic profiling, influencing the emergence of the broader omics paradigm.[6]

The 1990s saw omics accelerate as a field, propelled by major initiatives and technological innovations. The Human Genome Project, launched in 1990 and completed in 2003, coordinated international efforts to sequence the entire human genome, serving as a pivotal catalyst for genomics by demonstrating the feasibility of whole-genome analysis and inspiring systematic studies of other biological layers.[7] Concurrently, the introduction of DNA microarrays in the mid-1990s enabled high-throughput measurement of gene expression, revolutionizing transcriptomics; for instance, complementary DNA microarrays allowed simultaneous quantification of thousands of transcripts, facilitating the study of cellular responses at scale.[6]

In the 2000s, the post-genome era expanded omics to proteins and metabolites through advances in analytical techniques.
Mass spectrometry progressed significantly for proteomics, with shotgun proteomics methods enabling the identification of thousands of proteins from complex samples via tandem MS coupled with liquid chromatography, as exemplified by large-scale proteome mapping efforts.[8] For metabolomics, nuclear magnetic resonance (NMR) spectroscopy and liquid chromatography-mass spectrometry (LC-MS) gained prominence, allowing untargeted profiling of small molecules in biological systems and supporting systems-level metabolic studies.[9] The Encyclopedia of DNA Elements (ENCODE) project, initiated in 2003, further advanced functional genomics by systematically annotating non-coding regions of the human genome, bridging genomics with regulatory omics.[10]

The 2010s brought transformative scalability to omics through next-generation sequencing (NGS) and genome editing tools. NGS platforms, such as those from Illumina, revolutionized epigenomics by enabling genome-wide mapping of modifications like DNA methylation and histone marks via techniques including whole-genome bisulfite sequencing, which provided high-resolution insights into epigenetic landscapes.[11] In microbiomics, NGS facilitated metagenomic surveys of microbial communities, as seen in expansions of the Human Microbiome Project, allowing characterization of microbial diversity without cultivation.[12] Additionally, the integration of CRISPR-Cas9, developed in 2012, into functional omics enabled high-throughput gene perturbation screens, linking genomic variations to phenotypic outcomes across omics layers.

As of 2025, the 2020s have witnessed the rise of single-cell and spatial omics, driven by integrated platforms that resolve heterogeneity at subcellular resolution.
Technologies from 10x Genomics, such as the Chromium system for single-cell RNA sequencing and Visium for spatial transcriptomics, have enabled multi-omics profiling of individual cells within tissues, revealing dynamic processes in development, disease, and immunity. These advances have scaled omics to capture spatiotemporal contexts, fostering integrative analyses across genomics, transcriptomics, and beyond.[13]
Conceptual Framework
Definition and Scope
Omics refers to the high-throughput, comprehensive analysis of biological molecules or systems on a global scale, encompassing fields such as genomics, transcriptomics, proteomics, and metabolomics, which study the entirety of specific molecular sets rather than individual components.[14] This approach contrasts with traditional reductionist methods in biology, which focus on isolated pathways or single molecules, by aiming to capture the complexity of biological systems through simultaneous measurement of thousands to millions of elements.[1]

The scope of omics extends from molecular levels, such as DNA, RNA, proteins, and metabolites, to cellular and organismal scales, enabling a holistic view of biological processes and their interactions.[15] It emphasizes the integration of diverse datasets to uncover emergent properties and systemic behaviors, particularly within systems biology, where omics data inform models of disease mechanisms, environmental responses, and physiological states.[16] For instance, multi-omics studies combine genomic and proteomic profiles to reveal how genetic variations influence phenotypic outcomes at the organism level.[17]

A core principle of omics is its hypothesis-generating nature, producing vast, exploratory datasets that identify patterns and associations for subsequent validation through targeted experiments, unlike hypothesis-testing paradigms in classical biology.[14] These studies generate big data in biology, with individual datasets frequently exceeding 1 terabyte due to the high dimensionality and volume of measurements, necessitating advanced computational tools for analysis and interpretation.[18]

In distinction from traditional biochemistry, which typically examines a limited number of molecules using low-throughput techniques, omics operates at scales involving thousands to millions of analytes, rendering it inherently dependent on bioinformatics and computational modeling to handle noise, variability, and integration challenges.[19] This shift prioritizes data-driven discovery over predefined mechanistic assumptions, transforming biological inquiry into a quantitative, systems-oriented discipline.[20]
Methodological Principles
Omics research generally follows a standardized workflow that encompasses sample preparation, high-throughput detection, data acquisition, and subsequent bioinformatics processing to generate comprehensive molecular profiles. Sample preparation is a critical initial step, involving the isolation and purification of biological materials such as DNA, RNA, proteins, or metabolites from tissues, cells, or biofluids, often requiring techniques like lysis, extraction, and quality control to minimize contamination and ensure compatibility with downstream analyses.[21] High-throughput detection then employs scalable platforms to capture vast amounts of molecular data, followed by data acquisition where raw signals are digitized and stored, and bioinformatics processing applies algorithms for alignment, annotation, and interpretation to extract meaningful biological insights.[22]

Key technologies underpinning these workflows include sequencing methods, mass spectrometry, chromatography, microarrays, and flow cytometry, each tailored to specific omics layers. For genomics and transcriptomics, Sanger sequencing provided the foundational chain-termination approach for accurate, low-throughput DNA analysis, while next-generation sequencing (NGS) platforms like Illumina's sequencing-by-synthesis enable massively parallel readout of millions of fragments for high-resolution genome and transcriptome profiling.
In proteomics and metabolomics, mass spectrometry techniques such as electrospray ionization (ESI-MS) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) ionize and separate analytes based on mass-to-charge ratios to identify and quantify biomolecules with high sensitivity.[20] Chromatography methods, including gas chromatography-mass spectrometry (GC-MS) for volatile compounds and liquid chromatography-mass spectrometry (LC-MS) for polar molecules, couple separation with detection to enhance resolution in complex mixtures.[20] Microarrays facilitate hybridization-based detection of nucleic acids or proteins across thousands of probes on a chip, offering a cost-effective alternative for expression profiling, whereas flow cytometry analyzes cell populations by measuring fluorescence and light scatter to profile surface markers and intracellular components in single cells.[20]

Omics data can be qualitative, indicating presence or absence of features like genetic variants or metabolites, or quantitative, measuring abundance levels such as gene expression or protein concentrations, with the latter often requiring careful handling of technical noise from variability in sample handling or instrument performance. Normalization is essential to account for biases in sequencing depth, gene length, or library size; for instance, reads per kilobase of transcript per million mapped reads (RPKM) adjusts RNA-seq counts to enable comparable expression estimates across genes and samples.

Statistical foundations in omics analysis emphasize multivariate approaches to manage high-dimensional data, including principal component analysis (PCA) for dimensionality reduction, which projects data onto principal axes capturing maximum variance to visualize patterns and remove outliers.
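The RPKM adjustment described here is a simple rescaling by library depth and transcript length; a minimal sketch (the function name and toy counts below are illustrative, not from any specific pipeline):

```python
def rpkm(counts, gene_lengths_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads.

    counts             -- dict of gene -> raw read count
    gene_lengths_bp    -- dict of gene -> transcript length in base pairs
    total_mapped_reads -- total mapped reads in the library
    """
    per_million = total_mapped_reads / 1e6  # depth scaling factor
    return {
        gene: (counts[gene] / per_million) / (gene_lengths_bp[gene] / 1e3)
        for gene in counts
    }

# Toy library of 10 million mapped reads
values = rpkm({"geneA": 5000, "geneB": 1000},
              {"geneA": 2000, "geneB": 500},
              10_000_000)
# geneA: 5000 reads / 10 (per-million) / 2 (kb) = 250.0
```

Dividing by depth first and length second (or vice versa) is equivalent; the point is that both biases are removed before comparing genes or samples.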
Given the large number of simultaneous tests, false discovery rate (FDR) correction, such as the Benjamini-Hochberg procedure, controls the expected proportion of false positives among significant results, ensuring robust identification of biologically relevant features.
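The Benjamini-Hochberg procedure is compact enough to sketch directly; this illustrative implementation rejects the k smallest p-values, where k is the largest rank satisfying p_(k) <= (k/m)*alpha:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Flag discoveries under Benjamini-Hochberg FDR control.

    Finds the largest 1-based rank k (over sorted p-values) with
    p_(k) <= (k / m) * alpha, then rejects the k smallest p-values.
    Returns a boolean list aligned with the input order.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k = rank  # keep the largest passing rank
    significant = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k:
            significant[idx] = True
    return significant

flags = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.9], alpha=0.05)
# Only the two smallest p-values survive: 0.039 > (3/5)*0.05 = 0.03
```

Note that every hypothesis up to rank k is rejected even if an intermediate sorted p-value fails its own threshold; that step-up behavior is what distinguishes BH from a simple per-test cutoff.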
Types of Omics Studies
Genomics
Genomics is the comprehensive study of an organism's entire genome, encompassing all of its DNA, including the sequencing, assembly, and annotation of genetic material to understand its structure, organization, and function. This field employs high-throughput DNA sequencing methods, bioinformatics tools, and recombinant DNA techniques to analyze the full genetic complement, generating vast datasets that reveal patterns of genetic variation and organization. Unlike traditional genetics, which focuses on individual genes, genomics examines the genome at a systems level to elucidate how genetic elements interact within the broader context of the organism.[23][24]

Key aspects of genomics include structural genomics, which involves mapping the physical locations of genes and other genomic features to construct detailed genome maps that guide sequencing efforts; functional genomics, which investigates gene functions and expression patterns, often through genome-wide association studies (GWAS) that link genetic variants to traits or diseases; and comparative genomics, which compares genomes across species to identify conserved regions indicative of evolutionary relationships and functional elements. These approaches enable the annotation of genomes by assigning biological roles to sequences and highlighting variations such as single nucleotide polymorphisms (SNPs). Genomics data can also integrate with epigenomics to provide insights into how environmental factors influence gene regulation without altering the DNA sequence itself.[25][26][27]

A major milestone in genomics was the completion of the Human Genome Project in April 2003, an international effort that produced the first reference sequence of the human genome, spanning approximately 3 billion base pairs and identifying an estimated 20,000–25,000 genes. This project, coordinated by the U.S.
National Institutes of Health and Department of Energy, cost about $3 billion and laid the foundation for subsequent genomic research by demonstrating the feasibility of large-scale sequencing. Since then, technological advances have dramatically reduced sequencing costs; for instance, the price per human genome dropped from around $100 million in 2001 to under $1,000 by 2025, enabling widespread clinical and research applications.[28][29]

Central techniques in genomics include whole-genome sequencing (WGS), which determines the complete DNA sequence of an organism to capture all genetic variations, and SNP arrays, which simultaneously genotype hundreds of thousands of SNPs to detect common variants associated with traits. These methods facilitate genome assembly, where short reads are computationally pieced together to reconstruct the original sequence, and annotation, which identifies genes, regulatory elements, and functional motifs. In applications to hereditary diseases, genomics has revolutionized diagnosis by identifying causative mutations in conditions like cystic fibrosis and Huntington's disease through WGS and GWAS, enabling personalized risk assessment and targeted therapies.[30][31][32]
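The assembly step described above, piecing overlapping reads into longer contigs, can be illustrated with a toy greedy merger; real assemblers use overlap graphs or de Bruijn graphs at vastly larger scale, and the reads below are invented for illustration:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)  # candidate match position
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for a in reads:
            for b in reads:
                if a is not b:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, a, b)
        olen, a, b = best
        if olen == 0:
            break  # no remaining overlaps; reads stay as separate contigs
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[olen:])  # merge, keeping the overlap once
    return reads

# Three invented reads that tile one short sequence
contigs = greedy_assemble(["ATGGCGT", "GCGTGCA", "TGCAATG"])
```

Greedy merging can produce misassemblies on repetitive genomes, which is precisely why production assemblers resolve repeats with graph structures and paired-read information.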
Epigenomics
Epigenomics is the genome-wide study of epigenetic modifications, which are heritable changes in gene expression that do not alter the underlying DNA sequence, such as DNA methylation and histone modifications like acetylation.[33] These modifications, including the addition of methyl groups to cytosine bases in DNA (typically at CpG sites) and chemical tags to histone proteins that package DNA, regulate chromatin structure and accessibility, thereby influencing transcriptional activity without changing the genetic code.[34] Epigenomic profiles provide a dynamic layer atop the static genome, revealing how environmental and developmental cues can modulate gene function across cell types and tissues.[35]

Key methods in epigenomics include chromatin immunoprecipitation followed by sequencing (ChIP-seq), which maps the locations of histone modifications and associated proteins by crosslinking, immunoprecipitating, and sequencing DNA fragments bound to specific antibodies.[36] For DNA methylation, bisulfite sequencing converts unmethylated cytosines to uracil while preserving methylated ones, enabling high-resolution genome-wide detection through subsequent sequencing.[37] The Encyclopedia of DNA Elements (ENCODE) project has significantly advanced the field since its landmark 2012 data releases by generating comprehensive epigenomic maps, including over 1,000 datasets on histone marks and methylation patterns across hundreds of human cell types, facilitating the annotation of regulatory elements.[38]

Epigenomic modifications play crucial roles in biological processes such as embryonic development, where dynamic changes in histone acetylation and DNA demethylation orchestrate cell differentiation and tissue specification.[39] They also mediate genomic imprinting, an epigenetic mechanism that silences one parental allele of certain genes, ensuring parent-of-origin-specific expression essential for growth and metabolism.[40] Additionally, epigenomics responds to environmental stimuli, such as
nutrient availability or toxins, by altering methylation patterns that confer adaptive phenotypic plasticity across generations.[41] Aberrant epigenomic changes, particularly hypermethylation of tumor suppressor gene promoters, are hallmarks of diseases like cancer, driving oncogenesis in various tissues through silenced gene expression.[42]

By 2025, advances in single-cell epigenomics have enabled the profiling of modifications in individual cells, uncovering heterogeneity within populations that bulk methods obscure, such as varied methylation states in tumor microenvironments.[13] These techniques, integrating ChIP-seq variants with single-nucleus sequencing, reveal cell-type-specific regulatory landscapes and support precision medicine applications. Epigenomic modifications often target specific genomic contexts, like enhancers identified through sequencing, to fine-tune gene regulation.[43]
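The logic of bisulfite-based methylation calling described earlier (an unmethylated C reads as T after conversion and PCR, while a methylated C stays C) can be sketched for a single perfectly aligned read; a toy illustration, not a production caller such as a bisulfite-aware aligner:

```python
def call_methylation(reference, bisulfite_read):
    """Toy per-cytosine methylation call from a bisulfite-converted read.

    Assumes the read aligns to the reference at offset 0 with no
    sequencing errors; real pipelines handle alignment, strand, and
    per-site read depth before calling.
    """
    calls = {}
    for pos, (ref_base, read_base) in enumerate(zip(reference, bisulfite_read)):
        if ref_base == "C":
            if read_base == "C":
                calls[pos] = "methylated"      # protected from conversion
            elif read_base == "T":
                calls[pos] = "unmethylated"    # converted C -> T
    return calls

# Invented 10 bp reference and matching converted read
calls = call_methylation("ACGTACGTCC", "ATGTACGTTC")
```

In practice a methylation level per site is estimated from many reads (fraction of C calls), rather than a binary call from one read as here.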
Transcriptomics
Transcriptomics is the study of the transcriptome, defined as the complete set of all RNA molecules, including messenger RNA (mRNA) and various non-coding RNAs such as long non-coding RNAs (lncRNAs), produced by an organism or in a specific cell or tissue at a given time.[44] This field focuses on quantifying and analyzing RNA transcripts to understand gene expression dynamics, including how genes are turned on or off in response to developmental cues, environmental stimuli, or disease states.[45] Unlike genomics, which examines static DNA sequences, transcriptomics captures the dynamic, functional output of the genome, revealing regulatory mechanisms and cellular responses.[46]

The primary technique in transcriptomics is RNA sequencing (RNA-seq), which has become the gold standard by 2025, surpassing older microarray methods due to its superior sensitivity for low-abundance transcripts, broader dynamic range, and ability to detect novel isoforms and non-coding RNAs without prior knowledge of sequences.[47][48] RNA-seq involves reverse transcription of RNA to complementary DNA (cDNA), followed by high-throughput sequencing, enabling comprehensive profiling of the entire transcriptome.[49] For higher resolution, single-cell RNA-seq (scRNA-seq) techniques isolate and sequence transcripts from individual cells, uncovering heterogeneity within tissues and identifying rare cell types or transient states that bulk methods overlook.[50][51]

Key insights from transcriptomics include the prevalence of alternative splicing, where a single gene produces multiple mRNA isoforms through differential exon inclusion, affecting up to 95% of multi-exon genes in humans and expanding proteome diversity.[52] LncRNAs, often identified through RNA-seq, play crucial regulatory roles, such as modulating splicing factors, acting as scaffolds for protein complexes, or influencing chromatin structure to control gene expression.[53] These findings have driven applications in biomarker discovery,
where transcriptomic profiles identify dysregulated genes and pathways in diseases like cancer, enabling the development of diagnostic signatures from patient samples.[54][55]

Data analysis in transcriptomics typically begins with alignment of sequencing reads to a reference genome, followed by quantification of transcript abundance using tools like featureCounts or Salmon.[56] Differential expression analysis compares transcript levels between conditions, employing statistical models such as DESeq2, which uses a negative binomial distribution to detect significant changes while accounting for variability and normalizing for library size.[57] For isoform-level insights, methods like DELongSeq or NanoCount enable precise quantification of alternative splicing events by estimating uncertainty in expression levels from sequencing data.[58][59]
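DESeq2's library-size normalization uses median-of-ratios size factors; a simplified sketch of that idea (not the package's actual implementation, which additionally models per-gene dispersion):

```python
import math

def size_factors(count_matrix):
    """Median-of-ratios library size factors, DESeq-style.

    count_matrix: list of samples, each a list of raw counts over the
    same ordered set of genes. Genes with a zero count in any sample
    are skipped, since their geometric mean would be zero.
    """
    n_genes = len(count_matrix[0])
    # Geometric mean of each gene across samples (the pseudo-reference)
    geo_means = []
    for g in range(n_genes):
        col = [sample[g] for sample in count_matrix]
        if all(c > 0 for c in col):
            geo_means.append(math.exp(sum(math.log(c) for c in col) / len(col)))
        else:
            geo_means.append(0.0)
    factors = []
    for sample in count_matrix:
        ratios = sorted(sample[g] / geo_means[g]
                        for g in range(n_genes) if geo_means[g] > 0)
        mid = len(ratios) // 2
        median = (ratios[mid] if len(ratios) % 2 == 1
                  else 0.5 * (ratios[mid - 1] + ratios[mid]))
        factors.append(median)
    return factors

# Toy example: sample 2 was sequenced at twice the depth of sample 1
factors = size_factors([[100, 200, 400], [200, 400, 800]])
```

Dividing each sample's counts by its factor makes expression comparable across libraries without letting a few highly expressed genes dominate, which is why the median of ratios is used rather than total counts.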
Proteomics
Proteomics is the large-scale study of the proteome, defined as the complete set of proteins expressed by a cell, tissue, or organism under specific conditions, encompassing their structures, abundances, modifications, and interactions.[60] Unlike genomics, which focuses on static DNA sequences, proteomics captures dynamic aspects such as post-translational modifications (PTMs) like phosphorylation, glycosylation, and ubiquitination, which vastly expand protein functional diversity and cannot be inferred from the genome alone.[61] These PTMs regulate protein activity, localization, and interactions, enabling the proteome to respond to environmental cues and developmental signals in ways not predictable from transcriptomic data.[62]

Central to proteomics are analytical methods that enable high-throughput protein identification and characterization. Two-dimensional gel electrophoresis (2D-GE) separates proteins based on isoelectric point and molecular weight, allowing visualization and quantification of thousands of proteins in complex mixtures, though it struggles with hydrophobic or low-abundance species.[63] Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has become the cornerstone for proteomic analysis, providing sensitive detection and sequencing of peptides for database matching and de novo identification.[64] Proteomics workflows are broadly classified into bottom-up and top-down approaches: bottom-up proteomics digests proteins into peptides for easier fragmentation and analysis, facilitating large-scale profiling but potentially losing information on PTM stoichiometry; top-down proteomics examines intact proteins to preserve full modification patterns and isoforms, though it faces challenges in ionization efficiency for larger molecules.[65]

In biomedical applications, proteomics excels at identifying drug targets by mapping protein expression changes in disease states, such as overexpressed kinases in cancer that can be inhibited
therapeutically.[66] A key strength lies in PTM mapping, exemplified by phosphoproteomics, which elucidates signaling cascades where phosphorylation events activate or deactivate pathways like MAPK in cell proliferation and apoptosis.[67] These insights have driven precision medicine, such as targeting phosphorylated EGFR variants in lung cancer therapies.

Despite advances, proteomics faces challenges from protein instability, including degradation during sample handling and the wide dynamic range of protein abundances that masks low-level species critical for signaling.[60] Membrane proteins, with their hydrophobic nature, are particularly prone to aggregation and poor solubility, complicating extraction and analysis.[63] By 2025, AI-driven tools like AlphaFold3 have transformed the field by predicting protein structures and PTM effects with near-experimental accuracy, accelerating functional annotation and interaction modeling without relying solely on empirical data.[68]
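The bottom-up digestion step mentioned above can be simulated in silico; a minimal sketch of trypsin's canonical cleavage rule (cut after K or R unless the next residue is P), with the input sequence invented for illustration:

```python
import re

def trypsin_digest(protein, missed_cleavages=0):
    """In-silico tryptic digestion for bottom-up proteomics.

    Trypsin cleaves C-terminal to lysine (K) or arginine (R), except
    when the following residue is proline (P). Adjacent fragments can
    optionally be re-joined to model missed cleavages.
    """
    # Split after K or R not followed by P, via lookbehind/lookahead
    fragments = [f for f in re.split(r"(?<=[KR])(?!P)", protein) if f]
    peptides = set(fragments)
    for n in range(1, missed_cleavages + 1):
        for i in range(len(fragments) - n):
            peptides.add("".join(fragments[i:i + n + 1]))
    return sorted(peptides)

# Invented sequence containing K, R, and an RR pair
peptides = trypsin_digest("MKWVTFISLLFLFSSAYSRGVFRRDAHK")
```

Search engines apply exactly this kind of rule to build candidate peptide lists before matching them against observed MS/MS spectra; allowing one or two missed cleavages accounts for incomplete digestion in real samples.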
Metabolomics
Metabolomics is the systematic study of the metabolome, which comprises the complete set of small-molecule metabolites (typically under 1,500 Da) present in a biological system, such as cells, tissues, or biofluids.[69] This field captures the end products of cellular processes, providing a direct reflection of the physiological or pathological state of an organism, and is considered the omics discipline closest to the phenotype due to its integration of genetic, environmental, and lifestyle influences.[70] Unlike genomics or proteomics, which focus on potential or intermediary layers, metabolomics reveals functional outcomes, such as responses to disease, diet, or drugs, making it essential for understanding dynamic biological responses.[71]

The primary analytical techniques in metabolomics include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS)-based methods, each offering complementary strengths in metabolite detection and quantification.[72] NMR provides non-destructive, reproducible structural information without extensive sample preparation, ideal for identifying known metabolites in complex mixtures, though it has lower sensitivity for low-abundance compounds.[9] MS, often coupled with chromatography like liquid chromatography (LC-MS) or gas chromatography (GC-MS), excels in high-throughput, sensitive detection of a broader metabolite range, enabling the profiling of hundreds to thousands of compounds per sample.[72] Approaches are classified as untargeted, which aim for comprehensive, hypothesis-generating profiling of all detectable metabolites without prior selection, or targeted, which focus on predefined sets of metabolites for precise quantification and validation.[73]

An extension of metabolomics, fluxomics incorporates isotopic labeling to measure the dynamic rates of metabolic fluxes through pathways, revealing how metabolites are transformed over time rather than static snapshots.[74] This approach uses stable isotopes like 13C
to trace flux distribution, providing insights into metabolic network regulation and enzyme kinetics that static metabolomics alone cannot capture.[75] Fluxomics thus bridges metabolomics with systems biology, enabling the modeling of pathway efficiencies in response to perturbations.[76]

Metabolomics plays a key role in elucidating metabolic pathways by mapping metabolite alterations to specific biochemical routes, such as identifying disruptions in glycolysis or the tricarboxylic acid cycle in diseased states.[77] Proteomic influences, including enzyme expression levels, can modulate these pathways, linking protein activity to observed metabolite changes.[78] In biomarker discovery, metabolomics has identified signatures for diseases like type 2 diabetes, where elevated branched-chain amino acids and reduced lysophosphatidylcholines serve as predictive indicators of insulin resistance years before clinical onset.[79] These applications extend to clinical diagnostics, with targeted panels validating metabolites like acylcarnitines for monitoring therapeutic responses.[80]

As of 2025, emerging trends in metabolomics emphasize integration with wearable technologies for real-time monitoring, such as non-invasive sweat or urine sensors that detect metabolite fluctuations during daily activities.[81] This fusion enables continuous profiling of markers like glucose or lactate, supporting personalized interventions in metabolic disorders and advancing precision health.[82]
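In the untargeted MS workflows described above, detected features are typically annotated by matching measured m/z values to reference masses within a parts-per-million tolerance; a minimal sketch (the function name is illustrative, and the reference masses are approximate [M+H]+ values given for illustration only):

```python
def annotate_features(observed_mz, database, tolerance_ppm=10.0):
    """Match untargeted LC-MS feature m/z values against a reference list.

    database: dict mapping metabolite name -> theoretical m/z.
    Each feature is annotated with every database entry whose relative
    mass error is within the ppm window; an empty list marks an
    unidentified feature.
    """
    annotations = {}
    for mz in observed_mz:
        hits = [name for name, ref_mz in database.items()
                if abs(mz - ref_mz) / ref_mz * 1e6 <= tolerance_ppm]
        annotations[mz] = hits
    return annotations

# Illustrative reference masses (approximate protonated monoisotopic m/z)
reference = {"glucose": 181.0707, "lactate": 91.0390}
hits = annotate_features([181.0712, 123.4567], reference, tolerance_ppm=10.0)
```

A 10 ppm window at m/z 181 corresponds to about 0.0018 Da, which is why high-resolution instruments are needed: wider windows quickly produce many spurious candidate annotations.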
Lipidomics
Lipidomics is a specialized branch of metabolomics focused on the comprehensive identification, quantification, and characterization of lipids within biological systems. Lipids encompass a diverse array of molecules, including fats, sterols, glycerophospholipids, sphingolipids, and others, which collectively form the lipidome, a vast collection estimated to include tens of thousands of distinct species.[83] This field emphasizes the structural and functional complexity of lipids, distinguishing it from broader metabolomic analyses by targeting these hydrophobic compounds essential to cellular architecture and dynamics.[84]

Key methodologies in lipidomics include shotgun lipidomics, which employs direct infusion electrospray ionization mass spectrometry (ESI-MS) to analyze lipid extracts without prior chromatographic separation, enabling rapid, high-throughput profiling of major lipid classes.[85] For enhanced resolution of isomeric and low-abundance species, liquid chromatography-mass spectrometry (LC-MS) is widely used, often with reversed-phase or hydrophilic interaction liquid chromatography (HILIC) to separate lipids based on hydrophobicity or polar head groups, respectively, coupled with high-resolution MS for precise identification.[85] These approaches leverage databases like LIPID MAPS for annotation, ensuring robust quantification with internal standards per lipid class.[83]

Lipids play critical roles in biological processes, forming the structural backbone of cellular membranes through phospholipids and sterols that maintain fluidity and compartmentalization.[86] They also serve as signaling molecules, exemplified by eicosanoids derived from polyunsaturated fatty acids like arachidonic acid, which regulate inflammation, vascular tone, and immune responses.[87] Dysregulation of lipid profiles contributes to diseases such as atherosclerosis, where oxidized lipids and eicosanoids promote plaque formation and endothelial dysfunction.[88]

Recent advances in
lipidomics include spatial lipidomics enabled by matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI), which maps lipid distributions in tissues at resolutions down to 0.6 μm, revealing localized changes in brain pathologies like Alzheimer's disease.[89] By 2025, integrations with ion mobility spectrometry and multimodal imaging have expanded applications in neurodegeneration and oncology, facilitating biomarker discovery without tissue destruction.[89]
Glycomics
Glycomics is the comprehensive study of the glycome, defined as the entire repertoire of carbohydrate structures, including free glycans, glycoproteins, and glycolipids, produced by a cell, tissue, or organism under specific conditions.[90] This field emphasizes the systems-level analysis of glycan diversity to elucidate their roles in biological processes. Unlike nucleic acids or proteins, the glycome exhibits high heterogeneity due to extensive branching, variable linkages, and modifications such as sialylation and fucosylation, resulting in an estimated 10^6 to 10^12 possible glycan structures across species.[91] This structural complexity arises from non-templated biosynthesis, making the glycome dynamic and responsive to environmental factors like nutrient availability.[90]

Key techniques in glycomics include glycan microarrays, which enable high-throughput screening of glycan-binding proteins and comparative profiling of samples using fluorescent labeling.[92] Mass spectrometry methods, such as matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS, provide detailed structural elucidation by generating glycan profiles from permethylated or native samples, often coupled with liquid chromatography for enhanced resolution.[93] Lectin-based profiling, utilizing immobilized lectins on microarrays or affinity columns, offers a functional readout by detecting specific glycan motifs through carbohydrate-lectin interactions, complementing structural analyses.[94] These approaches are frequently integrated with enzymatic release of glycans from glycoproteins to study site-specific modifications.[95]

Glycans mediate essential biological functions, particularly in cell recognition and immunity.
In cell recognition, glycans facilitate adhesion and signaling; for instance, sialylated and fucosylated structures like sialyl Lewis X serve as ligands for selectins, enabling leukocyte rolling on endothelial cells during inflammation and tissue homing.[96] In immunity, sialic acid-containing glycans interact with Siglec receptors on immune cells to maintain homeostasis by recognizing self-associated molecular patterns and dampening innate responses.[96] Aberrant sialylation plays a critical role in cancer, where hypersialylation promotes metastasis by enhancing tumor cell adhesion to endothelium via E-selectin and shielding cells from immune surveillance, as observed in elevated sialyl Lewis antigens on metastatic breast and colon cancers.[97]

Despite these insights, glycomics faces significant challenges stemming from glycan structural complexity, including isomeric diversity and linkage variability that confound unambiguous identification without advanced separation techniques.[98] The lack of a genetic template further complicates reproducibility in synthesis and analysis. By 2025, progress in automated glycan synthesis has addressed some hurdles, with solid-phase and enzymatic platforms enabling scalable production of complex glycans, such as polyarabinosides up to 1080-mers and therapeutic candidates for vaccines, facilitating better standards for glycomic studies.[99]
Microbiomics
Microbiomics is the comprehensive study of microbiomes, which are the assemblages of microorganisms—including bacteria, archaea, fungi, and viruses—and their collective genetic material in specific environments such as the human gut, skin, soil, or aquatic systems.[100] This field emphasizes the functional roles of these microbial communities in maintaining ecosystem balance and host physiology, often revealing how environmental factors like diet and geography shape microbial diversity.[100] A key aspect of microbiomics involves metagenomics, which enables the analysis of genetic material directly from environmental samples, bypassing the need to culture microbes in the lab and thus accessing the vast majority of uncultured species that dominate microbial diversity.[101]

Central methods in microbiomics include 16S rRNA gene sequencing, which targets a conserved region of bacterial ribosomal RNA to identify and classify microbial taxa based on phylogenetic markers, providing a cost-effective snapshot of community composition.[102] In contrast, shotgun metagenomics sequences all DNA in a sample indiscriminately, offering deeper insights into both taxonomic profiles and functional genes across bacteria, viruses, and other microbes, though it is more resource-intensive than 16S approaches.[103] For functional profiling, tools like PICRUSt predict the metabolic capabilities of microbial communities from 16S data by inferring gene family abundances based on known genomic references, bridging taxonomic identification with potential ecological roles without full metagenomic sequencing.[104]

The impacts of microbiomics research extend to human health, where dysbiosis—imbalances in microbial composition—has been linked to conditions like inflammatory bowel disease (IBD), with reduced diversity and overgrowth of pathogenic taxa such as Proteobacteria contributing to chronic inflammation.[105] In ecology, microbiomes drive nutrient cycling and resilience in environments like
soil, where microbial shifts influence plant growth and carbon sequestration, highlighting their role in broader ecosystem dynamics.[106] The Human Microbiome Project (2007–2013), a landmark initiative by the National Institutes of Health, characterized microbial communities across 300 healthy individuals using metagenomic and 16S methods, establishing reference datasets that advanced understanding of microbiome variability and its ties to disease states.[107]

Host genetics can subtly interact with microbiomes, accounting for less than 2% of gut microbial variation but influencing specific taxa that affect metabolism.[108] As of 2025, emerging subsets like viromics—focusing on viral components of microbiomes—have advanced through metagenomic tools to reveal viruses as regulators of bacterial populations in ecosystems, with applications in phage therapy for infections.[109] Similarly, mycobiomics, the study of fungal microbiomes, has gained traction, showing how fungi comprise about 0.1% of gut communities yet modulate immune responses and disease progression in conditions like IBD.[110]
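The inference step behind tools like PICRUSt amounts to a weighted sum: each taxon's 16S-derived abundance is multiplied by the gene-family copy numbers recorded for its reference genome, then summed across the community. A minimal sketch with made-up taxa and numbers (the actual PICRUSt pipeline additionally corrects for 16S copy number and infers gene content for unsequenced taxa):

```python
# Hypothetical illustration of 16S-based functional prediction:
# predicted gene-family abundance = sum over taxa of
# (taxon abundance from 16S profiling) x (gene copies in that taxon's reference genome).

def predict_gene_families(taxon_abundance, gene_copies):
    """taxon_abundance: {taxon: 16S count}.
    gene_copies: {taxon: {gene_family: copies per reference genome}}.
    Returns predicted {gene_family: community abundance}."""
    predicted = {}
    for taxon, abundance in taxon_abundance.items():
        for gene, copies in gene_copies.get(taxon, {}).items():
            predicted[gene] = predicted.get(gene, 0) + abundance * copies
    return predicted

# Toy community with invented reference gene content.
abundance = {"Bacteroides": 120, "Escherichia": 30}
copies = {
    "Bacteroides": {"glycoside_hydrolase": 4, "nitrate_reductase": 0},
    "Escherichia": {"glycoside_hydrolase": 1, "nitrate_reductase": 2},
}
print(predict_gene_families(abundance, copies))
# {'glycoside_hydrolase': 510, 'nitrate_reductase': 60}
```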
Other Specialized Omics
Beyond the core molecular omics disciplines, several specialized fields have emerged to address niche aspects of biological systems, often integrating high-throughput technologies to profile organismal, elemental, or secreted components. These areas extend omics principles to phenotypic, environmental, and extracellular phenomena, providing insights into complex interactions not captured by genomics or proteomics alone.

Phenomics focuses on the systematic study of organismal phenotypes, particularly through high-throughput imaging and sensor technologies to quantify traits at multiple scales, from cellular structures to whole-plant architectures. This approach enables the capture of dynamic traits like growth patterns and stress responses in plants and animals, often using automated platforms for non-destructive analysis. For instance, high-content screening methods, including RGB and hyperspectral imaging, allow for the phenotyping of thousands of samples to link genotypes to visible outcomes. In ecology, phenomics supports biodiversity assessments by analyzing morphological variations across populations, aiding in the identification of adaptive traits under environmental pressures.[111]

Ionomics examines the elemental composition of organisms, profiling the concentrations of minerals and trace elements to understand nutrient homeostasis and environmental adaptations. Key methods include inductively coupled plasma mass spectrometry (ICP-MS), which provides high-sensitivity detection of up to 20 elements simultaneously in tissues like leaves or roots.
This technique has been instrumental in plant breeding programs, where ionomic profiling identifies varieties with enhanced nutrient use efficiency, such as improved zinc or iron uptake, to combat deficiencies in agriculture.[112]

Secretomics investigates the secretome—the full repertoire of proteins secreted by cells, tissues, or organisms—using proteomics workflows to uncover intercellular signaling and pathological mechanisms. Techniques such as mass spectrometry-based analysis of conditioned media or biofluids enable the identification and quantification of low-abundance secreted factors, distinguishing them from intracellular contaminants. The field is closely tied to proteomics, focusing on the extracellular portion of the proteome and revealing roles in immune modulation and disease progression. Applications span cancer research, where secretomic profiles highlight tumor-derived factors influencing metastasis.[113][114]

Toxonomics profiles toxin compositions in organisms, particularly venoms and bioactive compounds, to classify and elucidate their molecular diversity and ecological roles. High-throughput sequencing of cDNA libraries combined with proteomics identifies novel peptides and proteins in toxinomes, as seen in databases cataloging thousands of entries from animal sources. In scorpion venom studies, toxonomic analysis reveals autonomic effects and potential therapeutic peptides, supporting drug discovery for pain management.[115][116]

By 2025, emerging fields like nanomics explore nanoscale interactions in biological systems, integrating nanotechnology with omics to achieve ultra-high resolution profiling of molecular assemblies. This involves nano-sensors and single-molecule imaging to detect dynamic processes, such as protein-nanoparticle bindings, with applications in precision agriculture for targeted delivery of agrochemicals.
Similarly, bibliomics applies omics-inspired mining to non-biological domains, using text analytics and machine learning to extract patterns from scientific literature, facilitating knowledge synthesis across disciplines without direct biological measurement.[117][118][119]
Applications and Interdisciplinary Uses
Biomedical and Clinical Applications
Omics technologies have revolutionized biomedical and clinical applications by enabling the molecular profiling of diseases at multiple levels, facilitating early diagnosis, targeted treatments, and personalized medicine strategies. In oncology, cancer genomics identifies actionable mutations that guide targeted therapies, such as epidermal growth factor receptor (EGFR) inhibitors for non-small cell lung cancer (NSCLC) patients harboring EGFR exon 19 deletions or L858R mutations, which improve progression-free survival compared to standard chemotherapy.[120] Similarly, pharmacogenomics analyzes genetic variants influencing drug metabolism and efficacy, exemplified by CYP2D6 and CYP2C19 polymorphisms that predict responses to antidepressants and clopidogrel, respectively, reducing adverse events through dose adjustments.[121] These approaches underscore omics' role in shifting from empirical to genotype-informed care, enhancing therapeutic precision across diverse patient populations.[122]

Key case studies highlight the integrative power of multi-omics in clinical settings. The Cancer Genome Atlas (TCGA), launched in 2006, has characterized over 11,000 primary tumor samples across 33 cancer types using genomics, transcriptomics, proteomics, and epigenomics, revealing molecular subtypes like the BRCA1/2-deficient profile in ovarian cancer that informs PARP inhibitor use.[123] This project has accelerated discoveries, such as immune-hot tumor classifications, influencing immunotherapy decisions.
Complementing tissue-based analyses, liquid biopsies detect circulating tumor DNA (ctDNA) in plasma, enabling non-invasive monitoring of tumor evolution and minimal residual disease in cancers like colorectal and breast, with ctDNA levels correlating to treatment response and relapse risk.[124] For instance, ctDNA assays have achieved 80-90% sensitivity for detecting EGFR T790M resistance mutations in NSCLC, guiding osimertinib therapy switches.[125]

In precision medicine, omics-derived tools stratify patient risks and tailor interventions. Polygenic risk scores (PRS) aggregate thousands of genomic variants to estimate disease susceptibility, such as PRS for coronary artery disease that reclassify 10-20% of individuals into higher-risk categories for preventive statins.[126] In neurology, proteomics identifies plasma biomarkers for Alzheimer's disease (AD), including phosphorylated tau at threonine 217 (p-tau217) and neurofilament light chain, which distinguish AD from other dementias with over 90% accuracy in early stages.[127] These markers, validated in large cohorts, support timely interventions like anti-amyloid therapies.[128]

By 2025, advances in artificial intelligence (AI) integrated with multi-omics have enhanced drug repurposing efforts. AI models analyzing TCGA-derived multi-omics data have identified repurposed candidates, such as metformin for epigenetic modulation in gynecological cancers, by predicting off-target effects and pathway interactions with 75% accuracy in validation sets.[129] Similarly, deep learning frameworks combining genomics and proteomics have accelerated repurposing for neurodegenerative diseases, uncovering novel applications for existing drugs like cholinesterase inhibitors in AD subtypes.[130] These AI-driven approaches reduce development timelines from years to months, broadening therapeutic options in resource-limited clinical environments.
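At its core, a polygenic risk score is a weighted sum, PRS = Σ_i β_i · d_i, where β_i is a variant's effect size (typically a log odds ratio from a genome-wide association study) and d_i is the individual's risk-allele dosage (0, 1, or 2). A minimal sketch with hypothetical variant IDs and effect sizes (real PRS pipelines also handle linkage disequilibrium and score standardization):

```python
# Minimal PRS sketch (illustrative; not any published score).
# PRS = sum_i beta_i * dosage_i over the variants shared by both inputs.

def polygenic_risk_score(dosages, effect_sizes):
    """dosages: {variant ID: risk-allele count 0/1/2}.
    effect_sizes: {variant ID: GWAS effect size (log odds ratio)}."""
    return sum(effect_sizes[v] * d for v, d in dosages.items() if v in effect_sizes)

# Hypothetical variants and effect sizes, for illustration only.
betas = {"rs_a": 0.12, "rs_b": -0.05, "rs_c": 0.30}
person = {"rs_a": 2, "rs_b": 1, "rs_c": 0}
print(round(polygenic_risk_score(person, betas), 3))  # 0.19
```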
Nutrition, Pharmacology, and Toxicology
Omics technologies have revolutionized the study of nutrition, pharmacology, and toxicology by providing molecular-level insights into how dietary components, drugs, and toxins interact with biological systems. In nutrition, foodomics integrates metabolomics and other omics approaches to assess nutrient bioavailability and ensure food safety through traceability. For instance, liquid chromatography-mass spectrometry (LC-MS) in metabolomics detects adulterants in food chains, enabling precise identification of contaminants like melamine in dairy products.[131] This approach enhances understanding of how bioactive compounds from food are absorbed and utilized, supporting sustainable nutrition strategies that address bioavailability challenges in plant-based diets.[132]

In pharmacology, pharmacometabolomics analyzes endogenous metabolites to predict individual responses to drugs, including adverse reactions. By profiling pre-dose metabolic patterns in biofluids like plasma or urine, this method identifies biomarkers that forecast toxicity risks, such as idiosyncratic drug-induced liver injury.[133] Complementing this, nutrigenomics examines gene-diet interactions to tailor personalized diets, revealing how genetic variants influence responses to nutrients like folate or omega-3 fatty acids, thereby optimizing dietary interventions for metabolic health.[134] These applications extend to clinical trials, where omics data refine participant stratification for drug efficacy.[135]

Toxicology benefits from toxicoepigenomics, which investigates epigenetic modifications induced by environmental exposures, such as DNA methylation changes from bisphenol A (BPA).
BPA exposure alters histone modifications and gene expression in reproductive and developmental pathways, linking low-dose environmental contaminants to long-term health risks in humans and model organisms like zebrafish.[136] Dose-response modeling integrates multi-omics data to quantify these effects, using tools like DoseRider for benchmark dose estimation in transcriptomic and metabolomic profiles, improving risk assessment for pollutants.[137] Recent 2025 studies highlight microbiome modulation by probiotics, where multi-omics analyses show how strains like Lactobacillus species alter gut metabolomes to mitigate toxin-induced dysbiosis, as seen in precision interventions for environmental exposure recovery.[138]
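Benchmark-dose reasoning of the kind implemented in tools like DoseRider can be illustrated with a Hill dose-response curve; the model, parameter values, and closed-form inversion below are illustrative assumptions, not that tool's implementation:

```python
# Sketch of benchmark-dose estimation under a Hill model (illustrative only):
# response(d) = e0 + (emax - e0) * d**n / (ec50**n + d**n).
# The benchmark dose (BMD) for a fractional effect f solves
# response(BMD) = e0 + f * (emax - e0), which inverts to
# BMD = ec50 * (f / (1 - f)) ** (1 / n).

def hill_response(d, e0, emax, ec50, n):
    """Hill dose-response curve with baseline e0 and plateau emax."""
    return e0 + (emax - e0) * d**n / (ec50**n + d**n)

def benchmark_dose(f, ec50, n):
    """Dose producing fraction f (0 < f < 1) of the maximal effect."""
    return ec50 * (f / (1.0 - f)) ** (1.0 / n)

# Assumed parameters: half-maximal dose 5.0, Hill coefficient 2.0.
bmd10 = benchmark_dose(0.10, ec50=5.0, n=2.0)  # dose for a 10% effect
# Consistency check: plugging the BMD back in recovers 10% of the span.
frac = hill_response(bmd10, 0.0, 1.0, 5.0, 2.0)
print(round(bmd10, 3), round(frac, 3))
```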
Environmental and Agricultural Applications
Omics technologies have significantly advanced the study of environmental impacts, particularly through ecotoxicogenomics, which integrates genomics, transcriptomics, proteomics, and metabolomics to assess pollutant effects on ecosystems. This approach enables the identification of molecular responses in organisms exposed to contaminants, revealing mechanisms of toxicity and adaptation at the genetic and biochemical levels.[139] For instance, metagenomics has been pivotal in analyzing microbial community shifts following the Deepwater Horizon oil spill in 2010, where 16S rRNA sequencing of sediment samples from 64 sites demonstrated rapid proliferation of hydrocarbon-degrading bacteria, such as Gammaproteobacteria, highlighting their role in natural bioremediation processes.[140]

In biodiversity monitoring, environmental DNA (eDNA) analysis, a form of metabarcoding within the omics framework, offers a non-invasive method to detect species presence and community composition from water or soil samples. By sequencing DNA fragments shed by organisms into the environment, eDNA provides higher resolution for assessing temporal and spatial biodiversity dynamics compared to traditional surveys, as evidenced in studies of aquatic ecosystems where it captured finer-scale variations in species distributions.[141]

In agriculture, QTLomics combines quantitative trait locus (QTL) mapping with multi-omics data to identify genetic variants associated with agronomic traits, accelerating marker-assisted breeding programs.
In soybean, for example, QTLomics has mapped numerous loci for yield, seed quality, and disease resistance, enabling the development of varieties with enhanced performance through integration of genomic, transcriptomic, and phenotypic datasets.[142] Similarly, metabolomics profiling uncovers biochemical pathways underlying plant stress responses, such as drought tolerance, by quantifying changes in metabolites like osmoprotectants and antioxidants; in crops like maize and wheat, these analyses have identified key compounds, such as proline and sugars, that accumulate to maintain cellular homeostasis under water deficit.[143]

Large-scale initiatives like the Earth Microbiome Project, launched in 2010, exemplify omics applications in environmental science by generating a global catalog of microbial diversity through standardized metagenomic sequencing of thousands of samples from diverse habitats. This project has generated extensive multi-omics datasets from over 27,000 samples (as reported in 2017), with the goal of analyzing 200,000 samples, revealing patterns in microbial taxonomy and function across ecosystems and supporting broader ecological research.[144] In agricultural contexts, omics profiling enhances the safety assessment of genetically modified organisms (GMOs) by detecting unintended molecular changes; European field trials of GM soybeans, for instance, used proteomics and metabolomics to compare GMO lines with non-GM counterparts, identifying minimal differential expressions that align with regulatory tolerance intervals and confirming substantial equivalence.[145]

Emerging trends as of 2025 emphasize pan-genomics for breeding climate-resilient crops, where comprehensive genome assemblies from diverse accessions capture structural variations and novel alleles absent in single reference genomes.
This approach has facilitated the identification of drought- and heat-tolerance genes in staples like wheat and rice, enabling targeted improvements in yield stability under changing climates through integrated multi-omics strategies.[146]
Challenges and Future Directions
Technical and Computational Challenges
Omics technologies generate vast amounts of data, often reaching petabyte scales, which poses significant storage and management challenges for researchers. For instance, the Cancer Genome Atlas (TCGA) project alone produced over 2.5 petabytes of multi-omics data, encompassing genomic, epigenomic, transcriptomic, and proteomic profiles from thousands of cancer samples.[147] Similarly, as of 2015, major genomics institutions collectively utilized more than 100 petabytes of storage for sequencing data, with estimates indicating growth to around 40 exabytes required by 2025, highlighting the exponential growth driven by high-throughput sequencing.[148][149]

Standardization efforts, such as the Minimum Information About a Microarray Experiment (MIAME) guidelines, aim to address inconsistencies in data reporting and experimental design across omics studies. Established in 2001, MIAME specifies essential details like sample characteristics, experimental design, and data processing to enable unambiguous interpretation and replication of microarray experiments, with extensions to next-generation sequencing via MINSEQE.[150][151] Despite these guidelines, adherence remains uneven, complicating data sharing and meta-analysis in diverse omics fields like proteomics and metabolomics.

A reproducibility crisis exacerbates these issues, with many omics findings failing to replicate due to variability in protocols, software versions, and statistical practices.
In biomedical research, including genomics, up to 50% of preclinical studies may not reproduce, often stemming from selective reporting and insufficient data transparency.[152] This is particularly acute in high-dimensional omics data, where p-hacking and overfitting inflate false positives, undermining trust in discoveries like biomarker identification.[153]

Computationally, algorithm scalability is a bottleneck for processing large omics datasets, as alignment tools must handle billions of short reads against reference genomes efficiently. Bowtie, an ultrafast aligner based on the Burrows-Wheeler transform, processes over 25 million 35-bp reads per hour against the human genome while using minimal memory (about 2.2 GB), but scaling to modern datasets with longer reads and higher error rates requires optimizations like multi-threading.[154] Recent enhancements enable Bowtie 2 to utilize hundreds of threads on general-purpose processors, achieving near-linear speedup for tasks like RNA-seq alignment.[155]

Machine learning approaches further aid pattern detection in omics, with ensemble methods like DeepProg integrating deep learning and traditional ML to predict survival subtypes from multi-omics data, outperforming single-modality models by identifying subtle regulatory patterns.[156]

Technical challenges include sample biases and throughput limitations inherent to omics platforms. Sampling biases, such as GC-content effects in sequencing, distort community abundance estimates in metagenomics, leading to skewed functional enrichments if unaccounted for.[157] High-throughput methods like single-cell RNA-seq offer scale but sacrifice accuracy due to dropout events and limited capture efficiency, processing thousands of cells yet introducing noise that propagates to downstream analyses.[158]

Solutions like cloud computing mitigate these hurdles by providing scalable infrastructure for omics workflows.
Amazon Web Services (AWS) HealthOmics, a HIPAA-eligible service launched in 2022, enables storage, querying, and analysis of petabyte-scale genomic data without local hardware, supporting variant calling and cohort analytics for clinical applications, though as of November 2025, variant and annotation stores are no longer available to new customers.[159][160]

Looking to 2025, quantum computing emerges as a potential solution for simulating complex omics interactions, such as protein folding or multi-omics integrations, where classical methods falter due to exponential complexity. Early platforms like those from Quantinuum demonstrate quantum advantages in encoding entire genomes for variant detection, promising faster insights into non-linear biological patterns.[161]
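The Burrows-Wheeler transform that underlies Bowtie's index can be demonstrated with a toy rotation-sort implementation; real aligners build suffix arrays and compressed FM-indexes rather than this quadratic version:

```python
# Toy Burrows-Wheeler transform (BWT), for illustration only.
# The BWT is the last column of the lexicographically sorted matrix of
# all rotations of the text terminated by a unique sentinel character.

def bwt(text, sentinel="$"):
    """Return the BWT of text (sentinel must not occur in text)."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

def inverse_bwt(transformed):
    """Recover the original text by iterated sort-and-prepend."""
    table = [""] * len(transformed)
    for _ in range(len(transformed)):
        table = sorted(c + row for c, row in zip(transformed, table))
    original = next(row for row in table if row.endswith("$"))
    return original[:-1]

seq = "GATTACA"
encoded = bwt(seq)
assert inverse_bwt(encoded) == seq  # the transform is reversible
print(encoded)  # ACTGA$TA
```

The transform clusters identical characters together, which is what makes the index compressible and exact-match queries fast in tools like Bowtie.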
Integration in Multi-Omics Approaches
Multi-omics approaches involve the integrated analysis of multiple biological layers, such as genomics, transcriptomics, proteomics, and metabolomics, to provide a comprehensive view of cellular and organismal function beyond what single-omics studies can achieve.[162] This layered analysis reveals interactions and regulatory relationships across molecular scales, enabling the reconstruction of complex biological networks that underpin health and disease.[2] For instance, combining genomic variants with proteomic expression profiles and metabolomic outputs allows researchers to trace how genetic perturbations propagate through downstream pathways.[163]

Key methods for multi-omics integration include correlation-based network analyses and machine learning frameworks. Weighted Gene Co-expression Network Analysis (WGCNA) constructs modules of co-expressed genes or features across omics datasets, identifying correlated patterns that highlight shared regulatory mechanisms; it has been extended to multi-omics contexts for discovering disease-associated hubs in high-dimensional data.[164] Similarly, Multi-Omics Factor Analysis (MOFA) employs unsupervised factor analysis to decompose variation into latent factors that capture shared signals across layers, facilitating the identification of driving biological processes without prior assumptions.[165] These techniques prioritize dimensionality reduction and cross-layer alignment to handle the heterogeneity and scale of multi-omics data.

The primary benefits of multi-omics integration lie in enhanced pathway reconstruction and elucidation of disease mechanisms, offering insights unattainable from isolated analyses.
By mapping interactions between omics layers, researchers can infer causal pathways, such as how metabolic shifts influence immune responses in pathology.[166] In COVID-19 studies from 2020 to 2025, multi-omics has revealed dynamic immune alterations and viral-host interactions, identifying biomarkers for severity and long-term effects through integrated genomic, proteomic, and metabolomic profiling.[167][168]

Practical tools like the Galaxy platform support multi-omics workflows by providing modular pipelines for data harmonization, analysis, and visualization, including plugins for proteogenomic integration.[169] Complementary resources, such as HiOmics, offer cloud-based environments for scalable processing of diverse omics inputs.[170] Recent 2025 advances in spatial multi-omics, exemplified by MERFISH+, enable high-throughput, multiplexed imaging of RNA and protein distributions in tissues, resolving subcellular dynamics and enhancing contextual pathway insights.[171]
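The co-expression idea behind WGCNA-style module detection can be sketched in a few lines: correlate feature profiles across samples, keep edges above a hard threshold, and read modules off as connected components. This is a deliberate simplification on assumed toy data; real WGCNA uses soft thresholding and hierarchical clustering of a topological-overlap matrix:

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def modules(profiles, threshold=0.8):
    """profiles: {feature: [values per sample]}. Returns the connected
    components of the graph joining features with |r| >= threshold."""
    parent = {f: f for f in profiles}
    def find(f):
        while parent[f] != f:
            f = parent[f]
        return f
    for a, b in combinations(profiles, 2):
        if abs(pearson(profiles[a], profiles[b])) >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for f in profiles:
        groups.setdefault(find(f), set()).add(f)
    return list(groups.values())

# Toy data: gene1/gene2 co-vary, gene3 is anti-correlated with them,
# gene4 is uncorrelated noise, so we expect modules {gene1,2,3} and {gene4}.
data = {
    "gene1": [1, 2, 3, 4, 5],
    "gene2": [2, 4, 6, 8, 10],
    "gene3": [5, 4, 3, 2, 1],
    "gene4": [3, 1, 4, 1, 5],
}
print(modules(data))
```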
Ethical and Societal Implications
Omics research, encompassing fields like genomics, proteomics, and metabolomics, raises significant ethical concerns related to data privacy and informed consent. The vast volumes of sensitive biological data generated in omics studies pose risks of re-identification, even when pseudonymized, prompting stringent compliance with regulations such as the European Union's General Data Protection Regulation (GDPR).[172] For instance, GDPR classifies pseudonymized genomic data as personal information, requiring robust safeguards against breaches that could expose individuals to harm.[172] In biobanks storing omics samples for future research, obtaining broad informed consent remains challenging, as participants must authorize unspecified uses while ensuring ongoing autonomy and protection against misuse.[173] Ethical frameworks emphasize dynamic consent models, allowing participants to update preferences as research evolves, to address these dilemmas.[174]

Genetic discrimination represents another critical ethical risk in omics, where revelations from sequencing could lead to adverse outcomes in employment, insurance, or social contexts. In the United States, the Genetic Information Nondiscrimination Act (GINA) of 2008 prohibits such discrimination in health insurance and employment based on genetic information, yet public awareness of its protections remains low, perpetuating fears.[175] Internationally, similar vulnerabilities persist, with reports of insurers requesting family history data despite legal prohibitions, highlighting the need for expanded global safeguards.[176] These risks underscore the imperative for omics researchers to integrate anti-discrimination measures, such as anonymization protocols, into study designs.

On the societal front, access disparities in omics research exacerbate inequities, particularly in low- and middle-income countries (LMICs) where infrastructure and funding limitations hinder participation and benefit-sharing.
Genomic technologies, while advancing rapidly in high-income settings, often overlook LMIC populations, resulting in datasets biased toward certain ancestries and limiting the applicability of findings to diverse groups.[177] Efforts to bridge this gap include initiatives promoting local capacity-building, yet persistent underrepresentation in omics databases perpetuates health outcome disparities.[178] Additionally, the promise of personalized medicine through omics has been tempered by the gap between hype and reality; while multi-omics integration holds potential for tailored therapies, challenges like data interoperability and validation have slowed clinical translation.[179] Critics argue that overemphasis on genomic personalization diverts resources from environmental and social determinants of health, risking disillusionment among patients and policymakers.[180]

Cultural implications of omics research further complicate its societal footprint, especially regarding indigenous data sovereignty. Indigenous communities have historically contested projects like the Human Genome Diversity Project (HGDP), a 1990s initiative linked to the Human Genome Project (HGP), for sampling without adequate consent or benefit-sharing, viewing it as exploitative bioprospecting.[181] Today, principles of indigenous data sovereignty assert communities' rights to govern their genomic data, challenging the "open science" ethos that prioritizes unrestricted sharing.[182] For example, tribes like the Nuu-chah-nulth in Canada have successfully litigated against unauthorized use of their samples, setting precedents for co-governance in omics.[183]

Public perception of omics, shaped heavily by media portrayals, often amplifies both optimism and apprehension, influencing research participation and policy.
Surveys indicate that while citizens recognize omics' potential for disease prevention, concerns over privacy and unintended consequences dominate, with media coverage frequently sensationalizing breakthroughs like CRISPR without contextualizing risks.[184] This disparity between expert optimism and public skepticism calls for transparent communication to foster trust.[185]

In 2024, the World Health Organization (WHO) advanced equity in omics-related fields by releasing principles for ethical human genomic data collection, access, use, and sharing, emphasizing inclusive governance to mitigate disparities in global research. These guidelines promote fair benefit-sharing and community involvement, particularly for underrepresented populations, building on prior frameworks to ensure omics advances serve diverse societies equitably.[186]
Unrelated Terms in -omics
Some terms ending in "-omics" are etymologically and conceptually unrelated to the biological "-omics" disciplines, which derive from the suffix "-ome" (indicating totality or completeness) combined with "-ics" (denoting a field of study). These unrelated terms often stem from different Greek roots, such as "-nomos" (law or management), and predate the modern biological usage.

For instance, "economics" originates from the Greek "oikonomia," meaning "management of a household" or "household law," formed from "oikos" (house or household) and "nomos" (law, custom, or management). The term entered English in the late 16th century referring to household management and evolved by the late 18th century to describe the science of production, distribution, and consumption of goods and services.[187]

Similarly, "ergonomics" was coined in the 19th century from Greek "ergon" (work) and "nomos" (natural law), referring to the scientific study of designing equipment and workplaces to optimize human efficiency and safety.[188]

The term "comics," used for humorous illustrated strips or books, derives from Greek "kōmikos" (relating to comedy or revelry), from "kōmos" (a festivity or merrymaking procession), entering English in the 16th century to describe comedic literature or performers.[189]

In addition, the "-omics" ending has been borrowed for portmanteau words in non-biological contexts, particularly economics. "Reaganomics," for example, refers to the supply-side economic policies of U.S.
President Ronald Reagan (1981–1989), including tax cuts and deregulation; the term blends "Reagan" with "economics" and was popularized by radio broadcaster Paul Harvey in 1981.[190] Other similar neologisms include "Nixonomics" for Richard Nixon's policies and "coronanomics" for the economic effects of the COVID-19 pandemic.[191]

These examples illustrate how the superficial similarity in spelling can lead to confusion, but the biological "-omics" suffix specifically pertains to high-throughput studies of biological wholes, as covered in prior sections.