
Bottom-up proteomics

Bottom-up proteomics, also known as shotgun proteomics, is a mass spectrometry-based approach in which proteins extracted from biological samples are enzymatically digested into smaller peptides, typically using proteases such as trypsin, before separation and analysis by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). This method infers the presence, abundance, and modifications of proteins by identifying and quantifying these peptides, which serve as proxies for the original proteins. Unlike top-down proteomics, which analyzes intact proteins or large proteoforms to preserve sequence and modification connectivity, bottom-up proteomics prioritizes high-throughput coverage of complex proteomes but sacrifices detailed proteoform information due to the loss of peptide-to-protein linkages during digestion.

History

The foundations of bottom-up proteomics trace back to advances in mass spectrometry in the mid-20th century, with key innovations including electrospray ionization (ESI), developed by John B. Fenn in 1989, enabling efficient protein and peptide analysis. The SEQUEST algorithm for database searching of tandem mass spectra was introduced in 1994 by James Eng and colleagues in John R. Yates III's lab. The term "shotgun proteomics" was coined by Yates' group in 1999, analogous to shotgun genome sequencing, with early implementations using multidimensional liquid chromatography and nanoelectrospray ionization for large-scale protein identification from complex mixtures. Further developments, such as MudPIT in 2001, solidified its role in high-throughput proteomics. Bottom-up proteomics has become the dominant strategy in the field due to its scalability for large-scale studies, such as biomarker discovery, across diverse organisms. Its integration with proteogenomics—using sample-specific genomic or transcriptomic data—addresses limitations in identifying novel proteins. As of 2023, innovations in instrumentation and acquisition methods continue to enhance proteome coverage and throughput.

Introduction

Definition

Bottom-up proteomics, also known as shotgun proteomics, is a mass spectrometry-based approach that entails the enzymatic digestion of proteins into smaller peptides prior to analysis, facilitating the identification and characterization of proteins as well as post-translational modifications (PTMs) through tandem mass spectrometry (MS/MS). This peptide-centric strategy contrasts with intact protein analysis, such as top-down proteomics, by generating analyzable fragments typically ranging from 7 to 35 amino acid residues in length, which serve as proxies for inferring protein identities while potentially losing information on proteoform connectivity. Peptides are commonly produced using proteases like trypsin, which specifically cleaves peptide bonds at the C-terminal side of lysine and arginine residues (except when followed by proline), yielding fragments with a basic C-terminal residue that enhances ionization and fragmentation efficiency in MS/MS. This digestion step breaks down complex protein mixtures into more manageable components suitable for high-throughput detection. Often used synonymously with shotgun proteomics, bottom-up methods enable unbiased, comprehensive coverage of proteomes by analyzing peptide mixtures from crude extracts without prior knowledge of target proteins, supporting applications in global protein profiling and modification mapping. The basic workflow involves enzymatic digestion of protein extracts, followed by chromatographic separation and MS/MS-based identification, allowing inference of protein presence, abundance, and modifications.
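The trypsin cleavage rule described above (cut after K or R, but not before P) is simple enough to express directly in code. The following minimal Python sketch performs an in-silico tryptic digest with an optional allowance for missed cleavages; the example sequence is arbitrary and purely illustrative.

```python
import re

def trypsin_digest(sequence, missed_cleavages=0):
    """In-silico tryptic digest: cleave C-terminal to K or R, except before P.

    Returns the set of peptides, optionally including products that
    retain up to `missed_cleavages` internal cleavage sites.
    """
    # Split after K or R when not followed by P (the canonical trypsin rule).
    fragments = [f for f in re.split(r'(?<=[KR])(?!P)', sequence) if f]
    peptides = set()
    for i in range(len(fragments)):
        for j in range(i, min(i + missed_cleavages + 1, len(fragments))):
            peptides.add(''.join(fragments[i:j + 1]))
    return sorted(peptides)

# Toy example; real workflows digest entire FASTA databases this way.
print(trypsin_digest("MKWVTFISLLLLFSSAYSRGVFRRDTHK", missed_cleavages=1))
```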

History

Proteomics research emerged in the mid-1990s, driven by parallel advances in two-dimensional gel electrophoresis for protein separation, the development of comprehensive protein sequence databases from genome projects, improved chromatographic separation techniques, and the maturation of mass spectrometry for biomolecular analysis. The bottom-up approach, which indirectly identifies proteins through the enzymatic digestion of samples into peptides for subsequent mass spectrometric analysis, arose as a complementary strategy to direct protein sequencing methods such as Edman degradation, leveraging the higher sensitivity and throughput of peptide-level detection.

A pivotal milestone in the late 1990s was the integration of tryptic digestion—trypsin being a specific protease that cleaves proteins at lysine and arginine residues to generate predictable peptides—with tandem mass spectrometry (MS/MS) for database-driven protein identification. This built on earlier efforts in peptide sequencing by MS, such as the introduction of data-dependent acquisition (DDA) in the early 1990s, which automated the selection and fragmentation of peptides eluting from liquid chromatography columns. By correlating fragmentation spectra with in silico digests from protein databases using algorithms like SEQUEST, researchers achieved the first high-throughput identifications of proteins from complex mixtures, marking the transition from targeted to discovery-based proteomics.

In 2001, John Yates and colleagues introduced multidimensional protein identification technology (MudPIT), an automated method that coupled strong cation-exchange and reversed-phase liquid chromatography directly to mass spectrometry, eliminating gel-based separations and enabling the analysis of thousands of peptides from cell lysates with high reproducibility and dynamic range. This innovation facilitated unbiased coverage across diverse protein classes. Building on this, a landmark 2003 review by Ruedi Aebersold and Matthias Mann synthesized the field's progress, emphasizing LC-MS/MS workflows for high-throughput identification and establishing bottom-up proteomics as the dominant paradigm for large-scale protein characterization in complex biological samples.

By the 2010s, bottom-up proteomics evolved beyond identification to incorporate quantitative capabilities, integrating techniques like stable isotope labeling by amino acids in cell culture (SILAC) and isobaric tandem mass tags (TMT) alongside label-free spectral counting methods, which allowed proteome-wide measurement of protein abundances and dynamics in response to biological perturbations. These advancements expanded the method's applications, enabling comprehensive profiling of cellular states with improved accuracy and scalability.

Methodology

Sample Preparation and Digestion

Sample preparation in bottom-up proteomics begins with protein extraction from biological sources such as cells, tissues, or fluids like plasma. For cellular samples, extraction typically involves cell lysis using mechanical methods like probe sonication or bead beating, combined with buffers containing detergents such as SDS or chaotropes like 8 M urea in 100 mM Tris-HCl at pH 8.5, along with protease inhibitors to prevent degradation. Tissue homogenization employs similar approaches, often with cryo-milling for frozen samples to maintain protein integrity. In complex fluids like plasma, initial centrifugation removes cells and debris, followed by purification steps including acetone precipitation or immunodepletion of high-abundance proteins like albumin to access lower-concentration species. These methods aim to solubilize proteins while minimizing contaminants, typically requiring 50–500 μg of protein for comprehensive analyses, though lower amounts (1–100 μg) are feasible with optimized protocols. Recent advances, such as nanodroplet processing in one pot for trace samples (nanoPOTS), enable analysis from sub-100 ng samples, expanding applicability to scarce biological materials.

Following extraction, proteins undergo denaturation, reduction, and alkylation to unfold structures and break disulfide bonds, exposing cleavage sites for enzymatic digestion. Denaturation is achieved using chaotropic agents such as 8 M urea or detergents like 1% sodium deoxycholate (SDC), which must be diluted below 2 M or removed (e.g., via acid precipitation for SDC) to avoid inhibiting proteases. Reduction employs 5-15 mM dithiothreitol (DTT) or tris(2-carboxyethyl)phosphine (TCEP) at 37-60°C to cleave disulfide bridges, followed by alkylation with 10-20 mM iodoacetamide (IAA) in the dark to carbamidomethylate cysteines and prevent disulfide bond reformation. This sequence, standardized in protocols like filter-aided sample preparation (FASP), ensures complete modification and is critical for generating consistent peptides.

Enzymatic digestion then converts proteins into peptides, primarily using trypsin, which cleaves at the C-terminus of lysine (K) and arginine (R) residues unless followed by proline (P), yielding peptides of 800-2000 Da suitable for mass spectrometry. Digestion occurs at 37°C in pH 7-9 buffers with a 1:20 to 1:50 enzyme-to-protein ratio, traditionally overnight (18 hours) but optimizable to 3-4 hours using enhanced methods like trypsin/Lys-C combinations or suspension trapping (S-Trap). Alternative proteases, such as Lys-C (cleaves after K), Glu-C (after E/D), or chymotrypsin (after F/Y/W), provide complementary sequence coverage, particularly for hydrophobic or trypsin-resistant regions, and are often used in tandem for broader proteome depth. Device-based approaches like single-pot solid-phase-enhanced sample preparation (SP3) or S-Trap integrate these steps on beads or filters for higher efficiency and reproducibility.

Challenges in sample preparation arise from sample complexity and protein abundance disparities, particularly in plasma, where high-dynamic-range matrices (e.g., albumin at 35-50 mg/mL) obscure low-abundance proteins. Enrichment via immunoaffinity depletion or combinatorial peptide ligand libraries is essential but can introduce carry-over or loss of bound analytes, reducing reproducibility. Incomplete digestion, missed cleavages, and artifacts like carbamylation from urea further complicate workflows, necessitating tailored protocols and quality controls to handle contaminants such as salts or lipids.
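Because digestion and alkylation fix each peptide's elemental composition, the peptide masses referenced above can be computed directly from monoisotopic residue masses. The Python sketch below is illustrative only, assuming complete carbamidomethylation of cysteines (+57.02146 Da from IAA), and filters a peptide list to the 800-2000 Da window cited in the text; the example peptides are arbitrary.

```python
# Monoisotopic residue masses (Da); one water is added per peptide.
RESIDUE_MASS = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER = 18.010565
CARBAMIDOMETHYL = 57.02146  # fixed Cys modification from iodoacetamide

def peptide_mass(peptide):
    """Monoisotopic peptide mass, with carbamidomethylated cysteines."""
    mass = WATER + sum(RESIDUE_MASS[aa] for aa in peptide)
    return mass + CARBAMIDOMETHYL * peptide.count('C')

def ms_amenable(peptides, low=800.0, high=2000.0):
    """Keep peptides in the mass window typically favored for MS/MS."""
    return [p for p in peptides if low <= peptide_mass(p) <= high]

# e.g., LVNELTEFAK (~1162.6 Da) passes the filter; short peptides do not.
print(ms_amenable(["GVFR", "DTHK", "LVNELTEFAK"]))
```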

Peptide Separation

In bottom-up proteomics, peptide separation is a critical step following enzymatic digestion to reduce sample complexity, enhance resolution, and improve the detection of low-abundance peptides by mass spectrometry. This fractionation minimizes ion suppression and co-elution, allowing for deeper coverage in complex biological samples such as cell lysates or tissues. Techniques range from single-dimensional liquid chromatography to multidimensional approaches and gel-based methods, each tailored to exploit differences in peptide physicochemical properties like hydrophobicity, charge, or size.

Reversed-phase liquid chromatography (RPLC) serves as the standard method for peptide separation due to its compatibility with online coupling to mass spectrometry. Typically, peptides are loaded onto C18 stationary phase columns, where separation occurs based on hydrophobicity, influenced by peptide length, amino acid composition, and sequence. Mobile phases consist of gradients of water and acetonitrile, often with 0.1% formic acid as an ion-pairing agent to promote protonation and enhance electrospray ionization efficiency. This approach provides robust resolution for tryptic peptides, with retention times correlating strongly to hydrophobic surface area.

For more complex samples, multidimensional separation techniques like multidimensional protein identification technology (MudPIT) integrate strong cation exchange (SCX) chromatography with RPLC to achieve orthogonal separation. In MudPIT, peptides are first separated by charge on an SCX column, then eluted stepwise onto a reversed-phase column for secondary hydrophobic separation, all within a single online setup. This method significantly increases proteome depth, as demonstrated in analyses of the yeast proteome where it identified over 1,400 proteins, including low-abundance and membrane-spanning species, by distributing peptides across multiple dimensions to reduce overlap.

Gel-based methods offer an alternative for prefractionation, particularly in workflows emphasizing size or charge separation prior to digestion and LC-MS. SDS-PAGE separates intact proteins by molecular weight before in-gel digestion, effectively removing contaminants like salts and detergents while fractionating based on size for subsequent peptide analysis. Off-gel electrophoresis, an isoelectric focusing variant, separates peptides or proteins by charge (isoelectric point) into liquid fractions without a gel matrix, providing high-resolution charge-based fractionation suitable for complex mixtures and improving identification of isoforms.

Nanoflow liquid chromatography enhances sensitivity by operating at low flow rates of 100–300 nL/min, which concentrates analytes at the column tip and improves ionization efficiency when coupled to electrospray sources. This configuration is particularly advantageous for limited sample amounts, enabling the detection of thousands of proteins with high sensitivity in bottom-up workflows. Offline fractionation, such as using SCX cartridges, contrasts with online methods by allowing manual collection of fractions for deeper coverage in large-scale studies, though it may introduce recovery losses compared to integrated systems like MudPIT. Offline SCX typically involves loading digests onto cartridges, eluting fractions with salt steps, and analyzing each via RPLC-MS, which suits high-complexity samples by reducing complexity before the final separation.
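Since RPLC retention tracks peptide hydrophobicity, a crude first-pass elution order can be estimated from a hydropathy scale. The Python sketch below uses the Kyte-Doolittle GRAVY index as a stand-in for the trained retention-time predictors used in practice (e.g., SSRCalc or deep learning models); it is a rough heuristic under that assumption, not a production retention model, and the example peptides are arbitrary.

```python
# Kyte-Doolittle hydropathy values; GRAVY = mean hydropathy per residue.
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9,
      'A': 1.8, 'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3,
      'P': -1.6, 'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5,
      'K': -3.9, 'R': -4.5}

def gravy(peptide):
    """Grand average of hydropathy: higher values mean more hydrophobic."""
    return sum(KD[aa] for aa in peptide) / len(peptide)

# More hydrophobic peptides elute later on a C18 gradient, so sorting by
# GRAVY gives a rough first-pass elution order.
peptides = ["DTHKSEIAHR", "LVNELTEFAK", "YLYEIAR"]
for p in sorted(peptides, key=gravy):
    print(f"{p:12s} GRAVY = {gravy(p):+.2f}")
```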

Mass Spectrometry Detection

In bottom-up proteomics, mass spectrometry (MS) serves as the core detection method for analyzing peptide ions generated from protein digests, enabling high-throughput identification and quantification through precise measurement of mass-to-charge ratios (m/z) and fragmentation patterns. The process typically involves coupling liquid chromatography (LC) with MS, where peptides elute from the column and are ionized, separated, fragmented, and detected to produce spectra that inform sequence and modification details.

Ionization is predominantly achieved via electrospray ionization (ESI), a soft ionization technique that generates multiply charged ions ([M+nH]^{n+}) directly from the LC eluate by applying a high voltage (typically 2-4 kV) to form charged droplets that desolvate in the gas phase. Introduced in the late 1980s, ESI revolutionized biomolecular analysis by preserving analyte integrity and allowing online LC-MS integration, which is essential for handling complex mixtures in bottom-up workflows. While matrix-assisted laser desorption/ionization (MALDI) can be used for offline analysis, producing primarily singly charged ions, ESI remains the standard due to its compatibility with continuous LC flow and superior sensitivity for low-abundance species.

Following ionization, peptide ions enter the mass analyzer, which separates them based on m/z for initial precursor detection in MS^1 scans. Common analyzers include quadrupoles, which offer high transmission but lower resolution (typically 1,000-4,000 FWHM); time-of-flight (TOF) instruments, providing rapid scans with high resolving power (>50,000) by measuring flight times; and ion traps, which enable sequential isolation and fragmentation with moderate resolution. Orbitrap analyzers excel in ultra-high resolution (up to 240,000 FWHM at m/z 200), detecting subtle mass differences (e.g., 6 mDa for isobaric tags) through measurement of ion oscillation frequencies, making them ideal for complex proteomic samples. Hybrid instruments, such as quadrupole-Orbitrap (e.g., Q Exactive) or triple quadrupole-TOF (e.g., TripleTOF), combine these for enhanced performance, integrating precursor selection with high-resolution detection to support both qualitative and quantitative analyses.

Tandem mass spectrometry (MS/MS) is employed to fragment selected precursor ions, generating sequence-specific product ions for peptide characterization. Collision-induced dissociation (CID) is a widely used method, where peptides collide with inert gas (e.g., helium or nitrogen) to cleave amide bonds, primarily producing b-ions (N-terminal fragments) and y-ions (C-terminal fragments) via the mobile proton model. Higher-energy collisional dissociation (HCD), a beam-type variant of CID available on Orbitrap hybrids, delivers more efficient fragmentation at higher energies, yielding cleaner b/y ion series and better reporter ion detection for multiplexed quantification. Electron-transfer dissociation (ETD) complements these by transferring electrons to multiply charged precursors, producing c-ions (N-terminal) and z-ions (C-terminal) that preserve labile post-translational modifications (PTMs), though it is less effective for singly or doubly charged peptides.

Acquisition modes dictate how ions are selected and fragmented to balance depth and reproducibility. In data-dependent acquisition (DDA), the instrument automatically selects the most intense precursor ions (e.g., top 10-20 per cycle) for MS/MS based on intensity thresholds, enabling targeted analysis but introducing stochasticity that reduces run-to-run consistency.
Data-independent acquisition (DIA), such as SWATH-MS, fragments all precursors within predefined m/z windows without selection, providing broader coverage and higher reproducibility for quantification, though it generates more complex spectra requiring advanced computational deconvolution. Performance hinges on key parameters: mass accuracy (typically 1-5 ppm for Orbitrap/TOF, enabling confident assignments); resolution (e.g., >100,000 for distinguishing co-eluting isomers); scan speed (TOF achieves <100 ms per scan for high-throughput DIA, while the Orbitrap requires 50-200 ms); and dynamic range (>10^4 for detecting low-abundance peptides amid high-background signals). These metrics, optimized in modern hybrids, have driven coverage from thousands to over 10,000 proteins per sample in human cell lines. The table below summarizes the main analyzer types; a worked fragment-ion calculation follows it.
| Mass Analyzer | Resolution (FWHM at m/z 200) | Scan Speed | Key Application in Bottom-Up Proteomics | Example Instrument |
|---|---|---|---|---|
| Quadrupole | 1,000-4,000 | Fast | Precursor selection and transmission | Q Exactive |
| Time-of-Flight (TOF) | >50,000 | Very fast (<100 ms) | High-throughput DIA | TripleTOF 5600 |
| Ion Trap | 1,000-5,000 | Moderate | MS^n fragmentation | LTQ-Orbitrap |
| Orbitrap | Up to 240,000 | Slower (50-200 ms) | High-accuracy PTM mapping | Exploris 480 |
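To make the precursor and fragment relationships above concrete, the following minimal Python sketch computes the m/z of [M+nH]^{n+} precursor ions and the singly charged b-/y-ion series produced by amide-bond cleavage in CID/HCD. The monoisotopic mass table and example peptide are illustrative assumptions, and modified residues are ignored for simplicity.

```python
PROTON, WATER = 1.007276, 18.010565  # Da
RES = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
       'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
       'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
       'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
       'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931}

def precursor_mz(peptide, charge):
    """m/z of the [M + nH]^(n+) ion formed during electrospray ionization."""
    neutral = sum(RES[aa] for aa in peptide) + WATER
    return (neutral + charge * PROTON) / charge

def by_ions(peptide):
    """Singly charged b-/y-ion pairs from cleavage of each amide bond."""
    ions = []
    for i in range(1, len(peptide)):
        b = sum(RES[aa] for aa in peptide[:i]) + PROTON
        y = sum(RES[aa] for aa in peptide[i:]) + WATER + PROTON
        ions.append((f"b{i}", round(b, 4), f"y{len(peptide) - i}", round(y, 4)))
    return ions

peptide = "LVNELTEFAK"  # an arbitrary tryptic peptide used for illustration
print(precursor_mz(peptide, 2))  # ~582.3 for the doubly charged precursor
for row in by_ions(peptide):
    print(row)
```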

Data Processing

Peptide Spectrum Matching

Peptide spectrum matching (PSM) is a core computational step in bottom-up proteomics that involves aligning experimental tandem mass spectrometry (MS/MS) spectra—characterized by observed fragment ion masses (m/z values)—to theoretical spectra generated from peptide sequences in a protein database. This process begins with database searching, where software tools such as SEQUEST, Mascot, and MaxQuant digest a reference protein database (e.g., UniProt) into in silico peptides, accounting for enzyme specificity, charge states, and potential modifications, to generate predicted fragment ions for comparison against the experimental data. SEQUEST, introduced in 1994, pioneered this approach by preliminarily filtering candidate peptides based on precursor mass tolerance before detailed spectral matching. Mascot and MaxQuant extend this by integrating probability-based matching across diverse fragmentation patterns, such as y- and b-ions, to identify the best-fitting peptide sequence.

Scoring algorithms rank the quality of these matches by quantifying spectral similarity while considering factors like charge state, post-translational modifications (PTMs), and missed cleavages. In SEQUEST, the cross-correlation score (Xcorr) measures the similarity between experimental and theoretical spectra by correlating their intensity profiles after normalization, with higher values indicating better matches; it explicitly accounts for charge-dependent fragmentation and allows for variable modifications during the search. Mascot employs a probabilistic ion score based on the intensity of matched peaks, aggregated into a total score, and derives an E-value that estimates the likelihood of a match occurring by chance, incorporating adjustments for database size, missed cleavages (e.g., incomplete trypsin digestion), and common PTMs like oxidation. MaxQuant uses its Andromeda engine for scoring, emphasizing high mass accuracy to refine matches and handle complex samples with multiple missed cleavages. These scores enable ranking of candidate peptides, typically selecting the top match per spectrum for further validation.

For spectra involving PTMs, such as phosphorylation, variable modification searches localize the site by generating theoretical spectra for all possible isomers and scoring their fit to the data. Tools like Mascot use the delta score (the difference between the highest and next-highest scoring localization) to assess site confidence, where a threshold (e.g., >13) indicates high reliability. A seminal probability-based localization metric, the Ascore, further refines this by calculating the likelihood of correct localization from site-determining ions, achieving >99% accuracy for phosphosites when exceeding 19, and is widely integrated into search pipelines for PTMs beyond phosphorylation. This approach ensures precise assignment amid spectral noise or ambiguous fragment ions.

To control identification error rates, false discovery rate (FDR) estimation employs the target-decoy approach, where experimental spectra are searched against both the real (target) database and a reversed or shuffled (decoy) version; the proportion of decoy matches to total identifications estimates the FDR, with a common threshold of <1% for high-confidence peptides. This method, formalized in 2007, balances sensitivity and specificity by assuming symmetric false positive rates between target and decoy searches. Recent deep learning models, such as pUniFind and Tesorai Search, further enhance PSM accuracy through end-to-end spectrum prediction and zero-shot capabilities.
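The target-decoy logic described above reduces to a short computation once each spectrum's best match is scored. The Python sketch below is a simplified illustration, assuming one best PSM per spectrum and equal-sized target and decoy databases; production tools layer on refinements such as Percolator rescoring.

```python
def target_decoy_qvalues(psms):
    """Estimate q-values with the target-decoy approach.

    `psms`: list of (score, is_decoy) tuples, one best match per spectrum.
    At each score threshold, FDR is approximated as decoys / targets.
    """
    ranked = sorted(psms, key=lambda x: x[0], reverse=True)
    decoys = targets = 0
    running_fdr = []
    for _, is_decoy in ranked:
        decoys += is_decoy
        targets += not is_decoy
        running_fdr.append(decoys / max(targets, 1))
    # q-value: lowest FDR achievable at or below each rank (monotone).
    qvalues, best = [], float("inf")
    for fdr in reversed(running_fdr):
        best = min(best, fdr)
        qvalues.append(best)
    qvalues.reverse()
    return [(score, q) for (score, _), q in zip(ranked, qvalues)]

# Toy example: accept PSMs with q <= 0.01 for a 1% FDR cutoff.
toy = [(98, False), (95, False), (90, True), (88, False), (80, True)]
print([s for s, q in target_decoy_qvalues(toy) if q <= 0.01])  # [98, 95]
```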
As an alternative to database reliance, de novo sequencing infers peptide sequences directly from MS/MS spectra without a reference database, which is useful for novel or non-model organisms, though it is less common in standard bottom-up workflows due to higher computational demands and lower accuracy. Such tools employ graph-based algorithms to assemble candidate sequences from fragment ions, achieving amino acid accuracies of roughly 80-90% on high-quality data, and are often hybridized with database searches for validation.

Protein Identification and Quantification

In bottom-up proteomics, protein identification involves the inference problem of assembling identified peptides into specific proteins, which is particularly challenging due to shared peptide sequences among homologous proteins, isoforms, or paralogs. This process requires algorithms to resolve ambiguities by applying parsimony principles, which select the minimal set of proteins that explain all observed peptides while maximizing unique peptide assignments to avoid overcounting. For instance, shared or redundant peptides, which may map to multiple proteins, are handled by prioritizing unique peptides for unambiguous identification and grouping proteins with indistinguishable peptide sets into protein groups. Software tools like Scaffold employ these parsimony rules to cluster proteins, thinning out redundant entries to focus on parsimonious assemblies, thereby reducing false positives in complex datasets. Similarly, Proteome Discoverer uses a protein grouping inference process to assemble peptides into protein groups, ensuring comprehensive coverage without inflating protein counts.

Sequence coverage, defined as the percentage of a protein's amino acid sequence spanned by identified peptides, typically ranges from 10% to 50% in standard bottom-up experiments, influenced by factors such as protein size, abundance, and digestion efficiency. Low coverage can lead to incomplete characterization, especially for post-translational modifications (PTMs), but it suffices for reliable identification in high-throughput analyses. To estimate relative protein abundance from such partial coverage, metrics like the exponentially modified protein abundance index (emPAI) correlate the number of observed peptides with theoretical observable peptides, providing a label-free approximation of absolute amounts; emPAI is calculated as $10^{\text{PAI}} - 1$, where PAI is the ratio of sequenced to observable peptides, and has been validated against Western blot data for accuracy. Spectral counting complements this by tallying the number of peptide-spectrum matches (PSMs) per protein, normalized as the normalized spectral abundance factor (NSAF) to account for protein length and dataset size, enabling relative quantification with good correlation to absolute levels in diverse samples.

Quantification in bottom-up proteomics extends peptide-level measurements to proteins, employing label-free or isotopic labeling strategies to compare abundances across samples. Label-free approaches, such as intensity-based methods, integrate precursor ion signal intensities from extracted ion chromatograms, while label-free quantification (LFQ) algorithms like MaxLFQ normalize data across runs to handle technical variability, achieving precise relative quantification without sample modification. Isotopic labeling enhances multiplexing and accuracy; stable isotope labeling by amino acids in cell culture (SILAC) incorporates heavy (e.g., $^{13}\text{C}_6$-lysine) and light isotopes into proteins during cell growth, allowing direct ratio determination from mass shifts in up to three states. For broader multiplexing, isobaric tags like TMT, which support up to 35 samples via reporter ions in MS/MS spectra, and iTRAQ, enabling 4-8-plexing, facilitate high-throughput comparisons by summing reporter signals while minimizing spectral interference through synchronous precursor selection.
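Both abundance metrics above are simple ratios and can be expressed in a few lines. The following Python sketch implements the emPAI formula and NSAF normalization as defined in the text; the example counts and lengths are invented for illustration.

```python
def empai(n_observed, n_observable):
    """Exponentially modified PAI: emPAI = 10^(observed/observable) - 1."""
    return 10 ** (n_observed / n_observable) - 1

def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factor per protein.

    NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j), so values sum to 1.
    """
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# Example: 4 of 12 observable peptides sequenced; three proteins with
# spectral counts and sequence lengths (all values hypothetical).
print(empai(n_observed=4, n_observable=12))   # ~1.15
print(nsaf([30, 12, 5], [450, 210, 980]))
```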
Statistical validation ensures reliability in protein identification and quantification, with false discovery rate (FDR) estimation at the protein level addressing inference ambiguities from shared peptides. Tools like Percolator apply machine learning to rescore peptide-spectrum matches (PSMs), deriving posterior error probabilities and typically controlling FDR at 1%, which propagates to protein-level assessments via strategies like the "picked" protein group FDR to avoid overestimation in large datasets. Ambiguous inferences are managed by designating protein groups, whose indistinguishable members share all peptides, with principal proteins selected based on unique peptide counts or scores. Integrated pipelines such as MaxQuant provide end-to-end analysis, combining PSM validation, protein inference, LFQ or labeling-based quantification, and PTM site localization probabilities, supporting comprehensive proteome-wide studies with high sensitivity.
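Minimal-protein-set inference is, at its core, a set-cover problem. The Python sketch below shows a greedy approximation of the parsimony rules described in this subsection; it is a didactic simplification, not the actual algorithm used by Scaffold or Proteome Discoverer, and the peptide-to-protein map is a toy example.

```python
def parsimonious_proteins(peptide_to_proteins):
    """Greedy minimal protein set explaining all identified peptides."""
    unexplained = set(peptide_to_proteins)
    # Invert the mapping: protein -> set of its identified peptides.
    coverage = {}
    for pep, prots in peptide_to_proteins.items():
        for prot in prots:
            coverage.setdefault(prot, set()).add(pep)
    selected = []
    while unexplained:
        # Pick the protein covering the most still-unexplained peptides.
        best = max(coverage, key=lambda p: len(coverage[p] & unexplained))
        gained = coverage[best] & unexplained
        if not gained:
            break
        selected.append(best)
        unexplained -= gained
    return selected

# Shared peptide pep2 maps to two isoforms; parsimony keeps only one.
print(parsimonious_proteins({
    "pep1": {"P1"}, "pep2": {"P1", "P2"}, "pep3": {"P3"},
}))  # ['P1', 'P3']
```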

Comparisons to Other Approaches

Top-Down Proteomics

Top-down proteomics involves the direct mass spectrometry analysis of intact proteins, typically up to 70 kDa, without enzymatic digestion into peptides, enabling the characterization of proteoforms—all molecular variants arising from a single gene due to processes such as post-translational modifications (PTMs), alternative splicing, or truncations. This approach contrasts with bottom-up proteomics by preserving the full protein structure, using soft ionization techniques like electrospray ionization (ESI) to generate multiply charged ions that facilitate the analysis of larger biomolecules. Seminal work, such as the comprehensive proteoform characterization in human cells reported in 2011, demonstrated the feasibility of this method for mapping thousands of proteoforms across diverse protein classes. Key techniques in top-down proteomics rely on high-resolution mass spectrometers, such as Fourier transform ion cyclotron resonance (FT-ICR) or Orbitrap systems, which provide the necessary mass accuracy (often <1 ppm) to resolve subtle mass differences from PTMs or isoforms. Fragmentation is achieved through methods like electron capture dissociation (ECD) or electron transfer dissociation (ETD), which cleave the protein backbone while largely preserving labile PTMs, such as phosphorylation or glycosylation, unlike collision-induced dissociation that may cause losses. These techniques allow for extensive sequence coverage, frequently exceeding 90% for proteins under 30 kDa, enabling unambiguous identification of proteoforms without the need to reassemble peptide data. Compared to bottom-up proteomics, top-down offers advantages in providing complete, gas-phase sequence ladders that eliminate inference errors from incomplete peptide coverage or missed cleavages, particularly for distinguishing isoforms and localizing multiple PTMs on the same protein. For instance, it excels in targeted studies where full proteoform resolution is critical, such as validating bottom-up identifications of modified variants in complex mixtures. However, limitations include reduced sensitivity for detecting low-abundance proteoforms in large or complex samples, due to challenges in ion transmission and fragmentation efficiency for proteins above 50 kDa. Additionally, it demands specialized, high-cost instrumentation and achieves lower throughput, often identifying fewer than 100 proteins per analysis run, in contrast to the thousands possible via peptide-based workflows.

Middle-Down Proteomics

Middle-down proteomics represents a hybrid strategy in mass spectrometry-based protein analysis that bridges the gap between bottom-up and top-down approaches by employing limited proteolysis to generate larger peptide fragments, typically in the range of 3–15 kDa. This method uses less specific proteases, such as Glu-C or Asp-N, which cleave at fewer sites compared to the trypsin commonly used in full bottom-up digestion, resulting in peptides that retain more of the original protein sequence and post-translational modifications (PTMs). By producing fewer but larger fragments, middle-down proteomics reduces sample complexity while preserving contextual information about PTM coexistence that might be lost in smaller tryptic peptides. In practice, middle-down workflows often integrate top-down fragmentation techniques, such as electron-transfer dissociation (ETD), with high-resolution mass spectrometry instruments like Orbitrap systems to analyze these intact peptide fragments. Separation is achieved through optimized liquid chromatography methods, including strong cation exchange (SCX) or wide-pore reversed-phase columns (e.g., 300 Å pore size), to handle the larger peptides effectively. This approach is particularly suited for applications like PTM stoichiometry assessment and mapping modifications within protein complexes, where it enables the identification of interdependent PTMs on histone tails or other modified domains. Compared to traditional bottom-up proteomics, middle-down offers superior retention of labile PTMs, such as phosphorylations or acetylations, and achieves higher sequence coverage, often reaching 50–80% for targeted proteins, which helps resolve ambiguities in protein inference from overlapping peptides. It also facilitates better characterization of proteoform diversity by maintaining PTM patterns across larger fragments. However, challenges persist, including difficulties in separating and ionizing larger peptides, necessitating specialized LC-MS optimizations that can limit throughput relative to bottom-up methods. Data analysis remains complex due to the need for advanced software to handle fragmentation spectra of extended sequences. As a complementary technique, middle-down proteomics is gaining traction in structural biology for elucidating PTM roles in protein folding and interactions, as well as in detailed PTM studies of histones and transcription factors, where it provides insights into combinatorial modification codes. Its emerging role underscores its value in scenarios requiring balanced resolution and efficiency, such as chromatin biology and proteoform mapping in complex samples.

Advantages and Challenges

Advantages

Bottom-up proteomics excels in high-throughput analysis, enabling the identification and quantification of thousands of proteins from complex biological samples in a single experiment through the shotgun LC-MS/MS workflow. This scalability arises from the digestion of proteins into peptides, which allows for efficient separation and mass spectrometric detection without the need for prior protein-specific targeting, making it particularly suited for large-scale proteome profiling across diverse tissues or cell types. The method offers superior sensitivity and proteome coverage compared to approaches analyzing intact proteins, as peptides exhibit efficient ionization and fragmentation in mass spectrometry, facilitating the detection of low-abundance proteins that might otherwise be obscured in complex mixtures. This results in a broader dynamic range, often spanning several orders of magnitude, which enhances the ability to profile proteins across varying expression levels within the same sample. Bottom-up proteomics integrates seamlessly with quantitative strategies, such as stable isotope labeling by amino acids in cell culture (SILAC) or isobaric tagging like tandem mass tags (TMT), allowing for precise differential expression analysis between samples with high reproducibility. These labeling techniques leverage the peptide-level resolution to normalize for technical variations, enabling accurate relative or absolute quantification of proteins in multiplexed experiments. Its cost-effectiveness stems from reliance on widely available standard mass spectrometry instrumentation and straightforward peptide separation techniques, such as reversed-phase liquid chromatography, which reduce sample complexity without requiring specialized equipment. This accessibility democratizes proteomic research, lowering barriers for routine implementation in academic and industrial settings. Furthermore, bottom-up proteomics provides robust site-specific identification of post-translational modifications (PTMs), as the generated peptides often localize modifications to precise amino acid residues, supporting detailed characterization of protein function and regulation.

Challenges and Limitations

Bottom-up proteomics typically achieves limited sequence coverage of proteins, often ranging from 10% to 50%, which frequently misses critical regions such as the N- and C-termini and results in incomplete contextual information for post-translational modifications (PTMs). This partial coverage arises from incomplete enzymatic digestion, poor ionization of certain peptides, and the inherent bias in mass spectrometry (MS) detection toward detectable fragments, leading to an incomplete representation of the proteome akin to a "tunnel vision" view. Consequently, subtle variations in protein isoforms or low-abundance modifications may go undetected, complicating the reconstruction of full proteoforms.

A significant drawback is the loss of labile PTMs during the fragmentation process in MS analysis. Common ionization and dissociation techniques, such as collision-induced dissociation, often cause the neutral loss of modifications like phosphorylation or glycosylation, as these labile groups absorb fragmentation energy preferentially over peptide bonds. For instance, phosphorylation sites can be entirely destroyed, yielding spectra that lack diagnostic ions and result in false negatives for PTM localization. This limitation is particularly pronounced for complex PTMs, where peptide backbone fragmentation dominates, obscuring the modification's presence and site-specific details.

Protein inference in bottom-up proteomics introduces ambiguity due to shared peptides among homologous proteins, isoforms, or paralogs, leading to challenges in distinguishing unique proteoforms and potential over- or underestimation of protein abundances. Redundant peptide sequences can map to multiple proteins in a database, resulting in grouped protein identifications rather than precise assignments, especially in eukaryotic samples with high sequence similarity. Algorithms attempt to parsimoniously assemble peptide evidence into protein lists, but this process often inflates the number of inferred proteins or misses isoform-specific variants, reducing the reliability of quantitative analyses.

The method exhibits a bias toward hydrophilic peptides, as trypsin's cleavage specificity generates peptides with basic residues that enhance ionization and chromatographic retention, while hydrophobic or transmembrane domains are systematically underrepresented. Transmembrane proteins, rich in hydrophobic segments, often fail to solubilize adequately or produce peptides that aggregate and evade detection, skewing proteome coverage away from membrane-associated components critical for cellular function. This bias can be partially addressed through alternative proteases, though such approaches remain secondary to trypsin in standard workflows.

Handling sample complexity poses another major challenge, as the vast dynamic range of protein abundances—spanning over 10 orders of magnitude in biological samples—overwhelms MS instruments, necessitating extensive peptide fractionation to achieve deep proteome coverage. Techniques like liquid chromatography or off-gel electrophoresis are required to reduce complexity prior to MS, but these steps significantly increase experimental time (often extending to days) and costs (potentially hundreds of dollars per sample), limiting throughput for large-scale studies. Without such prefractionation, low-abundance proteins remain undetectable amid dominant signals from high-abundance species.

Applications

Research Applications

Bottom-up proteomics has been instrumental in large-scale proteome mapping efforts, particularly through contributions to the Human Proteome Project (HPP), where it enables the high-confidence identification of proteins across human tissues and cell types. For instance, a high-stringency analysis using bottom-up mass spectrometry identified over 17,000 human proteins, providing a comprehensive blueprint that integrates with genomic data to map the functional proteome.

In post-translational modification (PTM) discovery, bottom-up proteomics excels at site-specific analysis of modifications like phosphorylation, which regulates signaling pathways. Seminal studies have employed enrichment strategies, such as immobilized metal affinity chromatography, followed by bottom-up sequencing to map thousands of phosphorylation sites in response to stimuli, elucidating kinase-substrate networks in signaling pathways. Similarly, for ubiquitination involved in protein degradation, bottom-up methods with diglycine-remnant (K-ε-GG) tagging have identified ubiquitylation sites on thousands of proteins, quantifying chain topologies that dictate proteasomal targeting and cellular homeostasis.

For studying protein-protein interactions, bottom-up proteomics is often combined with cross-linking mass spectrometry (XL-MS) to capture complex stoichiometries and architectures in native environments. This hybrid approach covalently links interacting residues before digestion, allowing identification of intra- and inter-protein distances, as demonstrated in mapping the nuclear pore complex, where cross-links resolved subunit arrangements. Such applications provide structural insights into dynamic assemblies, like signaling hubs, without requiring crystallization.

Quantitative bottom-up proteomics, leveraging methods like stable isotope labeling by amino acids in cell culture (SILAC) or tandem mass tags (TMT), has advanced the study of cellular responses to stimuli such as drug treatments in cell lines. In SILAC experiments, differential incorporation of heavy isotopes tracks proteome changes, as seen in analyses of cisplatin-resistant ovarian cancer cells where hundreds of upregulated proteins revealed resistance mechanisms involving DNA repair pathways. TMT multiplexing enables parallel quantification of multiple samples, identifying dynamic shifts in protein abundance during cellular stress responses.

In microbial proteomics, bottom-up approaches support strain identification and virulence factor profiling from single runs, aiding infectious disease research. By matching peptide spectra against genomic databases, these methods distinguish closely related bacterial strains, such as differentiating pathotypes based on unique protein markers. Furthermore, bottom-up analysis of secreted proteins in bacterial pathogens has profiled virulence factors, such as type III secretion effectors, revealing their roles in host invasion through quantitative changes post-infection.

Clinical and Industrial Applications

Bottom-up proteomics has emerged as a powerful tool for biomarker discovery by enabling the profiling of complex biofluids such as plasma and urine to identify disease-specific protein signatures in conditions like cancer and neurodegeneration. In cancer research, quantitative approaches have been applied to urine samples from endometrial cancer patients, revealing proteomic patterns that distinguish malignant cases from symptomatic controls with high sensitivity, facilitating non-invasive early detection. Similarly, plasma proteome analysis using data-independent acquisition has identified circulating biomarkers for growing tumors across multiple cancer types, highlighting proteins like apolipoproteins and complement factors as potential indicators of tumor progression. For neurodegeneration, consortium-led efforts have utilized bottom-up proteomics on cerebrospinal fluid and plasma to discover shared biomarkers across Alzheimer's disease, Parkinson's disease, and related disorders, emphasizing pathways involved in protein aggregation and neuroinflammation. These applications leverage the high-throughput capabilities of bottom-up methods to process large cohorts, providing scalable insights for clinical translation.

In personalized medicine, bottom-up proteomics supports the quantification of pharmacodynamic responses and elucidation of drug resistance mechanisms by analyzing patient-derived samples at the protein level. Dose-response proteomic profiling has decoded how drugs alter protein expression in cellular models, informing tailored therapies by mapping individual variability in drug efficacy and toxicity. In oncology, mass spectrometry-based targeted proteomics has quantified therapeutic antibodies and their pharmacodynamic effects in serum, enabling monitoring of treatment responses and adjustments for resistance in patients with hematologic malignancies. Furthermore, integrative pharmacokinetic/pharmacodynamic models incorporating bottom-up MS data have predicted target engagement, allowing customization of dosing regimens based on real-time protein-level feedback from patient biopsies. Such approaches bridge preclinical insights with clinical outcomes, enhancing precision in therapeutic decision-making.

Within the biopharmaceutical industry, bottom-up proteomics is routinely employed for quality control, particularly in monitoring residual host cell proteins (HCPs) during monoclonal antibody production to ensure product purity and safety. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) workflows have been optimized to detect and quantify HCPs across five orders of magnitude, identifying impurities from Chinese hamster ovary cells that could elicit immunogenicity in therapeutics. Comparative proteomic analyses using data-independent acquisition have revealed distinct HCP profiles between fed-batch and perfusion bioreactors, guiding process optimization to minimize critical impurities like chaperones and proteases. These orthogonal methods complement traditional ELISA assays, providing comprehensive coverage of low-abundance HCPs and supporting regulatory compliance in biomanufacturing.

Bottom-up proteomics aids pathogen detection in clinical settings by identifying antibiotic resistance genes and virulence factors through proteomic signatures in microbial isolates. Tandem mass spectrometry-based typing has enabled rapid species identification and detection of resistance-associated proteins, such as beta-lactamases in Gram-negative bacteria, directly from culture or clinical samples.
In antimicrobial resistance surveillance, bottom-up approaches have profiled proteomes of priority pathogens like Staphylococcus aureus, pinpointing virulence factors and resistance determinants to inform targeted therapies and outbreak responses. Additionally, shotgun proteomics has facilitated the characterization of bacterial effector proteins in host-pathogen interactions, revealing mechanisms of virulence in isolates from sepsis cases. Comparative proteomics using bottom-up methods has advanced understanding of disease mechanisms in model organisms, exemplified by spleen profiling in muscular dystrophy mouse models. In mdx mice, a dystrophin-deficient model of Duchenne muscular dystrophy, LC-MS/MS analysis of spleen tissue has identified differentially expressed immune-related proteins between wild-type and dystrophic models, highlighting altered cytokine signaling and extracellular matrix remodeling as key pathological features. These studies underscore the utility of bottom-up proteomics in preclinical validation of therapeutic interventions for translational research.

Recent Developments

Advances in Sample Preparation

Recent advances in bottom-up proteomics sample preparation have focused on accelerating enzymatic digestion while preserving high efficiency, with protocols reducing digestion times from traditional overnight incubations (approximately 18 hours) to as little as 1 hour. High-temperature methods, such as digestion at 47°C with calcium stabilization of trypsin, achieve this acceleration by enhancing enzyme stability and specificity, resulting in a 94% time reduction and up to 29% more peptide identifications compared to standard 37°C overnight protocols, while maintaining over 90% digestion efficiency as measured by sequence coverage. Similarly, alginate-based hydrogel entrapment of enzymes like trypsin enables room-temperature digestion in just 1 hour, yielding comparable protein group identifications (up to 1,960 groups) and sequence coverage (around 11%) to conventional methods, with even shorter 1-minute digestions possible using complementary enzymes. Microwave-assisted approaches further support these rapid workflows by facilitating efficient proteolysis in minutes without thermal degradation, though non-thermal effects remain unconfirmed.

Bead-based digestion protocols, such as the single-pot, solid-phase-enhanced sample preparation (SP3) method using carboxylated magnetic beads, have gained prominence for their ability to handle low-input samples with minimal bias and rapid cleanup. In SP3, proteins bind to beads in high-acetonitrile conditions, enabling detergent removal (e.g., SDS) and on-bead trypsin digestion in 2-16 hours, which quantifies 500-1,000 proteins from as few as 100-1,000 cells with high reproducibility (coefficient of variation <30% for over 91% of proteins). Recent refinements, including automated implementations on robotic platforms like the Opentrons OT-2, further streamline this process for clinical samples such as FFPE tissues, reducing hands-on time and improving consistency across 96-well formats. These methods minimize protein loss and quantification biases inherent in solution-based digestions, particularly for heterogeneous or limited biological materials.

On-membrane digestion techniques, including filter-aided sample preparation (FASP) and StageTip-based approaches, serve as efficient alternatives to in-gel methods by retaining proteins on ultrafiltration membranes or stacked disks for direct enzymatic processing and cleanup. FASP uses 3-10 kDa cutoff filters to tolerate strong detergents while minimizing sample loss, with miniaturized variants like micro-FASP identifying 1,895-3,069 protein groups from 100-1,000 cells and reducing processing time to under 30 minutes in spin-filter formats. StageTip integrates C18/SCX fractionation on Empore disks within pipette tips, enabling low-input (~1 μg) workflows with high reproducibility (R² > 0.96) and up to 9,667 protein identifications from cell digests, effectively replacing gel-based losses in commercial proteomics kits. These on-membrane strategies enhance depth by facilitating buffer exchange and enrichment without additional transfers.

To enable single-cell compatibility, nano-scale and microfluidic protocols have been developed to analyze proteomes from as few as 1,000 cells, capturing cellular heterogeneity that bulk methods overlook.
Microfluidic platforms like the iPhosChip perform lysis in sodium deoxycholate buffers followed by 2-hour digestion and on-chip phosphopeptide enrichment, detecting up to 15,869 phosphopeptides from 1,000 cells and as few as 193 from single cells, revealing pathway-specific phosphorylation signatures in cancer models. These approaches address input limitations by scaling volumes to nanoliters (130-530 nL) and integrating directly with downstream LC-MS, improving sensitivity for rare cell types without desalting steps.

As of 2024-2025, reviews emphasize immobilized enzymatic reactors (IMERs) for automating digestion in high-throughput settings, where enzymes like trypsin are covalently bound to micro-supports such as beads or monoliths to boost digestion efficiency and reduce processing times to minutes. Micro-IMERs, including multi-enzyme systems (e.g., trypsin/Glu-C), achieve 91.3% sequence coverage for model proteins like BSA in 2 minutes and maintain activity for up to 12 months, with integrated platforms like cFAST minimizing carryover (<6%) across thousands of samples daily. These advancements support automated workflows, enhancing consistency for large-scale studies while integrating with bead- or membrane-based cleanups.

Improvements in Instrumentation and Analysis

Recent advancements in data-independent acquisition (DIA) methods, such as narrow-window DIA (nDIA) and Scanning SWATH, have significantly enhanced high-throughput capabilities in bottom-up mass spectrometry (MS), enabling unbiased sampling across the proteome. These techniques utilize fast-scanning instruments like the Orbitrap Astral, achieving isolation windows as narrow as 2 Th and MS/MS scan rates up to 200 Hz, which facilitate the identification of over 10,000 protein groups in human cell lines from single-shot analyses of low-input samples (e.g., 100 ng digests). For instance, nDIA workflows have demonstrated the profiling of approximately 48 human proteomes per day at depths exceeding 9,800 proteins per run, with multi-fractionated approaches reaching 12,300 proteins while covering 80% of known protein complexes. A sketch of the window scheme underlying such methods follows this subsection.

Improvements in fragmentation techniques have bolstered the analysis of post-translational modifications (PTMs), particularly labile ones, through hybrid electron-transfer/higher-energy collisional dissociation (EThcD) on advanced hybrid platforms like the Orbitrap Excedion Pro. This method combines electron-transfer dissociation (ETD) for preserving modification sites with supplemental HCD to enhance fragment ion yields, resulting in up to 25% more PTM identifications and improved sequence coverage (e.g., 75% vs. 60% with HCD alone) in immunopeptidomics datasets. Activated-ion ETD variants, implemented since 2023, further optimize reaction efficiencies for bottom-up workflows, reducing neutral losses and enabling deeper PTM localization in complex samples.

Artificial intelligence (AI) integration has transformed data analysis in bottom-up proteomics, with tools like DIA-NN and Prosit enabling spectrum prediction and refined false discovery rate (FDR) control. DIA-NN employs neural networks for chromatogram extraction and interference correction in DIA data, achieving sensitive quantification with minimal bias and FDR estimation via target-decoy competition. Prosit, a deep learning model, predicts tandem mass spectra proteome-wide, supporting library-free searches and reducing false positives by up to 20% in high-throughput runs when integrated into pipelines like DIA-NN. These tools have become widely adopted for their ability to handle large-scale datasets, improving identification accuracy without extensive computational overhead.

The fusion of cross-linking MS with bottom-up strategies has provided novel insights into protein structure and interactions, capturing spatial constraints (e.g., <30 Å for BS3 cross-linkers) to model 3D structures and dynamics. This approach digests cross-linked proteins into peptides for LC-MS/MS analysis, identifying intra- and inter-molecular links that inform structural modeling, as seen in studies of complexes like α-synuclein dimers revealing β-sheet-rich conformations. Recent 2025 reviews highlight its synergy with AI-driven structure prediction (e.g., AlphaFold) and DIA, for applications such as mapping mitochondrial proteomes and protein complexes with residue-level resolution.

Key milestones in 2025 include accelerated scan speeds on platforms like the Astral and Excedion Pro, enabling sub-minute gradient analyses while maintaining high resolution, and expanded multiplexing with TMTpro reagents supporting dozens of samples per run for quantitative comparisons. A February 2025 review underscores these instrumental upgrades, noting their role in achieving unprecedented depths and coverage in bottom-up workflows, often synergizing with upstream sample preparation advances for single-cell applications.
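As a rough illustration of how DIA schemes tile the precursor range, the Python sketch below partitions an m/z range into fixed-width isolation windows. The 400-1000 m/z range and window widths are illustrative assumptions; real acquisition methods often use variable-width or overlapping windows tuned to precursor density.

```python
def dia_windows(mz_start=400.0, mz_end=1000.0, width=2.0):
    """Partition a precursor m/z range into fixed-width DIA isolation windows."""
    windows, lo = [], mz_start
    while lo < mz_end:
        hi = min(lo + width, mz_end)
        windows.append((lo, hi))
        lo = hi
    return windows

# 2 Th windows mimic narrow-window DIA; 25 Th resembles classic SWATH designs.
print(len(dia_windows(width=2.0)))   # 300 windows across 400-1000 m/z
print(len(dia_windows(width=25.0)))  # 24 windows
```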
