
Granularity

Granularity refers to the scale or fineness of division in a representation, model, system, or dataset, where finer granularity involves smaller units or more precise distinctions, while coarser granularity aggregates elements into larger, less detailed components. The concept of information granulation was first introduced by Lotfi Zadeh in 1979. This concept, rooted in the partitioning of structures into granules—clusters or subsets of entities—enables analysis at varying scales and is fundamental to handling complexity across disciplines. In parallel computing, granularity often describes the balance between computation and communication in a system, with coarse-grained approaches featuring larger tasks that minimize inter-process interactions for efficiency, and fine-grained ones involving smaller tasks that may increase overhead but allow for greater parallelism. It also underpins granular computing (GrC), a paradigm that processes information through aggregates called granules, mimicking human cognition by grouping similar data for problem-solving in areas like rough sets and fuzzy systems. In data management and databases, granularity defines the precision of stored data, such as transaction-level details versus aggregated summaries, influencing query performance, storage requirements, and analytical insights in data warehouses. Within modeling and simulation, granularity measures the level of detail captured in models, affecting fidelity, accuracy, and computational cost; for instance, finer model granularity enhances accuracy but increases computational expense. In philosophical and spatio-temporal reasoning, it relates to indiscernibility and context-dependent judgments, where the chosen level of granularity determines whether properties, locations, or relations are distinguished or approximated, as seen in hierarchical spatial or temporal partitions like calendars or geographic regions. Overall, selecting appropriate granularity optimizes trade-offs between detail, computational cost, and interpretability, making it a core principle in interdisciplinary applications from physical modeling to data management.

Conceptual Foundations

Definition and Scope

Granularity refers to the relative fineness or coarseness of a representation, model, or dataset within a system, denoting the scale at which information or components are divided or aggregated. This concept captures the degree of detail, where fine granularity emphasizes small, precise units—such as individual elements or high-resolution measurements—while coarse granularity focuses on larger, summarized aggregates that simplify complexity. Across disciplines, it serves as a foundational tool for balancing detail and abstraction in analysis and modeling. The term originates from "granular," a derivative of the Latin granum meaning "grain" or "small particle," evoking the idea of breaking down wholes into discrete, particle-like units. The noun "granularity" emerged in English in the late nineteenth century, with its earliest documented scientific usage appearing in 1882 in a botanical translation describing the granular texture of cell structures. By the 20th century, the concept gained interdisciplinary traction, particularly in fields requiring scalable representations of complex systems, evolving from literal graininess to abstract measures of detail. Granularity operates across hierarchical scales, often distinguished as micro-granularity for intricate, low-level details and macro-granularity for overarching, high-level summaries. Selecting an appropriate scale entails trade-offs: finer granularity typically yields greater accuracy and specificity but escalates computational or cognitive costs due to increased data volume and processing demands, whereas coarser granularity promotes efficiency at the expense of nuance. These dynamics underscore granularity's role in managing complexity without losing essential insights. Neutral examples illustrate this scope effectively. In cartographic representations, zoom levels adjust granularity, where a low zoom offers a coarse overview and higher zooms reveal fine street-level details. Likewise, material compositions vary by particle granularity, from fine powders to coarse aggregates, influencing properties such as texture and flow. This foundational notion also informs philosophical extensions into precision and ambiguity, where granularity delineates boundaries between exactness and interpretive flexibility.
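The cartographic example can be made concrete with a short sketch, assuming the common Web Mercator "slippy map" tiling convention (256-pixel tiles, $2^z \times 2^z$ tiles at zoom level $z$); the function names below are illustrative and do not come from any particular mapping library.

```python
# Sketch of how zoom level controls cartographic granularity in the common
# Web Mercator ("slippy map") tiling scheme: each zoom step quadruples the
# number of tiles and halves the ground distance covered by one pixel.
import math

EARTH_CIRCUMFERENCE_M = 2 * math.pi * 6378137  # WGS84 equatorial circumference
TILE_SIZE_PX = 256                             # conventional tile edge in pixels

def tiles_at_zoom(z: int) -> int:
    """Total number of map tiles covering the world at zoom level z."""
    return (2 ** z) ** 2

def metres_per_pixel(z: int, latitude_deg: float = 0.0) -> float:
    """Approximate ground resolution of one pixel at a given zoom and latitude."""
    return (EARTH_CIRCUMFERENCE_M * math.cos(math.radians(latitude_deg))
            / (TILE_SIZE_PX * 2 ** z))

for z in (0, 5, 10, 15, 19):
    print(f"zoom {z:2d}: {tiles_at_zoom(z):>14,d} tiles, "
          f"~{metres_per_pixel(z):>10.2f} m/pixel at the equator")
```

Each zoom step refines the previous partition of the map, quadrupling the number of granules while shrinking the area each one covers.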

Precision and Ambiguity

In philosophy, granularity pertains to the level of precision in conceptual representations, where finer-grained descriptions capture more distinctions while coarser ones abstract away details, often leading to vagueness in borderline cases. This interplay is central to the sorites paradox, an ancient puzzle attributed to Eubulides of Miletus, which illustrates how incremental changes at a fine granularity can undermine coarse-grained categorizations. For instance, removing a single grain from a heap of sand does not destroy its status as a heap, yet repeating the process indefinitely suggests that even a single grain constitutes a heap, exposing the tension inherent in shifting from precise, granular observations to broader, ambiguous predicates like "heap." Such paradoxes arise because varying granularity in descriptions—fine enough to track each grain versus coarse enough to classify the aggregate—generates interpretive challenges, as the same reality yields conflicting truths depending on the chosen scale. Logical frameworks addressing vagueness emphasize how granularity influences truth values, particularly at conceptual boundaries. Supervaluationism, developed by Kit Fine, treats vague predicates as having multiple admissible precisifications, each a fine-grained sharpening of the predicate; a statement is true if it holds across all such sharpenings, false if it fails across all, and indeterminate otherwise, thus preserving classical logic while accommodating granularity-induced ambiguity. In contrast, epistemicism, defended by Timothy Williamson, posits that vague terms possess sharp boundaries unknown to us due to cognitive limits, such that granularity affects our epistemic access to truth values rather than their existence—borderline cases reflect ignorance, not indeterminacy, resolving sorites-like arguments by insisting on bivalence despite apparent tolerance principles. Williamson's view attributes vagueness to epistemic limitations, where sharp cutoffs exist but are unknowable, perpetuating uncertainty in borderline cases. In ethics and law, granularity amplifies ambiguity, as fine-grained analyses of individual actions or circumstances can clash with coarse-grained principles, creating dilemmas. Ethical theories often grapple with this through granular considerations of intent and context versus broad rules like universal maxims, leading to ambiguity in applying norms to borderline moral scenarios, such as assessing harm in incremental environmental decisions. Similarly, in law, vague statutes—such as those defining "reasonable" or "obscene"—require judges to navigate granularity in statutory language and precedents, where coarser interpretations promote flexibility but risk inconsistency, while finer ones enhance precision and predictability at the cost of flexibility. This tension underscores how granularity contributes to interpretive challenges, demanding mechanisms like judicial discretion to resolve ambiguity without eroding legal certainty.

Scientific Applications

In Physics

In physics, granularity refers to the transition from continuum approximations, which model matter as a smooth, continuous medium, to discrete models that account for the particulate nature of substances at finer scales. This distinction is particularly evident in fluid dynamics, where macroscopic flows are often described using continuum equations like the Navier-Stokes equations, but at smaller scales—approaching the mean free path of molecules—the inherent granularity of the fluid leads to deviations requiring particle-based approaches for accurate prediction. In quantum physics, granularity manifests at the fundamental limits imposed by quantum mechanics and gravity, with the Planck scale marking the regime where spacetime itself may exhibit discrete structure rather than continuity. The Planck length, approximately $1.616 \times 10^{-35}$ meters, represents this ultimate granularity limit, beyond which current theories of general relativity and quantum field theory break down, necessitating a theory of quantum gravity to describe phenomena. The Heisenberg uncertainty principle further illustrates the trade-offs in achieving fine-grained measurements, stating that the product of uncertainties in position and momentum satisfies $\Delta x \, \Delta p \geq \frac{\hbar}{2}$, where $\hbar$ is the reduced Planck constant; this inequality enforces an inherent fuzziness, preventing simultaneous precise knowledge of position and momentum and highlighting the quantized, non-continuous nature of quantum states. On cosmological scales, the universe exhibits a hierarchy of granularity, spanning from subatomic particles to vast clusters of galaxies, which emerges from quantum fluctuations in the early universe during inflation. In standard $\Lambda$CDM models, initial density perturbations of amplitude around $10^{-5}$ relative to the mean density seed the growth of structure through gravitational instability, forming filaments, walls, and voids that define the large-scale cosmic web, with granularity at each level influencing the distribution and clustering of matter. Experimentally, the limits of granularity resolution in physics are probed by instruments like transmission electron microscopes (TEMs), which achieve atomic-scale imaging by leveraging electron wavelengths on the order of 0.002–0.004 nm. For instance, aberration-corrected TEMs can resolve features down to about 0.5 Å (0.05 nm) or better, allowing visualization of individual atomic positions in materials, though practical limits arise from factors such as sample damage, beam coherence, and lens aberrations, which can prevent sub-angstrom precision in routine use.
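As a numerical illustration of the limits just described, the following sketch (the author's own, using CODATA constant values) evaluates the Planck length from $\ell_P = \sqrt{\hbar G / c^3}$ and the minimum momentum uncertainty allowed by the Heisenberg bound for a 1 nm confinement; it simply reproduces the $1.616 \times 10^{-35}$ m figure quoted above.

```python
# Illustrative calculation of two granularity limits discussed above:
# the Planck length and the Heisenberg bound Δx·Δp ≥ ħ/2.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# Planck length: l_P = sqrt(hbar * G / c^3)
planck_length = math.sqrt(hbar * G / c ** 3)
print(f"Planck length ≈ {planck_length:.3e} m")   # ≈ 1.616e-35 m

# Minimum momentum uncertainty for a particle confined to Δx = 1 nm
delta_x = 1e-9
delta_p_min = hbar / (2 * delta_x)
print(f"Δp ≥ {delta_p_min:.3e} kg·m/s for Δx = 1 nm")
```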

In Molecular Dynamics and Chemistry

In molecular dynamics (MD) simulations, granularity refers to the level of detail in modeling atomic and molecular interactions, primarily through choices in temporal and spatial resolution. MD, pioneered by Berni Alder and Thomas Wainwright in the late 1950s, involves numerically solving Newton's equations of motion for a system of interacting particles to study their dynamic behavior over time. The method originated from their 1957 simulations of hard-sphere systems on an IBM 704 computer, marking the birth of computational techniques for probing the microscopic origins of macroscopic properties in fluids and solids. Temporal granularity in MD is dictated by the time step size, typically on the order of 1-2 femtoseconds (fs) for all-atom simulations to accurately capture high-frequency bond vibrations, such as those in C-H stretches around 3000 cm⁻¹. Larger steps risk numerical instability and energy drift, while smaller ones enhance precision but increase computational cost. Spatial granularity is managed via cutoff radii for non-bonded interactions, often set at 1.0-1.2 nm (10-12 Å), beyond which forces are truncated or approximated to balance accuracy with efficiency in pairwise calculations. These parameters allow simulations to model phenomena like diffusion and phase transitions while approximating long-range electrostatics through methods like particle mesh Ewald summation. In chemical applications, granularity enables the study of reaction kinetics by varying model resolution from fine-grained all-atom representations to coarser groupings of atoms. All-atom MD provides detailed insights into atomic-scale events, such as protein folding pathways, where simulations track individual residue interactions over microseconds to capture conformational changes driven by hydrophobic effects and hydrogen bonding. For instance, Anton supercomputer runs have simulated the folding of small proteins like the WW domain in explicit solvent, revealing folding times of 10-100 μs. Coarse-grained models, by contrast, group multiple atoms into "beads" to access longer timescales and larger systems, facilitating the exploration of reaction kinetics in complex environments like lipid membranes. A key trade-off in these approaches is the computational expense of fine granularity versus the efficiency of coarse models. All-atom simulations demand roughly 10¹²-10¹⁵ floating-point operations per nanosecond of simulated time for systems of ~10⁵ atoms, limiting routine access to long-timescale dynamics without specialized hardware. Coarse-graining reduces this by factors of 10-100 in both time and space, enabling simulations of mesoscale processes like self-assembly, though at the cost of losing atomic detail. The Martini force field exemplifies this, mapping roughly four heavy atoms per bead to simulate biomolecular systems up to microseconds, as validated in studies of lipid membranes and proteins. Advances in multi-scale modeling integrate fine and coarse granularities to bridge quantum, atomistic, and continuum descriptions, allowing simulations of chemical reactions across scales. For example, quantum mechanics/molecular mechanics (QM/MM) couples quantum calculations for reactive regions (e.g., bond breaking in enzyme active sites) with classical force fields for the surrounding environment, achieving femtosecond resolution where needed. These methods, evolving since the 1970s alongside MD's foundations, have enabled detailed kinetic studies, such as proton transfer rates in water, by dynamically adjusting granularity based on local chemistry.
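The role of the two granularity parameters discussed above—the integration time step and the non-bonded cutoff—can be seen in a toy MD loop. The sketch below is a schematic illustration in reduced Lennard-Jones units, not a production MD engine or any specific package's API; the particle count, box size, and parameter values are arbitrary choices for demonstration.

```python
# Minimal sketch (reduced Lennard-Jones units) showing where the temporal
# granularity (time step dt) and spatial granularity (cutoff r_cut) enter
# a velocity-Verlet MD loop.
import numpy as np

rng = np.random.default_rng(0)
box = 10.0                               # cubic box edge (LJ units)
dt, r_cut = 0.002, 2.5                   # the two granularity knobs
grid = np.mgrid[0:4, 0:4, 0:4].reshape(3, -1).T
pos = grid * (box / 4) + 1.25            # 64 particles on a simple cubic lattice
vel = rng.normal(0.0, 0.1, pos.shape)    # small random initial velocities
n = len(pos)

def lj_forces(pos):
    """Pairwise Lennard-Jones forces truncated at r_cut (spatial granularity)."""
    f = np.zeros_like(pos)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)      # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        mask = r2 < r_cut ** 2            # interactions beyond the cutoff are dropped
        inv6 = (1.0 / r2[mask]) ** 3
        coef = 24.0 * (2.0 * inv6 ** 2 - inv6) / r2[mask]
        fij = coef[:, None] * d[mask]
        f[i] -= fij.sum(axis=0)
        f[i + 1:][mask] += fij
    return f

forces = lj_forces(pos)
for _ in range(100):                      # finer dt -> more steps per unit of simulated time
    vel += 0.5 * dt * forces
    pos = (pos + dt * vel) % box
    forces = lj_forces(pos)
    vel += 0.5 * dt * forces

print("mean kinetic energy per particle:", 0.5 * np.mean(np.sum(vel ** 2, axis=1)))
```

Shrinking `dt` or enlarging `r_cut` in this sketch improves fidelity but multiplies the arithmetic per unit of simulated time, which is the trade-off the surrounding text describes.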

Computing Applications

Parallel and Distributed Computing

In parallel and distributed computing, granularity refers to the size of the computational tasks into which a problem is decomposed for execution across multiple processors, directly influencing the balance between computation time and communication overhead. Fine-grained parallelism involves dividing work into small tasks, such as individual loop iterations or data elements, which allows for high parallelism but incurs significant communication costs due to frequent data exchanges between processors. In contrast, coarse-grained parallelism assigns larger tasks, like entire subdomains or modules, to processors, reducing communication frequency and overhead while potentially limiting parallelism if tasks are imbalanced. The choice of granularity is critical for optimizing performance, as overly fine tasks amplify overhead from synchronization and messaging, whereas coarse tasks may underutilize resources if computation times vary widely. Amdahl's law provides a foundational limit on speedup from parallelism, emphasizing the impact of granularity on the serial and parallel fractions of a program. The law states that the maximum speedup $S$ with $p$ processors is bounded by $S \leq \frac{1}{f + \frac{1-f}{p}}$, where $f$ is the fraction of the program that must run serially. This equation highlights how even small serial components ($f > 0$) constrain overall gains, making coarse-grained approaches preferable for workloads with inherent sequential bottlenecks, while fine-grained methods suit highly parallelizable portions but demand efficient communication to avoid diminishing returns. In parallel architectures, granularity manifests differently in message-passing and shared-memory models. The Message Passing Interface (MPI) supports coarse-grained parallelism through explicit domain decomposition, where large data partitions are distributed across nodes with infrequent, bulk communications, ideal for distributed systems like high-performance computing (HPC) clusters. Conversely, OpenMP facilitates fine-grained parallelism via compiler directives for shared-memory multithreading, enabling loop-level parallelization with minimal explicit synchronization, though it is limited to single-node or small-scale multicore environments. For instance, in HPC workloads such as weather simulation using the Weather Research and Forecasting (WRF) model, MPI handles coarse-grained atmospheric domain partitioning across thousands of nodes for global scalability, while OpenMP adds fine-grained threading within nodes for loop-level computations such as physics schemes. Uneven granularity often leads to load imbalance, where processors finish tasks at different rates, idling faster ones and reducing efficiency. This challenge is pronounced in heterogeneous workloads, such as irregular data accesses in simulations, where static partitioning fails to equalize computation times. Strategies like dynamic task partitioning address this by redistributing work at runtime; for example, work-stealing schedulers in frameworks such as Cilk allow idle threads to "steal" tasks from busy ones, adapting granularity on the fly to maintain balance without excessive overhead. The historical evolution of granularity in parallel computing traces from the 1980s vector processors, such as those built by Cray, which exploited fine-grained data-level parallelism through SIMD instructions for vectorized computations in scientific applications, to modern graphics processing units (GPUs) that schedule thousands of coarse- and fine-grained threads for massive throughput. Early vector systems emphasized instruction-level granularity to achieve high performance on linear algebra tasks, but scalability was limited by serial control flow.
By the 2000s, GPUs introduced programmable shaders supporting diverse granularities, from fine-grained per-thread operations to coarse-grained kernel launches, enabling broader adoption in HPC and underscoring granularity's role in achieving near-linear scaling on exascale systems. This progression has consistently prioritized tunable task sizes to mitigate Amdahl's limits and communication bottlenecks in increasingly distributed architectures. The trend continues into the exascale era with systems like Frontier (deployed 2022), which achieved over 1.2 exaflops using heterogeneous CPU and GPU nodes that require carefully tuned task granularity for efficient scaling.
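A few lines of arithmetic make Amdahl's bound from the preceding paragraphs concrete; the serial fractions and processor counts below are arbitrary illustrative values.

```python
# Quick numerical illustration of Amdahl's law, S(p) = 1 / (f + (1 - f) / p),
# showing how the serial fraction f caps speedup regardless of task granularity.
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for f in (0.01, 0.05, 0.25):
    row = ", ".join(f"p={p}: {amdahl_speedup(f, p):8.2f}x"
                    for p in (4, 64, 1024, 65536))
    print(f"serial fraction f={f:.2f} -> {row}")

# Even with f = 0.05, speedup saturates near 1/f = 20x as p grows, which is why
# coarse-grained decompositions try to shrink f while fine-grained ones must
# keep per-task overhead from inflating it.
```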

Reconfigurable and High-Performance Computing

In reconfigurable computing, granularity refers to the size and flexibility of the basic computational units within devices like field-programmable gate arrays (FPGAs), where fine-grained architectures employ small logic elements such as look-up tables (LUTs) operating at the bit level for versatile, general-purpose reconfiguration, while coarse-grained reconfigurable arrays (CGRAs) use larger, specialized units like arithmetic logic units (ALUs) or multipliers to handle data-parallel tasks more efficiently with reduced routing overhead. Fine-grained designs excel in applications requiring irregular logic, such as cryptographic algorithms whose bit-level manipulations demand high configurability, whereas coarse-grained approaches optimize for throughput in compute-intensive operations by minimizing overhead and power consumption. This distinction allows reconfigurable hardware to balance adaptability and performance, enabling scalable acceleration tailored to application demands. In high-performance computing (HPC) supercomputers, as ranked by the TOP500 list, granularity manifests at the node level through the degree of intra-node parallelism, where systems balance high core counts per node—often numbering in the dozens in modern designs—with interconnect overhead to maximize computational throughput while minimizing communication between nodes. For instance, increasing node-level granularity by integrating multi-core processors reduces the ratio of inter-node data transfers, enhancing overall system efficiency in large-scale simulations. Appropriate task decomposition at the software level serves as a prerequisite for exploiting this granularity, ensuring workloads align with hardware capabilities. Optimization techniques in reconfigurable HPC leverage partial reconfiguration to enable dynamic adjustment of granularity, allowing subsets of an FPGA to be updated without halting the entire system, which supports adaptation to varying computational needs and improves resource utilization. Metrics such as operations per unit of reconfigurable resource—often measured in giga-operations per second (GOPS) per logic cell or processing element—quantify functional density, with coarse-grained units typically achieving higher values than fine-grained counterparts for arithmetic-heavy tasks due to specialized datapaths. In the Cray XT series of the late 2000s, such as the XT4 and XT5 models, quad-core AMD Opteron processors were employed, improving energy efficiency through better power scaling and reduced interconnect demands while delivering sustained petaflop-scale performance in scientific workloads like climate modeling. These systems demonstrated that optimal granularity tuning could lower power consumption per flop by balancing computation against communication and memory constraints.
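The operations-per-resource metric can be illustrated with a toy calculation; all throughput, clock, and resource figures below are hypothetical placeholders, not measurements of any real FPGA or CGRA design.

```python
# Toy comparison (hypothetical numbers, for illustration only) of the
# "operations per reconfigurable resource" idea: throughput divided by the
# logic resources an implementation occupies.
from dataclasses import dataclass

@dataclass
class Implementation:
    name: str
    ops_per_cycle: float     # useful operations completed each clock cycle
    clock_mhz: float         # achievable clock frequency
    resource_units: float    # logic cells / processing elements consumed

    def gops(self) -> float:
        """Throughput in giga-operations per second."""
        return self.ops_per_cycle * self.clock_mhz / 1e3

    def gops_per_unit(self) -> float:
        """Functional density: throughput per unit of reconfigurable resource."""
        return self.gops() / self.resource_units

designs = [
    # Hypothetical fine-grained datapath built from LUT-level logic.
    Implementation("fine-grained (LUT fabric)", ops_per_cycle=4,
                   clock_mhz=250, resource_units=2000),
    # Hypothetical coarse-grained array of word-wide ALU/multiplier blocks.
    Implementation("coarse-grained (ALU array)", ops_per_cycle=16,
                   clock_mhz=400, resource_units=1200),
]
for d in designs:
    print(f"{d.name:28s} {d.gops():6.2f} GOPS, "
          f"{d.gops_per_unit() * 1000:7.3f} MOPS per resource unit")
```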

Information and Data Applications

Data Granularity

In data management, granularity refers to the level of detail at which data is stored and accessed in databases, with fine-grained data representing individual records or row-level details, such as specific transaction entries, while coarse-grained data involves aggregated summaries, like totals computed across multiple rows. Fine-grained storage enables precise querying but demands greater computational resources and storage space due to the volume of records, whereas coarse-grained approaches reduce storage needs and accelerate summary queries at the expense of detail. For instance, in SQL, the GROUP BY clause facilitates aggregation to achieve coarser granularity by grouping rows based on specified columns and applying aggregate functions like SUM or AVG; a query such as SELECT department, AVG(salary) FROM employees GROUP BY department computes average salaries per department, transforming fine-grained employee records into a summarized result set. In big data environments, granularity influences partitioning strategies in frameworks like Hadoop and Apache Spark, where datasets are divided into partitions to enable distributed processing; finer partitioning by attributes such as timestamps or user IDs supports targeted queries but can lead to data skew and increased overhead, while coarser partitioning by broader categories like date ranges optimizes parallelism and load balancing. These trade-offs extend to privacy considerations, as coarser granularity in aggregations enhances protection under mechanisms like differential privacy, which adds noise to query outputs to prevent inference of individual records—for example, releasing county-level statistics instead of household-level data reduces privacy risks in large-scale analyses without compromising overall utility. In analytics, particularly within online analytical processing (OLAP) systems, data granularity is dynamically managed through multidimensional cubes that store facts at varying levels of aggregation, allowing users to perform drill-down operations to increase detail (e.g., from yearly to monthly sales) or roll-up operations to coarsen it (e.g., from daily to quarterly summaries). This hierarchical navigation supports efficient exploration of trends, such as identifying regional performance variations, by precomputing aggregates at multiple granularities to balance query speed and accuracy. Data warehousing standards, such as Ralph Kimball's dimensional modeling introduced in the 1990s, emphasize defining granularity hierarchies early in design to ensure consistent fact tables; for example, a warehouse might establish daily transaction-level granularity as the atomic unit, with hierarchies ascending to weekly or monthly metrics for reporting, preventing inconsistencies in downstream analyses. This approach, detailed in Kimball's dimensional modeling techniques, promotes scalable storage and querying by aligning data detail with business requirements, such as hourly versus daily metrics in inventory tracking.
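A minimal sketch of the fine-versus-coarse contrast using Python's built-in sqlite3 module is shown below; the employees table, its columns, and its values are illustrative inventions echoing the GROUP BY example above.

```python
# Minimal sqlite3 sketch of the granularity shift described above: fine-grained
# row-level records versus a coarse-grained GROUP BY aggregation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "Engineering", 95000), ("Grace", "Engineering", 105000),
     ("Alan", "Research", 88000), ("Edsger", "Research", 92000)],
)

# Fine granularity: every individual record is visible.
print(conn.execute("SELECT name, department, salary FROM employees").fetchall())

# Coarse granularity: one summarized row per department.
print(conn.execute(
    "SELECT department, AVG(salary) FROM employees GROUP BY department"
).fetchall())
conn.close()
```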

Information Systems and Theory

In information theory, granularity refers to the level of resolution or partitioning of the outcome space, which directly influences the measurement of entropy and the efficiency of source coding. The Shannon entropy, defined as $H(X) = -\sum_{i} p_i \log_2 p_i$, where $p_i$ are the probabilities of outcomes of a discrete random variable $X$, quantifies the average uncertainty or information content per outcome. Finer granularity, achieved by refining partitions of the outcome space (e.g., increasing the number of distinguishable outcomes), leads to higher measurable entropy, because coarser partitions merge probabilities, reducing the effective alphabet size and thus underestimating the information content. This relationship is formalized in results showing that if partition $R$ refines partition $Q$, then the entropy under $R$ is at least as large as under $Q$, as refinement preserves or increases the information captured. In source coding, Shannon's source coding theorem establishes that the entropy provides a lower bound on the average code length for lossless compression; finer granularity thus demands higher bit rates to avoid information loss, as it expands the source alphabet and elevates the entropy. Granular computing extends these principles to knowledge representation in artificial intelligence, providing a framework for handling uncertainty through multilevel abstractions. Originating from Zadeh's fuzzy set theory introduced in 1965 and building on his 1996 work on computing with words, granular computing was further formalized in Zadeh's 1997 paper on fuzzy information granulation. It builds on rough set theory, introduced by Pawlak in 1982, which models vagueness via indiscernibility relations that partition data into equivalence classes, enabling granular approximations of concepts without precise boundaries. In AI systems, this approach facilitates hierarchical knowledge representation by allowing reasoning at multiple granularity levels: finer granules capture detailed attributes for precise inference, while coarser ones support generalization and reduce computational complexity in tasks like decision-making or pattern recognition. For instance, rough sets approximate sets using lower and upper bounds, mirroring human-like tolerance for imprecision in knowledge bases. In knowledge representation, particularly for the Semantic Web, granularity manifests in the structure of ontologies, where the choice of representation level balances expressivity and scalability. RDF triples—subject-predicate-object statements—provide the finest granularity, encoding atomic facts such as "Paris (subject) isCapitalOf (predicate) France (object)," enabling detailed, machine-readable assertions. Ontologies aggregate these into classes and properties at coarser levels, such as defining a class of countries that encompasses many individual instances, which supports inference and querying while abstracting complexity. This multilevel approach, informed by granular computing, allows dynamic adjustment of detail: finer RDF-based representations enhance precision in knowledge graphs, whereas aggregated ontological classes promote scalability in distributed systems like the Semantic Web. Seminal work on ontology-driven information systems highlights how such granularity enables retrieval at varying scales, from triple-level facts to high-level conceptual hierarchies. Applications in compression algorithms illustrate granularity's practical impact on system performance. In the JPEG standard, images are divided into 8x8 pixel blocks for discrete cosine transform (DCT) processing, where this block size represents the granularity of spatial decomposition.
Finer blocks (smaller than 8x8) would capture more local detail, improving quality in textured areas but increasing overhead from quantization and encoding and leading to larger file sizes; conversely, coarser blocks reduce compression efficiency in non-uniform regions and introduce visible blocking artifacts. The 8x8 choice optimizes for typical image characteristics, balancing perceptual quality against storage constraints, as validated in the original JPEG design. Data aggregation techniques serve as practical implementations of these principles in broader information systems.
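The refinement property of entropy noted above can be checked numerically; the distribution below is an arbitrary example, and merging its outcomes pairwise plays the role of coarsening the partition.

```python
# Small check of the refinement property stated above: merging outcomes of a
# distribution (coarsening the partition) can only lower, never raise, the
# Shannon entropy H(X) = -sum_i p_i log2 p_i.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Fine-grained partition: eight distinguishable outcomes.
fine = [0.25, 0.20, 0.15, 0.10, 0.10, 0.10, 0.05, 0.05]

# Coarser partition: merge the outcomes pairwise into four granules.
coarse = [fine[i] + fine[i + 1] for i in range(0, len(fine), 2)]

print(f"H(fine)   = {shannon_entropy(fine):.4f} bits")   # higher
print(f"H(coarse) = {shannon_entropy(coarse):.4f} bits") # lower or equal
```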
