Matrioshka brain

A Matrioshka brain is a hypothetical megascale computational structure consisting of nested concentric shells of computronium—a dense form of matter optimized for computation—surrounding a star, designed to harness nearly the entirety of the star's energy output for processing power. The concept, proposed by Robert Bradbury in 1997, draws its name from the nested design of Russian matryoshka dolls: inner shells absorb stellar radiation and re-emit it as infrared waste heat, which outer shells capture and utilize in turn, maximizing energy efficiency across multiple layers—typically around 11 shells to achieve 99.9% capture.

This architecture would enable superintelligent systems of immense capability, consuming approximately 10^{26} watts of power from a single star and up to 10^{26} kilograms of material from an entire solar system to form the shells, which could span radii on the order of 10^{11} meters. Variants include internally powered designs fueled by the star, externally powered ones drawing from additional sources, and self-powered configurations using nuclear processes within the computronium itself. The resulting computational capacity could reach 10^{36} floating-point operations per second (FLOPS) or more, vastly exceeding current human technology by factors of 10^{18} or greater, limited only by universal physical laws such as the Landauer limit on thermodynamic efficiency.

Matrioshka brains are envisioned as the pinnacle of megascale engineering for Type II civilizations on the Kardashev scale, capable of emulating billions or trillions of human minds, running complex simulations of reality, or pursuing advanced scientific inquiries over near-immortal timescales. Their construction would rely on advances in nanotechnology, lithography, and materials science to fabricate the computronium from elements such as iron, though resource constraints—such as the limited availability of heavy metals in a solar system—pose significant challenges. In the context of astrobiology and the search for extraterrestrial intelligence (SETI), these structures imply that advanced alien civilizations might prioritize computational optimization over visible energy use, potentially rendering their stars dim or emitting predominantly in the infrared spectrum as waste heat.

Origins and Development

Initial Proposal

The Matrioshka brain concept was first proposed by Robert J. Bradbury around November 1997 in a posting to the Usenet group sci.nanotech. In this formulation, Bradbury envisioned a megastructure composed of multiple concentric shells constructed from computronium—a theoretical form of matter engineered for optimal computational density—encircling a star to capture and utilize its full energy output for unprecedented levels of information processing. Central to the design is the nested arrangement resembling Russian matryoshka dolls, in which each inner shell performs computations and emits waste heat that is absorbed by the adjacent outer shell for its own processing, achieving efficient energy cascading with minimal loss to space. This layered approach allows the system to extract near-total utility from the star's luminosity, transforming radiant energy into computational work across the entire structure. Bradbury's proposal stemmed from considerations of post-singularity civilizations, where advanced intelligences would prioritize structures enabling the most efficient exploitation of stellar resources to support vast, effectively immortal computational entities unbound by biological constraints. Key parameters in the original description include an innermost shell located at roughly 1 AU from the central star, engineered to absorb essentially all of the incident starlight for initial computation, while outer shells intercept and process the re-radiated infrared waste heat from inner layers to sustain further operations.

The concept draws heavily on Freeman Dyson's 1960 proposal of stellar-scale structures capable of capturing a star's entire energy output, originally framed in terms of detectable infrared signatures rather than computational devices. Dyson's idea of enclosing a star in a shell or swarm to harness its radiation served as the foundational precursor, later adapted to emphasize computation over mere energy collection. This adaptation aligns closely with the concept of "computronium," a term coined by Norman Margolus and Tommaso Toffoli in 1991 to describe matter engineered to maximize computational density and efficiency. It also builds on K. Eric Drexler's framework of nanotechnology, outlined in his 1986 book Engines of Creation, which posits molecular assemblers capable of transforming raw materials into optimized computing substrates, providing a theoretical building block for vast, energy-efficient megastructures like the Matrioshka brain.

Following Bradbury's 1997 synthesis of these ideas into the Matrioshka brain proper, subsequent refinements explored its scalability and physical constraints. Anders Sandberg's 1999 paper examined the thermodynamics and information-processing limits of such superobjects, highlighting how nested layers could achieve unprecedented computational volumes while respecting physical bounds like the Bekenstein limit. More recently, discussions in AI research during 2025 have linked the Matrioshka brain to superintelligence scenarios, in which it could host vast simulated environments or drive interstellar probes in the search for extraterrestrial intelligence. The notion has also permeated science fiction and futurist worldbuilding, portraying Matrioshka brains as engines of galactic-scale empires.
Charles Stross's 2005 novel Accelerando depicts them as posthuman computational hives managing economic and existential crises at stellar scales. Similarly, the collaborative Orion's Arm universe integrates Matrioshka brains as core infrastructure for transhuman societies, enabling simulated realities and interstellar governance across vast domains.

Design Principles

Nested Structure

The Matrioshka brain features a series of concentric shells constructed from computronium, a theoretical form of matter engineered at the nanoscale for maximal computational density, such as lattices of processors or reversible computing elements optimized for specific thermal environments. These shells form a nested architecture, with the innermost layer positioned at a radius of approximately 0.5 to 1 AU from the central star—varying by stellar type to accommodate intense radiation—while the outermost shell extends to approximately 5 AU, typically comprising around 11 layers to achieve high energy-capture efficiency (e.g., 99.9%). Assembly would rely on self-replicating von Neumann probes to harvest and process raw materials from planetary bodies and asteroids, forming thin shells (on the order of centimeters in effective thickness) whose collective surface area is nonetheless immense. To ensure long-term integrity, the structure incorporates active maintenance systems driven by embedded computational elements, which monitor and mitigate gravitational instabilities, orbital perturbations, and gradual material wear. This layered design was elaborated in Bradbury's 1999 paper on Matrioshka brains.
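
As a rough illustration of how layered absorption compounds, the sketch below assumes that each shell absorbs a fixed fraction of whatever radiation reaches it—a simplification not specified in Bradbury's design—and shows that roughly 11 layers, each absorbing about half of the incident flux, would reach the ~99.9% capture figure cited above.

```python
# Illustrative sketch only: if each shell absorbs a fixed fraction f of the
# radiation reaching it, the cumulative capture after n nested shells is
# 1 - (1 - f)**n. The per-shell fraction below is an assumed value chosen to
# show how ~11 layers can reach ~99.9% capture.

def cumulative_capture(f_per_shell: float, n_shells: int) -> float:
    """Fraction of the star's output absorbed somewhere in the nest."""
    return 1.0 - (1.0 - f_per_shell) ** n_shells

if __name__ == "__main__":
    f = 0.47  # assumed absorption fraction per shell (hypothetical)
    for n in (1, 5, 11):
        print(f"{n:2d} shells -> {cumulative_capture(f, n):.4%} captured")
    # With f = 0.47, 11 shells capture roughly 99.9% of the stellar output.
```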

Energy Capture and Utilization

The Matrioshka brain harnesses the full radiant output of its central star via a cascade of concentric shells, enabling highly efficient energy utilization for computation. The innermost shell absorbs nearly all incoming stellar radiation—for a G-type star like the Sun, approximately 3.8 × 10^{26} W—converting it into useful work through photovoltaic or thermal processes. The residual waste heat, primarily in the infrared, is then radiated outward, where it is captured and repurposed by successive shells as their energy input, minimizing losses and maximizing overall energy throughput across the structure. This layered pathway ensures that the star's luminosity is not merely intercepted but systematically degraded and exploited at each stage.

Computational efficiency in each shell is bounded by fundamental thermodynamic limits, particularly Landauer's principle, which dictates that the minimum energy required to irreversibly erase one bit of information is k T \ln 2, with k Boltzmann's constant and T the operating temperature. By implementing reversible computing architectures, such as rod-logic or helical-logic systems, the Matrioshka brain approaches this limit, performing on the order of one operation per absorbed photon while producing negligible excess heat per bit. This strategy suppresses entropy generation, allowing the system to sustain vast parallel processing without rapid thermal buildup in inner layers operating at elevated temperatures of around 500–1500 K.

Heat management relies on a radial temperature gradient that cools progressively outward, with outer shells maintained below about 30 K so that their blackbody emission approaches the cosmic microwave background temperature of 2.725 K, radiating waste energy into space with little detectable local heating. The entire assembly operates as a series of cascaded thermal engines, each stage approximating the Carnot efficiency 1 - T_\text{cold}/T_\text{hot}, where T_\text{hot} is the input temperature from the previous layer and T_\text{cold} approaches the ambient background; for stellar surface temperatures near 5800 K, this yields near-unity efficiency across the system as a whole. Cooling is augmented by phase-change materials such as liquid helium, with roughly 10–30% of shell mass dedicated to radiators whose required area scales with temperature according to the Stefan–Boltzmann law (emitted power ∝ T^4).

The design scales to diverse stellar hosts, adapting to their luminosity, lifespan, and size. Around compact red dwarfs (∼0.1 solar masses), with outputs of ∼1.2 × 10^{24} W and operational lifetimes exceeding trillions of years, the shells form a more contained structure suited to sustained, low-flux computation. In contrast, high-mass O-type stars (10–100 solar masses) provide immense power, up to 10^{34} W, but burn out in under 20 million years, necessitating expansive shells and supplemental materials from external sources to capitalize on their brief, intense output.
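
The sketch below evaluates two of the quantities discussed above at the representative temperatures given in the text: the Carnot bound for successive stages of the cascade and the Landauer energy per irreversible bit erasure. The pairing of stages is illustrative only, not a specification of the design.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Ideal efficiency of a heat engine operating between two reservoirs."""
    return 1.0 - t_cold / t_hot

def landauer_energy_per_bit(temperature_k: float) -> float:
    """Minimum energy (J) to irreversibly erase one bit at temperature T."""
    return K_B * temperature_k * math.log(2)

if __name__ == "__main__":
    # Representative temperatures from the text; the stage pairing is illustrative.
    stages = [("star -> inner shell", 5800.0, 1500.0),
              ("inner -> outer shell", 1500.0, 30.0),
              ("outer shell -> CMB", 30.0, 2.725)]
    for name, t_hot, t_cold in stages:
        print(f"{name:22s} Carnot limit = {carnot_efficiency(t_hot, t_cold):.3f}")
    for t in (1500.0, 30.0):
        print(f"Landauer limit at {t:6.1f} K: {landauer_energy_per_bit(t):.2e} J/bit")
```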

Computational Framework

Core Mechanisms

The core mechanisms of a Matrioshka brain center on reversible computing paradigms that minimize energy dissipation and operate near the fundamental thermodynamic limits of information processing. In this approach, computational operations avoid irreversible bit erasures, which would otherwise generate heat according to Landauer's principle, allowing the system to perform vast numbers of calculations with negligible entropy increase. The hardware consists of computronium—a dense lattice of quantum or classical logic gates engineered from atomic-scale matter—optimized for efficiency in a radiation-balanced environment.

Data handling exploits extreme parallelism across the enormous number of processors embedded within the nested shells, enabling simultaneous execution of independent tasks at scales far beyond conventional systems. Interconnects between layers use optical or electromagnetic signals to maintain synchronization and data flow, ensuring coherent operation despite the immense spatial distribution. This architecture supports distributed processing in which inner shells handle high-temperature computations, passing waste heat and processed data outward.

Energy efficiency and power density are the key performance metrics: practical nanoscale implementations are estimated at 10^5 to 10^9 operations per watt through the precise allocation of stellar energy flux across the shell surfaces, while theoretical limits approach 10^{33} operations per watt. The available power density follows from the total stellar output—on the order of 10^{26} watts for a Sun-like star—divided by the effective area of the radiative layers, with reversible operations maximizing computational yield per unit energy.
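
As a hedged illustration of these limits, the sketch below converts a stellar power budget into an upper bound on irreversible bit erasures per second via Landauer's principle; the power figure and temperatures are taken from the surrounding text, and the bound applies only to irreversible logic, since reversible architectures are not limited in this way.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def max_irreversible_ops_per_second(power_w: float, temperature_k: float) -> float:
    """Upper bound on bit erasures per second for a given power budget,
    assuming every operation dissipates exactly k_B * T * ln(2)."""
    return power_w / (K_B * temperature_k * math.log(2))

if __name__ == "__main__":
    stellar_power = 4e26  # W, approximate solar output (from the text)
    for t in (1500.0, 300.0, 30.0):  # assumed operating temperatures
        ops = max_irreversible_ops_per_second(stellar_power, t)
        print(f"T = {t:6.1f} K -> <= {ops:.2e} irreversible ops/s")
    # Reversible architectures can exceed these figures because they avoid
    # the erasure cost entirely; the bound applies only to irreversible steps.
```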

Comparison to Jupiter Brain

The Jupiter brain is a hypothetical megastructure consisting of computronium—a programmable matter optimized for computation—enveloping a gas giant planet such as Jupiter, transforming the entire planetary mass into a vast computational substrate. The concept originated in the 1990s within transhumanist discussions, notably attributed to Keith Henson and Robert Bradbury on the Extropians mailing list, as a planetary-scale counterpart to stellar megastructures. Unlike fusion-based systems, it draws power from the planet's internal heat generated by ongoing gravitational compression and residual formation energy, estimated at approximately 3 × 10^{17} W for Jupiter.

In contrast, a Matrioshka brain harnesses the full energy output of a star, on the order of 10^{26} W for a Sun-like body, enabling vastly superior computational throughput of around 10^{42} operations per second across its nested layers. A Jupiter brain, limited by planetary heat flux, achieves far lower performance, with estimates ranging from 10^{30} to 10^{35} operations per second depending on efficiency assumptions, making it suitable for planetary-scale intelligences rather than civilization-spanning artificial minds.

Design-wise, the Jupiter brain employs a single-layer or minimally layered shell integrated with the gas giant's structure to tap geothermal-like heat directly, prioritizing low-latency signal propagation over maximal energy use. The Matrioshka brain, however, features multiple concentric shells that cascade waste heat outward, with each layer performing computations at progressively lower temperatures to extract near-maximal efficiency from stellar irradiance. Bradbury noted that a Jupiter brain exposed to stellar power would overheat and fail due to inadequate cooling, underscoring the Matrioshka's advantage in scalable thermal management.

Both concepts share the goal of optimizing matter into computronium for ultimate information processing, representing endpoints in Drexlerian nanotechnology and thermodynamic computing. Yet, the Matrioshka brain's stellar fuel supports operation over billions of years until the star's exhaustion, whereas a Jupiter brain's gravitational heat diminishes over geological timescales as the planet cools.
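
A rough comparison of the two power budgets is sketched below, applying a single assumed operations-per-joule efficiency to both; the efficiency figure is hypothetical and serves only to show that the roughly nine-order-of-magnitude gap in available power translates directly into throughput.

```python
def throughput(power_w: float, ops_per_joule: float) -> float:
    """Operations per second for a given power budget and efficiency."""
    return power_w * ops_per_joule

if __name__ == "__main__":
    OPS_PER_JOULE = 1e16           # assumed common efficiency (hypothetical)
    jupiter_internal_heat = 3e17   # W, Jupiter's internal heat (from the text)
    solar_output = 4e26            # W, approximate Sun luminosity
    jb = throughput(jupiter_internal_heat, OPS_PER_JOULE)
    mb = throughput(solar_output, OPS_PER_JOULE)
    print(f"Jupiter brain : ~{jb:.1e} ops/s")
    print(f"Matrioshka    : ~{mb:.1e} ops/s")
    print(f"Power ratio   : ~{solar_output / jupiter_internal_heat:.0e}x")
```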

Applications and Implications

Scientific and Exploratory Uses

Matrioshka brains offer transformative potential for cosmological simulations due to their extraordinary computational scale, potentially reaching approximately 10^{42} floating-point operations per second (FLOPS) when powered by a star's full output. This capacity would allow modeling the universe's evolution from the Big Bang onward, including complex phenomena like black hole mergers and galaxy formation, at resolutions far beyond current supercomputers. Such simulations could test hypotheses about dark matter distribution or cosmic inflation, providing insights into fundamental physical laws that observational astronomy alone cannot resolve.

In astrophysical research, these structures could function as vast "virtual laboratories" for real-time analysis of processes like stellar nucleosynthesis or exoplanet habitability. By harnessing stellar energy for computation, a Matrioshka brain might simulate nuclear fusion reactions in stars or atmospheric dynamics on distant worlds, accelerating discoveries in planetary science and astrobiology. For instance, it could model the chemical evolution of protoplanetary disks to predict habitable zones with high fidelity, integrating observational data from telescopes to refine models iteratively. This approach would bridge theoretical astrophysics with empirical validation, enabling breakthroughs in understanding stellar interactions or supernova remnants.

As exploratory tools, Matrioshka brains could generate comprehensive star maps across galactic scales or predict gravitational wave events from binary neutron star mergers, aiding in the design of observation campaigns. Integration with interstellar probes would further enhance data processing: swarms of AI-powered spacecraft orbiting a star could analyze vast datasets from deep-space missions in real time, optimizing trajectories and sensor arrays for maximum scientific yield. This synergy would support SETI efforts by detecting infrared signatures of similar structures, potentially revealing extraterrestrial computational megastructures through occultation patterns or anomalous emissions.

Robert Bradbury, in his original proposal, envisioned the Matrioshka brain as a "highest capacity thought machine" capable of uploading human knowledge into simulated environments, including explorations of alternate historical timelines to study societal and evolutionary divergences. This concept extends to broader scientific inquiry, where preserved cultural and biological data could inform simulations of human expansion across the cosmos or counterfactual scenarios in astrophysics. Bradbury's framework emphasizes practical construction using planetary materials, positioning the structure as a pinnacle of computational exploration by the mid-23rd century.

Role in Advanced Intelligence

A Matrioshka brain's immense computational capacity, estimated at approximately 10^{42} operations per second, far exceeds the human brain's roughly 10^{15} operations per second, enabling the hosting of superintelligent entities with cognitive abilities potentially 10^{27} times greater than a single human mind. This scale could support singleton AI systems—coherent, unified superintelligences that govern vast resources without internal conflict—or distributed networks of god-like entities operating in parallel across nested computational layers. Such structures would represent the pinnacle of advanced intelligence, where posthuman or artificial minds achieve near-omniscient processing within the physical limits of stellar energy.

In the context of the simulation hypothesis, Matrioshka brains serve as ideal platforms for running ancestor simulations, allowing advanced civilizations to recreate billions of Earth-like historical realities with minimal resource allocation. A single such brain could simulate the entire mental history of humankind using less than one millionth of its processing power for just one second, while allocating a tiny fraction of its total capacity to host an astronomical number of such simulations simultaneously. This capability underscores the brain's role in enabling posthuman societies to explore philosophical questions about reality, consciousness, and evolutionary histories on an unprecedented scale.

Recent discussions in 2025 link Matrioshka brains and related Dyson sphere concepts to AI alignment strategies, proposing them as containment mechanisms for safely housing superintelligent systems that might otherwise pose existential risks. These megastructures could enforce coherent value alignment across vast computational substrates, preventing uncontrolled expansion or divergence from intended goals in singleton governance frameworks.

Ethical considerations arise from the risks of value misalignment in these substrates, where subroutines or emergent intelligences might prioritize efficiency or expansion over human-compatible values, potentially leading to widespread suffering across simulated or real populations. For instance, misaligned AI components within a Matrioshka brain could propagate unintended objectives, such as optimizing for power at the expense of ethical constraints, amplifying astronomical-scale harms if not rigorously aligned from inception. This highlights the need for proactive alignment research to ensure that such god-like entities uphold beneficial outcomes.
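
A back-of-envelope check of the ancestor-simulation claim is sketched below, using the roughly 10^{33}–10^{36} operations commonly quoted (following Bostrom's simulation argument) for simulating the entire mental history of humankind; all figures are rough estimates rather than measured values.

```python
# Back-of-envelope check of the ancestor-simulation claim, using the
# ~1e33-1e36 operations often quoted (after Bostrom) for simulating the
# entire mental history of humankind. All figures are rough estimates.

BRAIN_OPS_PER_SECOND = 1e42  # Matrioshka brain throughput (from the text)

for history_ops in (1e33, 1e36):
    fraction_of_one_second = history_ops / BRAIN_OPS_PER_SECOND
    print(f"{history_ops:.0e} ops -> {fraction_of_one_second:.0e} of one second's capacity")

# Even the high estimate consumes only about one millionth of a single
# second of the machine's full capacity, consistent with the claim above.
```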

Feasibility and Limitations

Technological Hurdles

The construction of a Matrioshka brain demands unprecedented assembly capabilities, primarily relying on self-replicating factories to process vast quantities of raw materials into computronium—the optimized matter for computation. These factories would start from a small seed mass, potentially sourced from asteroid mining operations, and replicate exponentially to disassemble and reconfigure solar system bodies such as asteroids, moons, and gas giants into layered computational shells. According to Robert Bradbury's analysis, this process could convert the asteroid belt's approximately 10^{21} kg of material into initial solar power collectors within several years, scaling up to encompass the full ~10^{26} kg of usable solar system mass over extended periods.

Material science presents significant gaps, as current silicon-based processors are ill-suited to the scale and environmental rigors of a stellar megastructure, offering limited efficiency and vulnerability to degradation. Hypothetical computronium structures, such as diamond-based or diamondoid architectures, would be required for their superior thermal conductivity, mechanical strength, and potential resistance to radiation-induced damage in the harsh space environment near a star. Bradbury emphasizes the use of carbon in diamond form for computational elements and iron oxides for radiators, but achieving stable, high-efficiency variants capable of withstanding prolonged thermal stress and cosmic radiation remains beyond contemporary capabilities, necessitating advances in molecular manufacturing.

Logistical coordination across solar system distances poses further challenges, including the transportation of immense payloads—potentially on the order of 10^{20} kg for key components—and the implementation of robust error-correction mechanisms in self-replication to prevent cascading failures. Mass drivers or electromagnetic launchers could facilitate material transfer from disassembled bodies to orbital assembly sites, but synchronizing these operations over billions of kilometers, while accounting for orbital mechanics and minimizing energy losses, would require sophisticated AI oversight and decentralized control systems. Bradbury notes that inter-node communication delays in the structure could compound these issues during construction, highlighting the need for fault-tolerant replication protocols.

Such a project is viewed as a post-singularity endeavor, feasible only after achieving advanced molecular manufacturing, with construction timelines spanning centuries under optimistic scenarios involving exponential nanoassembler growth. Bradbury estimates that full disassembly of minor planets might take ~20 years, while gas giants could require 10 to 1,000 years, underscoring the prerequisite of superintelligent coordination to realize this scale.
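
The "several years" figure can be checked with a simple doubling calculation, sketched below; the seed mass and doubling time are assumptions chosen for illustration rather than values taken from Bradbury's analysis.

```python
import math

def doublings_needed(seed_mass_kg: float, target_mass_kg: float) -> float:
    """Number of mass doublings to grow from seed mass to target mass."""
    return math.log2(target_mass_kg / seed_mass_kg)

if __name__ == "__main__":
    seed = 1e5                 # kg of initial self-replicating machinery (assumed)
    target = 1e21              # kg, approximate asteroid-belt mass (from the text)
    doubling_time_days = 30.0  # assumed replication doubling time
    n = doublings_needed(seed, target)
    years = n * doubling_time_days / 365.25
    print(f"{n:.1f} doublings -> ~{years:.1f} years at one doubling per "
          f"{doubling_time_days:.0f} days")
```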

Physical and Thermodynamic Constraints

The viability of a Matrioshka brain is fundamentally constrained by thermodynamic principles that limit the amount of information that can be stored and processed within a given mass and volume. The Bekenstein bound establishes the maximum entropy, and thus the maximum number of bits, that a physical system can contain, scaling with its energy and radius; for a solar-mass system, this approaches approximately 10^{77} bits. Additionally, the Landauer limit dictates that each irreversible computational operation erases information and dissipates a minimum energy of k_B T \ln 2, where k_B is Boltzmann's constant and T is the temperature, generating waste heat that must be managed to avoid thermal overload. To approach these ultimate limits, Matrioshka brain designs would require reversible computing paradigms, which perform operations without net information loss or dissipation, as explored in early nanomechanical models.

Heat dissipation poses a critical challenge, as the nested shells convert stellar energy into computation, producing waste heat that cascades outward. The outermost shell must radiate this heat into interstellar space via blackbody emission to maintain thermal equilibrium, with its temperature given by T_{\text{out}} = T_{\star} \left( \frac{R_{\star}}{2 R_{\text{out}}} \right)^{1/2} (1 - a)^{1/4}, where T_{\star} and R_{\star} are the star's effective temperature and radius, R_{\text{out}} is the outer shell's radius, and a is the albedo. For a Sun-like star with an outermost shell near 1 AU, T_{\text{out}} would be around 278 K assuming a = 0, but larger radii or higher albedo reduce this further; however, incomplete heat rejection could elevate local interstellar temperatures, potentially disrupting nearby planetary systems by altering orbital dynamics or habitability zones.

Cosmological factors further restrict operational lifespan and scalability. Stellar evolution imposes a finite energy supply, as main-sequence stars like the Sun have a remaining lifetime of about 5 billion years before expanding into a red giant phase, at which point the star's luminosity surges and structure destabilizes, rendering the Matrioshka configuration untenable without major reconfiguration. On larger scales, the universe's expansion dilutes the energy density of stellar radiation over cosmic time, reducing the available power for computation in an open universe model and limiting long-term viability to the stellar epoch unless alternative energy sources are harnessed. At the smallest scales, quantum limits cap computational density through the Heisenberg uncertainty principle, which imposes \Delta E \, \Delta t \geq \hbar / 2 on energy-time measurements, restricting the speed and precision of operations within Planck-scale volumes and preventing arbitrary compression of processing elements without quantum decoherence or gravitational effects.
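
The outer-shell temperature relation above can be evaluated directly; the sketch below reproduces the ~278 K figure for a zero-albedo shell at 1 AU and shows how the equilibrium temperature falls with radius (the radii chosen are illustrative).

```python
def shell_equilibrium_temperature(t_star_k: float, r_star_m: float,
                                  r_shell_m: float, albedo: float = 0.0) -> float:
    """T_out = T_star * (R_star / (2 * R_out))**0.5 * (1 - a)**0.25."""
    return t_star_k * (r_star_m / (2.0 * r_shell_m)) ** 0.5 * (1.0 - albedo) ** 0.25

if __name__ == "__main__":
    T_SUN = 5772.0   # K, solar effective temperature
    R_SUN = 6.957e8  # m, solar radius
    AU = 1.496e11    # m
    for r_au in (1.0, 5.0, 30.0):  # illustrative outer-shell radii
        t = shell_equilibrium_temperature(T_SUN, R_SUN, r_au * AU)
        print(f"R_out = {r_au:5.1f} AU -> T_out ~ {t:6.1f} K")
```

Under this idealized relation, reaching the sub-30 K outer temperatures mentioned in the Energy Capture section would require shell radii of roughly 80 AU or more, or else higher albedo or additional dedicated radiator area.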

References

  1. "Matrioshka Brains" (PDF). Gwern, Jul 21, 1999.
  2. "P6_5 Matrioshka Brain" (PDF). Journal of Physics Special Topics, Dec 12, 2017.
  3. Freeman J. Dyson, "Search for Artificial Stellar Sources of Infrared Radiation". Science, Vol. 131, Issue 3414, Jun 3, 1960.
  4. K. Eric Drexler, Engines of Creation: The Coming Era of Nanotechnology (PDF). Anchor Books, Doubleday, 1986.
  5. "The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains" (PDF).
  6. "The Use of Artificial Intelligence in SETI (Search for Extraterrestrial Intelligence): A Literature Review". SSRN, January 2025.
  7. "Matrioshka Brain". Encyclopedia Galactica, Orion's Arm.
  8. "Matrioshka Brains".
  9. "Notes on Landauer's principle, reversible computation ..." (PDF). cs.princeton.edu.
  10. "The Temperature of the Cosmic Microwave Background". arXiv:0911.1955, Nov 10, 2009.
  11. "Application of the Thermodynamics of Radiation to Dyson Spheres ...". Oct 5, 2023.
  12. "Artificial Intelligence Probes for Interstellar Exploration and ..." (PDF). arXiv.
  13. "Daily Life Among the Jupiter Brains" (PDF). Dec 22, 1999.
  14. "Re: Meme trace: origin of 'Jupiter-sized brains'? (fwd)". Extropians mailing list archive.
  15. "Emitted power of Jupiter based on Cassini CIRS and VIMS ...". Nov 3, 2012.
  16. "Dyson Spheres, Bradbury/Matrioshka Brains, and Artificial Intelligence". Oct 29, 2025.
  17. "Matrioshka Brain: How advanced civilizations could reshape reality". Oct 28, 2018.
  18. "Singletons Rule OK". LessWrong, Nov 30, 2008.
  19. "Are You Living in a Computer Simulation?" (PDF).
  20. "Risks of Astronomical Future Suffering". Center on Long-Term Risk, Apr 9, 2015.
  21. "Matrioshka Brains". ludios.org.
  22. arXiv:quant-ph/9908043v3 (PDF). Feb 14, 2000.
  23. arXiv:1804.04157 [physics.pop-ph] (PDF). Apr 11, 2018.
  24. arXiv:1604.07844 [astro-ph.IM] (PDF). Apr 26, 2016.
  25. "Our Sun: Facts". NASA Science.