Introduction
For decades, the trajectory of computing has followed a singular path: shrink the transistor, increase the clock speed, and stack more layers of silicon. This paradigm, governed by Moore’s Law, delivered exponential performance gains until it didn’t. As we approach the physical limits of semiconductor miniaturization and confront the staggering energy costs of training large language models, a fundamentally different approach has emerged from Melbourne, Australia.
Cortical Labs, founded in 2019 by Hon Weng Chong, has built what many consider the world’s first commercial biological computer: the CL1. This device integrates approximately 800,000 living human neurons grown from stem cells derived from adult donor blood or skin samples onto a multi-electrode array within a self-contained life-support system. The neurons are not metaphors or simulations; they are real, living cells that fire electrical impulses, form synaptic connections, and learn from their environment.
What makes this story particularly compelling in 2026 is not just the hardware itself, but what independent developers have begun building on top of it: a bridge between biological neural networks and large language models (LLMs), creating a hybrid system where living neurons influence how an AI model selects its next token.
1. The Architecture of the CL1: A Body in a Box
The CL1 is a self-contained biological computing unit that requires no external computer to operate. Each unit houses roughly 800,000 human neurons cultivated on a chip known as a multi-electrode array (MEA), which consists of 59 electrodes capable of both stimulating neurons and recording their electrical responses.
The neurons are derived from induced pluripotent stem cells (iPSCs), mature cells reprogrammed back to an embryonic-like state, sourced from real adult donors. Inside the CL1, a sophisticated life-support system manages:
- Temperature regulation to maintain optimal cellular conditions
- Gas mixing (CO₂/O₂ balance) for metabolic stability
- Nutrient delivery through continuous perfusion of growth media
- Waste filtration to remove metabolic byproducts
This environment keeps the neurons viable and electrically active for up to six months. The entire system draws approximately 30 watts per unit, less than a handheld calculator, according to Cortical Labs CEO Hon Weng Chong. A full rack of 30 CL1 units consumes between 850 and 1,000 watts, compared with the roughly 6,000 watts drawn by a single GPU server in a typical AI data center.
The software layer managing the biological substrate is called biOS (Biological Intelligence Operating System). biOS creates a simulated environment, relays information to the neurons via electrical stimulation patterns, and interprets their responses to close the feedback loop, a paradigm rooted in Karl Friston’s Free Energy Principle.
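The stimulate-record-interpret loop that biOS closes can be sketched as a toy closed loop. Everything below is illustrative: the function names are invented stand-ins, not the real biOS or Cortical Cloud API, the 59-electrode count is the only detail taken from the text, and the living culture is replaced by a random spiking stub.

```python
"""Toy closed-loop sketch of the biOS paradigm (hypothetical names throughout)."""
import numpy as np

N_ELECTRODES = 59
rng = np.random.default_rng(0)

def encode_state(ball_y):
    """Map a game-state variable onto per-electrode stimulation amplitudes
    (a simple place code; the real encoding scheme is not public)."""
    stim = np.zeros(N_ELECTRODES)
    stim[int(ball_y * (N_ELECTRODES - 1))] = 1.0
    return stim

def stimulate_and_record(stim):
    """Stub for the culture: spike counts loosely coupled to the stimulus."""
    return rng.poisson(1.0 + 3.0 * stim)

def decode_action(spikes):
    """Compare activity in two electrode groups to extract a motor command."""
    up = spikes[: N_ELECTRODES // 2].sum()
    down = spikes[N_ELECTRODES // 2 :].sum()
    return 1 if up > down else -1

# One pass around the loop: encode -> stimulate -> record -> decode -> act.
stim = encode_state(ball_y=0.8)
action = decode_action(stimulate_and_record(stim))
print(action)  # +1 (move up) or -1 (move down)
```

In the real system the chosen action would change the simulated environment, and the predictability of the next stimulus would serve as feedback, which is where the Free Energy Principle enters.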
2. From Pong to Doom: The Learning Curve of Living Neurons
The intellectual lineage of the CL1 traces back to Cortical Labs’ landmark 2022 experiment: DishBrain. Published in the journal Neuron, this study demonstrated that in vitro neuronal cultures could learn to play the classic video game Pong within a closed-loop environment. The neurons received electrical stimulation patterns representing the game state and responded with signals that controlled the paddle. Remarkably, the system learned to play in approximately five minutes.
The DishBrain experiment was significant not merely as a demonstration but as an experimental validation of the Free Energy Principle, the theoretical framework proposed by neuroscientist Karl Friston suggesting that all living systems act to minimize surprise (or “free energy”) in their environment. The neurons weren’t programmed to play Pong; they self-organized to reduce unpredictable stimulation, which effectively meant learning the game.
Fast forward to late February 2026, and independent developer Sean Cole achieved something far more ambitious: teaching CL1 neurons to play Doom, the iconic 1993 first-person shooter that requires navigating a complex 3D environment, identifying enemies, and making real-time combat decisions. Cole, who had essentially no prior experience in biological computing, completed the implementation using Cortical Labs’ Python-based Cortical Cloud API in under a week. The neurons learned to navigate and engage enemies within approximately one week, a dramatic compression relative to the DishBrain research effort, which tackled a far simpler game.
The progression from Pong (2D, single-axis movement) to Doom (3D, multi-variable decision-making) represents a substantial jump in task complexity, showing that biological neural networks can scale to significantly more demanding cognitive tasks than initially demonstrated.
3. BioLLM: When an LLM Starts Thinking With Living Neurons
Perhaps the most provocative development in the CL1 ecosystem is the BioLLM project, an experimental initiative by an independent developer operating under the GitHub handle 4R7I5T. The project, documented in the CL1_LLM_Encoder repository, represents the first known attempt to integrate living human neurons directly into the inference pipeline of a large language model.
How It Works
The system architecture consists of three interconnected layers:
The Biological Layer (CL1): Approximately 200,000 human neurons on the multi-electrode array serve as the biological processing substrate. The developer accesses the CL1 remotely through Cortical Labs’ Wetware-as-a-Service (WaaS) cloud platform at $300 per unit per week.
The Language Model: A relatively small 350-million-parameter language model generates token candidates during inference. Unlike massive models such as GPT-4 or Claude, this compact model makes the hybrid approach computationally feasible for experimental purposes.
The Encoder Bridge: A custom-built encoder translates the model’s internal state and token candidates into electrical stimulation patterns delivered to the CL1’s electrode array. When the neurons respond, their activity patterns are read back and used to re-weight the model’s token probability distribution.
In practical terms, when the LLM generates a set of possible next tokens, the encoder converts these candidates into spatiotemporal stimulation patterns. The neurons process these patterns, integrating them with their ongoing activity and learned biases, and produce a response. This neural response is then decoded and applied as a modulation layer on top of the LLM’s original probability distribution, effectively allowing the biological network to push and pull the model’s word choices.
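The re-weighting step described above can be sketched in a few lines. This is a minimal illustration under one assumption of ours: that the decoded neural response arrives as a single scalar bias per candidate token. It does not reflect the actual CL1_LLM_Encoder code.

```python
"""Sketch of biasing an LLM's next-token distribution with a decoded
neural response (illustrative only; not the CL1_LLM_Encoder implementation)."""
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())  # subtract max for numerical stability
    return z / z.sum()

def modulate_logits(llm_logits, neural_bias, gain=0.5):
    """Add the scaled biological response on top of the model's logits,
    then renormalise into a probability distribution."""
    return softmax(llm_logits + gain * neural_bias)

llm_logits = np.array([2.0, 1.5, 0.3])    # scores for three candidate tokens
neural_bias = np.array([-1.0, 2.0, 0.0])  # decoded spike-rate deviations
probs = modulate_logits(llm_logits, neural_bias)
print(probs.argmax())  # prints 1: the bias flips the top choice from token 0
```

The `gain` parameter (our invention) would control how strongly the culture can pull against the model; at `gain=0` the system reduces to ordinary softmax sampling.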
What This Means
The BioLLM system does not replace the LLM’s reasoning; it augments it with a biological noise and bias layer. The neurons provide a form of stochastic modulation that is fundamentally different from the deterministic or pseudo-random sampling strategies typically used in language model inference (temperature scaling, top-k, nucleus sampling).
The developer’s stated goals extend beyond mere text generation. The CL1_LLM_Encoder project also aims to measure consciousness metrics, quantitative indicators of neural complexity and integration, as part of broader AI safety and sentience-testing research. While the system is explicitly experimental and the developer emphasizes no affiliation with Cortical Labs, the project opens a conceptual door that the AI community has only theorized about: a hybrid intelligence where biological and artificial neural networks co-process information.
4. Energy Economics: The Case for Biological Computing
One of the most compelling arguments for biological computing is energy efficiency. Training a frontier LLM today requires megawatts of power sustained over weeks or months. A single training run for a model like GPT-4 is estimated to consume energy equivalent to the annual electricity usage of thousands of homes.
Biological neurons, by contrast, operate on an entirely different energy scale:
| Parameter | CL1 (Single Unit) | Single Nvidia H100 GPU | CL1 Rack (30 Units) |
|---|---|---|---|
| Power Consumption | ~30 W | ~700 W | 850–1,000 W |
| Learning Speed | Minutes to days | Hours to months | — |
| Data Requirements | Minimal | Massive datasets | — |
| Adaptability | Real-time plasticity | Requires retraining | — |
| Operating Temperature | 37°C (body temp) | Up to 83°C | — |
The human brain, with its 86 billion neurons, operates on approximately 20 watts. While the CL1’s 800,000 neurons represent a tiny fraction of this biological computer, they already demonstrate the principle that biological substrates can achieve learning and adaptation at energy costs orders of magnitude below silicon-based alternatives.
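The table’s figures can be sanity-checked with simple arithmetic, using only the numbers quoted above:

```python
# Back-of-envelope check on the power figures from the table.
cl1_unit_w = 30                      # one CL1 unit
rack_units = 30
rack_w = cl1_unit_w * rack_units     # 900 W, inside the quoted 850-1,000 W range
h100_w = 700                         # one Nvidia H100 GPU

# Thirty H100s would draw roughly 23x the power of a thirty-unit CL1 rack.
ratio = (h100_w * rack_units) / rack_w
print(rack_w, round(ratio, 1))       # prints 900 23.3
```

This is a raw power comparison only; it says nothing about throughput per watt, which depends entirely on the workload.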
Cortical Labs has recognized this advantage at scale. In March 2026, the company announced the construction of the world’s first biological data centers: a 120-unit facility in Melbourne and a smaller prototype installation at the National University of Singapore’s Yong Loo Lin School of Medicine in partnership with DayOne Data Centers. These facilities will serve as proof-of-concept for a future where biological computing supplements or potentially replaces traditional GPU clusters for specific workloads.
5. Theoretical Foundations: Free Energy, Active Inference, and Biological Intelligence
The theoretical underpinning of Cortical Labs’ approach is the Free Energy Principle (FEP), formulated by Karl Friston at University College London. FEP posits that all self-organizing systems, from single cells to complex organisms, act to minimize variational free energy, which can be understood as the difference between a system’s predictions about its environment and the actual sensory input it receives.
In the context of the CL1, the biOS software creates a simulated environment and delivers sensory information to the neurons via electrical stimulation. The neurons, following the imperatives of FEP, self-organize to minimize surprise: forming connections, adjusting firing patterns, and developing internal models of the environment they inhabit. This is not programming in the traditional sense; it is learning through thermodynamic and information-theoretic principles that govern all living systems.
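In standard notation (our addition, following Friston’s formulation rather than anything specific to biOS), variational free energy for beliefs q(s) about hidden states s, given observations o, can be written:

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big]}_{\text{approximation error}}
  \; - \; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Because the KL term is non-negative, F is an upper bound on surprise (the negative log evidence, $-\ln p(o)$); a system that drives F down therefore makes its sensory input less surprising, which is exactly the behavior the closed-loop feedback rewards.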
The DishBrain experiment provided empirical validation of this framework: neurons that received structured feedback (reduced randomness when they performed “correctly” in Pong) showed measurably increased organization and predictive capacity. Those that received only random stimulation did not.
This theoretical foundation distinguishes the CL1 from purely neuromorphic computing approaches (such as Intel’s Loihi or IBM’s TrueNorth), which mimic neural architectures in silicon but lack the intrinsic self-organizing properties of biological tissue. The CL1 doesn’t simulate a neural network; it is one.
6. Ethical Dimensions: Consciousness, Consent, and Moral Status
The integration of living human neurons into computational systems raises ethical questions that no previous technology has necessitated. Cortical Labs has been notably proactive in addressing these concerns: the company’s first published paper was an ethics paper, predating the DishBrain technical publication.
Chief Scientific Officer Brett Kagan has consistently emphasized that these neuronal networks are not conscious. The 200,000 to 800,000 neurons in a CL1 unit, while capable of learning and adaptation, represent a minuscule fraction of the brain’s 86 billion neurons and lack the structural complexity (cortical layers, specialized regions, long-range connectivity) associated with consciousness.
However, the ethical landscape extends beyond the binary question of consciousness:
Moral Status of Biological Substrates: Even if not conscious, should living human neural tissue be afforded any moral consideration? The cells are derived from donated biological material, raising questions about informed consent and the boundaries of permissible use.
Sentience Terminology: The original DishBrain paper’s use of terms like “sentience” drew significant scientific criticism. The distinction between behavioral adaptation (responding to stimuli to minimize surprise) and sentient experience (subjective awareness of that process) remains a critical philosophical boundary.
Scaling Concerns: If 800,000 neurons can learn Doom in a week, what happens when millions or billions of neurons are networked? At what point does quantitative scaling produce qualitative changes in the system’s cognitive properties?
Regulatory Frameworks: Current regulatory structures were not designed for systems that are neither purely biological nor purely artificial. The development of governance frameworks for hybrid intelligence is an urgent necessity.
Cortical Labs collaborates with independent bioethicists, philosophers, and regulatory experts to navigate these questions. The company’s approach represents a model for responsible innovation: addressing ethical implications before, not after, they become crises.
7. The Road Ahead: From Laboratory Curiosity to Computational Infrastructure
The CL1 and the BioLLM experiment represent early chapters in what could become a transformative narrative for computing. Several trajectories are now visible:
Near-Term (2026–2028): Expansion of biological data centers, refinement of the WaaS model, and growing adoption in pharmaceutical research (drug compound testing on human neurons rather than animal models). Integration with larger language models and more sophisticated encoder architectures.
Medium-Term (2028–2032): Development of multi-CL1 networked systems where thousands of units collaborate on complex computational tasks. Potential emergence of specialized biological co-processors for AI inference, offering energy-efficient alternatives for specific workload types.
Long-Term (2032+): Theoretical possibility of biological computing reaching sufficient scale and integration to fundamentally alter the AI infrastructure landscape. Emergence of true biohybrid intelligence systems where biological and silicon components are architecturally inseparable.
The challenges remain substantial: long-term neuronal viability beyond six months, manufacturing consistency at scale, development of standardized biological computing interfaces, and resolution of the ethical and regulatory questions outlined above.
Conclusion
The convergence of living neurons and large language models is not science fiction; it is happening now, in laboratories in Melbourne, in cloud-accessible APIs, and in the GitHub repositories of independent developers who are exploring the frontier of hybrid intelligence. The CL1 represents something genuinely new: not an incremental improvement in silicon performance, but a fundamental rethinking of what computation can be.
When Sean Cole taught 200,000 neurons to play Doom in a week, he demonstrated that biological substrates can handle complex, real-time decision-making. When the BioLLM project wired those same neurons into a language model’s inference pipeline, it showed that biological and artificial intelligence need not remain separate domains.
We stand at the beginning of an era where the question is no longer whether machines can think like brains, but whether brains and machines can think together. The answer, it appears, is yes — and the implications for energy efficiency, learning paradigms, AI safety, and our understanding of intelligence itself are only beginning to unfold.
References
- Cortical Labs. CL1 Official Product Page. https://corticallabs.com/cl1
- Kagan, B.J., Kitchen, A.C., Tran, N.T., et al. (2022). In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron, 110(23), 3952–3969. https://pubmed.ncbi.nlm.nih.gov/36228614/
- Tom’s Hardware (2026). 200,000 living human neurons on a microchip demonstrated playing Doom. https://www.tomshardware.com/tech-industry/artificial-intelligence/200-000-living-human-neurons-on-a-microchip-demonstrated-playing-doom-cortical-labs-cl1-video-shows-the-gameplay-and-explains-how-the-neurons-learn-the-game
- 4R7I5T. CL1_LLM_Encoder: Experimental BioLLM Project. https://github.com/4R7I5T/CL1_LLM_Encoder
- Tom’s Hardware (2026). Human brain cells set to power two new data centers. https://www.tomshardware.com/tech-industry/artificial-intelligence/human-brain-cells-set-to-power-two-new-data-centers-thanks-to-body-in-the-box-cl1-cortical-labs-targets-the-ai-energy-crisis-with-biological-computer-that-reportedly-uses-less-energy-than-a-calculator
- IEEE Spectrum. Biological Computer: Human Brain Cells on a Chip. https://spectrum.ieee.org/biological-computer-for-sale
- Holt, D. Exclusive Look at CL1: One-on-One with Cortical Labs’ Chief Scientist. https://deniseholt.us/exclusive-inside-look-one-on-one-with-cortical-labs-chief-scientist-from-dishbrain-to-cl1/
- Nature Communications (2023). Experimental validation of the free-energy principle with in vitro neural networks. https://www.nature.com/articles/s41467-023-40141-z
- New Atlas. World’s first Synthetic Biological Intelligence runs on living human cells. https://newatlas.com/brain/cortical-bioengineered-intelligence/
- PMC. Playing Brains: The Ethical Challenges Posed by Silicon Sentience and Hybrid Intelligence in DishBrain. https://pmc.ncbi.nlm.nih.gov/articles/PMC10602981/