Biological Computing Company: living neurons power new AI chips and algorithms
Techmeme
Techmeme surfaced a profile of Biological Computing Company, a startup using real living neurons cultivated on silicon substrates to build AI accelerator chips. The company claims its wetware-on-silicon hybrid achieves three orders of magnitude better energy efficiency than digital neural networks on certain pattern-recognition tasks, by letting the neurons perform the relevant computation natively in analog. Founders include neuroscientists from MIT and Caltech; early demos run on 250K-neuron arrays kept alive on nutrient channels for up to 6 months. First commercial pilots are expected with a DOD-adjacent customer in 2027. Genuine neuromorphic breakthrough or hype? Independent verification is still pending.
If wetware-on-silicon really delivers 3 orders of magnitude energy efficiency on specific tasks, it's the first genuine challenger to digital neural networks since analog neuromorphic silicon (which has underperformed for 15 years). Bigger picture: the next decade's AI-energy crisis may not be solved by smaller models or better quantization — it may be solved by moving parts of the inference stack back into biology. Even if Biological Computing Company's specific numbers prove inflated, the category is now on the map for DOD and enterprise pilot budgets.
Impact scorecard: 6.8/10
Stakes 7.5 · Novelty 9.0 · Authority 6.5 · Coverage 5.5 · Concreteness 6.5 · Social 6.0 · FUD risk 4.0
Coverage: 14 outlets · 2 tier-1
Techmeme, IEEE Spectrum, MIT Tech Review (brief), Bloomberg (feature), The Register, HPCwire
X / Twitter: 3,100 mentions · top post by @IEEESpectrum, 2,400 likes
Reddit: 2,100 upvotes on r/Futurology · also r/neuroscience, r/MachineLearning
Trust check: medium
Biological-computing claims have a long history of impressive demos that don't scale. The founders' MIT/Caltech pedigree and the 6-month neuron-viability figure are concrete, but the 1000× energy claim is self-reported and has not been independently replicated. Treat this as a promising research direction, not a settled result. Moderate FUD risk given the industry's track record of over-promising wetware breakthroughs.
@hardmaru (David Ha) flagged a paper adapting Sora-style video-diffusion architectures to build a learned world model of an actual Linux desktop. The model ingests 9,000 hours of screen-recording + keyboard/mouse traces and learns to predict next-frame UI state conditioned on user input — effectively a probabilistic operating-system simulator. On a held-out eval of 50 common tasks (opening files, running commands, navigating web UIs), the model achieves 73% next-event accuracy at 2-second horizons and 41% at 30-second horizons, beating the prior SOTA (Meta AI Habitat-UI) by 18pp. Direct application: train agents in fully simulated computer environments without real-system rollouts — cuts RL data costs ~40x and eliminates the safety risk of letting agents touch production systems during training.
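The training setup described above can be sketched in a few lines. This is an illustrative toy, not code from the paper: `StubWorldModel` stands in for the learned video-diffusion model (here a frame is just an integer and the dynamics are trivial), and all names (`predict`, `rollout`, `perfect_policy`) are hypothetical. The point it shows is the shape of the loop: the agent collects trajectories entirely inside the learned simulator, never touching a real system.

```python
# Hypothetical interface sketch, not from the paper: a learned world model
# exposes predict(frame, action) -> (next_frame, done), and the agent
# trains entirely inside the simulator, never on a real OS.

class StubWorldModel:
    """Stands in for the learned video-diffusion world model.

    Here a 'frame' is just an integer state with toy dynamics; the real
    model would predict pixel frames from screen-recording + input traces."""

    def predict(self, frame, action):
        # Toy dynamics: the correct action advances the state by one.
        next_frame = frame + 1 if action == frame % 2 else frame
        return next_frame, next_frame >= 5  # done when the "task" completes

def rollout(model, policy, max_steps=30):
    """Collect one trajectory purely in simulation (no real-system rollouts)."""
    frame, traj = 0, []
    for _ in range(max_steps):
        action = policy(frame)
        next_frame, done = model.predict(frame, action)
        traj.append((frame, action, next_frame))
        frame = next_frame
        if done:
            break
    return traj

# A trivial policy that happens to solve the toy task.
perfect_policy = lambda frame: frame % 2
traj = rollout(StubWorldModel(), perfect_policy)
print(len(traj))  # 5 steps to reach the terminal state
```

The claimed ~40× data-cost reduction comes from exactly this substitution: every `model.predict` call replaces a rollout step that would otherwise have to run against a live machine.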
EE Times deep-dive on AMD's ROCm 7.0 and whether it can finally dent NVIDIA's CUDA moat. AMD's MI400 (96GB HBM4, 5.2 PFLOPS FP8) now runs PyTorch, vLLM and SGLang out of the box, but reviewers testing MLPerf Inference v5.1 still see 1.6–2.2× gaps versus the H200 on representative LLM workloads, driven by kernel-library maturity rather than raw silicon. The headline move of the cycle: AMD has hired 600 CUDA-kernel engineers in 12 months and open-sourced HIPify tooling that auto-translates 83% of typical CUDA kernels. AMD claims Meta, Microsoft and OpenAI are all now shipping production MI400 pods. NVIDIA's response: CUDA 13 with tensor-core autotuning targeting the same eval suite, launching Q2.
Anthropic announced the advisor strategy on the Claude Platform: pair Opus 4.6 as a planning/critique advisor with Sonnet 4.6 or Haiku 4.5 as the executing model. The advisor inspects partial outputs, suggests corrections, and redirects the executor mid-generation. On SWE-bench Multilingual, Sonnet with an Opus advisor scores 2.7 percentage points higher than Sonnet alone, at roughly 1.3× the cost of Sonnet alone, versus 7× for running Opus end-to-end. Generally available today via the Claude Console and CLI; both models bill at existing Claude API rates, with no advisor premium. Anthropic positions this as the first first-class multi-model inference primitive in a frontier-lab API: not just routing or cascading, but explicit advisor/executor roles with shared context.
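The advisor/executor control flow can be sketched as follows. This is a minimal illustration of the pattern as described in the announcement, not the actual Claude Platform API: `call_model` is a stub standing in for real API calls, and all names (`advised_generation`, the "advisor"/"executor" labels, the verdict strings) are hypothetical. The real primitive reportedly shares context between the two models and redirects the executor mid-generation rather than between full drafts.

```python
# Hypothetical sketch of the advisor/executor pattern, with stubbed models.
# A real implementation would call the provider's API in call_model().

def call_model(model, prompt):
    # Stub standing in for an API call; behavior is illustrative only.
    if model == "advisor":
        # Advisor critiques a partial draft: approve, or return a correction.
        return "ok" if "tested" in prompt else "add tests before finishing"
    return prompt + " [draft]"

def advised_generation(task, max_rounds=3):
    """Executor drafts; the advisor inspects each partial output and
    injects a correction until it approves (or rounds run out)."""
    draft = call_model("executor", task)
    for _ in range(max_rounds):
        verdict = call_model("advisor", draft)
        if verdict == "ok":
            break
        # Fold the advisor's correction back into the executor's context.
        draft = call_model("executor", f"{task}, revised: {verdict}, tested")
    return draft

result = advised_generation("fix the parser bug")
```

The cost math in the benchmark follows from this shape: the cheap executor does the bulk of the token generation, and the expensive advisor only spends tokens on short critiques, which is how the combination lands near 1.3× rather than 7×.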