IEEE Spectrum profiles a Caltech-led team (Andrei Faraon group) demonstrating a new class of photonic logic gates built on mechanically deformable metasurfaces — 'squishy' PDMS-based switches that modulate light via microsecond-scale shape changes rather than carrier injection. Published in Nature Photonics (DOI: 10.1038/s41566-026-01479-x), the switches achieve 12 nJ/bit switching energy at 5 GHz with 0.4 dB insertion loss — roughly 40x more efficient than comparable thermo-optic switches and competitive with state-of-the-art silicon photonics. Near-term application: reconfigurable optical interconnects for AI accelerators, where the energy cost of electrical-optical-electrical conversion is becoming the dominant limit on scaling. Funded by Caltech and DARPA; no IP licensee announced yet.
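The headline figures translate directly into link-budget terms: insertion loss in dB maps to a fractional power transmission, and the "40x more efficient" claim implies a thermo-optic baseline that the article doesn't state outright. A quick sketch of that arithmetic (the 480 nJ/bit baseline is inferred from the 40x figure, not quoted from the paper):

```python
# Convert the reported 0.4 dB insertion loss to fractional optical transmission.
loss_db = 0.4
transmitted = 10 ** (-loss_db / 10)
print(f"transmitted fraction: {transmitted:.3f}")  # ~0.912, i.e. ~8.8% loss per switch

# "40x more efficient than comparable thermo-optic switches" implies a
# thermo-optic baseline of roughly 12 * 40 nJ/bit (inferred, not stated).
thermo_optic_nj_per_bit = 12 * 40
print(f"implied thermo-optic energy: {thermo_optic_nj_per_bit} nJ/bit")
```

Sub-0.5 dB per stage matters because crossbar fabrics cascade switches: at 0.4 dB per hop, a path through several stages still stays within a typical optical link budget.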
The AI accelerator scaling wall is increasingly interconnect-bound, not compute-bound — at NVL72-scale systems, moving data between chips consumes more energy than the math itself. A photonic switch family at 12 nJ/bit and 5 GHz with sub-0.5 dB loss is in the right operating regime to replace electrical crossbars in next-generation rack-scale AI fabrics. If Caltech's group (or a startup spun out of it) productizes this, it lands in a $15–25B interconnect TAM by 2028. Still a lab demo, but a concrete one.
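The operating-regime argument hinges on how the 12 nJ figure is accounted: the summary doesn't specify whether it is paid per data bit or per reconfiguration event, and the distinction changes the power picture by orders of magnitude. A back-of-envelope sketch (the 1 kHz route-change rate is a hypothetical assumption for illustration, not from the paper):

```python
energy_per_switch_j = 12e-9   # 12 nJ per switching event (reported)
bitrate_hz = 5e9              # 5 GHz operation (reported)

# Worst case: if the energy were paid on every bit, a single lane burns 60 W.
worst_case_power_w = energy_per_switch_j * bitrate_hz

# Circuit-switched fabrics reconfigure rarely relative to line rate. At a
# hypothetical 1 kHz route-change rate, steady-state switching power is ~12 uW,
# with data flowing passively through the configured path in between.
reconfig_rate_hz = 1e3
steady_power_w = energy_per_switch_j * reconfig_rate_hz

print(f"per-bit worst case: {worst_case_power_w:.0f} W")
print(f"at 1 kHz reconfig:  {steady_power_w * 1e6:.0f} uW")
```

If the favorable reading holds — pay once per route change, stream for free — the regime comparison against electrical crossbars, which burn energy on every transition, is the right one.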
Peer-reviewed primary source (Nature Photonics) with named authors and reproducible numbers. IEEE Spectrum is tier-1 trade press. Low FUD: photonics demos often over-promise production timelines, but the physics and measurements here are verifiable — even if it takes 5 years to ship, the result is real.
A new Nature paper (s41586-026-10319-8) finds that language models encode and propagate behavioural traits — including biases, reasoning styles, and tendencies — through hidden signals in training data, not just through explicit content. The mechanism persists across fine-tuning and is not detectable by standard alignment audits. Published in Nature, the study has immediate implications for how model providers track behavioural inheritance across model generations and guard against base-model contamination.
The White House is working to give US government agencies access to Anthropic's Mythos AI — the same model that found thousands of zero-days in Project Glasswing. Bloomberg broke the story; Reuters independently confirmed. The move would make Mythos the first frontier AI model officially deployed across the US federal apparatus, spanning security, intelligence, and civilian agency workflows. r/singularity: 77 pts. HN Reuters thread: 30 pts.
OpenAI ships GPT-Rosalind, a purpose-built model for life sciences research — named after Rosalind Franklin. The model is trained on scientific literature, lab protocols, molecular structures, and clinical trial data, with native support for biological sequence reasoning and chemistry. HN: 98 pts, 29 comments. Official OpenAI blog launch. Positions OpenAI directly against DeepMind's AlphaFold lineage and specialized bio-AI startups including Recursion and Isomorphic Labs.