Converge Bio raises $25M Series A — 4.5x protein yields, Bessemer + Meta/OpenAI/Wiz execs back it
·TechCrunch
Converge Bio closed a $25M oversubscribed Series A led by Bessemer Venture Partners with TLV Partners, Saras Capital and Vintage Investment Partners participating; execs from Meta, OpenAI and Wiz joined as individual LPs. The company builds generative models trained on DNA, RNA and protein sequence data, with three commercial systems: antibody design, protein yield optimization, and biomarker/target discovery. Traction: 40+ programs with over a dozen pharma/biotech customers across the US, Canada, Europe and Israel, expanding into Asia. Case studies include a 4-to-4.5x protein-yield boost in a single computational pass, and antibodies with single-nanomolar binding affinity. Headcount grew from 9 in Nov 2024 to 34 today. The prior round was a $5.5M seed in 2024; the company was founded by CEO Dov Gertz.
startups · drug-discovery · bessemer · series-a · biotech
Why it matters
Converge Bio is the cleanest post-GPT-Rosalind signal that the AI-for-biology stack is beginning to bifurcate: OpenAI goes broad and regulated, while specialist startups go deep on a single pipeline step (antibody design, yield optimization) with direct pharma contracts. Bessemer's backing, plus individual checks from Meta, OpenAI and Wiz execs, suggests the network is betting on vertical AI biotech rather than general-purpose LLM wrappers. 34 employees running 40+ programs is unusually productive for a Series A — the efficiency argument for AI-native biotech now has live numbers.
X / Twitter: 1,800 mentions (@BessemerVP) · 1,200 likes
Reddit: 420 upvotes across r/startups and r/venturecapital
Trust check
high
TechCrunch first-party interview with CEO Dov Gertz, specific investor names, specific case-study numbers (4-4.5x yield, single-nanomolar affinity). Unverified but plausible; no FUD flags.
NVIDIA released Nemotron OCR v2 on April 17 — an 84M-parameter unified multilingual OCR model trained primarily on 12.2 million synthetic images generated via a modified SynthDoG pipeline, plus ~680K real-world scans. It handles English, Simplified and Traditional Chinese, Japanese, Korean and Russian in a single model (no language detection needed). On OmniDocBench it processes 34.7 pages per second on a single A100 — 28x faster than PaddleOCR v5's server mode at 1.2 pages/s — while holding competitive normalized-edit-distance accuracy. On the SynthDoG multilingual benchmark it dominates: 0.046 NED in Japanese vs v1's 0.723, 0.047 in Korean vs v1's 0.923. Weights and the training dataset are public under NVIDIA Open Model License and CC-BY-4.0.
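NED here means normalized edit distance, where lower is better (0.046 is near-perfect transcription, 0.923 is mostly wrong). A common formulation — the benchmark's exact normalization may differ — divides the Levenshtein distance by the length of the longer string, as in this minimal sketch:

```python
def normalized_edit_distance(pred: str, ref: str) -> float:
    """Levenshtein distance divided by the longer string's length.

    Returns a value in [0, 1]: 0.0 means an exact match.
    """
    m, n = len(pred), len(ref)
    if max(m, n) == 0:
        return 0.0  # two empty strings match trivially
    # Classic dynamic-programming Levenshtein, keeping one row at a time.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == ref[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / max(m, n)

print(round(normalized_edit_distance("kitten", "sitting"), 3))  # 0.429 (3 edits / 7 chars)
print(normalized_edit_distance("ニューラルOCR", "ニューラルOCR"))      # 0.0
```

Averaged over a test set, a score like 0.047 means the model's output differs from the reference by roughly 5% of characters per page.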
METR and Epoch AI released MirrorCode, a benchmark that tests whether AI can autonomously reimplement complex real-world software from a specification. The headline result: Claude Opus 4.6 successfully reimplemented gotree — a bioinformatics toolkit of roughly 16,000 lines of Go with 40+ commands — an effort estimated at 2 to 17 weeks for a human engineer. The benchmark spans 20+ programs across Unix utilities, cryptography and compression. The release also previews a Google DeepMind taxonomy of six attack genres on AI agents (content injection, semantic manipulation, cognitive state, behavioral control, systemic, human-in-the-loop) and Ryan Greenblatt's revised estimate that full AI R&D automation by end-2028 now has 30% probability, up from 15%, citing self-improvement loops on verifiable software tasks.
Mistral shipped MCP support inside Studio on April 16, giving developers both pre-configured connectors and the ability to point agents at any remote MCP server. Built-in connectors cover GitHub, Gmail and web search out of the box, and Mistral now hosts a directory of 20+ secure enterprise connectors spanning data, productivity, development and commerce — Databricks, Snowflake, Atlassian, Asana, Outlook, Box, Stripe, Zapier and more. Custom MCPs are wired through API/SDK with direct tool calling and human-in-the-loop approval gates. All connectors work across model calls and agent calls, with programmatic CRUD over the connector inventory.
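The human-in-the-loop approval gate can be sketched in plain Python. Everything here is illustrative — the function names (`approval_gate`, `execute_tool`, `approve`) and the auto-approve policy are assumptions, not Mistral's SDK — but the shape is the same: a callback decides whether each tool call may proceed before anything executes.

```python
from typing import Any, Callable

def approval_gate(tool_name: str,
                  args: dict,
                  execute_tool: Callable[[str, dict], Any],
                  approve: Callable[[str, dict], bool]) -> dict:
    """Run a tool call only if the approver callback consents.

    `approve` could prompt a human in a UI; here it is any predicate
    over (tool_name, args). Rejected calls never reach the MCP server.
    """
    if not approve(tool_name, args):
        return {"status": "rejected", "tool": tool_name}
    return {"status": "ok", "result": execute_tool(tool_name, args)}

# Demo with stub callbacks (hypothetical tool names):
# auto-approve read-only tools, reject everything else.
def demo_execute(name: str, args: dict) -> str:
    return f"called {name} with {args}"

def demo_approve(name: str, args: dict) -> bool:
    return name.startswith("read_")

print(approval_gate("read_issue", {"id": 42}, demo_execute, demo_approve))
print(approval_gate("delete_repo", {"id": 7}, demo_execute, demo_approve))
```

In a real deployment the `approve` callback would block on human input for sensitive tools (writes, payments, deletes) while letting read-only calls through automatically.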