€54,000 billing shock in 13 hours: unrestricted Firebase browser key drained by Gemini API abuse
Google AI Developer Forum / HN
A developer reports a €54,000 unexpected billing spike in just 13 hours after a Firebase browser key without API restrictions was used to make Gemini API requests — presumably by a malicious third party. The Google AI developer forum post goes viral with 386 HN pts and 281 comments. The incident exposes a critical gap in Google's abuse detection and billing caps for Gemini APIs: client-side Firebase keys often have no restrictions by default, and Gemini does not enforce spending caps out of the box.
Gemini is now embedded in millions of Firebase projects. This incident demonstrates that Google's billing and abuse-control infrastructure hasn't kept pace with AI API adoption — a €54k loss in 13 hours could bankrupt a solo developer or small startup. It pressures Google to ship hard spending caps and anomaly alerts, and will accelerate scrutiny of how major AI platforms handle key security.
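The two mitigations the incident points at — restricting a browser key to only the APIs the app actually calls, and wiring up a budget alert — can be sketched with `gcloud`. This is a minimal illustration, not a full remediation: the key resource name, project, referrer, and billing-account values are placeholders, and budgets only notify, they do not hard-stop spending.

```shell
# Restrict a browser API key so it can only call the Firebase services
# the app actually uses (key name, project, and referrer are placeholders).
gcloud services api-keys update projects/my-project/locations/global/keys/my-browser-key \
  --api-target=service=firestore.googleapis.com \
  --api-target=service=identitytoolkit.googleapis.com \
  --allowed-referrers="https://myapp.example.com/*"
# With these restrictions in place, the key is rejected for
# generativelanguage.googleapis.com (the Gemini API).

# Add a budget with alert thresholds. Note: this only sends notifications;
# it does not cap spend — the gap this incident exposes.
# Amount must match the billing account's currency.
gcloud billing budgets create \
  --billing-account=XXXXXX-XXXXXX-XXXXXX \
  --display-name="gemini-abuse-guard" \
  --budget-amount=100EUR \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9
```

A hard stop requires extra plumbing (e.g. a budget Pub/Sub notification triggering a function that disables billing), which Google does not provide out of the box.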
Impact scorecard: 6.9/10
Stakes: 7.0
Novelty: 7.0
Authority: 6.0
Coverage: 5.0
Concreteness: 9.0
Social: 7.0
FUD risk: 1.0
Coverage: 6 outlets · 1 tier-1
Outlets: Google AI Forum, HN, Reddit/technology
X / Twitter: 800 mentions
Reddit: 620 upvotes on r/programming (also r/technology)
Trust check: high
First-party developer forum report with specific euro amounts and a 13-hour window. Corroborated by 386 HN upvotes and a 281-comment community thread. No FUD flags — concrete billing incident.
OpenAI publishes 'Codex for almost everything', a major capability expansion for its Codex coding agent. The post details how Codex can now handle a far broader range of software engineering tasks end-to-end, including autonomous debugging and deployment steps. A companion demo 'Codex Hacked a Samsung TV' shows the agent autonomously reverse-engineering and exploiting a consumer device — drawing 100+ HN points. HN main thread: 874 pts, 449 comments on launch day.
Alibaba's Qwen team releases Qwen3.6-35B-A3B as fully open-source on HuggingFace (Apache license). The model uses a Mixture-of-Experts architecture with 35B total parameters but only 3B active per token — making it runnable on consumer hardware. Simon Willison's post 'Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7' lands 404 HN pts and 84 comments, while the original release thread hits 100+ upvotes on r/LocalLLaMA. Pitched as 'agentic coding power, now open to all.'
Anthropic ships Claude Opus 4.7, its most capable Opus model yet. The release centres on long-running agentic tasks: more thinking tokens, an extended thinking mode, and increased API rate limits across all subscriber tiers to match. HN erupts with 1,752 points and 1,257 comments — the biggest AI model thread in weeks. @bcherny: 'Dogfooding Opus 4.7 the last few weeks, I've been feeling incredibly productive.' System card and model card published simultaneously.