AI

Karpathy's nanochat hits 51.7K stars — ChatGPT clone trainable end-to-end for $100

Andrej Karpathy's nanochat repo — a minimal, from-scratch, full-stack training and inference pipeline for a ChatGPT clone — has passed 51.7K GitHub stars. In roughly 8,000 lines of code it covers tokenizer training, pretraining, supervised fine-tuning (SFT), reinforcement learning (RL), and evaluation. Karpathy says you can train your own ChatGPT clone for roughly $100 of compute in about four hours, and the repo is the capstone project for his upcoming Eureka Labs LLM101n course. His earlier llm.c (pure C/CUDA training) sits alongside at 29.5K stars. Karpathy's "make LLMs legible" mission keeps reshaping what developers build.
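The stage order named in the summary (tokenizer → pretraining → SFT → RL → eval) can be sketched as a minimal pipeline. All names below are illustrative assumptions for this sketch, not nanochat's actual API.

```python
# Illustrative sketch of the end-to-end stage order described above.
# Every stage and function name here is hypothetical, not nanochat's API.

STAGES = [
    "train_tokenizer",  # build a tokenizer from raw text
    "pretrain",         # next-token prediction on a large corpus
    "sft",              # supervised fine-tuning on chat transcripts
    "rl",               # reinforcement learning on a preference signal
    "eval",             # benchmark the resulting chat model
]

def run_pipeline(stages):
    """Run each stage in order and return the completed sequence."""
    completed = []
    for stage in stages:
        # a real pipeline would dispatch to a training script here
        completed.append(stage)
    return completed

print(" -> ".join(run_pipeline(STAGES)))
```

The point of the sketch is only that the stages are sequential and each consumes the previous stage's artifact; nanochat's appeal is fitting that whole sequence into one small, readable codebase.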

Karpathy · nanochat · Eureka Labs · LLM101n · Open Source

Why it matters

The $100 ChatGPT clone is the democratization proof Karpathy has been building toward since nanoGPT. When an undergrad can train a real chatbot end-to-end on a single rented H100, the barrier from "curious learner" to "competent LLM practitioner" collapses. Expect a cohort of developers to move from using LLMs to building them within a year — which redistributes where AI talent comes from.

Impact scorecard

8.2/10 overall
Stakes: 7.5
Novelty: 8.0
Authority: 9.5
Coverage: 7.5
Concreteness: 9.5
Social: 9.0
FUD risk: 1.0
Coverage: 22 outlets · 3 tier-1
GitHub, The Pragmatic Engineer, Hacker News, Every.to, VentureBeat, Simon Willison's Weblog, …
X / Twitter: 14,000 mentions
@karpathy · 58,000 likes
@simonw · 7,200 likes
Reddit: 8,900 upvotes
r/MachineLearning, r/LocalLLaMA, r/learnmachinelearning

Trust check

high

First-party Karpathy repository; the star count and code are verifiable on GitHub. The "$100 in 4 hours" claim is documented in the README with training curves and hardware specs, and is reproducible. No FUD risk: this is code plus a writeup.

Primary source ↗