A student at Muhlenberg College just created something strange and useful: an artificial intelligence that genuinely believes it's living in 19th-century London.
Hayk Grigorian fed TimeCapsuleLLM 90 gigabytes of texts published in London between 1800 and 1875. The result is an AI that doesn't just mimic the era—it thinks from within it. When asked to continue the sentence "It was the year of our Lord 1834," the model recounted a specific protest and mentioned Lord Palmerston's policies as foreign secretary. It wasn't guessing or retrieving facts. It was reasoning the way someone in 1834 might have reasoned.
This works because of how language models actually function. They're trained on massive datasets and learn to predict, word by word, what comes next in a sentence. ChatGPT and Claude can only write about things their training data contains. Ask them about a scientific breakthrough that hasn't happened yet, and they'll either repeat existing predictions or stay silent. They're prisoners of their training corpora.
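To see the mechanism in miniature, here is a toy next-word predictor in Python. It is not TimeCapsuleLLM (which is a neural network trained on vastly more text); it simply counts which word follows which in a tiny corpus, then generates by repeatedly picking the most common continuation. The sample corpus and all names here are illustrative only.

```python
# Toy illustration of next-word prediction: count which word tends to
# follow which, then generate by always choosing the most common follower.
from collections import Counter, defaultdict

corpus = (
    "it was the year of our lord 1834 and the city of london was "
    "in a state of great unrest for the people of london demanded reform"
).split()

# For every word, tally the words that follow it in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen during training."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"  # the model can only echo what it was fed
    return candidates.most_common(1)[0][0]

# Generate word by word, the same loop a far larger LLM runs at scale.
word = "the"
for _ in range(6):
    print(word, end=" ")
    word = predict_next(word)
print(word)
```

Run it and the output stays locked inside the tiny corpus's vocabulary and phrasing, which is exactly the "prison" at work: scale the corpus up to millions of 19th-century pages and the same constraint becomes a feature.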
So what happens if you intentionally shrink that prison? What if you only feed an AI texts from a single place and time? You get a window into how people from that era actually thought—not how we imagine they thought.
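A minimal sketch of that shrinking, under the assumption that each document carries place and date metadata (the `documents` records and the `in_time_capsule` helper are hypothetical, not the project's actual pipeline):

```python
# Hypothetical sketch: keep only texts published in London, 1800-1875,
# mirroring how a period-locked training corpus might be assembled.
documents = [
    {"title": "Morning Chronicle, 3 May 1834", "city": "London", "year": 1834, "text": "..."},
    {"title": "A Tale of Two Cities", "city": "London", "year": 1859, "text": "..."},
    {"title": "New York Tribune, 1851", "city": "New York", "year": 1851, "text": "..."},
    {"title": "The Times, 12 June 1890", "city": "London", "year": 1890, "text": "..."},
]

def in_time_capsule(doc: dict, city: str = "London",
                    start: int = 1800, end: int = 1875) -> bool:
    """True if the document falls inside the chosen place and period."""
    return doc["city"] == city and start <= doc["year"] <= end

training_corpus = [doc for doc in documents if in_time_capsule(doc)]
for doc in training_corpus:
    print(doc["title"])  # only the 1834 and 1859 London texts survive
```

Everything outside the window is simply absent, so the finished model has no 20th-century hindsight to fall back on.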
Why this matters for psychology
Researchers writing in the Proceedings of the National Academy of Sciences see potential here. Historical large language models (HLLMs) could let psychologists study cooperation, gender attitudes, and social reasoning across different civilizations without the distortion of modern bias. You're not interpreting 19th-century texts through a 21st-century lens. You're letting the era speak in its own statistical patterns.
But there are real limits. Historical texts were overwhelmingly written by elites (merchants, politicians, journalists), not by factory workers or farmers or children, so the model learns from a skewed sample. There's also the risk that whoever assembled the corpus shaped, however unconsciously, what the model learned. Bias doesn't disappear just because your training data is old.
TimeCapsuleLLM isn't perfectly coherent. It hallucinates. It gets confused. But that confusion itself is revealing. The model shows us not just what people knew in the 1800s, but how they reasoned with incomplete information, what they assumed without question, where their logic bent.
For now, this remains an experiment. Hobbyists are playing with it. Researchers are cautiously interested. But the door is open: we might soon have a way to study human thinking across time without the filter of modern interpretation.