October 13, 2025
A few days ago I got a sensationally stupid email from one of those websites that most of us probably have a subscription to, but which I will not give the oxygen of publicity by linking to[1].
The subject line was:
Your paper “NEURAL SPINE BIFURCATION…” is now an analogy.
No; no, it’s not.
Our paper Neural spine bifurcation in sauropod dinosaurs of the Morrison Formation: ontogenetic and phylogenetic implications (Wedel and Taylor 2013) is not an analogy. It’s a scientific study of whether neural spines bifurcate progressively through ontogeny. (Spoiler: not really. Somehow we stretched that out to 34 pages.)
The email then goes on to say:
Our AI turned your research into an easy to understand analogy
Oh, did it now? Do tell.
The bifurcation of neural spines in sauropods can be likened to the branching patterns of river deltas. Just as a river flows into a delta…
But it really, really can’t. You might just as well say “The bifurcation of neural spines in sauropods can be likened to Marcel Proust’s seven-volume masterwork À la Recherche du Temps Perdu.” It would be exactly as meaningful.
To add insult to injury, they want me to pay to upgrade to Premium if I want to see how this nonsense continues. Hard pass.
When I forwarded this exercise in idiocy to Matt, he replied with just two lines:
What is this I can’t even.
If you’d built a “tool” that stupid, why would you advertise the fact?
Well, quite.
The current generation of “AI”s does have some uses. I ask ChatGPT questions about the minutiae of programming all the time, like when I wanted to know whether there is a standard pattern for invoking a React hook on behalf of a class-based component. But for every useful application of LLMs, there are ten useless or actively destructive uses. I’m not sure which of those two categories “Your paper is now an analogy” falls into.
Just stop it.
[1] It was academia.edu, and who can possibly explain how they were able to get a domain name in the .edu TLD?