Pet Rocks: "AI" is a Grift, But Not Just Like That

  1. Quote Artificial Intelligence Unquote
  2. The Pretense, Though
  3. And That Other Pretense, Too
  4. My Brain on Doomerism

[Posted: Mar. 3, 2026]

Sometimes, I feel like I'm losing it.

Yeah, yeah, not all that unique of a proclamation from a mentally ill, terminally online weirdo like myself, but I'm being specific here. The conversation around "AI"--that is what I mean to talk about. Because something about the general discussion of "AI," in and out of anti-"AI" spaces, hits a vulnerable spot in my mind.

I'm not an expert. I have a degree in history, and I have succeeded in installing CFW on multiple gaming devices without bricking them. (A tremendous achievement, I know. I'll hold for applause.) I'm fully cognizant of this and would never knowingly sell myself as more knowledgeable than I am. That's where "AI" comes in swinging: as confident as I am in understanding the basics of developing and using an LLM, you hear one too many times that generative "AI" is an unknowable black box and AGI is right around the corner, and suddenly you're wondering if maybe you've misunderstood something, because that doesn't seem quite right, but you aren't an expert, and now your brain is melting.

I think, in part, that's the point.

The confusion. The ambiguity in definition. The gray area between, where corporations like OpenAI and Anthropic bend and abuse the fill-in-the-blanks at their convenience.

Part of me always fears claiming even a borderline conspiracy like the above, especially in a space that butts up against actual conspiratorial nonsense like this. But, like, it isn't new in capitalism for companies to leverage consumer ignorance. They want you to listen to them, to eat their slop and not question it.

So, how about that elephant between the quotation marks?

Quote Artificial Intelligence Unquote

There are two reasons I keep writing "AI" like that:

  1. Artificial intelligence encompasses more than simply its generative permutation.
  2. Generative AI is pretty fucking stupid.

LLMs don't have a monopoly on the appellation of "AI." But to the average person, they've subsumed the word, so when someone says "AI," your gut assumption is that it means something generative, spat out by an LLM or one of its sibling tools. It's gotten to where I've seen folks thrown into a tizzy over a game developer mentioning "AI" when they're referring to an application of artificial intelligence that has existed for over a decade. This extends to me, too. I brace myself whenever I hear or read "AI" now, but it's worth remembering that artificial intelligence is not all generative. The character AI in your favorite RPG is not ChatGPT.

In daily use, though, "AI" usually refers to LLMs like ChatGPT, and much as Sam Altman believes it is a brilliant tool encompassing the knowledge of innumerable experts, feeding a library into a machine doesn't mean the machine understands or can apply any of it. LLMs are dumb. Just…so dumb, and they're like that because there is no actual intelligence at work. Yes, the "intelligence" in artificial intelligence should convey that, but I feel it's important to stress the point. Corporations want you to view LLMs as intelligent and reliable in employing that intelligence, when really they're just good at playing into that pretense.

AI becomes "AI" when I mean the generative kind because I want to drill it into your head, my head, everyone's heads: LLMs are not living, are not sentient, and are not the sum total of the label.

The Pretense, Though

When you think of "AI," what do you think? The Matrix? Skynet? A thousand other fictional AI that have developed intellect far greater than any human? They know more than us. They can learn and act quicker than us. Oh no, what if we gave one access to the nuclear launch codes!

Okay, not everyone is that extreme in their estimations of what LLMs are capable of, but it's also a more common viewpoint than I would like.

LLMs are stupid. As far as I can reasonably tell, they are stupid, I promise. Know how your phone can do predictive text? LLMs are like that, but trained on vastly larger quantities of data and given the computing power to draw on far more context when predicting the next word. Talking to ChatGPT is talking to a word calculator. Image generators like Midjourney are similar in principle, but trained instead on pairs of images and text descriptions.
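If the predictive-text comparison feels hand-wavy, here's a deliberately dumb sketch of it. This is not how a real transformer works (the corpus, names, and everything else here are made up for illustration); it just counts which word followed which in a tiny "training" text, then samples from those counts:

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor, in the spirit of phone predictive text.
# Count which word followed which in a tiny "training" text.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    words, counts = zip(*follow_counts[word].items())
    return random.choices(words, weights=counts)[0]

# Generate: a plausible-looking word chain, with zero comprehension behind it.
word = "the"
output = [word]
for _ in range(6):
    if not follow_counts[word]:  # dead end: nothing ever followed this word
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

A real LLM replaces the counting with billions of learned parameters and looks at far more than one preceding word, but the job description is the same: given what came before, emit a probable continuation.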

Yes, the technology is more complex than that (and machine learning and neural networks are entire fields in and of themselves), but more complicated weighing of probabilities while generating a result does not an AGI make. LLMs are not sentient nor poised to become sentient. The architecture can scale up and up, but a machine program making complicated inferences across mountains of data to spit back out an amalgamation of the data in response to a prompt is still just that.

But corporations want you to think there's more. Every advertisement I've ever seen for Gemini or fucking whatever involves vaguely talking up the LLM like it is applicable and suitable for anything and everything. It does math. It writes. It tells you how many moons Saturn has. It can totally do your job.

Here's the gray area: they call it "AI" and not LLMs or anything specific because they want consumers to apply preconceived cultural baggage to it. They don't explicitly say it can do anything, but they let you make assumptions. It's artificial intelligence, so based on the ads, I guess it's like the ones I've seen in movies. Surely, this "AI" can accurately reproduce information fed to it during training and won't insist glue is a delicious pizza topping, because it is ultimately a numerical pulley system of probability with zero comprehension of its tokenized words!
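On that "tokenized words" point: before a model ever sees your prompt, the text is converted into integer IDs, and those numbers are all it operates on. A toy word-level sketch (real tokenizers like the byte-pair encoders used by GPT-style models split text into subword pieces, but the principle holds):

```python
# Toy word-level tokenizer: text in, integer IDs out. The model only ever
# sees these numbers; "glue" and "topping" are just different IDs, with no
# built-in sense of which things belong on a pizza.
vocab = {}

def tokenize(text):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # assign the next unused ID
        ids.append(vocab[word])
    return ids

ids = tokenize("glue is a delicious pizza topping")
print(ids)  # -> [0, 1, 2, 3, 4, 5]
```

Everything downstream (the probability machinery above) is arithmetic over IDs like these, which is why "comprehension" never enters the picture.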

It's a grift starting with the name.

And That Other Pretense, Too

LLMs are natural mimics. They construct sentences comprehensible to you and me. They respond to our questions and can hold a conversation, a back and forth with a linear, sensical progression. They learned to sound human by devouring the whole of the internet and boiling it down to patterns and probability.

And in all that output, there is no emotion, no heart, no soul, no intellect, but LLMs put up an effective facade of it. It's tremendously easy to anthropomorphize LLMs. Humans do it naturally without thinking. "My printer is being a bitch and won't work." "That boat is a beautiful lady." Or whatever! To not extend human qualities to non-human things takes conscious effort. It is a pain in an ass and a half to talk about LLMs because it necessitates circling around or clarifying any usage of phrasing I would otherwise default to.

LLMs, whether intentionally or not, prey on this aspect of us. If the experience is superficially similar to talking to another human, then you'll think of it in those terms, right? They respond warmly, like a friend unable to disagree when you insist repeatedly that the sky is purple and rain is cloud piss. Much as some find that cadence grating, others take outright comfort in its cloying sycophancy and embrace it. Still, even if you are cognizant of LLMs' nature--a stupid, stupid, stupid unfeeling tool--it takes active thought to keep that pinned, because, again, talking about LLMs makes it so easy to anthropomorphize them. You don't have to consciously mean it when you phrase something as if an LLM were a person, but your own diction can permit the pertinacious idea of "human characteristics in LLMs" to slyly take root.

I must sound obnoxious, maybe even condescending, harping on and on about this, but in full sincerity, writing this is a reminder to me most of all. LLMs are not human; they are a shed skin of our online legacy, something in a familiar shape but hollow. To me, it is vital I remember this to not lose my fucking mind.

My Brain on Doomerism

The start of 2025 was a bad time for me. I've talked about it a bit elsewhere, but as an American who wasn't foaming at the mouth for a headfirst slide into fascism, the presidential election results were a blow. For months, I crawled into a dark corner of my brain to hide from how helpless I felt about the outcome and how despairing I was at the nature of people. It took months, but I did eventually overcome that mentality.

Late 2025 took another bat to the kneecaps of my mental health.

LLMs as a technology intrigue me, and I find it unlikely that they'll be a flash-in-the-pan fad like NFTs. There are actual use cases and helpful applications of the technology. Of course, that doesn't absolve the host of environmental, legal, and so-on-and-on issues that make up the ethical quagmire encompassing LLMs, or the damage wrought by their proliferation and hype-fueled employment. Do I think they need to be used outside of possible benefits in medicine? Nah. But they are being used, and I understand that the average person can ask ChatGPT to format a list or do something else equally inane but tangible in its results. Assuming no hallucinations occurred, then wow, the "AI" saved me some time! So, for someone who isn't terminally online and steeped in discourse, I can't necessarily fault them for using a tool that has been shoved in their face and advertised to death.

LLMs feel omnipresent, and while they will likely fade into the background once the bubble pops (oh, please, let it pop), you can't close Pandora's box. There's already something unnerving in that reality, that we have to live with tools that have already been so disruptive, but that isn't what sent me spiraling this time.

It's the conspiratorial shit.

Getting locked into this cycle of thinking is what originally inspired me to write down my thoughts like this. Because all of this--the pretense of humanity, the ease of anthropomorphization, the vague possibilities promised by "AI"--forms a black hole of anxiety that only feeds the hype cycle of "AI."

LLMs are not unendingly capable. They are not alive. They are being deployed without thought or care because saying a product has "AI" makes it sound more advanced and gives the impression of progress. This is by design. Is it better for your consumers to believe more of your product or less? Rely on it. Spend on it. Make it integral. If you're scared of the fearmongering, that means the marketing is doing its job.

The vacuous friendliness of an LLM isn't all that different from a smile drawn onto a rock. It feels nothing, and thus returns only a hollow echo of our experiences broken down to ones and zeroes. Such knowledge--creations epitomizing what we are--poured into a container unable to even be called a ghost of humanity. Really, I think all we accomplished was making an obscenely expensive pet rock.