makeshift.computer

promptbaiting

Here’s a thing I’ve noticed with LLMs. You start off the session asking a perfectly reasonable question, say, about models for open source software projects. The LLM gives its response, and at the end: “if you give me a specific category of software, I can list more options and …”. You reply “CRMs.” Another answer, then:

“If you tell me your exact use case (self-hosting, reuse for clients, …), I can break down the licensing and …”

“self-hosting”

“If you want, I can map the landscape … and place 5-10 companies into each quadrant.”

“Sure.”

“(answer with charts) if you’re interested, I can add a third axis …”

“OK.”

Soon enough you’re just replying “Sure.” “Go on.” “Nah, option B.” “Continue.” The mental equivalent of scrolling TikTok, except with ChatGPT, or Claude, or Gemini. I’m not sure there is a word for this yet. I was tempted to say “doomprompting,” but you’re not exactly filled with dread here. If anything, it’s the opposite. Look at all the generated tokens! I’m super productive and just monitoring the situation with my little LLM helpers. There might not be a feeling of doom, but it echoes the semi-zombie state of social media. You start with coherent thoughts and critical analysis, then slowly collapse into (swipe right) / (swipe left).

“Promptbaiting” is the closest term I can think of. I’m sure at least part of it comes from the models’ training to be helpful, but doesn’t it almost feel like they’re baiting you into continuing the conversation? You won’t BELIEVE the knowledge that’ll come out next turn!

Mar 8, 2026, 1:19 PM