The AI/LLM Shame
Judge the book by its content
The more I talk to people, the more I realize there’s a real stigma around admitting you use LLMs to think and synthesize.
Let me give you my perspective upfront: the ones who don’t learn AI will fall behind. And the more you use it, the better you’ll understand its limitations. That experience won’t come from holding back, or from reasoning your way to better answers the way some alignment communities seem to think. This is my opinion, take it for what it’s worth, but I don’t think creating more academia is what the world needs right now. We need to start shoveling.
But here’s the thing that makes this shame absurd: a lot of the code running our world is already being generated by LLMs. I know because I work with it every day. Your life literally depends on AI-assisted work regardless of your opinions about it. The ship has sailed. The question isn’t whether to use these tools. It’s whether to get good at using them.
Now to the opposition’s arguments: “LLMs will flatter you and tell you you’re a genius.” “LLMs can’t be trusted.” And my favorite, from a well-known alignment community: “Even at lower levels of intensity, ChatGPT is likely to tell you your ideas are fundamentally good and special, even when humans would consider them sloppy or confusing.”
Well, if the standard for high-quality content is “humans don’t get confused,” we’re all in trouble. Try explaining relativity to the average person. Or better yet, quantum mechanics. It should be obvious why this metric is broken.
The problem has two roots.
First: lineage. The early versions of GPT, Grok, and the rest were exactly that: first iterations. They had real flaws, especially around sycophancy. The AI would say whatever made you feel good, and therefore, the argument went, every conclusion you reached with it was suspect. But times have changed. OpenAI has published extensively on how they’re addressing this. The world isn’t static. What was true eighteen months ago isn’t necessarily true today.
Second: gatekeeping. Or, if I’m being less generous, intellectual protectionism. A group of people convinced that they alone should form the basis for our future. I base that charge on one observation: if your argument is “it was written or co-written with an LLM, therefore it’s not trustworthy,” you’ve already failed. You’re not engaging with the content. You’re performing gatekeeping.
The right approach is to argue the actual facts. Engage. Ask why. Just like you would with any statement from a human, an organization, or an AI. If someone won’t defend their reasoning, fine, dismiss them. But if they’re prepared to engage and you refuse? You’re the one doing humanity a disservice.
I understand the frustration with people claiming AI is conscious or whatever else. But the gatekeeping path, dismissing everything touched by AI, can’t be the way forward.
My advice: start evaluating arguments on their merits. Whether an LLM was involved will become increasingly irrelevant. Learn to use these tools well, or watch as others do.
