# Every AI Use Case Is Different
There’s a saying in the autistic community: If you know one autistic person, you know one autistic person. The implication is that there are as many ways to be autistic as there are autistic people.
Well, if you know one [[AI]] use case, you know one AI use case.
A friend asked me the other day to give him a quick sound bite on why people should use Gemini or ChatGPT. I told him that if I could do that, OpenAI would be paying me millions as a consultant, because that’s their biggest problem: trying to explain to people what their product is for. Obviously, I don’t have an answer to that question—if I did, I’d be rolling in consultant cash and sipping mojitos on a beach somewhere, not grappling with its infinite complexity here. But maybe I can explain why that question has no answer, or more specifically, why it has *millions* of answers.
While there are technical differences, modern Large Language Models, or LLMs, are designed to mimic human creativity. To hideously oversimplify, they take in vast amounts of data, learn to recognize patterns and relationships in that data, and then use those patterns and relationships to create new data based on a prompt given to them by a human. They’re not really *thinking machines*—not yet, anyway, though newer chain-of-thought reasoning models, like OpenAI’s, are making strides in step-by-step reasoning that mirrors structured human problem-solving—but they are *creativity machines*.
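To make that oversimplification concrete, here’s a toy sketch of the same loop—learn patterns from data, then continue a prompt using those patterns. This is a bigram (Markov chain) model, not remotely how a real LLM works under the hood, but the shape of the process is the same:

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram (Markov chain) text model.
# Real LLMs use neural networks with billions of parameters,
# but the basic loop is the same: absorb text, learn which
# patterns follow which, then continue a prompt from there.

def train(text):
    """Count which word tends to follow which word."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt, length=8):
    """Continue the prompt by sampling the learned patterns."""
    out = prompt.split()
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no learned continuation for this word
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
model = train(corpus)
print(generate(model, "the cat"))
```

Note how the output is entirely shaped by two things: what the model was trained on, and the prompt you hand it—which is the whole point of the paragraphs that follow.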
Well, what is creativity for?
See, this is the problem OpenAI’s crack marketing team still can’t solve. Everyone has a different answer to that question. For some, it’s writing code. For others, generating art. For still more, brainstorming. Really, LLMs can create anything they were trained on, just like humans. They can’t come up with something completely ex nihilo, but then, neither can almost any human, and you wouldn’t want them to. Any creative output that is totally unique, with no prior influences or connections, will be completely alien and baffling to anyone who sees it, because they have no frame of reference. Creativity isn’t mystical. It’s just connecting things that other people haven’t thought of as related before. And LLMs do it just as well as a reasonably intelligent, widely read human.
But trying to condense that down into a sound bite is beyond futile. What you get out of a *conversation*—[[Why AI is bad at answering easy questions, but really good at hard questions|not a simple, context-free Q&A]]—with an LLM is *entirely* dependent on what **you** bring to the table. The prompt is everything, because without it, the LLM has no parameters. As Orson Welles said, “The enemy of art is the absence of limitations.”
So the use case for AI is whatever you want to create with it. Personally, I use it as an editor for my writing, a talk therapist, and a sounding board to help me learn new concepts and grow as a person. I don’t have it actually draft new articles for me, though it’s perfectly capable of doing that, and in my own voice. I don’t have it generate code, because I’m a *recovering* software developer for a reason. But that’s all just me. Your use cases will be, *must be*, different. So, what will you create with it?