Every time a disruptive technology becomes available, humanity goes through essentially the same cycle: At first, exaggerated fear and sullen rejection on one side clash with overflowing enthusiasm on the other. As the technology matures and its capabilities and limitations become more distinct, a synthesis eventually emerges—often grounded in rules and regulations about where and how it can be most beneficially applied. Thus, what once seemed new, outlandish, and groundbreaking turns into a commodity, and often one that helps to shape the next level of innovation. The question about generative AI today is obvious: “Will this time be different?”

Socrates, famously, held a strong opinion that the widespread use of writing would have disastrous consequences for humanity. The philosopher argued that its rise would erode people’s ability to memorize texts, essentially dumbing down society as a whole. And, alas, he was correct on the first point: The fact that people no longer had to hold the stories they wished to conserve for future generations in mind word for word meant that fewer people actually learned those texts by heart. But on the other hand, that did not mean that nobody memorized anything anymore. Personally, about 2,500 years later, I still find joy in reciting the Shakespeare sonnets I’ve learned by heart. Of course I didn’t memorize those poems because I felt that this would be the most efficient way of preserving them for those who come after me. I did it simply because I enjoy the sound that Shakespeare’s elegantly crafted words make as they reverberate in my mind. But apart from pure pleasure, and more to Socrates’ larger concern: We now know for sure that the adoption of written language did not, in fact, make humanity any dumber or any lazier. Quite the contrary was true: The fact that we didn’t have to expend so much mental energy on memorizing ancient texts freed up capacity for more advanced thinking. It led, for one, to us collectively creating many more and, arguably, better stories.

Since the advent of writing, this dynamic has played out many times over. We find a way of doing something more efficiently. But we’re also afraid that it might diminish the value of how we did that thing before. Then we figure out that we can still do it the old way, but choose to do so for fun rather than for profit. And the new way frees up resources that we can expend on other, oftentimes more valuable, tasks. Thus, in turn, we benefit from the compounding interest of innovation.

Broadly speaking, the same is true of (narrow¹) AI in general, and generative AI in particular. Some are afraid that the value we assign to certain types of work, such as writing or visual artistry, will be greatly impaired by ChatGPT, Midjourney, and the like. And, of course, there’s some truth to that: If, for example, your job solely consisted of inhaling long texts and then summarizing them for the benefit of your boss, then well… ChatGPT may have bad news for you. If you’re a graphic artist and your only interaction with your clients is that they tell you to “create an image of a rainbow-colored unicorn”, well, DALL-E may really be out there to get you. But if that’s really all your job is about, then it raises the question: How valuable was that job to begin with? Notwithstanding the fact that you may take pleasure in doing that type of work, its economic value may be the same as my memorizing a Shakespeare sonnet: Close to zero.

The fears associated with this train of thought, however, are grounded in two false premises: The first one is about how you as a human being relate to the value you add to the economy. Fifty years of neoliberal indoctrination have made us believe that those two things are one and the same: That if you don’t produce something that someone else is willing to pay hard currency for, then you, yourself, are worthless. Of course, that view is not only dehumanizing, it’s also at odds with how people have interacted with each other for the last couple of millennia—before Milton Friedman and Friedrich Hayek arrived on the scene. In social groups all across time and space, humans have collaborated much more on an intuitive, rather than a transactional, basis. They didn’t measure, tally, and compare individual contributions; rather, they pooled their resources as well as their duties in order to achieve their common objectives. They pulled along the old and the weak, the infirm and the incapable, because that’s the right thing to do and because acting otherwise would weaken the bond of the community as a whole. Hence, most spiritual traditions consider every person—every sentient being, if you’re willing to go that far—as valuable for their own sake, without any strings attached.

But let’s suppose that this is a bit too far out there for you. After all, you’ve got bills to pay, and the electricity company won’t accept “human value” instead of a bank transfer. So bear with me.

Because the second false premise is that as technology advances, it will be able to do everything you do today better, faster, and cheaper than you, all of a sudden, and all at once. And that there’s no way for you to make any meaningful contribution anymore whatsoever. This proposition, as it turns out, is just as confused as the first one when considering the development of generative AI. Yes, ChatGPT can string words together in a plausible-sounding way. But it won’t come up with the next big idea for what you should write about. Yes, DALL-E can give you ten different versions of that rainbow-colored unicorn in the blink of an eye. In the style of van Gogh, Picasso, or Andy Warhol if you like. But it will not do what van Gogh or Picasso or Warhol did for their craft: Invent a new category of art. Today’s systems are great at reshuffling and replicating stuff that somebody has already created. And that’s often useful, just as it’s useful to write things down rather than having to keep them in mind. But what these tools can’t provide is creativity—the true creativity that’s required to create something that wasn’t there to begin with.

When people started to adopt writing, they found ways to use it as a tool to ease the creation of new things—categorically new things. What would be the equivalent of that in our day and age, being confronted with tools like ChatGPT? I think the answer is surprisingly simple: It’s about learning to ask better questions.

When you have instant access to (virtually) all of human knowledge, coupled with the ability to query it using natural language, and the freedom to have the result shaped in any way you desire, from a limerick to an explanation “for a 5-year-old”, the possibilities are, indeed, limitless. But you, the human, with your unique background, memories, and creative ability, have to think hard about what you ask of such a technological marvel. And I don’t mean this in the narrow, clickbait-y “Those 215 ChatGPT Prompts Will Change Your Life” sense. What I’m asking you is to think deeply about what it really is that you need at this moment in order to drive your current endeavor forward. Are you seeking information? Inspiration? A proposal for how to structure it? The fine-tuning of a draft? And then to consider which of the tools at your disposal is most suitable to that need.

These days, ChatGPT will often be the one you’re turning to. And that’s fine. But that won’t spare you the hard work of thinking about what question you need to ask it in order to get a useful result. And how to interpret the answer. After all, the invention of writing did not “automate away” the need for human ingenuity to come up with better stories. It made it easier to create and preserve them, for sure. Douglas Adams’ Hitchhiker’s Guide to the Galaxy, in and of itself a great story, comes to mind. “What,” they asked the god-like computer they had invented, “is the answer to life, the universe, and everything?” The machine’s response, alas, is history. As is the marvel of what might have happened had they only thought to ask a better question.

  1. Note that I keep referring here to “narrow” AI, in the sense that the algorithms in question are designed to perform a specific set of tasks. That is in contrast to the idea of Artificial General Intelligence (AGI), which would be a different matter altogether. ↩︎