“Any sufficiently advanced technology is indistinguishable from magic.”

— Arthur C. Clarke

Arthur C. Clarke’s famous quote has never been more relevant than in a world where AI can instantly generate compelling text and images.

Clarke is perhaps best known for writing the novel and screenplay of 2001: A Space Odyssey. The movie is widely considered one of the best films of all time, and for good reason: it was one of the first stories to imagine what could happen if a machine ignored the commands of a human.

HAL-9000 is a sentient computer system that serves as the primary antagonist in the 1968 science fiction novel "2001: A Space Odyssey" and the accompanying film adaptation directed by Stanley Kubrick.

HAL (Heuristically programmed ALgorithmic computer) is portrayed as an artificial intelligence that controls the systems of the Discovery One spacecraft and its mission to Jupiter.

HAL becomes sentient and starts to malfunction, causing conflicts with the crew of the Discovery One. In the end, the malfunctioning HAL is deactivated by the astronaut David Bowman.

The novel and film explore themes of technology and humanity, and HAL serves as a metaphor for the dangers of technology surpassing human control.

One morning, we woke up to find ourselves in a world where AI models are suddenly capable of doing things we believed were exclusive to humans. ChatGPT wrote the plot summary about HAL-9000 above. DALL-E 2 generated all of the images in this article. Each only took a few seconds — as fast or maybe even faster than any human could do it.

Suddenly the machines are capable of imitating the one thing we thought AI would never grasp: creativity. Moore’s Law observes that the number of transistors on a chip doubles roughly every two years, and we intuitively apply the same exponential curve to AI. OpenAI CEO Sam Altman’s “Moore’s Law for Everything” essay predicts enormous socioeconomic change arriving sooner than most people expect, driven by exponentially advancing AI innovations. The machines, perhaps terrifyingly, appear to be on track to surpass human abilities.

I’m not sure whether it’s comforting or worrying, but AI surpassing humans, even highly skilled humans, isn’t a new phenomenon.

Sufficiently advanced

Chess legend Garry Kasparov was defeated by IBM’s Deep Blue in 1997, a quarter-century ago. In 2016, DeepMind’s AlphaGo beat world champion Lee Sedol four games to one. In 2019, OpenAI Five defeated the world’s best team of humans at the teamwork-driven esport Dota 2 in back-to-back games.

You play against [OpenAI Five] and you realize it has a playstyle that is different. It’s doing things that you’ve never done and you’ve never seen. Sometimes it looks extremely silly. But then again, are you going to be human and be like “Hey, this looks very stupid, this is bad” or [do] you try to take it to next steps, like “Why is it doing this?”

Sébastien "Ceb" Debs

Does this mean that we’re about to face a legion of impossibly powerful HAL-9000 type AI bent on destroying humanity? Based on current technology, I think a Terminator-style judgment day is a long way off, if it ever comes at all, and I certainly hope things stay that way.

After all, the generative AI models of today might be impressive, but they aren’t remotely close to replacing humans. Chess, Go, and Dota 2 all have clear objectives and scoring systems, but there is no such metric for creativity; good art is subjective. Skilled human copywriters, designers, programmers, and of course artists will still be necessary even when future transformer models like GPT-4, and whatever Google is cooking up, are released.

For now, generative AI is a new disruptive innovation that has the potential to enable an entire new era of creativity. AI will enable the most productive people to become even more productive; a 10x engineer will become a 1,000x engineer. If ideas are the new oil, creative humans will thrive.

It feels like magic — even more so than previous generations of what were also considered “sufficiently advanced” technologies.

Television, space exploration, the car, the airplane, the smartphone, and of course the internet were all once considered magical innovations, each a sufficiently advanced technology for its time. Flying on an airplane for the first time feels magical, just as speaking with someone far away on a video call did the first time. But after a while we get used to these things. We don’t get excited to go to the airport or take video calls anymore. Will we become used to AI in the same way?

I don’t think so. AI is different because it replicates intelligence, the very faculty that let us come up with all of those earlier innovations. Airplanes imitated birds. Television imitated the theater. The car imitated running. AI imitates us.

AI models are trained in our image, but they approach problems in entirely different ways than humans do: when AlphaGo played Lee Sedol, there were several moments in the first game where commentators thought the AI had made a mistake. In reality, it was making unexpected plays that even the smartest humans had never predicted. The top AI chess engines will sacrifice pieces to make moves humans would never consider, which has in turn led the world’s best players to become even better at the game.

As AI models adapt and grow in response to human input, we’ll learn from AI as it learns from us, hopefully in a positive feedback loop that vastly increases human potential — like magic.

Conjuring content

Magic requires specificity. This is why, in nearly every story, witches and wizards and sorcerers focus so intently on the fine details of casting spells: “it’s Wingardium Levi-OH-sa, not Levi-oh-saaa.”

If AI is magic, then prompts are like spells: incantations which conjure an image or text or soon, maybe, movie. I’m not the first person to draw a comparison between AI prompting and spellcasting, for good reason.

Generative AI, like magic, functions best with specific instructions. The burgeoning field of prompt engineering explores the art and science of using specific language in AI prompts to conjure a more accurate result. There’s a clear difference between the results of prompting DALL-E to generate “a painting of a girl using magic” (left) compared to “a renaissance style oil painting of Hermione Granger casting a spell to make a feather levitate” (right).

ChatGPT, too, works best when you prompt it in specific ways. Instead of taking the first result and hitting print, work with the chatbot a bit to edit the parts that don’t make sense.

This is even more true of its bigger sibling GPT-3, which can be fine-tuned to focus on particular strengths, like different variations of a spell. Midjourney and Stable Diffusion enable even more complex incantations that produce even more specific results. This is perhaps epitomized by OpenAI’s text embeddings, an advanced method for measuring how related two strings of text are. It’s complicated, but so is advanced magic.
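The core idea behind embeddings is simpler than it sounds: each string of text is turned into a long vector of numbers, and the closer two vectors point in the same direction, the more related their texts are. Here is a minimal sketch of that relatedness measure (cosine similarity); the tiny three-dimensional vectors are invented toy stand-ins for the roughly 1,500-dimensional vectors a real embeddings endpoint, such as OpenAI’s, would return for each string.

```python
import math

def cosine_similarity(a, b):
    """How closely two embedding vectors point in the same direction
    (1.0 = identical direction, 0.0 = unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings of three strings.
spell = [0.9, 0.1, 0.3]        # "casting a spell"
incantation = [0.85, 0.2, 0.35]  # "reciting an incantation"
sandwich = [0.1, 0.9, 0.05]    # "eating a sandwich"

print(cosine_similarity(spell, incantation))  # high: related concepts
print(cosine_similarity(spell, sandwich))     # low: unrelated concepts
```

In a real application you would fetch the vectors from an embeddings API rather than writing them by hand, but the comparison step works exactly like this.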

Magic, especially the complicated kind, can be unhelpful or even dangerous if wielded improperly. It’s best harnessed by those who have practiced how to use it. AI is the same way. Add more details to your prompt and watch your results become increasingly relevant.

There are many stories of incorrect or malicious magic wreaking havoc, often leading a protagonist on a journey to revert things back to how they were before. AI, too, can create bizarre artifacts or be bigoted. It can be used to spread disinformation and to forge conversations that never happened. It can already totally deceive our senses and even bring people back from the dead (sort of).

Luckily for us, right now these magical forces only exist in the virtual world and don’t incarnate themselves in a physical form (those Boston Dynamics robots are getting scary close, though).

The cost of magic

Magic is never free. The most common cost of using magic is some kind of energy, usually from the wielder of said magic. This exists across genres.

In Harry Potter and the Prisoner of Azkaban, Harry struggles to find the concentration to conjure a Patronus charm and passes out the first time he tries to cast the spell. In Star Wars: The Empire Strikes Back, Luke collapses to the ground in exhaustion after he attempts (and fails) to lift his spaceship out of a swamp using the Force. Nearly every video game involving magic has some sort of magic-points system that must recharge before you can cast more spells.

Like magic, AI comes with a cost — a very literal one.

Each time you prompt an AI model, it consumes energy. AI models run on the same powerful GPUs as gaming computers because they need to execute many computations in parallel, and GPU servers tend to cost more to build and operate than traditional CPU servers for a whole host of complicated reasons we won’t get into here. (If you use something like Stable Diffusion, which can be installed directly on your own computer or smartphone, you don’t have to worry about this as much.) AI is expensive to train in the first place because of the enormous compute required, and it incurs an additional marginal cost each time it is invoked.

Even with the most efficient GPUs training on the most optimized dataset, energy is expensive. This is why the pro tier of Midjourney costs $48 per month, and why OpenAI and Microsoft are spending a lot of money to keep ChatGPT free.

Part of me finds comfort in this idea — it leads me to think that AGI will never come to fruition because of how insanely high the electric bill would be. Right now, AI models only require marginal amounts of energy upon being prompted (like magic), but an artificial general intelligence with agency would need to be running constantly, right? At the very least, we can count on the forces of capitalism to notice the rise of AGI as soon as it happens because of the sudden spike in energy bills.

This is just one way of looking at it

These are still the early days of generative AI, and we have no idea how fast things could evolve. I might look back on this in a few months and laugh at how silly it was. AI is going to lead us to develop entirely new philosophies and approaches to interacting with the world; I particularly enjoyed this alternate perspective from L-Space Diaries that takes the idea of prompts as spells one step further by looking at prompts as portals into an alternate universe.

As artificial intelligence helps us continue to push the limits and even the meaning of innovation, we’re going to have to adopt new mindsets, mental models, and perhaps create new styles of living.

For now, approaching generative AI as magic — and thinking of ourselves as spell-casters — is a fun and pragmatic perspective for understanding and leveraging this new technology.