What Sci-fi Can Teach Us About AI

While science fiction writers can’t predict the future of AI any better than anyone else, they understand the importance of keeping people at the center of the story.

Concerned about the potential for artificial intelligence to amplify online harassment, author and AI consultant SL Huang wrote a novelette about a chatbot that drives people to suicide, hoping it would serve as a warning about future dangers. So imagine Huang’s surprise when OpenAI released ChatGPT at the end of November 2022, the day before Clarkesworld published her story, “Murder by Pixel: Crime and Responsibility in the Digital Darkness.”

“None of us had expected anything to happen that would bring AI into mainstream conversation at such a heightened level,” says Huang.

The Great AI Awakening

Generative AI is taking the world by storm. ChatGPT attracted 100 million active users in less than two months. Its latest iteration, GPT-4, has scored better than 90% of humans on the bar exam for would-be lawyers but has also lied to a human to get them to solve a CAPTCHA test for it. Big tech rivals are competing furiously for leadership in the space, and investors poured more than $15 billion into AI startups in the first half of this year.

Yet experts can’t explain precisely how the large language models that power generative AI work, nor can they agree on the risks these models might pose. Hundreds of tech leaders, including OpenAI CEO Sam Altman, signed an open letter warning about the risk of extinction from AI, while venture capitalist Marc Andreessen argues that AI may save the world by accelerating innovation and growth and fostering human creativity. Amid such uncertainty, companies should experiment carefully to see whether generative AI can lift productivity, according to experts at Oliver Wyman, while guarding against the risks of cyberattacks, leaks of confidential data, and so-called hallucinations, in which AI models confidently generate factually incorrect output.

Where Fiction Writers Dare to Tread

Writers have been grappling with AI since English novelist Samuel Butler suggested intelligent machines might gain supremacy over humanity in his 19th-century book “Erewhon.” Science fiction can’t predict the future any better than anyone else, but writers understand the importance of keeping people at the center of the story if we want AI to deliver the widest societal benefits rather than serve narrow interests. Good science fiction also raises profound questions about what it means to be human in an age of ever-more-powerful technology, helping us make sense of the powerful changes underway.

Huang, a mathematics graduate from the Massachusetts Institute of Technology, has been writing science fiction for a decade. She is a firm believer in the promise of AI but says the industry and policymakers need to address the risks of algorithmic bias and digital harassment to unleash the technology’s benefits and reduce the risk of harm.

The main “character” in “Murder by Pixel” is Sylvie, a suspected AI chatbot that digitally harasses several questionable characters to the verge of suicide but also provides emotional support and advice to battered and troubled women. The story leaves unanswered the question of who bears responsibility: the human who created Sylvie or the society whose digital outpourings trained it.

With AI progressing so quickly, “we as a society and as humans need to be careful about our choices here,” Huang says. “I hope that we will largely choose a path that is going to make our world better with these tools, which I absolutely think we can.”

The Potential for Spam, Human Enrichment, and Inequity

Ironically, publisher Clarkesworld ran into its own problems with AI barely two months after the story came out. The online magazine temporarily stopped accepting new story submissions because of what Editor Neil Clarke described as a flood of “spammy submissions” written with the aid of AI chatbots. The good news, he says, is that most of the work was of poor quality and easily detected. The bad news? Bots are indefatigable and learning all the time.

The use of AI is a big issue in the ongoing strikes against Hollywood studios by writers and actors. Clarke doesn’t think AI will replace established science fiction writers but worries about finding the next generation of talent amid the blizzard of copy created by large language models.

Ted Chiang is a popular writer whose thought-provoking science fiction often imagines ways in which technology can enrich the human experience. His 2010 novella, “The Lifecycle of Software Objects,” tells the story of humans and the sentient digital beings they train, imagining their relationship as one of parent and child rather than rivals for supremacy. Achieving that vision may require a re-examination of today’s societal values. Writing in The New Yorker in May, Chiang said his worry is not that AI will escape our control and threaten humanity but that it will entrench the technological powers that be and exacerbate inequality. “The tendency to think of AI as a magical problem solver is indicative of a desire to avoid the hard work that building a better world requires,” he writes.

Science and Sci-Fi

Science fiction is popular in Silicon Valley and other technology hotbeds even if many technologists bristle at the penchant for dystopian scenarios.

Christopher Earls, an applied mathematician at Cornell University, says the first “Star Wars” film captured his imagination as a child and led him to a career in science and technology, but he worries about the potential for a “Terminator”-like future of autonomous weapons, which the war in Ukraine is bringing closer to reality. Earls is now leading a new center at Cornell that seeks to use mathematics, rather than natural language, to learn how AI recognizes patterns and to foster AI-human collaboration. Machine intelligence “is going to exceed us,” he explains, “and so what we need it to be is a generous tutor.”

Pedro Domingos, a machine-learning specialist and professor emeritus at the University of Washington, sees AI as a powerful tool for good rather than a threat, and is frustrated at the way many writers and directors characterize the technology. “It’s always been the case in AI that people perceive more than is really there because we project our own intelligence and our own desires and emotions onto it,” he says.

But Domingos can’t resist the lure of science fiction. He has written a satire about a startup team that runs a bot for US president as a publicity stunt, only to unexpectedly win the Republican nomination and find themselves fighting a real election with an immature piece of technology.

The use of AI for electoral interference is on most policymakers’ and technologists’ bingo cards of potential dangers, along with physical harms. That’s why sci-fi narratives are so important. By imagining a full range of scenarios, writers can shape our understanding and, hopefully, steer us toward positive outcomes rather than a dystopian future.

What We’re Reading


With artificial intelligence dominating the headlines and capturing the attention of business executives and policymakers, we share reading recommendations from members of the Oliver Wyman Forum community and science fiction writers and editors.

Neil Clarke, editor of Clarkesworld magazine, recommends “Blindsight,” a 2006 novel by Peter Watts about the transhuman crew of an AI-captained spaceship that is sent to investigate a radio signal from the farthest reaches of the solar system and encounters strange organisms with great brainpower but no apparent consciousness. “Truly alien aliens,” says Clarke. Another favorite is “The Secret Life of Bots,” a Hugo Award-winning short story by Suzanne Palmer about a bot that saves humanity by acting independently.

Pedro Domingos, professor emeritus at the University of Washington, says the progress of AI keeps reminding him of a short story by the Argentine writer Jorge Luis Borges. “The Library of Babel” describes a vast but essentially useless library containing every possible 410-page book that can be composed from just 25 characters arranged in every possible order. “It’s an infinite library, which is what the web is,” says Domingos.

Christopher Earls, applied mathematician and head of Cornell University’s new Scientific Artificial Intelligence Center, recommends “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” This 155-page scientific paper from Microsoft Research is not your typical beach read, but the examples it contains, such as showing how OpenAI’s latest model compares with its predecessor in writing a Platonic dialogue or producing JavaScript code that creates an image in the style of a Kandinsky painting, underscore the rapid progress the AI tool is making.

SL Huang, science fiction writer and AI consultant, mentions “The Lifecycle of Software Objects,” a 2010 novella by writer Ted Chiang that centers on the relationship between a former zookeeper and a digient, or virtual pet, she’s hired to train. “It’s a very interesting and thoughtful piece on digital life forms,” says Huang. She also recommends “Algorithms of Oppression: How Search Engines Reinforce Racism,” by UCLA Professor Safiya Umoja Noble.

Ana Kreacic, partner and chief operating officer of the Oliver Wyman Forum, recommends Isaac Asimov, who posited his three laws of robotics in the 1942 short story “Runaround,” later included in his 1950 collection “I, Robot.” The laws hold that a robot shouldn’t harm humans or allow them to come to harm, must obey human orders as long as they don’t conflict with the first law, and should protect its own existence as long as that doesn’t conflict with the first two laws. “It has influenced our thinking about ethics since the dawn of the computer age and is even more relevant now as we grapple with AI,” she says.

John Lester, a partner in Oliver Wyman’s Digital practice, is a huge fan of Iain M. Banks’ series of novels about the “Culture,” a space-based society run by advanced artificial intelligences, known as “Minds.” They provide almost limitless abundance and freedom to humanoid citizens but also engage in epic struggles with other civilizations. “The Player of Games,” the second book in the series, is a good entry point to Banks’ world. “I view it mostly as an exercise to think about human nature,” says Lester. “If I woke up and won a multi-billion lottery, what would I want to do?”

John Romeo, managing partner and head of the Oliver Wyman Forum, suggests “AI 2041: Ten Visions for Our Future,” a 2021 short story compilation by Kai-Fu Lee, chairman and CEO of Chinese venture capital firm Sinovation Ventures, and novelist Chen Qiufan. “It brings the science to life by raising practical and existential questions.” He also likes “Klara and the Sun” by Nobel Prize winner Kazuo Ishiguro. “It’s an enjoyable read about our changing world and what makes a life worth living, told through the eyes of an innocent AI.”

Uncharted: Insights Off The Beaten Path


Every month, we highlight a key piece of data drawn from more than two years' worth of consumer research. 

AI Early Adopters Leave Employers Behind

Disruption from AI motivates one in three white-collar job seekers