In 1980, the son of Araki Yasusada discovered a treasure trove of his father's notebooks, filled with carbon copies of letters; poetry, including haiku, renga, and tanka; drawings; and assorted notes, lists, and memos. Araki Yasusada was a Hiroshima survivor who had unknowingly created a beautiful historical document. And best of all, the poetry was good.
Editors of leading periodicals leapt at the find, publishing parts of the translated text. Wesleyan University Press struck a book deal for Yasusada's poetry. Here was a thrilling new avant-garde poet who had written eloquently about one of the most horrible tragedies in human history, which he had witnessed and survived.
As excitement about Yasusada bubbled, a rumor began spreading across the internet that Araki Yasusada did not exist. In November 1996, Lingua Franca established that this whole thing was, in fact, a hoax.
Wesleyan University Press backed out of its book deal; critics were furious; literary historians chuckled at the “proof of the American poetry community’s shallow understanding of the Japanese avant-garde”; editors were embarrassed, exposed as “suckers for any writing by a Victimized Other.” The hoax now exists as a book entitled Doubled Flowering, most likely created by a college professor named Kent Johnson.
This literary hoax fascinated me as a grad student: both the cultural hijacking and the questioning of authorship. Today's readers know, even before beginning, that everything in Doubled Flowering is made up. Trust is eliminated from the get-go.
Throughout the text, footnotes, dates, and details are planted everywhere to mislead the reader and anchor the text in reality. Words are misspelled here and there, and some are crossed out to evoke the original notebook writings. Parentheticals run rampant, describing the state of the original papers and pointing out splotches, stains, creases, drawings, and blots. All of this exists to manipulate the reader.
Reading becomes an entirely different kind of act. And this was way before generative AI.
Now, take a look at these images:
These are images of the hospital bombing in Gaza from a few weeks ago.
Except they’re not.
They’re AI-generated images based on Adobe stock imagery, and it took news outlets a long time to figure out they were not real.
Today, we’re likely encountering synthetic information in everything we read, watch, and hear. AI hallucinates. AI creates “alternative facts” with great confidence.
And that misinformation spreads like wildfire. As early as 2018, MIT researchers found that false news spreads faster than the truth: a false story is about 70% more likely to be retweeted on Twitter (now X) than a true one, and true news takes roughly six times as long to reach the same number of people. So, once the misinformation is out there, you can't reel it back.
Unfortunately, we lack the tools to trace the original source in the deluge of information we encounter in digital form, and some people simply take that information at face value without verifying its authenticity or thinking critically. One of the most embarrassing displays of this came when a lawyer copied and pasted ChatGPT's entirely made-up case law into a federal lawsuit against the airline Avianca.
But as this technology becomes better and more sophisticated, we’re going to have to think hard about authenticity and its relationship with misinformation and disinformation.
Those photos of the hospital bombing were engineered to manipulate us by leaning into pathos and riling us up as viewers. Information that is agitating, incendiary, or inflammatory is often more novel and inspires readers and viewers to repost and reshare it. What can we do to combat the proliferation of more hoaxes to come?
This past weekend, I participated in a set of workshops and events held by XPRIZE, a foundation that designs incentive competitions for humanity's greatest challenges. The foundation is probably best known for its audacious $10M Ansari XPRIZE, won in 2004, which inspired 26 teams to pour more than $100 million into building a reusable crewed spacecraft that could reach suborbital space safely, land, and then do it again within two weeks. The demonstration of that suborbital spaceflight, not just the concept or idea, was what was required to win the prize. Ultimately, the winning team's work helped spark a commercial spaceflight industry that now includes Virgin Galactic, Blue Origin, and SpaceX. Since then, XPRIZE has created many other prize purses spanning global challenges in health, education, deep tech, and climate.
I serve on the XPRIZE brain trust for Learning and Society. When it comes to thinking about a moonshot idea in this space, things get complicated, for we know that the solution will not solely be a technological feat. Technology has never been and will never be the silver bullet in the education industry. But it can be an incredible and powerful enabler.
Over the last six months, my brain trust colleagues and I have been working on ideas ranging from twinning a teacher with telepresence to attend to more learners at once to unlocking creativity in our littlest learners.
The idea that I personally became fixated on was how we would contend with the fact that humans, not bots, are the biggest spreaders of misinformation. Researchers have catalogued the cascade depth and reach of false information: false news reaches a cascade depth of ten about 20 times faster than fact-based news. People start an exponential ripple effect that propagates at breakneck speed and depth through our digital and social media platforms.
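To build intuition for that exponential ripple, here is a minimal Python sketch. It is my own toy model, not the study's methodology: the branching factor and per-hop share times below are invented for illustration.

```python
# Toy model: how a per-hop speed advantage compounds across a cascade.
# All parameters are made-up illustrations, not figures from the MIT study.

FALSE_HOP_HOURS = 1.0   # assumed time for a false story to hop one level
TRUE_HOP_HOURS = 20.0   # assumed time for a true story (20x slower per hop)
BRANCHING = 3           # assumed average reshares per person
DEPTH = 10              # cascade depth, echoing the ten-level framing above

def cascade(hop_hours: float, branching: int, depth: int) -> tuple[float, int]:
    """Return (hours to reach `depth`, total people reached by then)."""
    reached = sum(branching ** level for level in range(depth + 1))
    return hop_hours * depth, reached

false_time, false_reach = cascade(FALSE_HOP_HOURS, BRANCHING, DEPTH)
true_time, true_reach = cascade(TRUE_HOP_HOURS, BRANCHING, DEPTH)

print(f"False story: depth {DEPTH} in {false_time:.0f}h, ~{false_reach:,} people")
print(f"True story:  depth {DEPTH} in {true_time:.0f}h, ~{true_reach:,} people")
```

With these made-up numbers, both stories eventually reach the same ~88,000 people, but the false one gets there in 10 hours instead of 200. That head start is exactly what a correction has to race against.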
My team and I came up with a prize called AI for Truth, which is about leveraging the power of AI to help us pause before we click and repost. We need the spread of facts to outpace the proliferation of misinformation. Ultimately, human behavior needs to change in order to slow or reverse the speed of misinformation.
What would it look like if we had an app like Grammarly or Turnitin highlighting regions of text that we might want to question? Maybe it triggers a deepfake warning or lets us know that a passage is filled with hyperbole, opinion, propaganda, or lies. Maybe language is tagged because it's trying to sell you a consumer product. Or maybe a sentence is flagged as a true, evidence-based fact. What if we could know the exact source of information, its provenance, in real time? And what if this were present on every platform where we consume information? Would it give us pause?
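As a thought experiment, here is a toy Python sketch of the crudest version of that annotation layer. The labels and keyword rules are entirely invented for illustration; a real system would rely on trained models and provenance standards (such as the C2PA content credentials effort), not keyword matching.

```python
# Toy sketch of a "Grammarly for truth" annotation layer. The labels and
# trigger phrases are hypothetical stand-ins for real classifiers.
import re

RULES = {
    "hyperbole": ["unbelievable", "shocking", "destroyed", "never seen"],
    "sales pitch": ["buy now", "limited offer", "act fast"],
    "unsourced claim": ["people are saying", "everyone knows", "reportedly"],
}

def flag_sentences(text: str) -> list[tuple[str, list[str]]]:
    """Split text into sentences and attach any matching warning labels."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    flagged = []
    for sentence in sentences:
        lowered = sentence.lower()
        labels = [label for label, cues in RULES.items()
                  if any(cue in lowered for cue in cues)]
        flagged.append((sentence, labels))
    return flagged

sample = ("People are saying the footage is real. "
          "This shocking clip will change everything! Buy now to learn more.")
for sentence, labels in flag_sentences(sample):
    tag = ", ".join(labels) if labels else "no flags"
    print(f"[{tag}] {sentence}")
```

Even this crude version hints at the interaction the prize imagines: a small, inline signal attached to each sentence, surfaced at the moment you are deciding whether to reshare.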
How do you react when you receive a phone call now that says, “Potential Spam”? Do you pause, let it go to voicemail, or pick it up? We need a similar kind of nudge or signal to help us take a beat before we repost and share. And that’s the concept behind this prize idea. Knowing that we’re not going to be able to change people’s minds, what might we enable through technology to get more viewers and readers to think more critically about the content they are engaging with?
As the deluge of information we encounter every second grows more sophisticated, we are poorly equipped to contend with content deliberately intended to mislead us. The manipulative strategies of the Yasusada text, the blots and splotches and dates that never existed, will only get more sophisticated with the advancement of AI technologies. We will see all kinds of deliberate ploys to lead people away from fact-based information.
Over days and many hours of advocating for this prize idea, we ultimately made it into the top two for our domain. Across the five domains of Space & Exploration, Health, Climate & Energy, Biodiversity & Conservation, and Learning & Society, more than 40 ideas were narrowed to two per domain to share with a wide variety of funders (including Prince Harry; yes, that's him clapping for our prize idea).
Those ten ideas were then whittled to five and ultimately two. Our AI for Truth prize was one of the final two. And because the competition between us and Medmaker (a prize to create generic medicines from a box anywhere in the world) was so close, XPRIZE will support both as it finds the right funders for each prize idea.
Experts, researchers, innovators, and funders rallied around the desperate need for help and innovation in contending with mis- and disinformation. We all acknowledged that while it's a nearly impossible challenge, it's also one we are years behind in addressing. Wyoming State Senator Ogden Driskill, a Republican, shared that "people are yearning for the facts." Climate scientist Eli Rabani argued that none of the other great prize ideas will matter if we don't figure this one out first.
We, as humans, need help grappling with the technologies that we have created. Government, on its own, will not figure this out. The Big Five and other major tech companies are not incentivized to make this easier and better for information consumers. An ambitious XPRIZE, however, could catalyze the creation of AI- and tech-based solutions to slow the spread of misinformation worldwide and surface more fact-based information. An XPRIZE could help us pause before we click and make way for more critical consumption and sharing of information.
And because you're wondering: Yes, Prince Harry shook my hand.
Special thanks to Marianna Bonanome, Johannes De Gruyter, and Jen Wells for their incredible help and support during this insane weekend experience. And thanks to the entire Learning & Society brain trust and XPRIZE team.