Leveraging AI’s Moment of Fear and Trembling
Beginning in the 1840s, more and more people adopted trains as a mode of transportation. In the U.S., those early trains moved at a whopping seven to ten miles per hour. Still, people were outraged by what they viewed as terrifyingly high speeds. They complained of “railway brain” or “railway spine,” ailments they blamed on scenes whizzing past the window and taking a toll on their bodies in disturbing ways.
One of the great Hudson River School painters, Thomas Cole, explained it best while raging in his journal:
“The hurry noise & restlessness of Rail Road travelling with the consequent violence done to all the natural requirements of the body are anything but conducive to the health of the body or serenity of the mind. The body is made to be merely a sort of Tender to a Locomotive Car; its appetites & functions wait on a Machine which is merciless & tyrannical.”
The social critic Max Nordau offered another fantastic description: “Every line we read or write, every human face we see, every conversation we carry on, every scene we perceive through our window of the flying express, sets in activity our sensory nerves and our brain centres.”
The great art critic John Ruskin described the railway as “the loathsomest form of devilry now extant… destructive of all wise social habit and natural beauty.”
Wow. Destructive devilry. The “flying express” was clearly a source of duress for these riders—inducing real fear and anxiety about how this thing was altering their perceptions of the world.
Indeed, our brains can scarcely do anything but perceive newness as danger. Our amygdala fires up in the face of the new and unknown. So, when we encounter an innovation like the latest version of ChatGPT, we react fearfully and freak out about how students will cheat on their exams. We go to dark places, and we lose sight of how generative AI may afford us an incredible opportunity to focus learners on the most critical problem-solving and systems thinking skills we need in order to contend with even more uncertainty and unpredictability in the future.
Functional magnetic resonance imaging (fMRI) studies of our brains reveal just how bad we are at thinking about the future. When we try to think about ourselves 10 years from now, our medial prefrontal cortex (mPFC) does not activate; our neural patterns instead mirror those that occur when we look at a total stranger. No empathy. No emotional connection. We regard our future selves as though we’re looking at strangers.
And even knowing about this physical limitation built into our brains, we don’t compensate for the deficit. We don’t scaffold our learning experiences to help us engage in the discipline of looking ahead.
To be clear, I’m not suggesting that we need to practice becoming better predictors of the future. That’s an impossible feat. Instead, we need to learn how we might respond and plan strategically in simulated environments.
The Johns Hopkins Bloomberg School of Public Health, for example, houses a Center for Health Security that runs fictional scenarios called tabletop exercises, centered on imagined global disasters like a new pandemic or an outbreak of smallpox. The simulations are meant to serve as teaching and training resources for global leaders.
In these fictional disasters, a group of decision makers must deploy resources and create policies. But before they do, they have to weigh risks, identify potential failure points, gauge public opinion, communicate carefully in situations where certain populations get left behind, and ultimately make the best-informed decisions they can under enormous stress.
The military runs its own versions of these simulations, using sophisticated scenario planning to think through the possibilities and outcomes armed forces must anticipate as they enter life-threatening circumstances.
These gamified simulations are meant to bring to life what Nassim Nicholas Taleb calls Black Swan events (improbable, high-impact happenings) so that we learn to think more nimbly about the future. Done well, these exercises help organizations identify their own weaknesses and gaps and show where they need to shore up resources and processes to prepare for what’s coming.
At the same time, we don’t always have to be in crisis mode to engage in foresight training. Thinking deliberately about the future also includes interrogating the first- and second-order effects of the innovations, strategies, and policies we design. This means combining knowledge and practices across disciplinary boundaries: sense-making, scenario planning, anthropology, empathy, ethnography, design thinking, ethics, the philosophy of law, climate science, cybersecurity, and many other fields.
Real interdisciplinarity also forces us to consider who else needs to be included in the conversation. We know that more diverse teams bring more diverse perspectives and approaches to complex problems. Yet in the tech industry, an industry so forcefully reshaping every interaction in our lives, less than two percent of VC funding goes to Black founders, and women hold less than 28 percent of all jobs in math and computing.
What might our world look like today if more women and more people of color had been involved in the original launches of Twitter, Facebook, and Google? More vigorous debate and tougher questions from more diverse perspectives and life experiences might have helped us think through how social media apps could ignite hate-fueled mobs and lynchings like those in Sri Lanka, Indonesia, India, and Mexico. Maybe instead of reacting, we might have anticipated how algorithms could be overrun by clickbait, bias, and outright lies.
H.G. Wells famously mused:
It seems an odd thing to me that though we have thousands and thousands of professors and hundreds of thousands of students of history working upon the records of the past, there is not a single person anywhere who makes a whole-time job of estimating the future consequences of new inventions and new devices…there is not a single Professor of Foresight in the world. But why shouldn’t there be? All these new things, these new inventions and new powers, come crowding along; every one is fraught with consequences, and yet it is only after something has hit us hard that we set about dealing with it.
We tend to act first and grapple with the consequences only after they hit.
Even Geoffrey Hinton, one of the “godfathers of AI,” recently acknowledged: “I thought it would happen eventually, but we had plenty of time: 30 to 50 years.” The “it” he’s alluding to is the moment when digital intelligence supersedes human intelligence. In other words, even though he believed that moment might be 50 years away, he did nothing in the intervening decades to mitigate the destructive effects of what he was helping to unleash into the world.
It’s precisely because we’re unskilled at contending with the future that we need to practice the future more. If we use this inflection point in AI wisely, we can rethink in wholesale ways how we might integrate real-world, problem-based learning and scenario planning into our curricula from K through 12 and beyond.
We shouldn’t have to enroll in a master’s degree program or join the military to learn how to engage with thorny challenges. With more foresight training embedded in learning experiences at every phase of our lives, we can better anticipate and plan for the ramifications of what we’re building before we launch it.
And with better strategies, our contingency plans will ultimately look very different from Elon Musk’s notion of interplanetary travel as a way of escaping our AI creations. Learners will be able to think through challenges before they arise, or, at the very least, respond less fearfully when they do.
References
Max Nordau, Degeneration (London: William Heinemann, 1892), 39.
Thomas Cole, journal entry, 1847.
John Ruskin, The Works of John Ruskin, Volume 34 (1908), 604.
Jane McGonigal, “Our Puny Human Brains Are Terrible at Thinking About the Future,” Slate, April 13, 2017, https://slate.com/technology/2017/04/why-people-are-so-bad-at-thinking-about-the-future.html.
H.G. Wells, BBC radio broadcast, https://www.bbc.co.uk/archive/communications-1922-1932--hg-wells/z4f6kmn.