Even the most state-of-the-art A.I.s learn through a rigid process, in which they are trained, at great expense, and don’t really get any smarter after that, no matter how much new information they ingest. It’s as though their minds freeze on graduation day. Yet human beings constantly improve their own minds through an unfolding, open-ended process that connects newly acquired facts and ideas to ones collected long ago. — Joshua Rothman
There's a new word for what AI is not: "educable."
Computer scientist Leslie Valiant has just come out with a book called The Importance of Being Educable, in which he uses this adjective, "educable," to distinguish humans from AI. Unlike minds that freeze on graduation day, humans are able to intuit, recombine information, and synthesize new data alongside what we learned long ago.
AI’s intelligence, by contrast, is decidedly more limited. Computer scientist Yejin Choi puts it bluntly: “AI today is unbelievably intelligent and then shockingly stupid.” She uses the word stupid because AI lacks what Choi calls common sense. In a TED talk, she shares the response she received when she asked GPT-4 to measure six liters of water using either a 12-liter jug or a six-liter jug. The answer, of course, is to just use the six-liter jug, but here’s what she received as a response: “Step one, fill the six-liter jug, step two, pour the water from six to 12-liter jug, step three, fill the six-liter jug again, step four, very carefully, pour the water from six to 12-liter jug. And finally you have six liters of water in the six-liter jug that should be empty by now.” Um, what??
AI also lacks a moral compass, human values, and a human psyche. Oxford philosopher Nick Bostrom illustrated this well through a thought experiment: if we asked a superintelligent AI to maximize the production of paper clips, it would likely destroy anything, and anyone, that tried to thwart its efforts to create more paper clips. AI is single-minded in the way it devotes all of its computational resources to a singular goal, even if that means hurting others.
Because of these limits of AI, we believe that humans have a leg up on the machines. You may have heard people talk about AI requiring “humans in the loop.” It certainly sounds reassuring as we worry about keeping our jobs in the face of more automation, but historically, humans left in the loop have been paid a pittance to clean up data or tag photos to train AI. Mary Gray and Siddharth Suri call this kind of work “ghost work.”
Billions of people consume website content, search engine queries, tweets, posts, and mobile-app-enabled services every day. They assume that their purchases are made possible by the magic of technology alone. But, in reality, they are being served by an international staff, quietly laboring in the background.
These invisible workers have been forced to clean up after what economist Daron Acemoglu calls "so-so automation." He calls it "so-so" because these technologies are just productive enough to displace workers, but not productive enough to generate new and more creative forms of labor that benefit people.
Think about Amazon, whose fulfillment centers rely not only on hundreds of thousands of robots but also on hundreds of thousands of humans recognizing, picking, and packing boxes. Why? Because as MIT economist David Autor explains, "there is at present no cost-effective robotic facsimile for these human pickers. The job's steep requirements for flexibility, object recognition, physical dexterity and fine motor coordination are too formidable."
Not many people grow up aspiring to be a pick-and-packer. Ghost work is rarely stimulating, fulfilling, or steady work. So Valiant's concept of educability reframes the challenge: how do we move humans in the loop away from stultifying work and toward more creative work in the future?
While generative AI can often sound sophisticated and confident, tonally and linguistically, a closer look reveals the repetition, the circumlocution, and the vagueness embedded within. Not only will we have to constantly question and interrogate the outputs of AI, but we'll also have to lean into our educability: our ability to synthesize information, exposure, and experiences beyond what we learned in school.
This is how humans can develop their own superpowers: through our ability to range widely and pull from experience, intuition, and observation. By staying sensitive to, and literally sensing, the world around us, we can apply our knowledge in new and non-obvious ways.
David Epstein calls this phenomenon "range": our ability to span ideas and disciplines, and our dexterity in recognizing and connecting scenarios that typically wouldn't be viewed as having anything in common.
But we have to cultivate this kind of thinking if we want to have a leg up on the machines. This unpredictable application of knowledge doesn't come naturally; it takes work and practice. Dedre Gentner, a psychologist at Northwestern, has demonstrated through her research that most people can only reason with simple, surface-level analogies. We struggle to engage in what she calls "analogical thinking": reasoning through deeper structural connections that span domains and disciplines.
Even if we believe this kind of problem solving is what universities teach, Gentner's research has illuminated how hard it is for students in most majors to engage in this kind of wide-ranging, interdisciplinary thinking. Only one group excelled at the task: students from Northwestern's Integrated Science Program, which combines biology, chemistry, physics, and math into a single major.
Unfortunately, programs like the Integrated Science Program, which empower learners to grasp connections across disciplines, are rarities on campus, and sometimes even unwelcome. Interdisciplinarity often becomes more of a buzzword than a real practice in postsecondary education. Most college systems are structured around the separation and siloing of departments, not collaboration among them, with each department seeking to define a specific and narrow competence.
Look at the incentive structures within a university to understand its priorities. Rarely do promotion or tenure processes reward co-teaching or collaboration across courses and disciplines. So, somehow, learners must prepare to solve the thorny, wicked problems of the work world while engaging only in domain-specific learning during college.
For humans to be indispensable figures working alongside AI, we have to interrogate algorithms, ask hard questions and the right questions, and make unique connections across domains of knowledge. Where are the universities of the future that will cultivate these exceptional thinkers?
We may have access to human skills, but let’s be clear: We can also stink at human skills. We must practice and deepen our self-awareness, our communication, our empathy and sense of ethics, as well as our ability to solve problems with evidence and a bit of intuition.
Which schools will be the first movers to make problem-based learning the core of everything they do?
We are educable beings. We have a special power that exceeds the expectations of what even the most advanced AI can do today. But we need the right systems and structures in place to help us strengthen and flex our superpowers so that we can aspire to be humans in the loop.
Dr. Michelle R. Weise is the author of Long Life Learning: Preparing for Jobs that Don’t Even Exist Yet. To find out how you can work with Michelle, go to michelleweise.com.