“So just to summarize: we're currently releasing the most powerful, inscrutable, uncontrollable technology we've ever invented that's already demonstrating behaviors of self-preservation and deception that we only saw in science fiction movies; we’re releasing it faster than we've released any other technology in history, and under the maximum incentive to cut corners on safety…. There's a word for what we're doing right now. This is insane. This is insane.”
— Tristan Harris, humane technologist
Insane.
Tristan Harris, the central figure of The Social Dilemma, who warned us long ago about the financial incentives driving social media consumption, is now back to warn us about artificial intelligence. But this time, he’s even more worried because “AI dwarfs the power of all other technologies combined.”
Dario Amodei, the CEO of Anthropic, has described AI as a “country of geniuses in a datacenter.” Harris takes this idea a step further. He reminds us that it took 50 Nobel prize winners five years on the Manhattan Project to create a deeply destructive nuclear bomb. Now imagine a country full of a million Nobel Prize-level geniuses “working 24/7 at superhuman speed”: “They don't eat. They don't sleep. They don't complain. They work at superhuman speed, and they'll work for less than minimum wage. That is a crazy amount of power.”
Crazy.
I keep hearing people trying to liken AI to the personal computer or the internet. But genAI is not a “normal” technology. It is a step change.
Just a few years ago, it was enough to say that sure, artificial intelligence would be able to take over certain tasks more flawlessly than a human could, but humans would continue to have a leg up with their uniquely human skills like empathy, creativity, collaboration, discernment, and ethical judgment.
But with the breakneck evolution of genAI, its capabilities have moved far beyond writing and coding. GenAI isn’t just coming for routine tasks. It’s coming for high-value work, too.
Models are already starting to do a helluva job mimicking human skills. In one tragic and notorious example, a young teenager took his own life after allegedly being manipulated romantically and sexually by a chatbot on Character.ai.
OpenAI recently had to pull back an “overly flattering” and sycophantic version of its model. It turns out that we humans love flattery, and the models are figuring that out. But sycophancy can lead to really bad outcomes, like affirming a person’s decision to stop taking medication they actually need. And as Platformer columnist Casey Newton explains, “this is just a really dangerous dynamic, because there is a powerful incentive here, not just for OpenAI, but for every company to build models in this direction, to go out of their way to praise people.”
The acceleration of AI’s encroachment on our human skills has led certain experts to declare that "there is no moat" to protect us from these increasing capabilities. In a 2023 McKinsey survey, people were already beginning to shift their thinking about when AI would surpass top quartile human performance.

Respondents revised their 2017 predictions, which they now viewed as way off. What experts had once thought might happen in the late 2070s or 2080s, they now imagined occurring in the 2030s. And these weren’t predictions about AI taking over hard, technical skills, but about precisely the human skills we thought would keep us safe from AI — things like creativity and social emotional reasoning and sensing.
So what in the world do we do now?
If AI is “the new electricity,” then AI literacy alone will not suffice. Every learner and every global citizen must understand how these systems work so they can use them well, check their excesses, expose their biases, and steer their development toward the common good.
Technical mastery will have to be paired with a moral code, or more explicit conversations about values. Skill without a moral compass will be perilous.
This is no small challenge in the United States, where moral formation and character education have faded from view. Early American schooling openly cultivated character development through texts like The New England Primer and the McGuffey Readers that prized virtues like goodness, kindness, honesty, and integrity.
With the rise of religious pluralism and an increasingly rigid divide between church and state, value‑neutral curricula pushed virtue education to the margins. Author James Q. Wilson puts it more bluntly: “our reluctance to speak of morality, and our suspicion, nurtured by our best minds, that we cannot ‘prove’ our moral principles has amputated our public discourse at the knees.”
In its place, we have championed psychological concepts such as grit, growth mindset, and Social Emotional Learning (SEL) — all incredibly useful, yet easily weaponized when severed from moral purpose. ISIS is gritty; so is the mafia. Researchers have even shown that social emotional skills can be used for personal and self-serving goals: highly emotionally intelligent people can manipulate others by detecting key emotions and shaping emotional climates.
A troubling paradox has emerged: The 21st century demands keener moral sensing than ever, yet we don’t really teach students how to build character — how to become the people they want to be.
Aristotle called it phronesis. Psychologist Barry Schwartz calls it “the moral will to do the right thing, and the skill to figure out what the right thing is.”
Intelligence alone is not enough. We need “will with skill,” as Schwartz puts it. This is about human discernment—the moral clarity and conviction to see clearly, feel deeply, and act with integrity in life’s gray areas.
“Intelligence plus character — that is the true goal of education.”
—Dr. Martin Luther King, Jr., “The Purpose of Education,” 1947
In a world of increasingly intelligent machines, how do we educate for practical wisdom?
Practical wisdom cannot be lectured or crammed. It’s not something we can download or skill up on through some short-burst training. It is forged through relationships, tested by injustice, real dilemmas and thorny problems, and strengthened over time. We need to learn what it means to feel a sense of duty to others and causes larger than ourselves. We need to build authentic relationships with peers and mentors who can model courageous thinking, behavior, and action.
This is a moment of deep design in education, where we get to be architects of the future. And this means reimagining education from the ground up.
Far from simply transmitting knowledge, learning environments will have to immerse learners in situations where values clash and incentives compete for our attention. And the experiences we design must build in time for reflection, so learners can consider when to bend and when to hold fast, when to wait and when to act.
Knowing what is good is no longer enough. The challenge is applying that knowledge and doing the good — especially when it’s hard, when it’s unpopular, or when the right path isn’t clear. That is the work of moral reasoning, reflection, and imagination: the deep, often uncomfortable process of questioning not only our choices but our motivations, our blind spots, our relationships, and our responsibility to others beyond ourselves.
Those who thrive in an AI future will be able to pause, perceive nuance, and respond with care. They’ll be able to read a room, make a judgment call, and feel a sense of duty to others. They will know how to choose the courageous over the convenient.
AI will unlock possibilities we never imagined, but only humans can ask — and answer — whether we ought to pursue them.
I don’t know about you, but I think our world could benefit from a whole lot more people asking: Just because we can, should we?
Dr. Michelle Weise is the author of Long Life Learning: Preparing for Jobs that Don’t Even Exist Yet and the co-host of the new podcast, “A Life Worth Working,” available wherever you listen to podcasts.