When My 8-Year-Old Proved AI Wrong (And Why That Matters More Than Ever)
Just the other day my son modeled a behavior that will be increasingly critical in a world with AI. He challenged an answer instead of just accepting it.
I had just started sipping my morning coffee when he brought me the latest gift from his Harry Potter LEGO Advent Calendar. He's just getting into the series, so he doesn't always know what each item from the calendar actually is. He'd opened that day's door to reveal a tiny LEGO figure in a festive sweater, and he had a question: "Who is this supposed to be?"
I did what most of us do now. I pulled out my phone, snapped a photo, and asked AI.
The response came back immediately, with that characteristic AI confidence that can feel so authoritative: "This is Hermione Granger. The cat on her sweater is her pet Crookshanks. The calendar features all characters in Christmas sweaters instead of their usual Hogwarts robes."
Detailed. Specific. Completely certain.
And completely wrong.
"That Is Definitely Not Right"
Jackson didn't even hesitate. "That is definitely not right. It is not Hermione."
Not "I don't think so." Not "Are you sure?" He was certain. So certain that he walked me through his thought process, detail by detail.
"Look at the hair," he said, holding up the tiny figure. "Hermione's hair is different." He pointed to the sweater color. "And this animal on the sweater? That's not a cat." (He told me what it actually was, but in the moment, I was so focused on what was happening that the detail slipped away—and honestly, that's part of the story too.)
He didn't just disagree with AI. He reasoned. He used evidence. He trusted what he knew about a fictional universe he was still learning.
His argument was sound. But I don't know anything about Harry Potter, so we had to investigate. And we had to do so outside of AI.
The Investigation
We Googled and found the complete list of characters in the advent calendar. Scrolled through item by item. Found the figure.
AI was wrong. Jackson was right. And here's what struck me most: he never doubted himself. He knew he was right the whole time, even when I wasn't so sure.
The Muscle We Need to Build and Flex
This moment keeps replaying in my mind because it crystallized something I've been thinking about as I help families navigate AI through my work with AI for Curious Kids. This wasn't a designed activity. I didn't set out to create a "critical thinking moment." But it perfectly demonstrated what I call the Idea → AI → Play™ framework in action:
Idea: Jackson had genuine curiosity about something that mattered to him—his advent calendar character.
AI: We used AI as one tool in the investigation, not the final authority.
Play: He didn't stop at AI's answer. He kept thinking, testing, and ultimately proving his case.
But more than that, this moment revealed what I believe is the critical skill in the age of AI—for kids and adults alike. It's not knowing how to prompt AI. It's not even knowing how to use it effectively. It's knowing when to challenge it.
What Makes This Skill So Hard to Develop
Here's the thing that worries me: AI is really convincing. The confidence. The specificity. The speed. It activates something in our brains that wants to defer to authority, especially technological authority. And as adults, we want an answer quickly so we can move on to the next thing.
Jackson doesn't have years of that conditioning yet. He still trusts his own observations. He hasn't learned to doubt his gut in favor of what a screen tells him. And he wasn't in a rush.
How to Build This Muscle in Your Kids (and Yourself)
So how do we raise kids who maintain Jackson's healthy skepticism while still benefiting from AI's capabilities? Here's what I'm learning:
1. Model questioning AI outputs yourself
When you use AI in front of your kids, narrate your thinking. "This is interesting, but let me check that against what I know." Your kids are watching how you treat AI.
2. Ask "What makes you think that?" when kids challenge AI
When Jackson said "That's definitely not right," I could have shut down the conversation by trusting the AI. Instead, I asked him to explain his reasoning. That simple question validated his critical thinking and gave him practice articulating his thought process.
3. Celebrate when kids trust their own knowledge
Make it a proud moment when your kids catch errors or trust their instincts over technology. Jackson saw me light up when we confirmed he was right. That positive reinforcement matters.
4. Create low-stakes opportunities to test AI
LEGO advent calendars. Sports trivia. Recipe modifications. These everyday moments are perfect practice grounds. The stakes are low enough that being wrong doesn't matter, but the pattern-building is invaluable.
The Workplace Connection (Because It Matters Here Too)
I've been thinking about how this shows up in professional contexts. In meetings where AI-generated analysis drives decisions. In strategic planning that relies too heavily on predictive models. In teams that defer to algorithms over experience.
The organizations that will thrive aren't the ones that adopt AI fastest—they're the ones that maintain that Jackson-like ability to say "That is definitely not right" when the output doesn't align with what they know to be true. The balance between leveraging AI capabilities and maintaining human judgment isn't just a nice-to-have. It's becoming a competitive advantage.
But honestly? That's secondary to why this matters for our kids.
What I Hope Jackson Carries Forward
I hope Jackson carries forward that confidence in his own observations. That willingness to trust what he knows, even when technology tells him otherwise. That instinct to reason through disagreement rather than just accepting the first answer.
Because here's what I believe: AI is going to get better, faster, more convincing. The technology will improve. But that doesn't mean it will always be right. And our kids need to grow up knowing they're allowed—expected, even—to challenge it.
They need to know their thinking matters. Their reasoning counts. Their gut has value.
AI should be a tool in their toolkit, not a replacement for their own good judgment.
My (just turned!) eight-year-old reminded me, once more, that the most important work we're doing isn't teaching kids how to use AI.
It's teaching them when not to trust it.