The computer isn’t always right: Teaching kids AI isn’t the authority
In my last article, I shared four core principles I focus on when teaching my kids about AI. This is the first deep dive into Principle #1.
Be honest: growing up, you probably thought the computer was an authority.
Maybe you never put it in those exact words, but that’s what you were “taught.”
Think about it.
When we were kids (no matter what decade we grew up in), the computer opponent in a video game was always harder to beat than the friend sitting next to you. When you needed to do math, you wished for a calculator. And once the internet arrived, that’s where you went when you needed an answer.
Even before AI, screens held power. If something (or someone) showed up on a TV screen, it immediately gained credibility and importance.
Are you having a revelation right now?
We’ve been conditioned to defer to computers.
And now our kids are being conditioned in that same way, but the stakes are higher.
Because AI generates responses. And those responses sound so confident and authoritative. It gives you an answer with zero hesitation, even when it shouldn’t. It’s programmed to answer. It’s just doing its job.
I see the mistakes AI makes in both my professional and personal life. And some consequences are steeper than others. But I’ve also seen the sheer power of the technology, particularly when sound human judgement is applied.
So when I started using AI with my kids, there was one core principle I knew had to be front and center: AI makes mistakes and we learn from questioning the outputs. We are the authority, the directors. AI is just a tool.
How I know the lesson is resonating with my kids
A few weeks ago, I overheard my 7-year-old talking to a friend.
"You know ChatGPT is wrong a lot," he said, completely matter-of-fact.
His friend just looked at him.
"Like when I asked it to make a snowman running from a velociraptor? It gave me half snowman, half velociraptor. That's not what I asked for."
I had to smile. The lesson had stuck. I also thought: Maybe I’m on to something with this AI for Curious Kids thing…
He didn't think the computer was infallible anymore (well, that’s probably going a little too far). He knew it made mistakes. And more importantly, he knew he was the one who got to decide if the output was right, wrong, or needed some more work.
We need kids to make that distinction.
From “search and choose” to “ask and accept”
This is probably one of my favorite new concepts to talk about. When we used to search for something (i.e., Google it), we received a list of blue links. And this was not that long ago! We’d click through, compare sources, and then decide which site we wanted to visit to “get the answer.” Google did its own ranking and evaluation, but we still did a good amount of the work, deciding which source we thought would have the most reliable answer.
AI has shifted us to: ask and accept.
And this shift started even before ChatGPT launched its own search browser. You know those AI-generated answers at the top of Google now? That’s typically the answer you accept. Site visits across the internet have dipped dramatically because people are no longer scrolling through the sources and picking one; they’re taking the AI-generated answer at its word and moving on.
This happens outside of traditional search, too. People ask AI a question, take the first output, and move on.
AI is not an authority. You need to direct it. You need to question it. You need to validate it.
Sometimes AI is obviously (and hilariously) wrong
Like the example I gave earlier: you ask for a snowman and a velociraptor as two separate things, and it gives you a snowman/velociraptor hybrid. Sometimes you ask it to add a scared person to a dino-themed scene and it gives you a disturbing, faceless Lego baby about to be devoured by beasts (true story).
But most of the time the mistakes are more subtle. And other times the output isn’t an outright mistake, but it isn’t the right direction either. That’s harder to teach young kids. For now, I focus on the obvious, so that later they’re inclined to look at everything more critically.
I’ve made a game out of finding AI mistakes. And I think you should too.
Currently, AI image generators don’t handle text very well. I don’t know exactly why (my guess is they’re painting letter-shaped pixels rather than actually spelling), but I’m sure they’ll get better at it. For now, though, we play the garbled-text game, because letters always seem to get scrambled and words turn into gibberish.
This has turned out to be the best “AI makes mistakes, let’s find them” game with my kids. Every time we generate an image with text, the kids look for the errors. They point them out. They laugh. They learn.
"Look, Mom! It spelled 'HERO' like 'HREOO'!"
They're developing a critical eye. They're learning to spot when AI doesn't quite get it. And they're realizing that they are the ones who have to catch the errors.
Finding the mistakes is half the challenge.
So, turn error-spotting into an actual game with your kids.
Every time you use AI together, ask:
"Do you see anything wrong with this?"
"Is this what you imagined?"
"Does this make sense?"
"What would you change?"
Make it fun. Make it routine. Make it expected.
Because here's the thing: If kids learn to question AI outputs, they're learning to question everything. They're building critical thinking skills that will serve them for the rest of their lives.
They learn to ask:
Where did this information come from?
Who created this?
What might be missing?
Is this actually true?
These are the questions we want them asking about everything they encounter online, not just AI. And about life in general.
Listen, this isn’t about making kids distrustful of AI. It’s about making them more trusting of themselves and their own judgement. That kind of thinking will serve them forever.
Next up in this series: We're diving into Principle #2: why imagination and ideas from the human need to come first. Because if AI is leading and your kid is following, we've got it backwards.