When Skepticism Becomes a Suspect Behavior

I ran across a report this week claiming older Americans are driving political polarization and conspiracy thinking online. You’ve probably seen some version of it already. Still, the headlines all blur together after a while.

In the end, the takeaway is always the same. Older people share bad information. Boomers question too much. Or Generation X is the problem.

Apparently, remembering things is now a red flag.


The study assessed posts that users of different age groups interact with the most on X, Facebook, Instagram, and TikTok to help reach its conclusions.

While Gen. Z frequently engages with left-leaning political content and progressive social issues, it was the Millennials and Gen. Xers who were more slanted toward traditional partisan content, the study found.

“This suggests that the loudest political debates occur not among the youth themselves, but among their parents and older siblings, who use social media platforms as spaces for ideological expression and partisan discourse,” the study explained. – New York Post


The report looks at online behavior and sharing patterns. It tracks who passes along what, and how often. Naturally, it concludes that older users are more likely to circulate questionable political content. Anyone who’s spent time on Facebook knows how fast nonsense travels.

What bothered me wasn’t the observation. Instead, it was the tone. It lacked curiosity and sounded corrective. Like they had finally identified something and now wanted to deal with it.

That’s usually when I slow down and read more carefully. At that point, the framing starts to matter.

Let’s Hit Pause for a Minute First

Yes, older Americans share bad information online. It happens all the time. I’ve seen it. You’ve seen it. Chain emails didn’t disappear. They just learned how to post links.

Some of what gets shared is sloppy. Some of it falls apart the second you look at it. And some of it probably shouldn’t be shared at all.

None of that is hard to admit.

What is worth questioning, though, is how quickly this turns into a story about a defective generation, rather than a messy internet that rewards whatever gets attention.

This Isn’t About Gullibility. It’s About Memory.

Here’s the part that keeps getting skipped.

Older Americans aren’t just spreading conspiracies. A lot of the time, they’re reacting to stories that feel unfinished.

They lived through Watergate and watched Vietnam drag on and collapse. They were told Iraq had weapons that never materialized. And they saw institutions make confident claims, walk them back later, and move on without much explanation.

If you’ve lived through that enough times, you don’t stop questioning. You expect revisions.

That doesn’t mean every doubt is right. It does mean skepticism feels normal. It feels earned.

Younger Generations Aren’t Dumber. They’re Just Wired Differently.

This isn’t a smart-versus-stupid divide. It’s a habits divide.

Younger Americans grew up inside systems that tell them what’s authoritative.

I remember when the internet took off and blogs started catching on. People were told to position themselves as experts. Now TikTok hands everyone that title.

Platforms point them toward approved sources. Experts show up pre-vetted. Narratives close quickly, and anyone who keeps asking questions gets treated like a disruption.

That doesn’t make younger people naive. It makes them efficient. They’ve grown up in systems that value speed and certainty, not loose ends. Older Americans aren’t wired that way. They don’t expect clean conclusions because they’ve seen how often those fall apart. One group keeps asking how it ends. The other feels comfortable once the story sounds finished.

Where AI Fits Into All of This

But here’s the detail that should make people pause.

Much of this labeling comes from AI-driven analysis. Machines built to spot patterns. Systems trained to flag behavior that disrupts consensus. Models designed to identify what doesn’t fit.

AI doesn’t know history. It struggles to understand why someone might distrust an official explanation. Experience doesn’t register at all. The system just notices deviation.

What’s easy to miss is the second half of that: the labeling isn’t being done by people at all.

There’s no bad intent here. A system simply does what its designers built it to do.

But when machines start deciding which instincts count as acceptable and which ones trigger concern, we should pay attention.

The real issue isn’t that older Americans sometimes share bad links. The problem starts when people reframe skepticism itself as something dangerous.

At that point, truth no longer depends on accuracy. It depends on permission. Someone decides who gets to question and who must comply. Institutions then label those who resist as problems.

The people most likely to lose that permission are usually the ones who remember what happened the last time everyone was told to stop asking questions.

And that should make all of us a little uncomfortable.

Feature Image: Created in Canva Pro
