WOMEN'S WRITE

It Was Just A Meme, Until It Wasn’t: How Digital Bystanders Enable Harmful Beliefs

21/08/2025 11:31 AM
Opinions on topical issues from thought leaders, columnists and editors.

(Because sometimes the most dangerous thing in the feed isn’t the post, it’s the silence that follows.)

By Mediha Mahmood

Most of us know someone in a WhatsApp or Telegram group who regularly shares content that’s – let’s be honest – a little questionable. Maybe it’s your uncle forwarding a conspiracy video, a colleague dropping a misogynistic joke, or a friend quietly scrolling through racist memes “just for the laughs”.

And when you ask why they stay in those groups, the answer is almost always the same: “Oh, I’m not like them – I’m just curious.”

But that curiosity, multiplied across thousands of users who never speak up, creates exactly the kind of silence in which harmful content thrives. Everyone assumes they’re just there to observe. And so, no one challenges what’s shared. No one says, “That’s not okay.” The silence grows louder, and gradually, it becomes the norm.

Radicalisation doesn’t always begin with manifestos. More often, it starts with memes – casual, low-stakes content that slips under the radar, spreading unchecked. It isn’t just driven by loud extremists shouting into the void, but by the quiet majority who see it, share it, or scroll past it without question.

Silence is the most dangerous element in an online space

In my work at the intersection of content regulation and media ethics, I’ve come to believe that the most dangerous element in an online space is not always the person posting hateful content. It’s the silence that surrounds it.

That silence isn’t passive – it’s data. To the algorithms curating our online experiences, silence on harmful content suggests engagement or approval. No reports, no objections, no friction? It must be fine. And to people within those digital communities, silence can feel like social permission. If no one’s saying anything, maybe it isn’t that bad. Maybe it’s even true.

This is how hate gets normalised – not with a bang, but with a shrug. It is the digital version of the bystander effect: the more people who witness harm, the less likely any one of them is to intervene.

Online, that passivity is multiplied and masked by anonymity. Research shows that in extremist chat groups, over 80% of users never post anything themselves. They don’t initiate hate, but they’re there – watching, clicking, sharing. And in doing so, they help keep those ecosystems alive.

In Malaysia, research by SEARCCT has found that radical content spreads fastest not on public platforms, but in private or semi-private spaces – unmoderated Telegram groups, closed chat rooms, and fringe forums. These aren’t necessarily echo chambers of hardened extremists. They’re often filled with regular people: friends, colleagues and acquaintances who may disagree with what they see, but stay silent nonetheless.

It’s no surprise then that many countries have turned to deplatforming as a response. Takedowns, content moderation, algorithm tweaks – these are essential tools in any regulatory arsenal. But they’re not silver bullets. In fact, without thoughtful communication, they can backfire.

Remove extremist content too swiftly, and its creators are recast as martyrs. Ban a channel without explanation, and you leave behind a vacuum that quickly fills with conspiracy theories. Kick bad actors off a platform, and they don’t disappear – they just migrate to more opaque, harder-to-monitor spaces.

Simply removing content doesn’t remove its influence.

The necessity for counterspeech

This is where counterspeech becomes not just useful, but necessary. Counterspeech means responding to harmful content with facts, empathy, questions, or alternative narratives. It works best when it’s fast, authentic, and comes from peers, not just authorities. The idea is not to out-shout hate, but to interrupt it – early, calmly, and effectively.

And there’s data to back it up.

In Sri Lanka, a PeaceTech Lab pilot found that engagement with hate content dropped by 46% when counterspeech was introduced early. In Germany, civic volunteers who replied to hateful YouTube comments within the first hour helped reduce hate-driven threads by 17%.

These aren’t massive interventions. They’re small, consistent disruptions, and they matter.

At the Content Forum, we’re building on this idea in many of our initiatives – from our suicide content guidelines to our training sessions with influencers, and in our ongoing efforts to educate children and parents about digital friction and media literacy.

The goal is not just to clean up digital spaces, but to amplify ethical voices – to give more people the tools and confidence to speak up before harm escalates.

Still, the question remains: why don’t more people do it?

In every training I run, I hear the same three reasons:

“I don’t want to be attacked.”

“It’s not my place.”

“It won’t make a difference.”

But the truth is, counterspeech doesn’t require you to win the argument. You don’t need to craft the perfect reply or go viral with your response. You just need to say something. Even a simple, “Are we sure this is okay?” is enough to interrupt the flow. It breaks the momentum. It breaches the echo chamber, and often, that’s all it takes.

Imagine if even a fraction of the millions who scroll past problematic content every day chose to pause – to respond, to report, to redirect. What if the so-called “silent middle” decided to stop being silent?

Silence online is influence

Whether we realise it or not, silence online isn’t just abstention – it’s influence.

It tells the algorithm: this is fine.

It tells the community: no one minds.

It tells the extremist: no one will stop you.

But that doesn’t have to be the message we keep sending. If harmful beliefs thrive in silence, then perhaps disruption begins not with noise or outrage, but with clarity, courage, and consistency.

We don’t need everyone to be loud – social media is loud enough as it is! We just need more people to stop being quiet.

Because sometimes, the most dangerous thing in the room isn’t the person shouting.
It’s everyone else saying nothing.

-- BERNAMA

Mediha Mahmood is a content policy strategist and CEO of the Content Forum, Malaysia’s industry forum and self-regulatory organisation for content on electronic media. She focuses on ethical content ecosystems, online safety, and responsible content governance.

(The views expressed in this article are those of the author(s) and do not reflect the official policy or position of BERNAMA)