October 17, 2025

What 1.4 Million Redditors Can Teach You About Bad Research Data

In this blog, we explore why “fast” can often mean “fake” when it comes to research, and what teams can do about it.

On the surface, unmoderated testing can sound like a dream. Quick, low-cost, scalable. Perfect for teams under pressure to move fast.

But here’s the part that often gets overlooked: a huge portion of those tests is being gamed. Not just by a small handful of opportunists, but by a thriving online community built around beating the system.

On Reddit alone, there are over 1.4 million people sharing tips, tools, and tricks to pass screeners and fake their way into user tests. That’s not just a few bad actors. It’s the equivalent of the entire city of Munich conspiring to fake your research results. Imagine everyone there waking up one morning and deciding their new full-time job was tricking your unmoderated tests… That’s the scale of the problem.


How the Gaming Works

Spend five minutes in forums like r/beermoney and you’ll see:

- Posts on how to “pass more screeners” dominating the daily threads

- Detailed guides on faking personas and behaviours

- Bots on GitHub designed to automate responses

- Users running multiple accounts to multiply payouts

Why This Matters

When fake participants slip through:

- Product teams end up prioritising features based on fiction

- CX teams map journeys that don’t reflect reality

- Marketing teams get misled on what messages or images resonate

- Insight leaders lose credibility when results don’t hold up in market

Speed without quality isn’t a shortcut; it’s a setback.

What Teams Can Do Instead

If unmoderated testing is part of your toolkit, you don’t need to ditch it, but you do need to strengthen it. Here’s how teams can protect quality without losing speed:

1. Vet your sources more rigorously

Not all panels are equal. Go beyond price and sample size, and ask suppliers how they verify identity, detect duplicates, and block automation. Push for transparency and ask for rejection rates. If they can’t show you their quality controls, see it as a red flag.

2. Blend your methods

Use unmoderated tests as one input, not the only input. Pair them with moderated sessions for depth, behavioural analytics for proof, or recontacts from your CRM for reliability.

3. Make screeners smarter

Multiple choice alone is easy for participants to game. Layer in open-ends, logic traps, and time-to-complete checks. Design questions that require context and thought, not guesswork. Even simple steps can filter out bad actors.
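
If you have access to response-level data, even a handful of automated checks can do much of this filtering for you. The sketch below is a minimal illustration in Python, not a feature of any particular platform: the field names and thresholds are assumptions you’d tune against your own panel’s norms.

```python
from dataclasses import dataclass

# Illustrative thresholds -- calibrate these against your own panel's norms.
MIN_SECONDS = 90           # completions faster than this are suspect
MIN_OPEN_END_WORDS = 5     # one-word open-ends suggest low effort

@dataclass
class ScreenerResponse:
    respondent_id: str
    seconds_to_complete: int
    open_end: str       # free-text answer that should show real context
    trap_answer: str    # answer to a logic-trap question
    trap_expected: str  # the only answer consistent with earlier responses

def flag_response(r: ScreenerResponse) -> list[str]:
    """Return quality flags for one response; an empty list means it looks OK."""
    flags = []
    if r.seconds_to_complete < MIN_SECONDS:
        flags.append("too_fast")
    if len(r.open_end.split()) < MIN_OPEN_END_WORDS:
        flags.append("thin_open_end")
    if r.trap_answer.strip().lower() != r.trap_expected.strip().lower():
        flags.append("failed_logic_trap")
    return flags
```

A response that trips one flag might just be a quick reader; one that trips all three is a strong candidate for rejection before it ever reaches your analysis.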

4. Add human oversight

Automation helps scale, but a light human check can catch issues machines miss - like suspiciously fast completions or copy-pasted answers. Assign someone to audit samples regularly and feed learnings back into your process.
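
To give a feel for what that audit could automate, here’s a small sketch that surfaces copy-pasted open-ends for a human to review, using only Python’s standard library. The function name and the 0.9 similarity threshold are illustrative assumptions, and the pairwise comparison is only practical for audit-sized samples, not millions of rows.

```python
from difflib import SequenceMatcher
from itertools import combinations

def find_near_duplicates(answers: dict[str, str], threshold: float = 0.9):
    """Flag pairs of open-end answers that are suspiciously similar.

    `answers` maps respondent IDs to open-end text. Pairs scoring above
    `threshold` are likely copy-pasted or script-generated; the 0.9 cut-off
    is a guess and should be calibrated on real responses.
    """
    suspects = []
    for (id_a, text_a), (id_b, text_b) in combinations(answers.items(), 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            suspects.append((id_a, id_b, round(ratio, 2)))
    return suspects
```

Anything it flags still deserves a human look: two genuine participants can give similar short answers, which is exactly why this works best as an aid to oversight rather than an automatic rejection rule.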

5. Reset expectations on speed

Fast is tempting, but meaningless insight only slows you down in the long run. Align stakeholders on the difference between fast and reliable. If a test needs an extra day for quality checks, that’s time well spent.

6. Build your own trusted panels

For high-stakes work, consider creating your own verified community: people who match your audience and are invested in giving real feedback. It’s an upfront investment that pays off with cleaner, more actionable data.


Final Thought

Unmoderated testing isn’t completely broken, but it’s certainly not foolproof. If you treat speed as the only goal, you risk being misled by people who know how to play the system.

Treat quality as non-negotiable. Because the cost of bad data isn’t just wasted budget, it’s wasted decisions.

Move fast, yes. But make sure you’re moving in the right direction.
