6 Comments

Loved the post :-). Poking the bear here deliberately -- what do you think of attempts like NPS to reduce survey load but still get at something "valuable"?

I have a Net Promoter Score (NPS) post in the hopper! It's mostly about the somewhat forgotten origin story of NPS, but it touches on what you're talking about.

The TL;DR for that post: I really like what NPS is trying to do in theory... but I think it's an outdated implementation in practice.
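
For anyone who hasn't seen the mechanics, here's a minimal sketch of the textbook NPS calculation (the 0-10 scale and the promoter/detractor cutoffs are the standard ones; the sample ratings are made up):

```python
# Textbook Net Promoter Score calculation.
# Respondents answer "How likely are you to recommend us?" on a 0-10 scale:
#   9-10 -> promoters, 7-8 -> passives, 0-6 -> detractors.
# NPS = % promoters - % detractors, so it ranges from -100 to +100.
def net_promoter_score(ratings):
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical example: 9 respondents
print(net_promoter_score([10, 9, 8, 7, 6, 3, 10, 9, 2]))  # ~11.1
```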

I'd also note that there's an entire academic field (psychometrics) that is dedicated to reducing survey load through techniques like factor analysis. And I definitely think those techniques are way underutilized outside of academia. On second thought, they're underutilized in academia as well.
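
To make that concrete, here's a minimal sketch of how factor analysis can flag redundant survey items so a long questionnaire can be trimmed. It uses scikit-learn's FactorAnalysis on simulated Likert-style responses (the data and the two "traits" are made up for illustration):

```python
# Sketch: use factor analysis to spot redundant survey items so a long
# questionnaire can be shortened without losing much information.
# The responses below are simulated; real data would be respondents x items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 500

# Two latent traits (say, "satisfaction" and "engagement")...
latent = rng.normal(size=(n_respondents, 2))

# ...each measured by three noisy items, for 6 items total.
true_loadings = np.array([
    [1.0, 0.0], [0.9, 0.1], [0.8, 0.0],   # items 1-3 mostly tap trait 1
    [0.0, 1.0], [0.1, 0.9], [0.0, 0.8],   # items 4-6 mostly tap trait 2
])
responses = latent @ true_loadings.T + rng.normal(scale=0.5, size=(n_respondents, 6))

# Fit a 2-factor model; items that load heavily on the same factor are
# candidates for consolidation into a shorter scale.
fa = FactorAnalysis(n_components=2, random_state=0).fit(responses)
print(np.round(fa.components_.T, 2))  # rows = items, columns = estimated factors
```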

Cool essay, and the easily ignored "Please take this short survey" is spot-on. But it seems like a commons problem of sorts. If the ocean of survey respondents is being drastically overfished, how do you get the fishermen to let up? Especially if you're in a situation where quantity of responses can sometimes make up for quality (I'm thinking of the Netflix/Pandora/Reddit thumbs up or thumbs down). The incentives seem pretty skewed...

So I totally agree with the overfishing metaphor. In fact, Thomas Leeper has a GREAT article making exactly this point, that the pool of survey respondents can be seen as a "tragedy of the commons":

"Where Have the Respondents Gone? Perhaps We Ate Them All"

https://academic.oup.com/poq/article/83/S1/280/5520778

Armed with that framework, I think you have to change individual incentives. Pandora and Netflix have done a decent job of making it worthwhile to rate stuff: rating immediately improves the personalization of your suggested content. Where the "please take this short survey" fails is that there's typically no obvious link between taking the survey and improving the product or service you're getting.

-----

Where I think I disagree is on the quantity making up for quality. That's only true if you're making some very, very strong assumptions that your sample isn't skewed or influenced. So let's take your Reddit example.

Sinan Aral has some cool work on the biases there, showing that a single randomly assigned up or down vote makes a big difference: https://science.sciencemag.org/content/341/6146/647.abstract

FWIW, I'll actually be writing on this very topic in some future posts. So stay tuned.

Great explanation of how data collection has changed over the years! "Survey fatigue" is something I feel now, with the union, district, and site administration all asking us to take surveys all the time.

Thanks, Celia! FWIW, I'll be writing a bit on strategies that organizations like schools can easily implement to reduce survey fatigue. So stay tuned!
