Simple Tools To Make Better Surveys
The way we design surveys is broken. Here are a few simple tools to fix it.
We can all agree that long, meandering surveys are terrible. When I reach question #32 vaguely inquiring about my satisfaction with working at Company X, my eyes have already glazed over. I spend (maybe?) 5 seconds thinking about my answer. Depleted, I finish the survey and immediately make a beeline for the coffee machine. And I’m someone who loves surveys.
This is not an esoteric problem; most of us are victims of awful surveys. So why does this still happen? It comes down to flaws in how we make surveys. It doesn’t have to be this way: simple tweaks to the survey design process can dramatically improve quality. This post will discuss why surveys go wrong and then walk through a few easy-to-use tools that curtail these bad tendencies.
Define Your Goals
The first step for any data collection exercise is to define the objectives clearly. Otherwise, you’re lost before you’ve even started.
I’m going to use an employee survey as a running example in this post because it’s a nearly universal, and almost universally unpleasant, type of survey.1 The first question to ask is: why do a survey at all? In this example, let’s say the organization is having an issue retaining talent. There’s a problem with employee turnover and the organization’s weighing different options to try to address it. They set a goal of reducing the turnover of high-performers by 20% within a year.
Using the SMART framework, this is a well-defined goal:
Specific: The goal is to prevent high-performing employees from leaving. It’s clear, assuming the organization can identify high-performers. For the sake of this exercise, let’s assume it can.
Measurable: Measuring intent to leave is hard; you usually can’t directly ask people if they’re planning on leaving. However, you can elicit attitudes and behaviors that may predispose people to look for jobs elsewhere.
Actionable: There are a plethora of actions you could take here. Increasing salary, changing HR policies, etc. Given the results of the project, you can actually change things.
Relevant: Keeping high-performing employees is important for the success of most organizations.
Time-bound: The data collection has to be done quickly in order to implement policies that can achieve a 20% reduction in high-performer turnover within the year.
Survey Scope Creep
Survey owners - the people who act as project managers for a survey - should start with clear goals before drafting questions. Blindly using existing survey templates is a common misstep. For example, even if other companies run employee surveys, it doesn’t mean that copying an existing one wholesale is the answer. Other organizations may have different goals. Perhaps employee morale is a bigger issue than turnover? Stealing individual questions (or even sections) from existing surveys is great for speeding up the development process. But it’s dangerous to assume that an existing survey has the exact same goals as yours.
Where surveys fall off the rails is during the survey editing process. In project management, this is called “scope creep.” Projects begin with one set of requirements but often go far beyond what is originally set (or “scoped”) out.2 This doesn’t happen all at once - it’s typically death by 1,000 cuts. As the name implies, the “creep” happens slowly. It starts as one simple request, which is only a mild inconvenience. Then there’s another. And another. And when you step back and examine your project, it’s grown like a monster in a bad horror film.3
The villain is often office politics. A diverse set of stakeholders is asked for input on the survey, and each one has pet issues they want to make sure get sufficient attention. For the survey owner, it is much easier to add another question than to say no to powerful stakeholders. The result is a survey that is long and kludged together instead of one that is focused and elegant.4
To be clear, a question added through scope creep isn’t inherently a bad question when considered in isolation. Demographic questions are some of the most frequent useless add-ons: survey owners will ask pro forma questions about age or income level even when they aren’t relevant to the task at hand. Asking just one more question may seem harmless but trust me, it’s not. Every question is a chance for someone to quit your survey. And even if a survey owner caps the number of questions, there’s an opportunity cost to every question asked; each one takes the place of a question that could have helped achieve the goals of the survey.
It’s easy for survey owners, especially if they’re junior staff, to get lost in internal politics. They lose sight of the fact that survey takers don’t care about battles between office fiefdoms. Surveys are always a burden. Survey takers want to minimize that burden, get to the end, and move on with their day. I breathe a sigh of relief at the “Thank You” pages at the end of the surveys I take.
Survey owners lose sight of the original goals too. If you have run surveys before (or managed someone who has), a really useful exercise is to go back through a survey and see which questions ended up in the final analysis and write-up. Do you see questions that were asked but didn’t make it into the analysis? Why was that? If you were to run the survey again, would you include them?
Potential Survey Outcomes
Below is a framework for thinking about survey design through the lens of counterfactuals: what could have happened.5 It’s useful for evaluating surveys after they’ve been done, but I find that thinking about what makes a successful survey before it is designed forces you to confront scope creep from the beginning.
The two columns represent whether you actually asked a question or left it out; the rows represent whether the question can be mapped back to a goal of the survey. Every question - asked or unasked - lands in one of four boxes: asked and tied to a goal, asked but tied to nothing, left out even though it would have served a goal, or left out with good reason.
The survey owner’s role is to shepherd the survey development process to optimize two things:
1) Maximize the number of questions that are useful
2) Minimize the number of omitted questions that WOULD HAVE BEEN useful
The first aim is straightforward. The second aim is harder to conceptualize. I like to call it “question regret.” It’s when the survey’s complete, the results are presented in a conference room (or on Zoom these days) and someone asks “wait, did we ask about XX?” And there’s an awkward pause. Eventually, someone else breaks the silence with a meek “No, we didn’t.”
Question regret happens for two main reasons: a question was considered and didn’t make the cut, or a question only became obvious in hindsight after seeing the results. The latter is lamentable. The former is preventable.
A good survey process keeps the questions that yield useful data and cuts the ones that would have yielded useless data. This matrix makes those goals clear. It can also be used after a survey to point to areas where survey owners can improve their process.
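To make the bookkeeping concrete, here’s a minimal sketch of that audit - plain Python with made-up question labels, not part of the template - that tallies where each question lands in the matrix:

```python
from collections import Counter

# Hypothetical audit of a finished survey: each entry is
# (question label, was it asked?, does it map back to a survey goal?).
questions = [
    ("Satisfaction with salary", True, True),    # asked and tied to a goal
    ("Age bracket", True, False),                # asked, but tied to nothing
    ("Manager support", False, True),            # omitted: question regret
    ("Favorite office snack", False, False),     # omitted, and rightly so
]

def quadrant(asked: bool, maps_to_goal: bool) -> str:
    if asked and maps_to_goal:
        return "useful data"
    if asked and not maps_to_goal:
        return "wasted question"
    if not asked and maps_to_goal:
        return "question regret"
    return "correctly omitted"

tally = Counter(quadrant(asked, useful) for _, asked, useful in questions)
print(tally)  # aim: maximize "useful data", minimize "question regret"
```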
Fighting Survey Scope Creep and Question Regret
The original agreed-upon goals are a great weapon against survey scope creep. This isn’t an original idea; consultants, lawyers, and anyone else who works on contracted terms are familiar with wielding contracts like cudgels to limit what’s asked of them. Survey owners need similar tools to do the same.
A major source of question regret is imbalance and overlap in the survey. Perhaps there are too many questions about goal #1 and not enough about goal #2. To reduce question regret, I find that tracking and examining question metadata is essential. What I mean by question metadata is information on question types as well as what goals the questions help answer. Creating ways of aggregating metadata - particularly for longer and more complex surveys - can make redundancy and lack of coverage more apparent.
An effective way to force alignment is to build surveys in spreadsheets, not word processors or survey software. Spreadsheets are great because they let you attach metadata to questions in columns (I discuss this in detail in the example below). You can also set up convenient functions to produce aggregate counts and statistics on your survey.
Spreadsheets double as a communication tool. Most stakeholders are comfortable with spreadsheets but unfamiliar with survey software. Word processors are familiar too, and commonly used for survey drafts, but reorganizing questions and tracking metadata in them is a pain. Spreadsheets are far from perfect, but they are flexible and accessible.
Example: An Employee Survey
Let’s return to our employee survey example and walk through it. As a reminder, the goal is to reduce the turnover of high-performers by 20% within a year.
The organization brainstorms a bunch of reasons why high performers may be leaving. They might come up with a model that looks something like this:
On the left of the chart are hypothetical factors that could lead to turnover. The highlighted ones relate to policies the organization thinks it can do something about, such as increasing salaries. Peer disrespect and recruiters poaching staff may be important drivers of turnover, but they can be deprioritized from the survey because they are largely outside the organization’s control; it’s hard to stop external recruiters from contacting employees.
Now there’s a relatively narrow focus with a few hypotheses. The next step is to start drafting questions. Each question should have at least the following information attached:
Goal: The goal this question helps achieve
Hypothesis: The specific hypothesis tested or question answered
Question Text: The text of the question
Scale: What the answer options are (if any)
Reading Level: The readability of the question text. This is audience-dependent, but around Grade 8 is a decent target.
More complex surveys may require additional columns for metadata. For example, you may want to limit the number of free response questions in your survey because they take a lot of time and can reduce completion rates. If a survey is distributed through text messaging, it’s helpful to include a character count column to make sure the questions will fit on a cellphone screen.
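To make this concrete, here’s what a single row of such a spreadsheet might hold, sketched as a Python dictionary; the question wording, values, and extra columns are hypothetical, not taken from the actual template:

```python
# One hypothetical row of a survey-building spreadsheet.
question_row = {
    "Goal": "Reduce turnover of high-performers by 20% within a year",
    "Hypothesis": "Below-market salaries push high-performers to leave",
    "Question Text": "How fairly do you feel you are paid for the work you do?",
    "Scale": "1 (Very unfairly) to 5 (Very fairly)",
    "Reading Level": 3.0,       # rough Flesch-Kincaid grade, illustrative
    "Question Type": "Likert",  # optional extra metadata column
    "Character Count": 56,      # handy if the survey goes out by text message
}
```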
What’s great about metadata is that it lets you verify your survey is doing what you intend it to. On a separate tab, I provide a summary view of what’s being asked with simple tallies of the metadata.
In this view, we can see that of the ten questions, four of them are related to salaries and only one is about inflexible work. I immediately think about the balance here: is one question on inflexible work enough to see what can be done there? Are four questions about salary overkill? Summary views are great tools for identifying where fat can be trimmed as well as where you might need additional questions.
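If your spreadsheet setup doesn’t make tallies like this easy, the same summary can be produced from a CSV export of the question sheet with a few lines of Python; the file name and column header below are assumptions matching the columns above:

```python
import csv
from collections import Counter

# Count questions per hypothesis from an exported question sheet
# (hypothetical file "survey_questions.csv" with a "Hypothesis" column).
with open("survey_questions.csv", newline="") as f:
    by_hypothesis = Counter(row["Hypothesis"] for row in csv.DictReader(f))

for hypothesis, count in by_hypothesis.most_common():
    print(f"{hypothesis}: {count} question(s)")
```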
Summary statistics are an easy way to push back against parochial departmental issues. In our example, perhaps the finance department is fighting for more salary questions. If the survey is restricted to ten questions, it’s easy to show these stakeholders that their issues are overrepresented and make the argument for cutting one or two questions.
Tools like readability scores are helpful for cutting back on jargon and simplifying questions. Writing college-level questions for a middle-school audience is a recipe for bad data. Scores like the Flesch-Kincaid Grade Level - what Microsoft Word used in its spelling and grammar check back in the day - are far from perfect; they are funky formulas based on syllables per word and sentence length.6 That said, they give a rough idea and can flag overly complex questions that are difficult to understand and should be rephrased.
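For reference, the Flesch-Kincaid Grade Level is just 0.39 × (words per sentence) + 11.8 × (syllables per word) - 15.59. Here’s a rough, self-contained Python sketch of it; the syllable counter is a crude vowel-group heuristic, so expect it to disagree slightly with tools like the template’s custom function:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (minimum of one).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text: str) -> float:
    # Naive sentence and word splitting; real implementations are more careful.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# A jargon-heavy question scores far higher than a plain-language rewrite.
print(round(fk_grade_level(
    "To what extent do you concur that organizational compensation "
    "practices are commensurate with your contributions?"), 1))
print(round(fk_grade_level(
    "Do you feel you are paid fairly for your work?"), 1))
```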
Link to the Data Better survey template:
Template Link
Principles Over Tools
Making surveys in spreadsheets isn’t a great experience, but it’s the least bad way that I know. Recent advances in survey software have focused on improving aspects of individual questions but have largely ignored changing the survey development process. If you know of any tools that help with the process, please leave a comment!
You can use any tool and still do better. All you have to do is remember three simple principles:
Every survey question needs to be explicitly tied to a goal and hypothesis. If you can’t tie a question to a goal, don’t ask it.
Question text should be simple and understandable to your audience. Think about their reading level and what jargon they will or won’t know.
Survey scope creep happens and you need to fight it. Look for imbalance in question metadata to remove questions that would collect useless data. Summaries of question metadata can also shine a light on likely areas of question regret.
1. For a typical survey like this, see CultureAmp’s template: https://docs.google.com/spreadsheets/d/1RIXgHuCh4cRSpLjR8xKkcVKwNU9EiHPmywo-qPc-Dgo/edit#gid=0
2. The Project Management Institute has a nice discussion of the causes of scope creep here: https://www.pmi.org/learning/library/top-five-causes-scope-creep-6675
3. It makes me think of “The Blob”: https://www.youtube.com/watch?v=vq0our4mceQ
4. I love “kludge” because it’s almost an onomatopoeia of what’s happening. It’s a bit jargony, but it’s becoming more commonly used - especially in civic tech circles. Here’s an etymology: https://www.theatlantic.com/technology/archive/2016/09/the-appropriately-complicated-etymology-of-kluge/499433/
5. It’s inspired by the “potential outcomes” framework (also called the Rubin Causal Model), which is one of the primary frameworks for causal inference.
6. I wrote a custom function in Google Sheets (in the template) to compute Flesch-Kincaid scores, based on this GitHub repo (https://github.com/cgiffard/TextStatistics.js). You can also use some nice online tools to help, such as the Hemingway App (https://hemingwayapp.com/).