Part 1: Why many surveys aren’t very useful

So, you have to make a survey. It seems simple enough—come up with a questionnaire, slap on a few response options, and you’re good to go. If humans were robots, perhaps it would really be that easy. But human behavior is complicated, memory is limited, attention spans are short, language is complex, and individual interpretation of the same event can vary widely. Did I mention that memory is limited? Due to the survey design itself, many studies fail to accurately measure the very things that they set out to measure. Indeed, asking questions is both an art and a science.

Characteristics of good survey design

While each survey project is different, some shared characteristics will help you avoid the most common survey pitfalls. Ideally:

  • The information to be collected is information that a survey can realistically capture
  • The project is planned by starting with broad goals, refining them into specific research questions and objectives, and then using those objectives to create the survey itself
  • The mode of the survey (telephone, web, mail) is matched to the population being studied
  • The survey questions are written so that all respondents understand them in the same way
  • The survey questions are ones that respondents are both capable of answering and willing to answer
  • Online surveys are designed to display and function well on a smartphone
  • The survey questions go through a pretesting process that involves cognitive testing
  • The confidentiality and/or anonymity of respondents is always protected

The importance of sound question design

While I’m going to consider multiple dimensions of survey projects, I’m going to spend a lot of time discussing question design itself, because it plays such a pivotal role in project outcomes. So I want to start off with an example to illustrate why question design is so important. Here is a ‘normal looking’ survey question:

Please indicate your level of agreement with the following statement: “I feel safe when attending class.”

  • Strongly Agree
  • Agree
  • Disagree
  • Strongly Disagree

This seems like a pretty reasonable question, and it’s similar to ones you’ve probably answered countless times before. However, it actually has multiple serious shortcomings, and the resulting information that a question like this yields could be very misleading.

So, what best practices can we use to make this a better question?

  1. Good survey questions are specific. This question simply says “class.” Students often take multiple classes. What class is the question referring to? Some students might answer this for another class, or think about all classes in general. The specific class should be defined, so everyone answers this question from the same frame of mind.
  2. Good survey questions use well-defined terms and concepts. This question asks about attending class, but that can have multiple meanings. Is it walking to and from class? Is it sitting in class? Safety could be very different when sitting in a well-lit classroom versus walking back to a car in the dark. The question dimensions should be well defined.
  3. Good survey questions are worded neutrally. The term “level of agreement” implies that at least some safety is the norm (i.e., that feeling ‘unsafe’ is inconsistent with a social norm). This biases the results by nudging respondents toward agreeing with the statement. The phrase “level of agreement or disagreement” is more appropriate.
  4. Good survey questions have answer categories that apply to everyone. What if not everyone completing the survey has actually gone to this class? (Some college students are pretty lazy…) What if someone just doesn’t have any opinion at all on their safety? All possible scenarios should be accounted for. There needs to be an option for those without an opinion and for those to whom the question does not apply.
  5. Good survey questions use agree-disagree scales sparingly. Agree-disagree scales are often overused. They can require a lot of thinking (referred to as “cognitive burden” or “response burden”) and may have questionable accuracy. First, the question makes a respondent consider what “feeling safe” means. Then, the respondent has to consider how much they agree or disagree with their own subjective judgment of ‘feeling safe’. This is complicated. It gets worse. What if the respondent feels ‘extremely safe’? This is different from “safe,” and so the respondent may select “Strongly Disagree.” In other words, some respondents who feel more positive (e.g., more than just “safe”) and other respondents who feel less positive (e.g., less than “safe”) may actually select the same response of “Strongly Disagree.” When respondents who feel very differently end up choosing the same answer, the data become unreliable. The ideal solution here is to use a scale format that directly measures safety, rather than one that indirectly measures agreement or disagreement with the concept of feeling safe.
  6. Bonus Note: Good survey questions respect the limits of language and culture. This question asks specifically about safety; in other words, it pertains to generalized risk. While related, this is not the same as ‘fear of crime,’ and a question like this should not be used to assess fear of victimization. For example, someone may have no fear of crime at all, yet feel very unsafe because they are afraid of slipping and falling on ice. If our goal is a generalized risk assessment, asking about safety in this fashion is acceptable. If our goal is assessing fear of victimization, additional, more specific questions will be needed.

So, if we take those best practices, we can apply them to make a better question like this:

Thinking about your [course name] class, how safe do you feel while sitting in the classroom?

  • Very safe
  • Somewhat safe
  • Somewhat unsafe
  • Very unsafe
  • No opinion
  • I have not attended this class

A question like this will not only be easier to answer, but it will also elicit responses that apply directly to what the researcher wants to learn.

What you will learn in this eight-part series on How to Avoid Common Survey Pitfalls

When all is said and done, you should be able to answer the following questions:

  • When is a survey a good match to answer a research question?
  • What basic framework can I use to refine my broad goals into specific research objectives?
  • What are some common types of survey questions, when should they be used, and what are the best practices for utilizing them?
  • What are some web platforms that I can use to make free web surveys?
  • Where can I turn to for more assistance with survey project implementation?

A little bit about me

So where do I fit into this equation? I’m a senior research associate at TU’s Regional Economic Studies Institute, where one of my primary roles is instrument development. I really like making and talking about surveys, and I specialize in front-end instrument design. My fascination began when I realized how the smallest details in survey design can have the greatest impact on end outcomes. There aren’t many other fields where using the word “or” a single time can radically change the accuracy of the final findings. This is a place where the subjectivity of human behavior and language meets the objectivity of data science and information technology. Survey design seems simple at face value, but there is far more to it than initially meets the eye.

Interested in knowing more? Please feel free to shoot me an email at