Surveys
A type of user research that involves asking a group of users questions in order to gather data about their needs, wants, and opinions. Surveys can be administered in person, via email, or through online survey tools.
Overview
A survey is a quantitative research method that collects data from a large group of respondents through structured questionnaires, gathering numerical insights about user opinions, behaviors, needs, and preferences. Surveys present respondents with a consistent set of questions—often with standardized answer options—administered at scale through online platforms, email, in-person interviews, or phone calls. Unlike qualitative methods such as user interviews, which explore depth through conversation, surveys prioritize breadth and statistical rigor: they measure prevalence, frequency, and correlation across large populations, producing results that can be analyzed statistically and generalized beyond the sample. Surveys are fundamental to UX research, product management, and market research because they efficiently measure what many users think without the time investment of qualitative research, enabling data-driven prioritization and strategy.
Why are Surveys Valuable?
Surveys provide unique quantitative insights that qualitative methods cannot deliver. By reaching hundreds or thousands of respondents, surveys reveal patterns and prevalence that emerge across a population; if 60% of users struggle with a feature, a survey will show that pattern while interviews with 8 people might miss it entirely. Surveys are also efficient and scalable; once designed, administering a survey to 1,000 people costs only marginally more than administering it to 100, making them cost-effective for large samples. Surveys also create accountability through numbers; stakeholders who might dismiss anecdotes from interviews often change their minds when confronted with data showing that 73% of users experience a problem. Additionally, surveys can generate quantifiable metrics for tracking change over time—running the same survey quarterly reveals whether satisfaction is improving, declining, or stagnant, enabling measurement of progress.
When Should Surveys Be Used?
Surveys serve specific and important research purposes:
Measuring prevalence and frequency: When you need to know "how many users experience this problem?" or "how often do users do this task?", surveys provide statistical answers rather than anecdotes.
Validating hypotheses at scale: After forming hypotheses through interviews or observation, surveys test whether those patterns hold across your broader user population.
Gathering feedback on features or changes: After shipping a feature, survey users to measure satisfaction and adoption and to identify areas for improvement, with enough responses to support statistically meaningful conclusions.
Tracking satisfaction and sentiment over time: Regular surveys (quarterly, annually) establish a baseline and reveal trends in customer satisfaction, NPS, effort scores, and other metrics that guide strategy.
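To make the metric-tracking idea concrete, here is a minimal sketch of computing Net Promoter Score (NPS) from quarterly survey responses. NPS is the standard formula (percentage of promoters scoring 9–10 minus percentage of detractors scoring 0–6); the quarterly response lists are hypothetical illustration data, not real results.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical 0-10 responses from the same survey run each quarter
quarterly = {
    "Q1": [10, 9, 8, 7, 6, 9, 10, 3, 8, 9],
    "Q2": [10, 9, 9, 8, 7, 9, 10, 5, 9, 10],
}

for quarter, scores in quarterly.items():
    print(f"{quarter}: NPS = {nps(scores)}")  # Q1 = 30, Q2 = 60
```

Running the identical question and computation each period is what makes the trend comparable; changing the question wording or scale mid-stream breaks the baseline.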
What Are the Drawbacks of Surveys?
While powerful, surveys have real limitations that require careful design to mitigate. Survey questions are prone to bias; poorly worded questions lead respondents toward particular answers, and respondents often provide socially desirable answers rather than honest ones. Surveys also lack depth and context; they can tell you that 50% of users are frustrated with a process, but they can't explain why or how to fix it—follow-up interviews or contextual research provide that understanding. Additionally, surveys suffer from response bias; people who respond to surveys often differ systematically from those who don't, skewing results. Survey design requires expertise; amateurs often create surveys with flawed questions, poor sampling, or biased language that produces misleading results. Finally, surveys require sufficient sample size to be meaningful; surveying 20 users and finding that 60% prefer option A carries a margin of error so wide that the apparent preference could easily be noise.
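The sample-size point can be quantified with the standard normal-approximation margin of error for a proportion. This sketch (using the hypothetical 60%-prefer-option-A finding from above) shows how the 95% confidence interval shrinks as the sample grows:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# 60% of respondents prefer option A -- how trustworthy is that?
for n in (20, 100, 1000):
    moe = margin_of_error(0.6, n)
    print(f"n={n}: 60% plus or minus {moe:.0%}")
# n=20 gives roughly +/- 21 points (so anywhere from ~39% to ~81%),
# n=100 roughly +/- 10 points, n=1000 roughly +/- 3 points.
```

With 20 respondents the interval spans both sides of 50%, so the survey cannot distinguish a real preference from a coin flip; at 1,000 respondents the same 60% becomes a defensible finding.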
Best Practices for Designing and Conducting Surveys
To maximize survey value and avoid common pitfalls, follow these principles:
Define clear research questions before designing: Know what you want to learn and why. This clarity shapes question design and helps you avoid the trap of surveying just because it feels like you should.
Use clear, unbiased question language: Test questions with colleagues or a small group before fielding the full survey. Avoid leading language, jargon, and assumptions. Provide balanced answer options.
Keep surveys short and focused: Longer surveys have lower completion rates and lower quality responses. Aim for 5–10 minutes, prioritizing the most important questions.
Sample appropriately: The larger and more representative your sample, the more you can generalize results. For qualitative research with small samples, follow up with surveys to validate whether patterns generalize.
Combine surveys with qualitative research: Surveys show you what people do and think; interviews explain why. Use surveys to measure prevalence and interviews to understand drivers.
Report results honestly: Share not just headlines but sample size, response rate, and limitations. Acknowledge where findings might not generalize and avoid over-interpreting small differences.
Surveys are essential tools in the research toolkit when designed and conducted well, providing quantitative evidence that shapes product decisions and holds teams accountable to real user data.