By Keir Liddle
The survey method is almost ubiquitous in the human sciences: it is used to evaluate everything from healthcare interventions to government performance, and to predict the outcomes of elections. Yet little heed is paid to the importance of proper survey design, which means that a great many surveys and questionnaires fail to answer the questions they were built to address and instead just generate more questions and uncertainty.
No survey will ever give you a definitive answer or allow you to argue that your correlations show a causal relationship, but used properly surveys are a relatively efficient (in terms of time and resource costs) and reasonably powerful research method. More than this: bad surveys make social scientists cry!
There are a number of sins that novice survey designers and amateur pollsters tend to commit, and most of them could be avoided with a little forethought about the more obvious pitfalls.
This time around we will have a wee look at the first step.
The most obvious piece of advice I’d want to impart to would-be pollsters is to consider exactly what question you want answered and who will be answering it. All too often an online poll offers voters a single question in isolation – usually of a “Do you agree/disagree with x?” form. To anyone who has ever seen a poll about creationism or alternative medicine be pharyngulated, it should be obvious why this is a survey design sin. If you just ask one question you’ll get an answer – say 55% agree/disagree – and you might think that shows majority support for a position.
However, it is important to ask: a majority of whom?
Unless you collect some form of demographic data, all you can ever say is that “55% of people who visited my website between a and b thought x”, which is a remarkably unimpressive claim – particularly if the question addresses an issue which is highly contentious and polarised. How do you know that the ayes or the nays have it if you have no way of determining whether you have a biased sample?
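The point can be made concrete with a toy example. The sketch below cross-tabulates poll answers by a demographic label; all of the data (the answers, the age bands) is invented purely for illustration, but it shows how an apparently decisive headline figure can hide sharply divided groups.

```python
# Toy illustration: the same overall poll result can conceal very
# different answers by demographic group. All data here is invented.
from collections import defaultdict

# Each hypothetical response pairs an answer with an age band.
responses = [
    ("agree", "18-34"), ("agree", "18-34"), ("agree", "18-34"),
    ("disagree", "55+"), ("disagree", "55+"), ("agree", "55+"),
]

# Count agree/disagree within each group.
by_group = defaultdict(lambda: {"agree": 0, "disagree": 0})
for answer, group in responses:
    by_group[group][answer] += 1

# Overall, 4 of 6 (67%) agree -- but the breakdown tells another story.
for group, counts in sorted(by_group.items()):
    total = sum(counts.values())
    pct = 100 * counts["agree"] / total
    print(f"{group}: {pct:.0f}% agree (n={total})")
```

On this invented data the headline “67% agree” masks one group agreeing unanimously and another mostly disagreeing – exactly the kind of pattern you cannot see without collecting the demographic variable in the first place.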
If the answer to your question could be influenced by age, gender or other demographic variables, it’s worth asking about them – because at the very least you’ll have something more interesting to report. If the answer could be influenced by belief, then it’s worth considering some way of measuring that too. There are countless ways this could be achieved, but for a quick web survey a self-declaration or a Likert scale measure might be worth considering.
If you reckon you can easily define your group, then a self-declaration might be worthwhile. If, for example, the question or poll is about government cuts/efficiency savings, it might be worth asking people whether they have any particular political allegiance. Alternatively, if the belief you are trying to measure is messier and more ill-defined, it may be better to attempt to at least measure the strength of that belief.
The format of a typical five-level Likert item is:
- Strongly disagree
- Disagree
- Neither agree nor disagree
- Agree
- Strongly agree
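To show how such responses might be handled once collected, here is a minimal Python sketch; the numeric coding (1–5) is the conventional one, but the question and the response data are invented for the example.

```python
# Minimal sketch of scoring five-level Likert responses.
# The response data below is hypothetical.

LIKERT_SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = [
    "Agree",
    "Strongly agree",
    "Neither agree nor disagree",
    "Agree",
    "Disagree",
]

# Map each verbal response to its numeric code and summarise.
scores = [LIKERT_SCALE[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(f"Mean score: {mean_score:.2f}")  # prints "Mean score: 3.60"
```

One caveat worth knowing: whether Likert items can legitimately be treated as interval data (and therefore averaged, as above) is itself a long-running methodological debate, so reporting the full distribution of responses alongside any summary figure is often the safer choice.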