Online surveys are worthless. That is, they are worthless as a source of information about popular belief and opinions. Yet many people still find them compelling, and so they can be useful as a way of driving traffic to your website. I guess that’s why they persist.
A recent poll about teaching complementary and alternative medicine (CAM) in Australian universities has become a matter of unnecessary controversy. Asher Moses complained that the survey seems to have been “gamed” in an article: Vote on alternative medicine falls victim to dark arts of the internet. He seems to miss the two real points about the poll – such surveys are not reliable, and it’s fallacious to use them as an argument from popularity anyway. He writes:
Voting progressed steadily at first but on Tuesday votes began rising from about 125,000 to more than 877,000 by the time voting closed on Thursday. The end result was 70 per cent no, 30 per cent yes. The number of votes in the poll was about eight times more than the number of online readers of the story, a clear indicator that the poll had been gamed.
Moses talks in the article about how easy it is to “game” an online survey, but that is not the real issue. Most surveys are probably not hacked, and, as the quote above indicates, such manipulation is easy to detect. Rather, there is a problem inherent in polls and surveys. The only reference to this issue in the article is an acknowledgement that the survey was not “scientific” – but what does that mean, exactly?
A scientific survey is a method for estimating the percentage of the general population (or some identified subpopulation) that has one or more characteristics. It does not have to be about an opinion; it can be something physical, like what percentage of the population has blue eyes. This is a deceptively difficult task to undertake.
First you would have to clearly define “blue eyes”. Blue can blend into green or even hazel, without a clear demarcation. Should the survey contain a box for “ambiguous”? Do you allow for people to decide for themselves and self-report whether or not they have blue eyes, or do you require a picture, or perhaps an in-person exam under controlled lighting conditions with multiple examiners having to agree on the eye color? Perhaps some people define themselves as blue-eyed because they think it’s more attractive. The point is – no matter how simple you think the question is, there are layers of complexity that will affect the outcome.
The next big problem is how you are going to select the targets of your survey in order to ensure that it is representative of the target population. Are you going to stratify by race? Otherwise the racial mix of whatever community you look in will have a large impact on the outcome. If you are going to sample widely from many communities, then how are those people going to be selected? Will the selection be adequately random and systematic? You have to avoid anything that can potentially bias the results by preferentially selecting blue or not-blue eye color.
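To make the sampling problem concrete, here is a minimal sketch in Python. All the numbers are invented for illustration: a population of two hypothetical communities with very different rates of blue eyes, polled two ways.

```python
import random

random.seed(0)

# Hypothetical population: two communities with very different
# rates of blue eyes (numbers invented for illustration).
population = (
    [("A", random.random() < 0.60) for _ in range(50_000)]   # community A: ~60% blue
    + [("B", random.random() < 0.10) for _ in range(50_000)]  # community B: ~10% blue
)

true_rate = sum(blue for _, blue in population) / len(population)

# Biased approach: sample only from whichever community is convenient.
convenience_sample = random.sample(
    [blue for com, blue in population if com == "A"], 1000
)

# Better approach: a simple random sample across the whole population.
random_sample = random.sample(population, 1000)

print(f"true rate:        {true_rate:.2f}")
print(f"convenience poll: {sum(convenience_sample) / 1000:.2f}")
print(f"random sample:    {sum(blue for _, blue in random_sample) / 1000:.2f}")
```

The convenience poll lands near community A’s rate rather than the population’s, while the random sample tracks the true rate – the mix of communities you happen to sample from drives the answer.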
For surveys about opinions and beliefs there are more layers of complexity. Self-selection, obviously, is a huge biasing factor. Any survey that allows people to choose whether or not to respond is “unscientific” and essentially worthless. Online polls are even worse, because people can not only choose whether or not to respond (as with agreeing to take part in a phone survey or taking the time to mail back a survey), but can also seek out the survey, or be directed there by groups with an interest in one outcome. For example, about the CAM survey Moses reports:
In the email sent by the Complementary Healthcare Council of Australia to members of its mailing list urging them to vote, the organisation’s consumer affairs director, Justin Howden, noted that the “no” vote was streets ahead and said: “We need to fight fire with fire.”
Urging the members of one interested group to vote breaks the poll. PZ Myers likes to demonstrate this fact by “Pharyngulating” a poll – directing his readers to go to the poll and vote, massively biasing the outcome. He is clear that the point is not to engineer one outcome, but to demonstrate how worthless online polls are because they are so easy to bias. What you are really measuring is not public opinion but how effectively one side or the other can mobilize its online community.
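The effect is easy to simulate. In the sketch below (all numbers invented for illustration) the broader public opposes the proposition 70/30, but a motivated mailing list several times larger than the poll’s organic readership is pointed at the poll and votes as a bloc:

```python
import random

random.seed(1)

# Toy model with invented numbers: the public is 70/30 against,
# but only a handful of readers stumble on the poll organically.
organic_voters = 5_000      # readers who happen across the poll
mobilized_voters = 20_000   # mailing-list members directed to it

organic_yes = sum(random.random() < 0.30 for _ in range(organic_voters))
mobilized_yes = mobilized_voters  # the mobilized group all vote "yes"

yes_share = (organic_yes + mobilized_yes) / (organic_voters + mobilized_voters)
print(f"poll result: {yes_share:.0%} yes")  # a landslide despite 30% true support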
A scientific survey is one in which efforts are made to contact random and representative people in a systematic way that avoids any bias. Subjects are chosen – they don’t choose themselves. Response rate is still a huge issue, because people can refuse to participate. Always look at the response rate of a survey and consider it an error bar. If only 10% of potential responders agreed to take the survey, ignore the results. As above, you can frame the issue as – what are you really measuring? Passion for the issue? Comfort with the question? The anger the issue provokes? Willingness to be honest about the question?
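You can put rough numbers on why a low response rate is so damaging. This sketch (my own arithmetic, assuming a 10% response rate and an assumed 70% “yes” share among those who did respond) computes the worst-case bounds on the true population share when you assume nothing about the people who refused:

```python
response_rate = 0.10   # assumed fraction of contacted people who answered
observed_yes = 0.70    # assumed "yes" share among those who answered

# Worst-case (no-assumptions) bounds on the true population share:
# nonrespondents could all be "no" ... or could all be "yes".
lower = observed_yes * response_rate
upper = observed_yes * response_rate + (1 - response_rate)

print(f"true 'yes' share could be anywhere from {lower:.0%} to {upper:.0%}")
```

With a 10% response rate, the true figure could be anywhere from 7% to 97% – which is exactly why such results should be ignored.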
The difficulty of all of these issues is made clear by political polling. Anyone who follows the news in an election year will notice that polls can be very inaccurate. Such polls are simple in that they offer a finite number of choices – which candidate are you voting for? This is a clearly defined question. And yet the results often fail to predict voting outcomes, for all the reasons I stated above.
Opinion polls suffer from a further layer of complexity that can significantly change the outcome – how questions are framed. Are you asking responders if they agree with a position or disagree with its opposite? This can significantly affect the outcome. Are you making any assumptions about the context of the question? You can also bias survey results by assuming or even presenting facts that might make responders feel stupid, immoral, or just out of the mainstream for answering one way.
For example, surveys about belief in evolution are notoriously easy to bias depending on how the question is framed. If responders are made to feel that by stating they accept evolution they are rejecting God or religion, they are much less likely to do so.
In this CAM survey the question was: “Should universities teach alternative medicine?” This sounds simple, but those taking the survey may make many assumptions. What exactly is meant by “alternative medicine”? And teach it how – teach about alternative medicine, or promote alternative medicine as legitimate? The survey, of course, was attached to an article, so the content of the article can provide context and hugely bias the survey.
Scientific surveys are very tricky and their results should be viewed with extreme caution and a savvy eye. Surveys that are not scientific are worthless as a source of information. They persist because they are a gimmick for driving traffic to an article or website. Worse – they can be easily biased and then used inappropriately as if they actually represent public opinion. The online skeptical community has actually been effective in “breaking” such polls, not to use the results but to keep them from being used.
Legitimate surveys can be useful measures of public opinion, but in this and many similar cases proponents try to use them in order to make an argument from popularity. Moses throughout his article assumes that the popularity of CAM is an important issue. However, it is entirely irrelevant to the specific issue at hand – how should universities approach the topic of so-called CAM?
I have written several articles about this topic in which I make the point that universities should be thought leaders, helping to define and defend rigorous standards of intellectualism and scholarship. They should not be taking opinion polls and then following the current intellectual fad. Even if the vast majority of the public wanted CAM it would be appropriate for a university or medical school to take the unpopular position that CAM is a false category built largely on bad science, distorting the evidence, and even trying deliberately to water down the standards of science and evidence in medicine.
Thought leaders sometimes have to take unpopular positions. In fact, the issue about teaching CAM in universities is about defending science standards in the face of popular nonsense. It’s an oxymoron to argue that universities should or should not do so because of popular opinion.