Skepticblog


An Online CAM Poll

by Steven Novella, Feb 13 2012

Online surveys are worthless. That is, they are worthless as a source of information about popular belief and opinions. Yet many people still find them compelling, and so they can be useful as a way of driving traffic to your website. I guess that’s why they persist.

A recent poll about teaching complementary and alternative medicine (CAM) in Australian universities has become a matter of unnecessary controversy. Asher Moses complained that the poll seems to have been “gamed” in an article: Vote on alternative medicine falls victim to dark arts of the internet. He seems to miss the two real points about the poll – surveys of this kind are not reliable, and it’s fallacious to use them as an argument from popularity anyway. He writes:

Voting progressed steadily at first but on Tuesday votes began rising from about 125,000 to more than 877,000 by the time voting closed on Thursday. The end result was 70 per cent no, 30 per cent yes. The number of votes in the poll was about eight times more than the number of online readers of the story, a clear indicator that the poll had been gamed.

Moses talks in the article about how easy it is to “game” an online survey, but that is not the real issue. Most surveys are probably not hacked; as the quote above indicates, such manipulation is easy to detect. Rather, there is a problem inherent in polls and surveys. The only reference to this issue in the article is an acknowledgement that the survey was not “scientific” – but what does that mean, exactly?

A scientific survey is a method for estimating the percentage of the general population (or some identified subpopulation) that has one or more characteristics. The characteristic does not have to be an opinion; it can be something physical, like what percentage of the population has blue eyes. This is a deceptively difficult task to undertake.

First you would have to clearly define “blue eyes”. Blue can blend into green or even hazel, without a clear demarcation. Should the survey contain a box for “ambiguous”? Do you allow for people to decide for themselves and self-report whether or not they have blue eyes, or do you require a picture, or perhaps an in-person exam under controlled lighting conditions with multiple examiners having to agree on the eye color? Perhaps some people define themselves as blue-eyed because they think it’s more attractive. The point is – no matter how simple you think the question is, there are layers of complexity that will affect the outcome.

The next big problem is how you are going to select the targets of your survey in order to ensure that the sample is representative of the target population. Are you going to stratify by race? Otherwise the racial mix of whatever community you look in will have a large impact on the outcome. If you are going to sample widely from many communities, then how are those people going to be selected? Will the selection be adequately random and systematic? You have to avoid anything that can potentially bias the results by preferentially selecting for blue or not-blue eye color.

For surveys about opinions and beliefs there are more layers of complexity. Self-selection, obviously, is a huge biasing factor. Any survey that allows people to choose whether or not to respond is “unscientific” and essentially worthless. Online polls are even worse, because people not only can choose whether or not to respond (for example, by agreeing to take part in a phone survey or by taking the time to mail back a questionnaire), but they can also choose to seek out the survey, or can be directed there by groups with an interest in one outcome. For example, about the CAM survey Moses reports:

In the email sent by the Complementary Healthcare Council of Australia to members of its mailing list urging them to vote, the organisation’s consumer affairs director, Justin Howden, noted that the “no” vote was streets ahead and said: “We need to fight fire with fire.”

Urging members of one group to vote breaks the poll. PZ Myers likes to demonstrate this fact by “Pharyngulating” a poll – directing his readers to go to the poll and vote, massively biasing the outcome. He is clear that the point of this is not to engineer one outcome, but to demonstrate how worthless online polls are because they are so easy to bias. What you are really measuring is not public opinion but how effectively one side or the other can mobilize its online community.
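To see how dramatically mobilization swamps actual opinion, here is a toy Monte Carlo sketch (my own illustration, not from the article): even a small minority that votes at a much higher rate than everyone else will dominate an open online poll. All the numbers are hypothetical.

```python
import random

random.seed(42)

def open_poll(population_size, true_support, base_turnout, mobilized_turnout):
    """Simulate a self-selected online poll.

    Supporters are 'mobilized' (e.g. via a mailing list) and therefore
    vote at a much higher rate than everyone else.
    """
    yes_votes = no_votes = 0
    for _ in range(population_size):
        supports = random.random() < true_support
        turnout = mobilized_turnout if supports else base_turnout
        if random.random() < turnout:
            if supports:
                yes_votes += 1
            else:
                no_votes += 1
    return yes_votes / (yes_votes + no_votes)

# Hypothetical numbers: only 10% of the population supports the
# proposition, but supporters vote 20 times as often as everyone else.
share = open_poll(100_000, true_support=0.10,
                  base_turnout=0.01, mobilized_turnout=0.20)
print(f"Poll shows {share:.0%} 'yes' despite 10% true support")
# With these parameters the poll lands near 70% 'yes'.
```

The poll result tracks the turnout ratio between the two camps, not the underlying distribution of opinion – which is exactly what “fight fire with fire” mailing-list campaigns exploit.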

A scientific survey is one in which efforts are made to contact random and representative people in a systematic way that avoids any bias. Subjects are chosen – they don’t choose themselves. Response rate is still a huge issue, because people can refuse to participate in the survey. Always look at the response rate of a survey and treat it as an error bar. If only 10% of potential respondents agreed to take the survey, ignore the results. As above, you can frame the issue as: what are you really measuring? Are you measuring passion for the issue? Comfort with the question? The anger that the issue provokes? Willingness to be honest about the question?
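The “treat the response rate as an error bar” point can be made concrete with a short worst-case calculation (my own sketch, not from the post): if you assume nothing about the people who didn’t respond, a low response rate leaves the true population share almost completely unconstrained.

```python
def worst_case_bounds(yes_share, response_rate):
    """Worst-case range for the true 'yes' share in the whole population.

    yes_share:     observed 'yes' fraction among those who responded
    response_rate: fraction of the contacted population that responded

    The bounds assume nothing about non-respondents: the low bound
    treats every non-respondent as a 'no', the high bound as a 'yes'.
    """
    observed_yes = yes_share * response_rate
    low = observed_yes                          # all non-respondents vote 'no'
    high = observed_yes + (1 - response_rate)   # all non-respondents vote 'yes'
    return low, high

# A poll reporting 70% 'yes' with only a 10% response rate pins down
# almost nothing about the population:
lo, hi = worst_case_bounds(0.70, 0.10)
print(f"True support could be anywhere from {lo:.0%} to {hi:.0%}")
# prints: True support could be anywhere from 7% to 97%
```

Real pollsters model non-response rather than assuming the worst case, but the exercise shows why a 10% response rate justifies simply ignoring the result.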

The difficulty of all of these issues is made clear by political polling. Anyone who follows the news in an election year will notice that polls can be very inaccurate. Such polls are simple in the respect that they have a finite number of choices – which candidate are you voting for? This is a clearly defined question. And yet, results often do not predict voting outcomes, for all the reasons I stated above.

Opinion polls suffer from a further layer of complexity that can significantly change the outcome – how questions are framed. Are you asking respondents if they agree with a position, or if they disagree with its opposite? This can significantly affect the outcome. Are you making any assumptions about the context of the question? You can also bias survey results by assuming or even presenting facts that might make respondents feel stupid, immoral, or just out of the mainstream for answering one way.

For example, surveys about belief in evolution are notoriously easy to bias depending on how the question is framed. If respondents are made to feel that by stating they accept evolution they are rejecting God or religion, they are much less likely to do so.

In this CAM survey the question was: “Should universities teach alternative medicine?” This sounds simple, but those taking the survey may make many assumptions. What exactly is meant by “alternative medicine”? And teach it how – teach about alternative medicine, or promote alternative medicine as legitimate? The survey, of course, was attached to an article, so the content of the article provides context and can hugely bias the survey.

Conclusion

Scientific surveys are very tricky and their results should be viewed with extreme caution and a savvy eye. Surveys that are not scientific are worthless as a source of information. They persist because they are a gimmick for driving traffic to an article or website. Worse – they can be easily biased and then used inappropriately as if they actually represent public opinion. The online skeptical community has actually been effective in “breaking” such polls, not to use the results but to keep them from being used.

Legitimate surveys can be useful measures of public opinion, but in this and many similar cases proponents try to use them to make an argument from popularity. Throughout his article, Moses assumes that the popularity of CAM is an important issue. However, it is entirely irrelevant to the specific issue at hand – how should universities approach the topic of so-called CAM?

I have written several articles about this topic in which I make the point that universities should be thought leaders, helping to define and defend rigorous standards of intellectualism and scholarship. They should not be taking opinion polls and then following the current intellectual fad. Even if the vast majority of the public wanted CAM it would be appropriate for a university or medical school to take the unpopular position that CAM is a false category built largely on bad science, distorting the evidence, and even trying deliberately to water down the standards of science and evidence in medicine.

Thought leaders sometimes have to take unpopular positions. In fact, the issue of teaching CAM in universities is about defending science standards in the face of popular nonsense. It is self-contradictory to argue that universities should or should not defend those standards on the basis of popular opinion.

14 Responses to “An Online CAM Poll”

  1. Phea says:

    Dr. Novella, you’ve left me a bit confused… Your blog starts out,

    “Online surveys are worthless. That is, they are worthless as a source of information about popular belief and opinions.”

    You then go on to pretty much prove that these are about the ONLY things online surveys prove, (regarding CAM), and that popular belief and opinion are worthless in determining the value of CAM.

    Other than that, great post. I’ve been skeptical of statistics in general for quite a while now, and rigged “surveys” are just one of many tools used to get the desired results. I do take online surveys though, for IPSOS, and try to give them honest feedback and opinions.

    • Phea – That was not what I demonstrated. I specifically pointed out that there are two problems with using these surveys as the article suggests. One is the argument from popularity. But the other, which I spent most of the article discussing, is that online surveys are not scientific and do not reflect popular belief and opinion. They reflect the ability of competing online communities to mobilize their members. They generally allow for a vocal or passionate minority to disguise themselves as a majority.

      • Phea says:

        I did not think what you just stated was the main thrust of “most of the article”, after reading,

        “Moses talks in the article about how easy it is to “game” an online survey, but that is not the real issue.”

        In your conclusion, you go on to say,

        “Legitimate surveys can be useful measures of public opinion, but in this and many similar cases proponents try to use them in order to make an argument from popularity.”

        This seemed to contradict the two opening lines I mentioned earlier.

        Please believe me when I tell you that OVERALL, it was a great post and I agree with your conclusions completely. I believe I now understand the article better. I just wanted to point out that it was a bit confusing at first… to me, (which isn’t all that rare… for me).

      • The key word there was “legitimate” – meaning a rigorous scientific survey. An online poll is not a legitimate survey.

        Sorry if that was confusing.

    • MadScientist says:

For the results of a survey to provide information of any genuine value, the survey must be carefully planned and controlled. Open internet polls are of no value as far as reliable information goes (but they’re useful for propaganda). Targeted polls (customer/consumer polls) which are not open to the general public may be valid polls.

  2. Nathaniel Brottingham says:

    Please proof-read before posting. It was an excellent article in many ways, but spelling errors can look bad (and hence influence whether some readers will accept the content).
    thy → they
    their → there

  3. WScott says:

Am I the only one who glanced at the title of the article and thought, just for a second, “Why is Skepticblog doing a poll about web cameras?” #ineedmorecaffeine

  4. BillG says:

    Both the pro and con sides of gun control are good at this – skew the question for your favored result. Not unlike, “do you still beat your wife?”

Then again, scientific surveys on public opinion are oxymoronic. When does opinion contribute to real science?

  5. Bob Kirk says:

    Try the following on how to bias poll results –

    http://www.youtube.com/watch?v=G0ZZJXw4MTA

  6. The Midwesterner says:

I sometimes respond to online polls because I want to see the results – not because I think I’ll find out anything scientifically accurate, but because it tells me a bit about the political/social leanings of the people who take the poll. The polls attached to an article are the most interesting – and depressing – because many of the most strident comments reflect that the person voting and commenting has (1) read only the headline; (2) read only a portion of the article; or (3) read the article but did not comprehend it.