Skepticblog

Debating Homeopathy Part I

by Steven Novella, Mar 25 2013

Six years ago I was asked to participate in a group debate over the legitimacy of homeopathy at the University of CT (there were six speakers, three on each side). This year I was asked to participate in another homeopathy debate at UCONN, but this time one-on-one, with Andre Saine, ND, from the Canadian Academy of Homeopathy taking the pro-homeopathy side. (I will provide a link when the video is posted online.)

While the basic facts of homeopathy have not changed in the past six years, the details and some of the specific arguments of the homeopaths have evolved, so it was good to get updated on what they are saying today. In this post I will discuss some overall patterns in the logic used to defend homeopathy and then discuss the debate over plausibility. In tomorrow’s post I will then discuss the clinical evidence, with some final overall analysis.

Believers and Skeptics

As with the last debate, the audience this time was packed with homeopaths and homeopathy proponents. When I was introduced as the president of the New England Skeptical Society, in fact, laughter erupted from the audience. But that’s alright – I like a challenge. It did not surprise me that the audience, and my opponent, were unfamiliar with basic skeptical principles. Andre, in fact, used the word “skeptic” as a pejorative throughout his presentation.

The difference in our two positions, in fact, can be summarized as follows: Andre Saine accepts a very low standard of scientific evidence (at least with homeopathy, but probably generally given that he is a naturopath), whereas I, skeptics, and the scientific community generally require a more rigorous standard.

The basic pattern of Andre’s talk was to quote from one of my articles on homeopathy declaring some negative statement about homeopathy, and then to counter that statement with a reference to scientific evidence. The problem is, his references were to low-grade preliminary evidence, and never to solid reproducible evidence.

That is one functional difference between skeptics and believers – the threshold at which they consider scientific evidence to be credible and compelling (there are many reasons behind that difference, but that is the end result).

I was asked what level of evidence I would find convincing, and that’s an easy question to answer because skeptics spend a great deal of time exploring that very question. In fact, I have discussed this in the context of many things, not just homeopathy.

For any scientific claim (regardless of plausibility) scientific evidence is considered well-established when it simultaneously (that’s critical) fulfills the following four criteria:

1- Methodologically rigorous, properly blinded, and sufficiently powered studies that adequately define and control for the variables of interest (confirmed by surviving peer-review and post-publication analysis).

2- Positive results that are statistically significant.

3- A reasonable signal to noise ratio (clinically significant for medical studies, or generally well within our ability to confidently detect).

4- Independently reproducible. No matter who repeats the experiment, the effect is reliably detected.

This pattern of compelling evidence does not exist for ESP, acupuncture, any form of energy medicine, cold fusion or free energy claims, nor homeopathy. You may get one or two of those things, but never all four together. You do hear many excuses (special pleading) for why such evidence does not exist, but never the evidence itself.

The reason for this is simple – when you set the threshold any lower, you end up prematurely accepting claims that turn out not to be true.

Plausibility

The less plausible, the more outrageous and unconventional a scientific claim, the more nitpicky and uncompromising we should be in applying the standards above. This follows a Bayesian logic – you are not beginning with a blank slate, as if we have no prior knowledge, but rather are starting with existing well-established science and then extending that knowledge further.

To clarify – if a new claim seems implausible it does not mean that it is a-priori not true. It simply means that the threshold of evidence required to conclude that it is probably true is higher.
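This Bayesian updating can be sketched in a few lines of Python (a toy illustration; the priors and the likelihood ratio are made-up numbers, not taken from any study):

```python
# Posterior probability via Bayes' rule, in odds form:
#   posterior_odds = likelihood_ratio * prior_odds
def posterior(prior, likelihood_ratio):
    """Probability a claim is true after seeing evidence with the given
    likelihood ratio, P(evidence | true) / P(evidence | false)."""
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)

# The same fairly strong evidence (likelihood ratio of 20)...
evidence = 20

# ...applied to a plausible claim (prior 50%) vs. an implausible one (prior 0.1%):
print(posterior(0.5, evidence))    # ~0.95: now well supported
print(posterior(0.001, evidence))  # ~0.02: still almost certainly false
```

The identical experimental result leaves the implausible claim improbable, which is exactly why a higher threshold of evidence is demanded of it.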

Scottish philosopher David Hume sort of captured this idea over two centuries ago when he wrote:

No testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous than the fact which it endeavors to establish.

I like to think of it this way: The evidence for any new claim that contradicts prior established scientific conclusions must be at least as robust as the prior evidence it would overturn. You can also ask the question – what is more likely, that the relevant scientific facts are wrong, or that the new claim is wrong?

What is more likely, that much of what we think we know about physics, chemistry, biology, physiology, and medicine is wrong, or that the claims of homeopathy are wrong? I think this is an easy one.

Ultramolecular Aqueous Dilutions

When researchers are trying to publish papers on topics that are highly controversial they often invent new terminology to evade the stink of pseudoscience. Cold fusion was therefore renamed low energy nuclear reactions by proponents. Similarly extreme homeopathic dilutions, those that are diluted beyond the point that any original ingredients are likely to remain, have been dubbed “ultramolecular aqueous dilutions,” or UMDs.

Such dilutions present a huge problem for homeopathy – how can a treatment have any biological effect if there is no active ingredient, if you simply have solvent with all the ingredients diluted out? The short answer is – you can’t. Hahnemann, who invented homeopathy two centuries ago, thought that his process was transferring the “essence” of the treatment into the water. Homeopathy remains a vitalistic energy-medicine pseudoscience.

In order to give plausible deniability to homeopaths on this point, however, there have been several attempts to demonstrate that homeopathic water is different from regular water. That the water itself can retain the memory of what was diluted in it.

Saine referenced a 2003 study that claims to demonstrate that homeopathic water has different thermoluminescent properties than non-homeopathic water.  The experimenters actually took heavy water (deuterium water) and then froze it at very low temperatures, exposed it to radiation, and then measured the thermoluminescence as it melted. That sounds like a homeopathic remedy, right?

Essentially the researchers were anomaly hunting, with an experimental setup that has many possible variables and unknown effects. Let’s apply my list of criteria above to this study – while it had statistically significant results, it was not blinded, and it is not clear that the researchers isolated the variable of interest (homeopathy). Further, attempts to replicate the study were negative.

No so-called “water memory” experiment has come anywhere near being established science, yet Saine thinks this is enough to settle the question.

Next Saine went to a new strategy (using somewhat of a kettle defense). He cited a recent paper that claims that even ultramolecular dilutions retain measurable amounts of original substance. Harriet Hall at Science-Based Medicine has already done an excellent job of destroying this claim. This small study was not blinded and not even controlled – no control group. And of course, it has never been replicated.

Saine believes that he has rescued plausibility for homeopathy by citing the few preliminary, small, unblinded, often uncontrolled, and unreplicated studies that show some water anomalies. His threshold for finding evidence compelling is not even in the same universe as the mainstream scientific community.

Keep in mind also that even if the above claims for water memory or nanoparticles were true (which they probably aren’t), that is still many steps removed from demonstrating plausibility for homeopathic remedies.

Such water anomalies would have to transfer their properties to sugar pills when the homeopathic solutions are placed on the sucrose tablet, and survive when the water evaporates. They would have to remain intact over time on the shelf, when consumed, digested, absorbed, and then transported in the blood to whatever their target tissue is, and then have a biological effect. Each of these steps represents a massive barrier to the plausibility of homeopathy, and each is left completely untouched by these unreliable preliminary studies.

Also, we are just talking about the high dilutions of homeopathy. Unfortunately such debates rarely get to an equally implausible aspect of homeopathy – the remedies themselves. There is no reason to suspect that any particular starting substance for a homeopathic remedy, even if given in a measurable amount, would have the claimed effects. The substances are chosen for fanciful magical reasons that make homeopathy more akin to witchcraft than medicine.

The reasoning is mostly based on sympathetic magic, that like cures like, but it is often even more bizarre than that. The patient’s personality type is often taken into consideration, along with the totality of their symptoms, in a fashion that is pure fantasy. Some starting ingredients, like oscillococcinum, don’t even exist.

Tomorrow I’ll discuss the clinical evidence for homeopathy and some concluding thoughts.


27 Responses to “Debating Homeopathy Part I”

  1. Peter Damian says:

    Unfortunately, the statement “water retains the healing properties of the elements with which it has been in contact” takes 2 seconds to read and grasp, whereas your measured, intelligent response takes considerably longer to read and requires a modicum of cogent comprehension skill to understand. Therein, I think, lies your biggest challenge.

  2. JTM says:

    While I agree in general, I have three quibbles.

    First, I wish that you had stated the first criterion for useful scientific evidence a bit differently. Instead of one example of a counter to a potential confound – viz., blinding – followed by “controlling for variables of interest,” I wish that you’d say that the experiment needs to be “internally valid” (and then explain what that means if needed). Why? Because there are many other ways to create an experiment with lousy or no internal validity besides subject or experimenter knowledge of condition (and only mentioning blinding might make people think that this one potential confound is the only one that matters), plus you don’t just control for “variables of interest”; the whole point is to control for confounds that are not of interest.

    Second, I do not see any reason to require a standardized effect size of any particular magnitude. Maybe in medicine it makes sense to ignore – at least, for now – any effects with a very small Cohen’s d, but it doesn’t make sense to do this in science in general. By many definitions of “acceptable standardized effect size” (e.g., 0.10) the difference between the predictions of Relativity and Newtonian physics for the orbits of planets is too small to matter. Tell that to the poor schmucks headed to the moon.

    Finally, I am very much against the idea that the criteria for evidence change when the data are being used to disconfirm a popular theory. The quality of a set of data (as mostly determined by internal validity) does not depend on when the data were collected, so it does not depend on the current level of support for the theory being questioned. You can ask for more evidence when a single set of data raises a problem for a popular theory – and, if that is what you were saying, then ignore this point – but you should apply the same rules for determining the quality of a set of data, regardless of the implications of said data.

    cheers

    – JTM

    • jt512 says:

      JTM, the reasoning in your last paragraph fails to take into account that a perfectly designed and conducted experiment can still produce false positive results (by chance). Not only that, but the probability that a study with a positive result is false depends inversely on the prior probability of the research hypothesis. The flip side of this is that the probability that a study with a positive result is true depends directly on the prior probability of the research hypothesis. Therefore, a valid assessment of whether a hypothesis is likely to be true or not must take into account the prior probability of the hypothesis as well as the evidence from the experiment itself.

      In other words, the probability that a positive experimental finding for a hypothesis is true is inextricably linked to the hypothesis’s prior probability. The consequences of ignoring this fact—ignoring prior probability—is that you will accept as true too many false hypotheses, and you’ll end up believing a lot of unbelievable things.

      • JTM says:

        I understand what you’re saying but I’m very uncomfortable with your combining of failures of internal validity with statistical false-alarm rate. Yes, when the theory in question is not true, any evidence in favor of it must come from a confound (i.e., a failure of internal validity) or a Type-I error (i.e.., a statistical false-alarm), but throwing both of these into one pile is a very bad habit (IMO), because the solutions to the two problems are very different.

        I’m equally unhappy about your apparent lack of concern about false-positive (i.e., false-favorable) results when you and/or the field in general already has confidence in the theory in question. That’s an equally bad habit, as it will cause over-confidence and, again, a lack of interest in “cleaning up” the studies (i.e., removing any remaining confounds), because you happen to like the results. One of the worst habits that I see many scientists get into is where they only criticize the studies that produced data that they didn’t like. This is almost as bad as the Alpha/Beta Gamble, where people add subjects to studies that “just missed” (e.g., had a p-value of .051), but never add subjects to studies that “just made it” (e.g., had a p of .049). At that point, you might as well just raise alpha to .10 and be honest about it.

        More generally, I really don’t like the way that you have weaved Bayesian thinking into all this. I completely reject the idea that “the probability that a positive experimental finding for a hypothesis is true is inextricably linked to the hypothesis’s prior probability.” That, to an old-schooler like me, is completely false. Instead, the probability that a positive finding is true depends on only two things, neither of which has anything to do with the theory: (1) the absence of confounds and (2) p-value.

        Before you write me off as just another old-fashioned scientist who just doesn’t get Bayes, please let me assure you that I don’t reject the approach in all cases and, like all humans, use Bayesian logic every day of my life (albeit mostly implicitly). But I do not use Bayes to tell me whether an experiment was any good, nor do I let Bayes open the door to what I call “excessive scientific momentum” by setting different standards for evidence depending on the state of the literature. A false-alarm is a false-alarm. And an experiment with confounds is an experiment with confounds.

        As I said in my first comment, a surprising result merits replication more than an expected result. But not because it has less chance of being true, all else being equal, but because it will have a larger effect on other researchers.

        – JTM

        ps. thanks for engaging in this conversation

      • jt512 says:

        I don’t completely disagree with much of your remark, but you’re wrong about this:

        “[T]he probability that a positive finding is true depends on only two things, neither of which has anything to do with the theory: (1) the absence of confounds and (2) p-value.”

        That is wrong, and you can prove so to yourself with a simple thought experiment. Assume a certain experimental finding is unconfounded and has a p-value of .01. According to your claim, that is all the information on which the probability that the finding is true depends. Therefore, you should be able to state that probability. What is it?

        Obviously, you can’t do it, and the reason is that it is not true that the probability that a positive finding is true depends only on the absence of confounding and the p-value. If we assume unbiasedness, then the probability that a positive finding is true depends on exactly three things (none of which, BTW, can be determined from the p-value*): the probability of the finding under the null hypothesis, the probability of the finding under the alternative hypothesis, and the prior probability of the hypothesis.

        It is easy to see that the probability that a positive finding is true depends on the prior probability of the hypothesis by using another simple thought experiment. Consider two fields of study that always produce perfect (unbiased) experiments. However, field 1 only tests true null hypotheses; whereas field 2 only tests false null hypotheses. Thus the prior (frequentist!) probability for fields 1 and 2 are 0 and 1, respectively. Clearly, then, every positive finding in field 1 is false, while every positive finding in field 2 is true.

        What about a more realistic example? Consider again two fields of study, one of which (field 1) only studies highly speculative or exploratory hypotheses of which only 1 in 500 are true; and another (field 2) which studies more mundane hypotheses, half of which are true. Assume both fields always test at the .05 level of significance and design all of their studies to have 80% power. It is a simple matter to calculate the probability that any given positive finding is true. For field 1 it is .03, and for field 2 it is .94. Clearly, the probability that the hypothesis is true depends greatly on its prior probability. We might accept the findings of field 2 on their face, or perhaps after a single high-quality replication. But clearly we must have more evidence before we can rationally believe a finding from field 1.
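Those two figures follow from the standard formula for the probability that a significant finding reflects a true hypothesis, given alpha, power, and the prior. A quick sketch (note the field-2 figure works out to roughly .94):

```python
def prob_true_given_positive(prior, alpha=0.05, power=0.80):
    """Probability that a statistically significant (positive) finding is
    true, given the prior probability that tested hypotheses are true."""
    true_positives = power * prior          # true hypotheses that test positive
    false_positives = alpha * (1 - prior)   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

print(prob_true_given_positive(1 / 500))  # field 1: ~0.03
print(prob_true_given_positive(0.5))      # field 2: ~0.94
```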

        *Parenthetically, the reason the p-value doesn’t enter into the calculation is related to the fact that it is not a consistent measure of evidence against the null hypothesis; rather, its evidential value depends on the sample size. There is a considerable body of literature on this in the biomedical, psychometric, and statistical literature (and it’s not strictly Bayesian; frequentists understand this as well). So without knowing the sample size, the p-value can’t even enter into the calculation.

      • JTM says:

        Yeah, I really said that badly. (Wish this site had editing options.)

        What I was trying to say was that prior belief or confidence in the theory plays no role. The theory is true or false, regardless of whether there is any data (yet) in support or against it. Whether you get (new) evidence for or against the theory from any given experiment depends only on the “clean-ness” of the experiment (i.e., the internal validity, which is why I said absence of confounds) and the data (which is what I meant by p-value).

        What Bayesians do too often (IMO) is let the results of other studies come into play when interpreting the results from any new study. That’s what leads to “excessive experimental momentum” or “fads” or just, plain-old “lemming behavior.”

        And your counter above is a very fancy version of this. The underlying story appears to be the idea that more and more data in favor of a theory somehow makes the theory more true. That’s a tad self-centered, isn’t it? The theory is either true or false, regardless of what human scientists think. The theory doesn’t become more true when there’s more evidence in its favor; it simply becomes more widely believed or believed more strongly. So the probability of getting positive results doesn’t change; only our expectancy for such results changes. So the rules of inference should remain the same. Otherwise, you’re letting expectancy determine the interpretation, which is just as bad as those people who take the Alpha/Beta Gamble and, in effect, p-hack their way to significance and another pub.

        – JTM

      • JTM says:

        Oops. I skipped a section.

        I’m totally with you on p-values. I keep talking that way because not enough people seem ready to talk in terms of proportion of variance. Give me and my friends some time. Many of us who teach grad stats these days are pushing the middle ground between old-school sig/not-sig (p-values) and Bayes; we can be called Cohenites, if you wish. We teach proportion of variance explained for everything.

      • jt512 says:

        This is a reply to JTM’s comment of March 25, 2013 at 8:05 pm. Hopefully, this reply will appear below that one.

        JTM wrote:

        The underlying story appears to be the idea that more and more data in favor of a theory somehow makes the theory more true.

        I never said (nor implied) that a theory becomes more true, the more data we have. That doesn’t even make sense. What I did say (or imply) is that the more evidence we have in favor of a theory, the more we should believe that theory; that is, we should give it a higher probability of being true.

        The theory doesn’t become more true when there’s more evidence in its favor; it simply becomes more widely believed or believed more strongly.

        That’s correct. The more evidence we have in favor of a theory, the more strongly we (should) believe it, because the more evidence we have for a theory, the more likely it is to be true.

        So the probability of getting positive results doesn’t change; only our expectancy for such results changes.

        But the probability of getting a positive result for a theory that is already supported by a large amount of evidence is greater than for one with no prior supporting evidence, which in turn is much greater than for one which already has a large body of evidence contradicting it. What is the probability that if I conduct a well-designed experiment on the conservation of energy that I’ll get a result that agrees with the law of conservation of energy within the error bars of my experiment? Now what is the probability that if I conduct a well-designed experiment on telekinesis (the theory that people can bend spoons with their minds) that I’ll get a positive result? Clearly, the probability of a positive result depends on the prior probability, which is to say, the strength of the prior evidence.

        So the rules of inference should remain the same.

        Yes, the rules of inference should remain the same: Our degree of belief in a hypothesis should be directly proportional to the strength of the totality of the evidence in favor of the hypothesis, and inversely proportional to the strength of the totality of the evidence against the hypothesis.

        Otherwise, you’re letting expectancy determine the interpretation…

        No. You are (or should be) giving appropriate weights to new evidence and the prior evidence. That is why Steve says that if you have a hypothesis that is impossible if a competing hypothesis is true, then for your hypothesis to be believable, the evidence for it should be at least as great as the evidence for the competing hypothesis. In particular, if you have a hypothesis that is impossible if the Standard Model of Particle Physics is true, then for your hypothesis to be believable, it should have at least as much evidence for it as the Standard Model of Particle Physics has.

  3. JTM – I did not really try to capture all of the various things that make a scientific study credible, which is why I started with “methodologically rigorous” as a catch-all. I could have stopped there, but included specific mentions of those most significant in the current case.

    Effect size matters – but I framed it as “signal to noise” and “well within our ability to confidently detect.” The flip side is alleged phenomena that are right at the limit of our ability to detect, or that are small relative to the noise in the experimental system.

    Finally – I never mentioned the popularity of a theory, just the amount and quality of evidence that it would overturn if accepted as correct. The question is – at what point is it reasonable to proceed as if a specific claim is settled? Any such threshold has to consider prior probability. Or – looked at another way – no matter what threshold you create, it is possible that two mutually exclusive claims both meet the threshold. Then what? Or a new claim may meet that threshold while another, older, mutually exclusive claim vastly exceeds it. Or the claim may meet the threshold and be perfectly compatible with other established scientific theories. These situations cannot logically be treated identically.

    • JTM says:

      Only one thing to add after reading your second reply.

      When people arguing in favor of tiny effects (such as homeopaths [and other duck-noises]) keep running studies with Ns so small that they have no real chance of being significant, I stop paying any attention to them. Anyone who runs a dozen studies, all of which had – assuming that they are correct – a power of .10 or less, is just fishing for a Type-I result. Those aren’t scientists; those are advocates.

      There’s a reason that NIH, NSF, etc., all ask for power analyses (usually in the form of N*). It is worse than a waste of time to run studies that are too small. When combined with advocate bias and the file-drawer issue, you’ll end up with whatever you wanted to find.
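The kind of power analysis funders ask for can be sketched with the textbook normal-approximation formula for a two-sample comparison of a standardized effect size (my own illustration; exact t-test methods give slightly larger numbers):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample test of a
    standardized effect size (Cohen's d), via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided test
    z_power = z(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs roughly 63 subjects per group,
# while a tiny effect (d = 0.1) needs over 1,500 per group -- which is
# why a dozen small-N studies of a tiny effect can only be fishing.
print(n_per_group(0.5))
print(n_per_group(0.1))
```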

  4. Citizen Wolf says:

    Kudos Steve. I look forward to seeing the video of the debate. It should be educational to see the different points presented and how the homeopathy proponents view the situation.

    In a similar vein, a few years ago you interviewed Alex Tsakiris on SGU and there was some talk about doing some experiments/research on some of the stuff that Alex likes to go on about. Did anything ever come of that?

  5. Savvy Skeptic says:

    Nice work, Steven. I think that the scientists/skeptics have won the debate already. The fallacies of homeopathy have been convincingly challenged with logic and experiment. Now, more needs to be done than just conducting debates. Homeopathy is counting on this ongoing debate lasting forever.

    Do you think that scientists/skeptics could get together and at least stop the public funding of homeopathy in countries like the UK and India? In the UK, even after the science committee called homeopathy witchcraft, the government has not yet banned it from the NHS.

  6. Luara says:

    It seems very sad to me that people have to waste their time refuting homeopathy. It should be self-refuting, and to people with basic science knowledge, it is (I hope!).

    • JTM says:

      If you ever have the misfortune to have a family member blowing their money on shark fins and such, because they are grasping for anything to help them with some disease, then your drive to help/support/admire all the bloggers on this site (and many others elsewhere) who fight this nonsense will be rekindled. It’s very sad, but it is needed. Once the person is in desperation mode (and buying said shark fins), it’s too late. And, in some ways worse (although this may seem heartless), these people can become advocates for the duck-noisery when they survive, which is even more painful.

      – JTM

      • Luara says:

        I do have serious medical problems, and various “alternative”-type treatments can be helpful, and I try to find out what evidence there is for them. Or against them – I don’t want to be harmed by a treatment.
        But the thing about homeopathy is, it can’t work! I don’t bother with arguments for it!

  7. Guy Chapman says:

    As we all know, there are only three problems with homeopathy. 1: There is no reason to suppose it should work. 2: There is no way it can work. 3: There is no proof it does work.

    Apart from that, it’s all good.

    No homeopath I have ever encountered, addresses more than two of these. The first is actually the dirty secret. Why should we believe like cures like? There’s no credible evidence for it, it’s a doctrine, just like the doctrine of potentisation. The scientific term for the miasms that cause disease, and the humours they disturb, is: wrong.

    Homeopathy is a house of cards several stories high, but with all but the top story missing. Occasional parts of the house are replaced by cartoon caricatures of cards. And homeopaths present it as an achievement equivalent to the Shard, as strong as a Warren truss.

    Dream on. It’s a religion, based on faith, received wisdom, holy writ, prophets, priests and acolytes. And the final laugh is that if they were smart they’d learn from Scientology and go down that route, which would afford them legal protection they cannot now attain.

  8. JTM says:

    (I’m wearing out my welcome rather quickly, aren’t I? Just tell me to shut up, if needed.)

  9. d brown says:

    Well, I did read once that 80% (?) of the people in hospitals would have been cured by their own bodies in time. Would homeopathy do that good?

  10. Crabe says:

    What I would really like to know is – assuming homeopathy’s claims are true – why isn’t tap water a panacea? Consider this: it is rather likely that people infected with AIDS or other horrible diseases took a bath or swam in lakes or in the sea… After multiple dilutions in the environment at the ultramolecular level, you should expect water to cure basically anything.

    • Jim says:

      The problem is that many more healthy people bathe and swim in the water than sick people. Their healthy energies are thus infused in the Ultramolecular Aqueous Dilutions. As you are already undoubtedly aware, like cures like; consequently, tap water becomes a ‘cure’ for healthiness thus rendering those who drink it ill.

      The good news is that I can offer you a healthy alternative to tap water for the low price of $20 USD per 12 ounce bottle. My homeopathic drinking water is mixed with the essence of the recently deceased and then diluted to a 30C potency (a much more potent 200C version is available for $200 USD per 12 ounce bottle). By infusing this essence of death into the water it becomes an elixir of life – a literal fountain of youth.

  11. JTM says:

    (Starting a stack, since it was too hard to read.)

    “But the probability of getting a positive result for a theory that is already supported by a large amount of evidence is greater than for one with no prior supporting evidence, which in turn is much greater than for one which already has a large body of evidence contradicting it.”

    From the point of view of the person running the experiment, I agree. And, therefore, when you stand back and try to integrate a new finding with the existing literature, you can and should take this into account. But you should not take this into account when interpreting the results from the experiment, itself, on its own.

    It’s the difference between the Results section and Discussion section of a paper. In the latter, you take the entire existing literature into account, such that an unexpected result needs more discussion than an expected result. In fact, if the finding is so expected that no-one would consider it news, you might even consider – while writing the Discussion – dropping the experiment from the paper instead of including it (although this can lead to other problems, which have recently been the topic of a symposium at a major psychology conference).

    But you don’t do this in the Results section. The data from the experiment are what they are, and the only factors that should be considered at that point are internal validity and statistical-conclusion validity. If you allow the wider literature to play a role in prima facie interpretation, then you get snowball effects, with people more and more trapped inside the prevailing view.

    A different way to look at this (which would be useful, eh?, since I’m just repeating myself at this point) is to recognize that any argument that depends on other ideas can only be weaker than the same argument without said dependence. Assuming independence (just to make the general point easily), the probability that your argument is correct is the product of the probabilities of all of the constituent parts. Each additional required idea can only lower the probability that the entire package is correct (or leave it unchanged, if the added idea has probability of 1.00); it can never increase the probability that the package is correct.

    People have great difficulty with the above. People have a tendency to average all the probabilities, instead of multiplying them, and pseudo-scientists often take advantage of this. Said pseudo-scientists pile in a ton of ideas that all have probabilities near 1.00 in order to take advantage of the human habit of averaging. These extra ideas make the receiver more likely to believe the total argument, when, logically, extra ideas can never help and can only hurt. (If you’d like, I can get you the references on this.)
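
    JTM’s multiply-versus-average point can be sketched in a few lines of Python. The numbers are purely illustrative (five independent supporting ideas, each near-certain at probability 0.95):

```python
# Illustrative sketch of JTM's point: an argument that depends on several
# independent claims is only as probable as the *product* of their
# probabilities, yet people intuitively average them instead.

claims = [0.95, 0.95, 0.95, 0.95, 0.95]  # five near-certain supporting ideas

joint = 1.0
for p in claims:
    joint *= p  # correct: probability of the conjunction of independent claims

average = sum(claims) / len(claims)  # the intuitive (wrong) shortcut

print(f"product (correct):   {joint:.3f}")    # 0.774 -- the extra ideas hurt
print(f"average (intuitive): {average:.3f}")  # 0.950 -- looks unchanged
```

    Each added claim drags the product down (0.95, 0.903, 0.857, …), while the average of a pile of 0.95s stays at 0.95 no matter how many you stack on – which is exactly the mistake the pseudo-scientist exploits.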

    Anyhoo, I see Bayesians opening the door to exactly this kind of pseudo-scientist trick with their inclusion of prior odds. The Bayesian piles in a ton of priors, not because they really are using Bayesian logic (which does not make the averaging-of-probabilities error), but because they want the receiver to make the averaging-of-probabilities error.

    – JTM (who hasn’t been told to shut up yet)

    • jt512 says:

      JTM wrote:

      I wrote:
      “But the probability of getting a positive result for a theory that is already supported by a large amount of evidence is greater than for one with no prior supporting evidence, which in turn is much greater than for one which already has a large body of evidence contradicting it.”

      From the point of view of the person running the experiment, I agree. And, therefore, when you stand back and try to integrate a new finding with the existing literature, you can and should take this into account.

      The probability that you’ll get a positive result clearly does not depend on whether you are the one running the study or someone else. Other than that, what you’ve said above is exactly what I’ve been trying to say all along.

      But you should not take this into account when interpreting the results from the experiment, itself, on its own.

      I agree with that as well. However, for some hypotheses the contrary evidence is so strong that no single new study could possibly overcome the evidential burden against it. That’s why I don’t routinely read papers in parapsychology, no matter how well (they claim) their experiments have been carried out.
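
      The “evidential burden” point can be made concrete with the odds form of Bayes’ theorem. The numbers below are made up for illustration: even granting a single positive study a generous likelihood ratio of 20, a sufficiently small prior leaves the posterior negligible:

```python
# Illustrative (made-up) numbers: a single positive study cannot overcome
# a sufficiently strong prior against a hypothesis.

prior_odds = 1 / 1_000_000   # assumed prior odds for, e.g., a psi effect
likelihood_ratio = 20        # generous evidential weight for one study

posterior_odds = prior_odds * likelihood_ratio   # Bayes' rule in odds form
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior probability: {posterior_prob:.6f}")  # 0.000020
```

      With a prior of one in a million, the study moves the probability from 0.000001 to about 0.00002 – still overwhelmingly against the hypothesis.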

      In fact, if the finding is so expected that no-one would consider it news, you might even consider – while writing the Discussion – dropping the experiment from the paper instead of including it (although this can lead to other problems, which have recently been the topic of a symposium at a major psychology conference).

      I disagree that you should drop the study, for the reasons (publication bias, significance seeking, etc.) that were hopefully discussed in that symposium. Are experimental psychologists starting to take these problems seriously? Because as things stand, much of the experimental-psych literature is hard to believe.

      A different way to look at this…is to recognize that any argument that depends on other ideas can only be weaker than the same argument without said dependence. Assuming independence (just to make the general point easily), the probability that your argument is correct is the product of the probabilities of all of the constituent parts. Each additional required idea can only lower the probability that the entire package is correct (or leave it unchanged, if the added idea has probability of 1.00); it can never increase the probability that the package is correct.

      I don’t see the relevance of this, since neither multiplying probabilities nor averaging them is a valid way to accumulate evidence in either the Bayesian or frequentist framework.
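
      For contrast, a minimal sketch (with assumed numbers) of how evidence does accumulate in the Bayesian framework: the Bayes factors of independent studies multiply into the prior odds; the probabilities of the hypotheses themselves are never multiplied or averaged:

```python
# Sketch with assumed numbers: evidence accumulates by multiplying Bayes
# factors (likelihood ratios) into the prior odds -- not by multiplying
# or averaging the probabilities of the hypotheses themselves.

prior_odds = 0.25                  # hypothesis starts at 4:1 against
bayes_factors = [3.0, 5.0, 2.0]    # three independent studies' likelihood ratios

posterior_odds = prior_odds
for bf in bayes_factors:
    posterior_odds *= bf           # each study updates the running odds

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior odds: {posterior_odds}")             # 7.5
print(f"posterior probability: {posterior_prob:.3f}")  # 0.882
```

      Here three modestly supportive studies legitimately move a hypothesis from 4:1 against to 7.5:1 in favor – which is precisely why the prior matters so much when the starting odds are astronomically small.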

  12. Dr. Nancy Malik says:

    Up to the end of 2010, there have been 270 studies published in 106 medical journals in evidence of homeopathy, including 11 meta-analyses, 8 systematic reviews including 1 Cochrane review (out of approximately 20 systematic reviews published), and 93 DBRPCTs (out of approximately 225 RCTs published).

  13. Phea says:

    These guys say it all, say it well, and with humor:

    http://www.youtube.com/watch?v=HMGIbOGu8q0

  14. Jeff White says:

    The biggest danger with homeopathic UMDs is that the patient may forget to take the medicine and die of an overdose.

  15. magufo says:

    Novella is pseudo sketik:

    First: The arguments of the pseudo skeptics have also evolved. Remember that before you say things like “there is not a study published in a peer reviewed journal,” or “there is a single Nobel Prize endorse homeopathy”, “all homeopathy is water” …. “not a single study with double-blind design.” The fashion now is to quote Ernst and his gambit oh! “has serious methodological flaws.” Many of these pseudo skeptical beliefs can be found in the books, articles, commentaries, of James Randi, Martin Gardner, Richard Dawkins, Ben Goldacre, Steve Novella (including participation in the journal Homeopathy), David Gorski

    Second: Criticism of Hall to study nanoparticles even so, Halla says he does not understand the technology and apparently does not understand that to the study. Hall applied a double standard to evaluate evidence of homeopathy and requires more rigorous. However, the study was Chickramen reproduced and published in Langmuir with special mention to Dr. Hall.

    Third: King’s study was reproduced by the same in 2007 and 2005 by Roeland VanWijk.

    Fourth: The funny thing is that if I apply the four points that suggest, I can remove even conventional medicine, some fields of physics and chemistry …

    1 – Methodologically rigorous, properly blinded, and sufficiently powered studies that adequately define and control for the variables of interest (confirmed by surviving peer review and post-publication analysis).
    2 – Positive results that are statistically significant.
    3 – A reasonable signal-to-noise ratio (clinically significant for medical studies, or generally well within our ability to confidently detect).
    4 – Independently reproducible. No matter who repeats the experiment, the effect is reliably detected.

    P.D: Because no one responds on the pseudo-replication of the Ennis exeriment on the Horizon tv show?

    Because Jacques Benveniste is accused of fraud, and
    James Randi allows commits fraud? Why nobody denounces SkeptikGate as one of the greatest scientific fraud over ten years?

    What more is needed to show that the event was a Horizon fraud?

  16. S DuBois says:

    Magufo

    I’m not exactly sure what you are trying to say.

    Unfortunately, your tenuous grasp of the English language makes your point incoherent. I don’t mean to mock a non-English speaker, but you have a much better chance of being taken seriously if you use proper syntax and complete sentences. You also have a better chance of conveying information.