Skepticblog

The Unknown Unknowns

by Michael Shermer, May 22 2012

This review of Ignorance: How It Drives Science by Stuart Firestein (Oxford University Press, May 2012, ISBN-13: 978-0199828074) was originally published in Nature, 484, 446–447 (26 April 2012) as “Philosophy: What we don’t know.”

At a press conference on February 12, 2002, United States Secretary of Defense Donald Rumsfeld employed epistemology to explain U.S. foreign entanglements and their unintended consequences: “There are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.”

Ignorance: How It Drives Science, by Stuart Firestein (book cover)

It is this last category especially that is the focus of Stuart Firestein’s sparkling and innovative look at ignorance and how it propels the scientific process forward. Firestein is Professor and Chair of the Department of Biological Sciences at Columbia University, where he teaches a wildly popular course on ignorance, inviting scientists in as guest speakers to tell students not what they know but what they don’t know, and even what they don’t know that they don’t know. (Would you rather earn an A or an F in a class called “Ignorance”? he muses.) This is a slim volume about a fat topic, but Firestein captures the essence of the problem by contrasting two pictures of science. The public understands science as a step-wise, systematic algorithm: grinding through experiments that churn out data sets to be analyzed statistically and published in peer-reviewed journals after a process of observation, hypothesis, manipulation, further observation, and new hypothesis testing. Against this stands the Princeton University mathematician Andrew Wiles’ description of science as “groping and probing and poking, and some bumbling and bungling, and then a switch is discovered, often by accident, and the light is lit, and everyone says, ‘Oh, wow, so that’s how it looks,’ and then it’s off into the next dark room, looking for the next mysterious black feline” (p. 2), in reference to the old proverb: “It is very difficult to find a black cat in a dark room. Especially when there is no cat.”

If ever there was a time to think seriously about ignorance, it is in our age of digital knowledge. Consider an exabyte of data, or one billion gigabytes (the typical thumb drives most of us carry around hold a few gigabytes). It has been estimated that from the beginning of civilization around 5,000 years ago to the year 2003, all of humanity created a grand total of five exabytes of digital information. From 2003 through 2010 we created five exabytes of digital information every two days. By 2013 we will be producing five exabytes every ten minutes. The 2010 total of 912 exabytes is the equivalent of 18 times the amount of information contained in all the books ever written. It isn’t knowledge that we need more of; it is how to think about what we know and what we don’t know that is becoming ever more critical in science, through a process Firestein calls “controlled neglect.” Scientists “don’t stop at the facts,” he explains, “they begin there, right beyond the facts, where the facts run out” (p. 12). It must be this way, he argues, because “the vast archives of knowledge seem impregnable, a mountain of facts that I could never hope to learn, let alone remember” (p. 14). Doctors, lawyers, and engineers need many facts at the ready, as do scientists, but for the latter “the facts serve mainly to access the ignorance,” because this is where the action is. “Want to be on the cutting edge? Well, it’s all, or mostly, ignorance out there. Forget the answers, work on the questions” (pp. 15–16).
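A quick back-of-the-envelope check, sketched here in Python, shows that the quoted figures hang together (the rates and totals are the ones cited above):

```python
# Unit conversion: 1 exabyte (EB) = 1 billion gigabytes (GB).
GB_PER_EB = 1_000_000_000
print(5 * GB_PER_EB)  # five exabytes expressed in gigabytes

# Rate quoted for 2003-2010: five exabytes every two days.
eb_per_day = 5 / 2

# Sustained over a year, that rate alone yields roughly the
# 912 EB total cited for 2010.
eb_per_year = eb_per_day * 365
print(eb_per_year)  # -> 912.5
```

So the “five exabytes every two days” rate and the 912-exabyte annual total are the same claim in different units.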

To Rumsfeld’s epistemological categories Firestein would add one more: unknowable unknowns, “things that we cannot know due to some inherent and implacable limitation.” He puts history in this category, but I would not, for if we take the broader construct of history as anything that happened before the present, then most of astronomy, cosmology, geology, archaeology, paleontology, and evolutionary biology are historical sciences, subject to testing hypotheses no less rigorously than the experimental sciences in the lab. And I worry slightly that an overemphasis on our ignorance about this or that claim opens the door to creationists, Holocaust deniers, climate deniers, and post-modern deconstructionists who wish to challenge mainstream science because of religious or political agendas. Acknowledging our ignorance is good, but let’s also acknowledge and celebrate what science has confidently given us in the way of well-supported theories.


That caveat aside, Ignorance includes an important discussion about scientific errors and their propagation in textbooks. I’m embarrassed to admit that I perpetuated one of these myself in my latest book, The Believing Brain, in which I repeated as gospel the “fact” that the human brain contains about 100 billion neurons. Firestein reports that his neuroscience colleague Suzana Herculano-Houzel told him it is actually around 80 billion (after undertaking an actual neural count!), and that there are an order of magnitude fewer glial cells than the textbooks report. As well, Firestein continues, the “neural spike” that every neuroscientist measures and every student learns as the fundamental unit of neural activity when the cell fires is itself a product of the electrical apparatus employed in the lab, and it ignores other forms of neural activity. And if that isn’t bad enough, even the famous “tongue map” (sweet sensed on the tip, bitter on the back, salt and sour on the sides), published in countless popular and medical textbooks, is wrong, the result of a mistranslation of a German physiology textbook by Professor D. P. Hanig; the localization differences are in fact much more complex and subtle.

These and other errors are the result of our lack of skepticism about the knowledge we have and our lack of respect for ignorance. “Ignorance works as the engine of science because it is virtually unbounded, and that makes science much more expansive” (p. 54). Indeed it is, and as the expanding sphere of scientific knowledge comes into contact with an ever-increasing surface area of the unknown (thus, the more you know, the more you know how much you don’t know), we would do well to remember a piece of elementary geometry: as a sphere grows, the ratio of its volume to its surface area increases (for radius r, V/A = r/3). Thus, in this metaphor, as the sphere of scientific knowledge increases, the ratio of the volume of the known to the surface area of the unknown increases, and it is here that we can legitimately make a claim of true and objective progress.
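The geometry behind the metaphor is easy to verify; here is a minimal Python sketch (the sphere is, of course, only a figure of speech):

```python
import math

def volume(r):
    """Volume of a sphere: the 'known' in the metaphor."""
    return 4 / 3 * math.pi * r ** 3

def surface(r):
    """Surface area: where the known touches the unknown."""
    return 4 * math.pi * r ** 2

# V/A = r/3: both grow as knowledge expands, but the volume of the
# known outpaces the surface of contact with the unknown linearly in r.
for r in (1, 3, 9):
    print(r, volume(r) / surface(r))  # ratio equals r/3 for each radius
```

Both quantities grow without bound; only their ratio favors the known, which is the sense in which the metaphor supports a claim of progress.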

22 Responses to “The Unknown Unknowns”

  1. Max says:

    “Thus, in this metaphor, as the sphere of scientific knowledge increases, the ratio of the volume of the known to the surface area of the unknown increases, and it is here where we can legitimately make a claim of true and objective progress.”

    In this metaphor, the more you know, the LESS you know how much you don’t know as a fraction of how much you know. Is that progress or hubris?

    • Other Paul says:

      Worse than even that! As the dimension of the ‘sphere’ increases (from line, through disc, through ‘proper’ sphere, through four-dimensional hypersphere and beyond) it gets ‘spikier’ and fills up less and less of its enclosing ‘cube’.
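      (Editor’s note: this claim checks out numerically; a minimal sketch using the standard formula for the volume of an n-dimensional unit ball:)

```python
import math

def ball_fraction(n):
    """Fraction of the enclosing n-cube (side 2) filled by the unit n-ball."""
    ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # unit n-ball volume
    cube = 2.0 ** n                                     # enclosing cube volume
    return ball / cube

# n=2: pi/4 ~ 0.785; n=3: pi/6 ~ 0.524; by n=10 the ball fills
# under 0.3% of its cube -- the 'sphere' gets spikier with dimension.
for n in (2, 3, 10):
    print(n, ball_fraction(n))
```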

      • Max says:

        That’s the problem with taking analogies too far. I’m fine with saying, “the expanding sphere of scientific knowledge comes into contact with an ever increasing surface area of the unknown.” But when you start calculating surface to volume ratios, you have to first explain why use a growing sphere instead of, say, multiplying small spheres. And is it possible to have a Menger sponge of knowledge with zero volume and infinite surface area?

      • Syd Foster says:

        I think that’s called a religion, isn’t it?

      • John Heininger says:

        The problem here is that the sphere could well expand indefinitely, meaning that the ever-expanding known will always be comparatively insignificant, particularly when the “known” is based largely on unverifiable hypothetical “inferences,” theoretical “mind experiments,” and fancy math.

  2. Peter Damian says:

    As Socrates replied, after being asked if he thought he was the wisest guy around, (and I paraphrase liberally) “Yes I am, because I’m the only one who doesn’t think he knows everything.”

  3. CountryGirl says:

    It is possible to do reasonable and often correct things without knowing everything. It’s a kind of dead-reckoning process, or something akin to charting a course that will bring you to a known position relative to your target, so that once you reach it you know which way to turn for the next leg towards the goal. Obviously I am describing these things in a way familiar to sailors or early explorers, but they can apply to common processes and scientific endeavors as well. They often require practice/experience, most certainly confidence, and they don’t look pretty in the process, so you may not want to let your boss observe you.

    I know, I know! Someone is going to disagree and list all the reasons why I’m wrong. All I can say is that you don’t know what you don’t know…

    • Dr. Mike says:

      What you say is absolutely correct. Researchers pass many intermediate goals on their way to a discovery, and at each one, they make course corrections.

      Software is a lot like this, too. A programmer has to take inputs, and devise a way to get the desired outputs. Along the way, parts of the application are constantly adjusted and revised as unanticipated problems crop up. Finally, if the programmer is talented enough (and a little lucky), the goal is reached. Rarely, but on occasion, the attempt has to be abandoned because no path is found to reach the goal which seemed so obvious at the outset.

      Sometimes in science, some of the best corrections are discovered by accident, and sometimes, the corrections come about only as a result of painstaking labor evaluating hundreds, or even thousands, of alternatives.

      Science can easily be said to be in a state of evolution, a process of survival of the fittest of scientific ideas.

  4. Irene says:

    It is so hard to try to solve a problem through the scientific method, verifying every single piece of scientific knowledge that has already been tested (in theory…) and that is supposed to be verified by experts, etc., before publication. It is just not possible…

  5. BillG says:

    Although lacking in society, “I don’t know” is intellectual honesty at it’s best. Often we have too many blowhards who don’t merit the print or media attention – which unfortunately has seeped into some proclaimed science writers and researchers.

    Personally, my excitement and attention gets heighten when I hear we or “I don’t know”. Or to quote Isaac Asimov about the most exciting words in science, “isn’t that funny?”

  6. Steve Wells says:

    I live my life under the mantra: frequently wrong, always in doubt. This helps me get to good answers, but I’m not arrogant enough to believe I’m always getting the right answers.

    I live in a world of successful businessmen who are frequently wrong but never in doubt. While that’s not very intellectually attractive, it allows them to react quickly and comfortably and keep up with the competition.

    It’s interesting that you see very poor decision making in so many successful businessmen, but what’s invisible are the equally qualified failures!

  7. Wineou says:

    BillG’s “at it’s best” illustrates that even intelligent Skeptics can do almost anything with words except get apostrophes in the right place. I hope this comment will cause his attention to “get heighten”.

    • Dr. Mike says:

      Actually, you don’t seem to know where the quotes go either. The period precedes the quote thus, “get heighten.” Also, “get heighten” is probably just a typo. Nothing to get worked up about.

  8. Bad Boy Scientist says:

    There are two “Worlds of Science”: the one which exists in the minds of laypersons (and everyone is a layperson in most fields of science) and the one that exists for the scientists. The latter falls far short of the ideals that our skeptics community embraces – why? Because, “Scientists are people, too – in fact, they are people, first and foremost!”

    Openly admitting ignorance may be good for the soul, but it can be lethal for a career in science. A person has to have a mighty solid reputation to survive answering a question with “I don’t know” when giving a talk to his or her peers. There are all sorts of weaselly tactics used to skirt admitting ignorance, such as saying “We’re working on that” or “I have _some_ ideas but I wish to develop them further before sharing them.” I suppose occasionally those statements are literally true, but mostly they are just ways of avoiding saying “I don’t know.”

    Wasn’t it Thomas Kuhn who described scientific revolutions as occurring when the older generation – which opposes the new idea – eventually dies off? The fact is science works when there is a large enough ‘community’ that the Work can carry through despite the efforts of individual egos and bullies and charismatic leaders. The miracle of the scientific endeavor is that it works despite the personalities and short-comings of the scientists.

    • Max says:

      You just have to preface “I don’t know” with “That’s a good question.”

    • Syd Foster says:

      “The miracle of the scientific endeavor is that it works despite the personalities and short-comings of the scientists.”

      This is the single greatest achievement of the human race….. creating a practical philosophy which overcomes the human tendency towards self-delusion.

  9. David says:

    Just as ‘relevance’ is the filter which often turns information into knowledge, so ‘experience’ is often the bridge between knowledge and wisdom. If Thomas Kuhn is correctly quoted, I suspect he was wrong! Maybe he was exaggerating to make a point.

    Increasing longevity is probably a major reason behind the dramatic rise in scientific wisdom, not merely a beneficiary of it (or worse, a retardant). A separate debate, however.

  10. Greg Upton says:

    No matter how much we humans learn through science over the remaining term of humankind, we just have to accept that there will always be some questions for which we will never be able to find answers. For example, what does the smallest unit of matter look like? How far away is the edge of existence? What does the edge of existence look like? Was there a beginning in time, and will time ever end? We have evolved a dire need to find answers to these eternal unknowns, and our characteristic nature finds temporary answers through the use of religion and paranormal beliefs.

    • Syd Foster says:

      The smallest unit of matter cannot “look like” anything, because that’s asking it to do something it simply can’t, i.e. to present an aspect in a different arena to the one in which it resides. If the “smallest unit of matter” is a quark, which apparently can never exist on its own, and is far smaller than the wavelength of any light in any case, it cannot be resolved by hitting it with photons; you are asking to “see” something which has no “appearance” at the scale of the world we inhabit. Or an electron, which is a distributed entity until you try to “look” at it. Even if you could “see” an electron, that would not be how it “really is”.

      To think of “the edge of existence” is to ask “where is the end of the circle?” An unanswerable question alright, but not because of any limits in the use of the practical philosophy of science: just a limit to your ability to ask the right questions, or to ask questions in the right way.

      “Temporary answers through the use of religion”? A religion is as temporary as a parking orbit too close to a black hole!

      Personally I have no “dire need” to answer such questions. I do like asking them though, so here’s another meaningless question to add to your list above: what is the sound of one hand clapping?

  11. Chris Howard says:

    Talk about ignorance:

    • Syd Foster says:

      I can see a lot of ignorant people getting led up the wrong path by that site. They start out claiming to be more scientific than the scientists, but within two lines of the first thing I dipped into (where he’s defending anecdotal evidence by trying to say that people do have accurate memory), he comes out with something like “I’d say memory is accurate 95% of the time”… so his “scientific” response to research demonstrating that people do not remember the appearance of a man who mugged the teacher right in front of their classroom is to pull a percentage number out of thin air! And yet so many people will not even notice or know that he is not as reasonable as he is pretending.

      I admit I find this debunking sceptics website depressing. The dull have so much commitment and focus on maintaining their dullness. You’d need a committee to pick out and explain all his faulty thinking amidst the verbiage….

  12. John Heininger says:

    Says Michael: “historical sciences, subject to testing hypotheses no less rigorously than the experimental sciences in the lab.”

    Not true! If this were really the case, the Nobel Committee would not make a clear distinction between science based on experimentation and observation and historical theories regarding unobserved and unrepeatable past events. And for good reason.

    “Unobserved” past events cannot be repeated for observation, nor ever tested and empirically verified by experimentation using the scientific method. As such, questions relating to origins and past events depend entirely on subjective “interpretations” of historical data, with the data subjectively evaluated to determine what “supposedly” happened in the past and what the relics “supposedly” represent.

    However, such “interpretations” and evaluations do not happen in a vacuum. They operate within specific ideological, philosophical and theological parameters, which define the nature and meaning of the data. If the evaluator is committed to “metaphysical” or philosophical naturalism and materialism, then the historical data will be interpreted accordingly, based on the unverified “belief” that everything in existence is solely the result of natural events and material processes. This assumption is philosophically based, not science based.

    For such reasons the Oxford dictionary relegates historical theories to a Theory Sense 2 definition. Namely, “A hypothesis proposed as an explanation; hence, a mere hypothesis, speculation, conjecture; an idea or set of ideas about something; an individual view or notion.” Similarly, as earlier stated, the Nobel Prize Committee does not regard historical theories as having the same value as verifiable empirical science. To quote New Scientist, “Evolutionary biology does not fall within the Nobel Committee’s definition of prizeworthy science.” As Harvard’s Stephen Jay Gould put it, the absence of evolutionary science from Nobel prizeworthy science ‘is another example of the traditional view that historical science isn’t the “real” thing.’ (New Scientist, Dec 11, 1986, p. 48)

    Meaning, there is no possible way to ever “scientifically” establish that past historical events happened one way and not another, or even whether everything is, in fact, purely the result of natural processes alone. The quest for a verifiable scientific TOE is still a very distant event, because every question answered leads to even more mysteries.

    The unknown unknowns may be ultimately unknowable. “As a finite point without an infinite reference point is meaningless and absurd.” (Sartre)