Multitasking – Can You Walk and Chew Gum at the Same Time?

by Steven Novella, Apr 16 2012

Multitasking – the act of doing more than one thing at the same time – is largely an illusion. You can’t do it, at least not well. The research over the last couple of decades has shown in numerous ways how difficult and wasteful attempting to multitask is. Now a new study purports to show a possible benefit to multitasking. I will get to that study later – first, let’s review how bad multitasking is.

Researchers believe that there is a processing bottleneck in the brain. Essentially, we don’t have multi-core processors with multi-threading. In neurological terms, there are various functional components to consider. One is executive function, which is the “supervisor” function in the frontal lobes. Executive function includes the processes of focusing attention, allocating resources, coordinating information, and scheduling cognitive tasks. Everything the brain does draws on finite resources, and executive function is no exception.

Attention itself is also limited. Our attention can be spread out so that we are taking in a lot of information at once, although very superficially. In this mode we may be scanning our environment for something interesting, or for something in particular, but missing a lot of detail. Or we may focus our attention down on one thing, taking in greater detail but at the expense of ignoring everything else. This spreading out or narrowing down of our attention applies not only to sensory input but also to ideas.

Some researchers believe that there are different information processing modes that we can engage in – either divergent or convergent. Divergent thinking allows us to look at the whole picture and integrate different pieces of information. Convergent thinking, by contrast, is for deep, systematic thinking on one topic.

So there are different ways in which we can focus our attention and different modes of information processing. There are also different specific pieces of information we can focus on, or sensory inputs, and of course there are different cognitive tasks to which we can turn our attention. I mention all of this because this is at the core of the difficulty with multitasking – switching among different types and targets of attention and information processing.

We cannot literally perform two cognitive tasks simultaneously. Rather, we switch between them (or among three or more tasks). Every time we switch our attention or our cognitive style, we consume executive function resources (which are finite). Switching may also draw on memory retrieval and other limited brain resources. Some of our limited resources are therefore allocated just to switching tasks, and are not available for the tasks themselves.
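
To make the bookkeeping concrete, here is a toy model in Python. It is only an illustration of the arithmetic of switch costs – not a cognitive simulation – and every number in it is invented:

    # Toy model: every switch between tasks pays a fixed "executive" cost.
    # All numbers are invented for illustration; this is not a cognitive model.
    TASK_A_CHUNKS = 10   # units of work in task A
    TASK_B_CHUNKS = 10   # units of work in task B
    CHUNK_TIME = 1.0     # time to complete one unit of work
    SWITCH_COST = 0.4    # extra time paid at every task switch

    def sequential_time():
        # Finish all of A, then all of B: a single switch.
        return (TASK_A_CHUNKS + TASK_B_CHUNKS) * CHUNK_TIME + SWITCH_COST

    def interleaved_time():
        # Alternate A, B, A, B, ...: a switch after every chunk of work.
        switches = TASK_A_CHUNKS + TASK_B_CHUNKS - 1
        return (TASK_A_CHUNKS + TASK_B_CHUNKS) * CHUNK_TIME + switches * SWITCH_COST

    print(sequential_time())   # 20.4
    print(interleaved_time())  # 27.6 -- the same work takes ~35% longer

The work completed is identical in both cases; the entire difference is overhead spent on switching.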

This phenomenon is referred to as interference – one task interfering with performance on another. This interference can often be bidirectional – both tasks interfere with each other.

This definitely accords with my personal experience. If I am engaged in a mental task I prefer to have minimal distractions. If I am trying to also do something else at the same time (perhaps I am checking e-mail while writing a blog, for example) I waste time just getting back mentally to where I was before I switched tasks. I end up taking more time to complete both tasks than if I did them separately, and the incidence of mistakes goes way up.

To use another computer analogy – sometimes, because I am impatient, if my computer is taking a long time to complete a task I may open another window and work on something else. Making the computer perform two processing-intensive tasks simultaneously takes more time than if both tasks were done alone, because the computer now has to expend resources loading information into memory and accessing the hard drive as it switches resources from one task to the other (assuming you are using a computer that does not have true multitasking ability).
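
You can see this on an actual machine. A rough sketch in Python (timings will vary by machine; in CPython the interpreter lock serializes the two threads, so the interleaved run does the same work plus the overhead of contention and context switching, and is typically no faster and often a bit slower):

    # Two CPU-bound jobs: run back-to-back vs. interleaved in two threads.
    import threading, time

    def busy_work(n=10_000_000):
        total = 0
        for i in range(n):
            total += i
        return total

    start = time.perf_counter()
    busy_work()
    busy_work()
    print("sequential: ", time.perf_counter() - start)

    start = time.perf_counter()
    threads = [threading.Thread(target=busy_work) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("interleaved:", time.perf_counter() - start)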

What I have summarized so far is all pretty basic. Researchers in this area are drilling down deeper into some interesting questions. For example, it is established that when we perform a task that requires us to perceive certain information, unrelated information will cause distraction and reduce performance (through interference). If, however, the task is using up all of our perceptual capacity (so-called high perceptual load), then there won’t be any perceptual capacity left over to notice the distracting events, and interference will paradoxically disappear. In everyday terms – if you are fully engaged in a complex task you may “block out” distractions in your environment. Your brain simply won’t have the capacity to process those distractions.

This is only true, however, for tasks that are perceptually demanding. Tasks that are demanding in other ways, such as being memory intensive or involving stimuli at the edge of perception (things that are very small, for example), do not display this high-load effect – the interference does not disappear.

There also appears to be an ongoing debate about perceptual interference being “early” or “late”. The early interference view is that distractions keep us from processing task-specific information at all – they interfere early in the process of perception. The late interference hypothesis is that the early processing of sensory stimuli is obligatory, and that distractions interfere with later processing of that perceived information. I haven’t read enough to have an opinion about which hypothesis is more likely to be true, and this appears to still be a point of controversy.

At this point you may be saying, “Wait a minute. I can walk and chew gum at the same time, and it doesn’t seem that my gum chewing suffers as a result.” This is due to what is called automaticity. Some tasks, like walking, inherently use few cognitive resources. This is because they utilize more primitive, subconscious, subcortical parts of the brain. Our brain stem (the most primitive part of the brain) performs much of the processing necessary for simple walking. The brain stem also regulates other automatic functions, like breathing, which is why you don’t have to concentrate very hard in order to breathe.

Learned tasks may also become more and more automatic over time. That is part of the benefit of practice. The cerebellum, for example, can learn coordinated motor actions, like shooting baskets, and can take over for our higher cognitive functions.

Automaticity, therefore, does not alter the reality of multitasking, but it does reduce the amount of cognitive resources that a task requires, and so the negative effects of multitasking are diminished. Both chewing gum and walking are tasks with high automaticity, which is the reason for the clichéd insult in the first place.

The new study has to do with a special category of multitasking called media multitasking – watching TV while texting a friend and watching a YouTube video on your iPad, for example. Research has shown consistent differences in information processing between high media multitaskers (HMMs) and low media multitaskers (LMMs). This research shows that HMMs perform worse on attentional tasks than LMMs. Specifically, HMMs tend to maintain a wider attentional scope, even when instructed to focus on something specific, and therefore perform worse on tasks that require a narrow attentional scope.

This makes sense, although there does not appear to be any data that helps separate cause from effect. Do HMMs learn to maintain a wider attentional scope, or are people who tend to have a wider attentional scope drawn to media multitasking? In fact, HMMs ironically tend to be worse at multitasking, because their wide attentional scope makes them more distractible and they suffer greater interference when task switching.

The new study is interesting because it set out to find whether there is any task at which HMMs have an advantage. The researchers separated subjects into groups of HMMs and LMMs and then gave them a visual search task: the subjects had to find a target shape on a computer screen among similar shapes. In one version of the task a sound drew attention to the target shape; in the other version there was no sound. Consistent with previous research, the HMMs did worse than the LMMs on the target-finding task without the sound. They were worse at filtering out task-irrelevant stimuli. However, they performed better than the LMMs when the sound was present.

The thinking here is that HMMs are better at integrating multiple sensory inputs. This may be the first study to find a cognitive advantage for HMMs. It does make a certain superficial sense – media multitasking often involves paying attention to multiple different kinds of sensory input at once.

Again – this study did not address cause and effect, so we are left not knowing whether this potential advantage in integrating multiple sensory inputs is learned by HMMs or causes people to gravitate toward being HMMs. This is also (standard caveats) a single smallish study that needs to be replicated.

Conclusion

The human brain does not appear to have evolved for multitasking. Our brains have finite resources that have to be divided among the tasks in which we engage. Many societies, however, seem to be moving toward greater and greater multitasking demands. More and more people are becoming HMMs, whether they want to or not.

The unintended consequence of our information-heavy multimedia society is that we may be creating a generation of people who maintain a wide scope of attention so as to take in all of the sensory information with which they are bombarded. However, this comes at the expense of being able to focus attention on a single task and filter out distractions. The result is that most cognitive tasks suffer (including, ironically, multitasking itself).

If the results of the current study are reliable and hold up to replication, it seems there may be some advantages to media multitasking, specifically in the ability to integrate different streams of sensory input. The net effect of all this still needs to be sorted out. It seems, however, that if we want to mitigate the effects of heavy media multitasking we need to either reduce it or design tasks that take advantage of the benefits and offset the weaknesses.

I do wonder if this is just a phase we are going through. Will future generations look back at the early 21st century and marvel at the ridiculous media multitasking? It may be that we are in the adolescence of using media technology, and later generations will be more mature and will understand the need to control their media exposure. We need to resist the temptation to extrapolate current trends indefinitely into the future – in other words, we should not assume that future generations will be even heavier multimedia consumers. In addition to our simply becoming more mature in this respect, technology may throw another curve ball into the mix and change the way we consume media in ways we cannot currently predict.

Either way, this is research worth following and a trend worth keeping one eye on – although not necessarily while engaging in other tasks.

15 Responses to “Multitasking – Can You Walk and Chew Gum at the Same Time?”

  1. Janet Camp says:

    I was going to reply, but then the mail pinged and as I was reading that, my texting ping went off and my son wants to come over to help plant some arborvitae, but I need to leave for my allergy shot as soon as the iPad charges so I have something to read while I wait for a bad reaction to occur and now the iTunes has an update so that will have to wait because I have to feed the chickens before I leave.

    This is made worse, of course, by my ADHD so now I have forgotten what I was going to say. Anyway, thanks for this because I have always felt that multi-tasking was a ridiculous concept and only leads to getting not much done in lots more time.

  2. CountryGirl says:

    If you ever need proof, just observe drivers talking on their cell phones. They sit at the light when it turns green, drive slowly, make turns from the wrong lanes, nearly run over pedestrians, etc. Sometimes when you can see their faces clearly you can tell they are mentally miles away and clueless about what is going on around them. I have even watched drivers almost have an accident that should have put fear in them, and their expression didn’t even change, because their expression mirrors their conversation and not the reality of their driving.

  3. MadScientist says:

    The funny thing is I never believed I could do many things at once and I hate interruptions or being asked to do something else while in the middle of a task. I never could understand people who claim otherwise because I observe them and see that they botch things up – especially all that swerving all over the road as they fiddle with the car radio or their phones. What I find really remarkable is how people can believe one thing while the truth is so obvious and contrary to what they believe. Maybe that multitasking thing kills the part of the brain which separates reality from delusion.

  4. noen says:

    Actually, we don’t have any kind of “cores” at all. The brain does not do data processing and consciousness is not the result of computation. (Strong AI can be proven false.)

    Still, one easy way to demonstrate to yourself that you cannot multi-task is to recite the alphabet and then count from 1 to 26, timing how long it takes you to complete those two tasks sequentially. Then try to do them simultaneously by going “1, A, 2, B, 3, C….” and time how long that takes.

    You’ll find that the latter takes you longer than the former, most likely because you have to switch between the two tasks and remember your place every time you switch.

    • Clara Nendleshaw says:

      >The brain does not do data processing and consciousness is not the result of computation. (Strong AI can be proven false.)
      That is false. The brain does a whole lot of data processing – much more, in fact, than the machine sitting on your desk. Furthermore, the theoretical possibility of strong AI has already been proven true. After all, it is theoretically possible (given a lot more computational power than we currently have) to run a physics simulation that mimics the brain.
      However, it has turned out (after the initial enthusiasm in the days of early AI research) that neural networks are actually a very inefficient architecture for many things. Given this, it is likely that when the first self-aware machine is built, it will be based on other architectures, for the very simple reason that they require less processing power and are being developed and understood faster. That is not to say that a neural human simulation will never be built, but it will probably be ready later, and the first thinking machines may think quite differently from us.
      Which touches upon another issue… computers are already much better than us at a lot of tasks which were (before the advent of computers) considered prime examples of intellectual (as opposed to say, physical) exercise. But every time a computer can do something we say ‘oh, but that is not intelligence’ and we move the goalpost.
      Even when we create robots that look just like us, are sociable and intelligent and everything, a lot of people will still not accept them as people and will instantly think less of them should they find out somehow.

  5. Max says:

    Oh yeah? What about the SGU 24 intro?
    http://www.youtube.com/watch?v=cVi_BQv9M0A

    But seriously, does it make sense to switch tasks to “recharge”? Like, when you’re tired of doing one task, switch to another.

    • MadScientist says:

      Sure, but that’s tackling 2 things sequentially and with significant time spent on each. Chefs are fantastic at scheduling tasks and getting smaller tasks done in between the more complex jobs, but they’re not attempting to do things simultaneously (such as singing while driving and dialing a phone).

      • Max says:

        Steve talked about checking e-mail while writing a blog.

        Test pilots are incredible multitaskers. They fly a plane, communicate with the control tower, test new and often flawed equipment, and document the results.

      • MadScientist says:

        I’d like to see how test pilots perform when pulling maneuvers while chatting about nothing in particular – how would they do executing a coordinated turn while talking about something unrelated to the task?

  6. Max says:

    “Either way, this is research worth following and a trend worth keeping one eye on”

    Are you alluding to Google glasses?

  7. Clara Nendleshaw says:

    >Making the computer perform two processing-intensive tasks simultaneously takes more time than if both tasks were done alone,
    That is not necessarily true. Computing is riddled with bottlenecks, and they aren’t the same for different tasks. For instance, a typical task that we would get impatient with is file compression, and this tends to be CPU intensive. A typical task that we would want to switch to while waiting could be reading some web pages, and while that is time consuming for us, for the computer it’s essentially a free action; almost no CPU power is necessary, and the bottlenecks involve only occasional network and disk activity. Performing both tasks at the same time comes at almost no penalty at all, and you’ll finish much earlier than if you were to do them sequentially. (I’m saying this from experience, and my computer is a ten-year-old beast with a single-core processor.) For a computer, context switches are cheap. People are not like computers.
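    A rough sketch of the point in Python (purely illustrative – time.sleep stands in for network and disk waits, and the counting loop stands in for compression; exact timings will vary):

        import threading, time

        def compress():  # stand-in for a CPU-intensive job like file compression
            total = 0
            for i in range(20_000_000):
                total += i

        def browse():  # stand-in for an I/O-bound job: mostly waiting
            for _ in range(5):
                time.sleep(0.5)  # "network/disk" wait; uses almost no CPU

        start = time.perf_counter()
        t = threading.Thread(target=browse)
        t.start()
        compress()  # runs while the other thread just waits
        t.join()
        # Total is close to the longer of the two jobs, not their sum.
        print(time.perf_counter() - start)
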
    As for the HMM people, I can confirm this from my admittedly anecdotal experience. Some of my friends are a few years younger than I am, and they can’t concentrate on anything. If you’re doing something together, they always get the urge to do something else at the same time as well. Furthermore, I’ve noticed that a) they rarely read books, and never ‘brainy’ ones, and b) they often complain that they didn’t understand the plot of a film or television show, or that something was not explained, when the show was abundantly clear. My hypothesis is that they lack the attention span required to really follow the plot; they start doing something else, maybe check a web page or talk with someone, and when they miss an important plot point it’s obviously the film’s fault.
    But I think the change is permanent. They don’t want to hear that people can’t multi-task, and society seems to slowly make it more and more necessary. People get upset when you don’t answer an e-mail within an hour, or if you turn off your phone because you want to concentrate on something. Don’t assume that future generations will change this because the alternative is better. The way society evolves has little to do with what is better; entirely different forces are at work there. And multi-tasking presses a lot of the right buttons, like our innate desire to waste each other’s time, like our tendency to always prioritise urgency over importance, like the sense of control gained by micromanaging each other… I could go on and on. It would be wrong to say it makes us feel good, quite the contrary, but it makes us feel bad in a way that pushes us to do it more, like we always chug our morning coffee even if all it does is return us to our default state of misery.

    • markx says:

      All very true.

      Although, the brain is apparently always ‘rewiring’ itself, so while such ‘short attention span’ behavioural and processing changes may already be ‘hardwired’ into the younger generation (and increasingly also into us, the ‘superseded’ models), it should theoretically, with the correct approach, be reversible to some extent.

    • MadScientist says:

      I never liked computer analogies to the brain. I have never seen articles on the location of the MMU, ALU, GPU, branch predictor, and so on in the human brain.

      • noen says:

        When the telegraph was the new technology people thought the brain was like a telegraph. When the telephone was invented they thought the brain was like a switchboard. Then in the 60’s and 70’s people compared the brain to a computer because they were modern and mysterious. Marvin Minsky actually believed that he was creating minds with his Lisp programs.

        The strong AI claim that the mind *is* software is false.
        The brain does not do data processing, and the cognitivist program – the claim that what the brain does is execute algorithms – is equally mistaken. We just. don’t. know.

      • Max says:

        It’s a neural network.