
Sunday, January 23, 2011

INTUITION


Heuristics provide a means of reasoning, but they are short cuts, using strategies that generally work but are not guaranteed to work. At the same time, they can induce quite high levels of confidence in our decisions, even when we are wrong. To a large extent, heuristic reasoning overlaps considerably with the everyday idea of intuition. Intuitive thought is automatic, often fast and not derived from detailed analysis. It involves a strong feeling of conviction but – like heuristic reasoning – tends to be hard to justify.

Problem-to-model mapping

The mappings from the description of a problem to an automatic conception of that problem can be very strong, and they form the basis of some very strong feelings of the intuitive ‘correctness’ of our understanding. Try the following problem (schematically illustrated in figure 12.5): Suppose there are three cups in front of you and the experimenter puts a coin under one of them. You don’t know which one it is under. Next you point to the cup you think the coin might be under. Rather than tell you whether you are right or wrong, the experimenter removes one of the cups, but not the one you pointed at, and not the one the coin is under (which may be different).
The question is: would you have a greater chance of getting the coin if you stuck to your original choice, or if you shifted? Participants usually believe that they have a 1:3 chance of being correct when they start, and then a 1:2 chance of being right once just the two cups are left. They usually declare that there is no point in changing, because after the cup has been removed they have a 50/50 chance of being correct (and if they changed their choice at this stage they would still only have a 50/50 chance of being correct). This behaviour fits a simple mental model: with N choices, the chance of being correct is 1:N. The situation is mapped onto this simple model, and the result is coherent and compelling.

Despite this, the answer is that you should shift. In the first place, the chance of being correct was 1:3, and the chance of being incorrect was 2:3. But the important point is that the experimenter does not remove a cup at random, and – the key point – she never removes the cup that contains the coin. So the chance of being wrong by sticking to the original decision is still 2:3 (as per the original decision), even though only two cups are now left. Since there is only one other cup remaining, the chance of that cup being the wrong choice is in fact 1:3 (because it is the only other of the original three cups under which the coin could now be located), so it makes sense to change. In fact, the odds in favour of changing are 2:1. This is a very difficult puzzle to think about (e.g. see Granberg & Brown, 1995). The usual mental model people set up does not have the capacity to deal with the correct solution, and yet it is very compelling.

There is an intuitive way of making the point about shifting, though. Suppose there are 100 cups (each numbered), and one has a coin under it. The chance of your being incorrect in your choice is 99:100. You choose a cup – say, number 15. Now the experimenter takes away all of the cups except the one you chose and one other (say number 78), but you know she never takes the one with the coin under it. Do you now think that there are even odds on your having selected the correct one, or would you prefer to shift? Most people think it appropriate to shift under those circumstances.
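The 2:1 advantage of shifting is easy to check with a short simulation. The Python sketch below is not part of the original text; it is a minimal Monte Carlo illustration, and the function name play and its parameters (n_cups, switch, trials) are simply illustrative choices. It plays the game repeatedly under the stated rule that the experimenter removes every cup except your choice and one other, never the cup hiding the coin, and estimates the winning probability for the ‘stick’ and ‘shift’ strategies in both the three-cup and 100-cup versions.

import random

def play(n_cups=3, switch=False, trials=100_000):
    """Estimate the chance of winning the cups game by simulation."""
    wins = 0
    for _ in range(trials):
        coin = random.randrange(n_cups)      # cup hiding the coin
        choice = random.randrange(n_cups)    # your initial pick
        # The experimenter leaves your cup plus one other cup.
        # If you already picked the coin, the other cup is arbitrary;
        # otherwise it must be the cup with the coin, since she never
        # removes that one.
        if choice == coin:
            other = random.choice([c for c in range(n_cups) if c != choice])
        else:
            other = coin
        final = other if switch else choice
        wins += (final == coin)
    return wins / trials

if __name__ == "__main__":
    for n in (3, 100):
        print(f"{n} cups: stick = {play(n, switch=False):.3f}, "
              f"shift = {play(n, switch=True):.3f}")

Running this should give roughly 0.33 for sticking and 0.67 for shifting with three cups, and about 0.01 versus 0.99 with 100 cups, in line with the odds described above.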


The ‘Three Cups Problem’ is a good illustration of a strong mapping between a state of affairs (two cups are left) and a pre-existing mental model (if there are two cups, one with a coin under it, then the odds on choosing the correct one are 50:50). The intuitive belief that goes with these problem-to-model mappings is very strong. Try it on your friends.

The hindsight bias

Just as discourse makes sense if it portrays a series of connected events that match some plausible possible world, so facts about things make sense if they fit a coherent scenario. Also, once we know the facts, it is often easy to find a way of linking them. Nowhere is this clearer than with the hindsight bias, in which people believe that they had a prior insight (‘I knew it all along’) and that an event was therefore not surprising. Hindsight judgements are made ‘after the fact’. In a typical hindsight experiment (Fischhoff, 1977; Slovic & Fischhoff, 1977), participants first answer binary-choice general knowledge questions, such as: Was Aladdin (a) Chinese? (b) Persian? Subsequently, they are presented with the questions again, this time with the correct alternative marked, and are asked to say whether they got each one right on the previous occasion. In general, participants tend to falsely remember getting more right than they actually did, as though the correct answer interferes with their memories. Even if participants are paid for remembering correctly, the effect still occurs, so strong are the intuitions the paradigm generates.

A major consequence of the hindsight bias is that things appear more obvious than they should. Before new experiments are carried out, it is never clear what the outcome will be – otherwise they would not be original experiments. Yet in one interesting study, the same information was presented to two groups of participants concerning an experiment with rats. One group was told that one result occurred, while the other group was told that another occurred. Although the two sets of results were quite different, both groups of participants rated the outcome as obvious (Slovic & Fischhoff, 1977).
