
Saturday, November 13, 2010



Neural Transmissions
We know how electrical signals are generated – but how do they activate neuronal targets? If we compare neurons to electrical circuits, then the answer is clear. Each wire in a circuit connects to the next wire in a circuit and current flows uninterrupted through all of them. But very few neurons are connected together in this way. Instead, communication between neurons usually relies on neurochemical transmission. Where there are points of structural continuity between neurons, current can flow from one neuron to the next, just as it would from one wire to another that has been soldered to it. The electrical potential in one neuron then directly affects the electrical potential of the next, depolarizing the target neuron as though it were no more than an extension of the neuron in which the signal originated. Such connections – or gap junctions – do have some advantages. The signal is passed on at the maximum possible speed, and activity amongst groups of neurons is more easily synchronized, which can have its own advantages. But if all the neurons in our nervous system were interconnected in this way, whatever happened to one neuron would affect all the others and would at the same time be affected by all the others. This would radically reduce the system’s capacity for information processing. If all the neurons end up doing exactly the same thing, you might as well have only one neuron and be done with it. So to get the most out of each neuron, you want them to be able to operate to some extent independently. In particular, you may not want the activity of the target neurons to determine the activity of their own inputs. This problem of how to keep neurons independent and ensure that information flows in the right direction is solved by chemical neurotransmission.

Chemical neurotransmission takes place at the synapse, across a very narrow gap called the synaptic cleft, where the axon of the input neuron most closely approaches its target. Both the axon terminal region and the post-synaptic membrane (i.e. the membrane on the target cell’s side of the synaptic cleft) are highly specialized. The axon’s terminal region contains small vesicles – packages filled with neurotransmitter. The neurotransmitter is released from the axon terminal, crosses the synaptic cleft and binds to specialized receptors in the membrane of the target neuron. For chemical neurotransmission to be fast, the chemical messengers need to be made of rather small molecules. The classical synapse in the brain is where an axon makes contact with the dendrites of its target neuron, although contacts from axon to axon and dendrite to dendrite are also known to occur. When an action potential reaches the axon terminal, some vesicles fuse with the external cell membrane of the neuron, and the neurotransmitter chemical they contain is released. Because the distance across the synapse is very small, the neurotransmitter rapidly diffuses across to the post-synaptic neuron, where it binds to its receptor sites. Stimulation of different receptor subtypes produces different physiological effects. For example, the classical acetylcholine receptor opens sodium ion channels, leading to a Na+ influx that depolarizes its target cells, as described
earlier. Once a neurotransmitter has been released, two more events must occur. First, more of the transmitter chemical is synthesized in the cell body and transported along the axon to the terminal
region, ready for the next output signal. Second, the effects of the neurotransmitter in the target cell must be turned off again. Otherwise, a single input would depolarize its target forever, and no more information could be passed that way. A muscle, for example, would be left in a permanent state of contraction, whether it received a single impulse or a whole series of them. There are several mechanisms for deactivating neurotransmitters. The molecule might be degraded into a form that has no physiological effects. In the case of acetylcholine, this is done by the action of the enzyme cholinesterase, which completes the job within a millisecond. It can also be done by reabsorbing the neurotransmitter back into the axon terminal that released it, or by absorbing the neurotransmitter into an adjacent glial cell.

You know by now that neurons’ output signals are all-or-nothing action potentials, and that a neuron must be depolarized beyond its threshold to generate an action potential. How are the neuron’s many different inputs combined? Input axons typically connect to target dendrites. The branching dendritic trees of some neurons may have as many as one hundred thousand synapses on them. At any moment, each of those synapses may be either active or inactive. Each active excitatory input will to some extent depolarize the target cell membrane around its synapse. But unless the target cell is depolarized all the way to its threshold potential, so that an action potential is generated, that individual excitatory input will not lead to any output from its target. The more active inputs there are, the more sodium ions flow into the target cell, and the more likely it becomes that threshold potential will be reached. So the eventual activity of the cell depends on the overall pattern of activity of its many inputs. This means that, although neurons have all-or-nothing outputs, those outputs cannot control their targets in an all-or-nothing way. The effect on the target depends on the signals coming in at the same time from all its many inputs. In this way, neurons are effectively integrating their own inputs.

Neurotransmission mechanisms are open to disruption. We can manipulate the receptors, or the transmitter release system, or the transmitter inactivation mechanisms. By designing drugs that affect the system in these ways, we can alter brain function. Curare is an Amazonian plant product. It paralyses movement by binding to the acetylcholine receptor on the muscles, and prevents acetylcholine released from motor nerves from reaching its intended target. Unlike acetylcholine, curare does not depolarize muscles, so the motor nerves can no longer cause muscle contractions. This loss of movement includes breathing. This is an example of an antagonist. Antagonists block the effects of neurotransmitters, often by occupying the transmitter’s receptor site. Curare was first used by South American Indians as a poison for hunting, but its synthetic derivatives are nowadays widely used in surgery. It can be very valuable for the surgeon to be able to control muscle movement and maintain respiration through artificial ventilation. Another way to produce essentially the same effects would be to block acetylcholine release. Botulinum toxin (from the bacterium Clostridium botulinum, which sometimes grows in preserved foods that have been imperfectly sterilized) has this effect. It is one of the most lethal poisons known. You could kill off the entire human population of close to six billion people with about
28 grams of toxin. Nowadays it is sometimes used in cosmetic surgery to reduce brow wrinkles by paralysing the muscles under the skin. Neurotransmitter antagonists also have an important role in
psychiatry. The hallucinations and delusions in schizophrenia are often treated using dopamine receptor antagonists like haloperidol. Unfortunately, prolonged use of these drugs sometimes induces movement disorders as an unwanted side effect, by blocking the action of dopamine in the
nigrostriatal pathway. This is the pathway damaged in patients with Parkinson’s disease.

Neurotransmitter agonists are chemicals that have the same kind of action as the neurotransmitter
itself. If their action is irreversible, or much more powerful than the natural compound whose place
they usurp, then they are just as dangerous as powerful antagonists. They can equally disrupt function, by keeping signals permanently switched on. This can be done in several ways. Nicotine is a very widely used acetylcholine receptor agonist. It acts both centrally and peripherally. The lethal dose of nicotine for a human adult is 40–60 mg. There can be this much nicotine in just two or three
cigarettes, but smoking leads to much lower nicotine absorption than eating.

There are also indirect agonists, which work by inducing greater than normal neurotransmitter release, or preventing re-uptake. Amphetamine is a somewhat unselective, indirect dopamine agonist, which effectively increases dopamine release. Amphetamine abuse can lead to hallucinations
and delusions – essentially the opposite of the effect of haloperidol, described above. Direct neurotransmitter agonists also have an important role in neurology. Parkinson’s disease results from loss of dopamine neurons in the nigrostriatal system. One way to help restore normal movement in these patients is to boost dopamine function in the nigrostriatal pathway. This can be done by giving
apomorphine – a direct dopamine agonist, which simulates the effects of the missing dopamine. Or we can give a dopamine precursor, L-DOPA, which helps the surviving neurons to synthesize more dopamine. Too much L-DOPA can lead to terrible hallucinations. So clinical manipulations of dopamine activity need to manage some tricky balancing acts. A third way to increase neurotransmitters’ effects in the synapse is to disrupt deactivation mechanisms. So, although neurotransmitter is released perfectly normally, its period of effective action is abnormally prolonged. The result is similar to the effect of a direct neurotransmitter agonist. Cholinesterase inhibitors, which stop cholinesterase from performing its usual job of breaking down acetylcholine into inactive fragments, work in just this way. They are found in a number of plants and are widely used as insecticides. (They also form the basis of some of the most deadly nerve gases.) So the direct
acetylcholine agonist, nicotine, and the cholinesterase inhibitors are both synthesized by plants – presumably because they both make the plants toxic by overactivating the cholinergic systems of
animals that consume them. But they achieve this effect by two, quite different, biochemical routes.
These kinds of mechanisms can offer therapeutic benefits as well. Psychiatrists have for many years used monoamine oxidase inhibitors to treat depression. These drugs neutralize the enzymes that normally deactivate monoamine transmitters (noradrenaline, dopamine and serotonin). This increases the effectiveness of monoamine neurotransmission, leading to clear clinical improvements
after a few weeks of treatment. Although these drugs are still in clinical use, it is now more usual to treat depression with newer compounds that use a rather different mechanism aimed at prolonging the actions of monoamine neurotransmitters. Perhaps the best known is Prozac (fluoxetine). This is a specific serotonin re-uptake inhibitor (or SSRI). It reduces the re-uptake of a particular monoamine, serotonin, into the neuron from which it has been released. Once again, this means that whenever neurotransmitter is released, its effects on its targets are longer-lasting.

So far we have considered only excitatory neurotransmission – how one cell induces an action potential in a target. But there is much more to chemical neurotransmission than signal amplification and one-way information flow. There are also inhibitory neurotransmitters, which reduce the excitability of a cell. If a cell has a constant but low level of incoming stimulation that keeps it firing at a regular rate, an inhibitory transmitter can reduce that firing. And if a target cell is quiescent, an inhibitory neurotransmitter can prevent it from being excited. The classic inhibitory neurotransmitter is GABA (gamma-aminobutyric acid). GABA works by increasing chloride ion flow into the interior of the cell. Since chloride ions are negatively charged, they increase the cell’s negativity. This is called hyperpolarization. It is harder to excite an action potential in a hyperpolarized cell. Just as some drugs are designed to disrupt excitation, others work on inhibitory transmission to modify neuronal activity. Enhancing inhibition has much the same effect as disrupting excitation: both reduce neuronal activity. Yet because these two approaches depend on different chemical neurotransmitters, they can have rather different side effects. Among the various subtypes of GABA receptors is GABA-A. This is a particularly interesting one, because it includes the site of action of the very widely used benzodiazepine minor tranquillizers – the class to which Librium and Valium belong. These drugs increase chloride flow into the cell. Alcohol and barbiturates act on other components of the GABA-A receptor to have much the same effect, so in some ways they also act like
inhibitory neurotransmitters. It is an easy mistake to think of inhibitory neurotransmitters as simply inhibiting thought or action. But in a complex network of neurons, in which inhibitory projections may inhibit other inhibitory cells, it is hard to predict the eventual outcome of inhibiting neuronal activity. Nonetheless we can generalize about the effects of blocking inhibitory GABA transmission. Drugs that do this tend to produce epileptic seizures. Equally, drugs that enhance inhibitory transmission can be used to prevent epilepsy. If a patient has been treated for a long time with drugs that increase GABA transmission and that treatment is suddenly stopped, there is a risk that the patient may suffer from epileptic convulsions. So at some very general level, inhibitory transmitters do damp down the excitability of the brain. Some of the most horrific poisons, like strychnine, produce their effects by preventing inhibitory transmission (in this case at glycine-dependent inhibitory interneurons in the spinal cord).

Friday, November 12, 2010


Neural Networks (Interconnections)
Until the nineteenth century, we really had no idea what neurons looked like. Early workers were only able to stain the neurons’ cell bodies: until the axons and dendrites could be seen, neurons looked not so very different from liver or muscle cells. This changed in 1873, when a cell staining method was developed (largely by accident) that enabled the structure of a single neuron to be seen clearly through a microscope. Camillo Golgi’s staining method was a bit hit and miss: sometimes no cells at all might be stained; at other times all the cells in a particular section of brain might be so densely stained that the whole section looked black and individual cells could not be distinguished at all. But sometimes just a few cells would be darkly stained, and their morphology could then be established. It soon became clear that there are many different kinds of neurons. The great brain anatomists like Ramón y Cajal ([1892] 1968) and Lorente de Nó (1934) used these kinds of techniques to examine, describe and draw the structures of the brain at a level of detail that would previously have been inconceivable. Nowadays you can inject a dye directly into a cell, so that it alone is filled; you can then visualize the neuron in its entirety. Where such studies are combined with functional studies recording the activity of that cell and the other neurons most intimately connected with it, the relationship between form and function can be established with great rigour. We also discover where each neuron’s incoming connections originate, and where their own outputs go, by injecting anatomical tracers. These are substances that are absorbed by cell bodies, or by axon terminals, and then transported through the cell. This, coupled with electrophysiological studies in which we stimulate activity in one area and determine its effects in others, enables us to identify how neurons interconnect and interact. Neuronal interaction is what the brain is all about.

[Santiago Ramón y Cajal (1852–1934) was born in the Spanish village of Petilla. His father, at that time the village surgeon but subsequently the Professor of Dissection at the University of Zaragoza, found him a difficult teenager, and apprenticed him first to a shoemaker and later to a barber. The young Ramón y Cajal himself wished to become an artist, but eventually went to medical school, graduating in 1873. He entered academic life in 1875, but his great life’s work began when, in 1887, he was shown brain sections stained by Camillo Golgi’s silver method. Ramón y Cajal was captivated. Thereafter he studied and drew the nervous system in great detail. His observations led him to propose that the nervous system is made up of vast numbers of separate nerve cells: the ‘neuron doctrine’. He shared the Nobel Prize with Golgi in 1906.]


Electrical Activity
Neurons are integrators. They can have a vast number of different inputs, but what they produce is a single output signal, which they transmit to their own targets. How is this done? The key lies in the electrical potentials they generate. There is a small voltage difference between the inside and the outside of the neuron. The inputs are tiny amounts of chemical neurotransmitters. The target cell has specialized receptor sites, which respond to particular neurotransmitters by subtly changing the cell’s electrical potential for a short time. If enough signals come in together, then the total change can become big enough for the target cell to ‘fire’ – or to transmit an output signal along its axon to modify the activity in its own target cells. So our first task is to find out how neurons produce electric potentials. Then we can see how these potentials change in response to inputs. Once we understand that, we can look at the way this same electrical potential system produces a fast and reliable output from the cell.

Resting Potential
The outside of a neuron is made of a highly specialized membrane. Within the neuron, much of the chemical machinery is made up of large, negatively charged protein molecules, which are too big to leak out through the membrane. Outside the membrane, in the gaps between neurons, lies the extra-cellular space, which contains fluid with electrically charged ions dissolved in it.
What does this mean? Well, common salt, for example, also called sodium chloride, is a compound of two elements – sodium and chlorine (giving a chemical formula of NaCl). When it is dissolved in water, it dissociates into a positively charged sodium ion (Na+) and a negatively charged chloride ion (Cl−). Potassium chloride also dissociates into its ionic constituents – potassium (K+) and Cl−.
Mobile, positive ions are electrically attracted to the negatively charged proteins held within the neurons, but although the neuronal membrane lets potassium ions through, it is relatively impermeable to sodium ions. So potassium ions are pulled into the cell and held there by the electrical charge on the intracellular proteins. As potassium levels within the cell rise above those outside it, this inward flow of charged ions reduces, because there is now a concentration gradient tending to pull potassium out of the neuron. Equilibrium is reached (with the inside of the neuron more negative than the outside) when the opposing pulls of the concentration gradient and of the electrical gradient balance each other. There is also an active pumping of ions across the neuronal
membrane: for example, some sodium leaks into the neuron and is actively pumped out. These processes give neurons their characteristic electrical charge – the resting potential. Some ions have their own channels that let them pass through the cell membrane. These can be opened or closed, selectively altering membrane permeability. Some pumps move ions inwards and others move them outwards. Neurotransmitters use these different ion channels to manipulate the cell’s membrane potential – a complicated balancing act. These activities consume a lot of energy. Your brain is only
2.5 per cent of your body weight, but uses some 20 per cent of your resting energy. This increases when the nervous system is actively processing signals. When a region increases its energy consumption, its blood supply needs to increase as well. This can be detected by functional neuro-imaging systems to help us identify which parts of the brain are activated during particular kinds
of mental processing.
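The balance point described above, where the pull of the concentration gradient exactly cancels the pull of the electrical gradient, is conventionally written as the Nernst equation for potassium. The equation and the concentration figures below are standard textbook values added here for illustration; they do not appear in the original passage:

    E_K = \frac{RT}{zF} \ln \frac{[K^+]_{out}}{[K^+]_{in}}

With roughly 140 mM potassium inside the cell and about 5 mM outside, at body temperature this works out at around −90 mV. That is the same order of magnitude as, though somewhat more negative than, the resting potential of about minus 70 millivolts mentioned in the next section, because the real resting potential also reflects the sodium leak and the active pumping just described.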
Action Potential
When a neuron is activated by its input, the potential across the cell membrane changes. This is because when a neurotransmitter binds to its receptor, it can open channels that let particular ions
go through the membrane. Say we open a sodium channel. Positive Na+ ions will flow through the membrane into the cell for two reasons. First, the resting potential keeps the inside of the cell negatively charged, so positive ions are attracted in. Second, there is an attracting concentration gradient for sodium, because there are many more Na+ ions outside the cell than inside it. The resulting influx of positive ions makes the inside of the cell less negative, reducing the resting potential. This is called depolarizing the cell. If the cell is depolarized from its resting potential of around minus 70 millivolts
to its threshold potential of about minus 55 millivolts, an abrupt change is seen. This is called an action potential. It has been studied with great precision by controlling the membrane potential directly using electrical stimulation. The potential across the cell membrane suddenly flips radically from the normal state, in which the inside is negative relative to the outside, to a transient state in which, for a millisecond or so, the inside becomes positive relative to the outside. The normal direction of polarization is rapidly restored once the stimulation stops. In fact, the neuron becomes hyperpolarized for a few milliseconds, which means that its inside becomes even more negatively charged than usual. During this time – the refractory period – the hyperpolarized neuron is less readily able to respond to further input. So a single, relatively small, stimulation pulse can produce a radical change in the neuron’s electrical state. How does this happen? The crucial mechanism lies in the way that the different ion channels are controlled. While some are controlled by neurotransmitter receptors, others respond to the electrical potential across the cell membrane. When the cell has been depolarized all the way to the threshold potential, additional sodium channels suddenly open. More sodium ions pour into the cell through these channels, because there is still both a concentration gradient and an electrical gradient to attract them. This drives the depolarization further, leading to further opening of sodium channels. So depolarization proceeds very rapidly. If we are to restore the original resting potential, ready for the next action potential, we have to reverse this current flow as quickly as possible. This is achieved by an outflow of positively charged potassium ions from the cell, combined with a process that deactivates sodium flow. Although the full picture is much more complicated than this, and involves many more different
ions and channel types, an understanding of the sodium and potassium currents conveys its essence.
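The way many small depolarizing inputs sum towards threshold can be illustrated with a toy ‘leaky integrate-and-fire’ simulation in Python. This is a deliberate simplification rather than the full sodium and potassium mechanism just described; the input size, time constant and reset value are illustrative assumptions, and only the −70 mV resting and −55 mV threshold figures come from the text.

    import random

    # Illustrative constants (assumed, not measured); potentials in millivolts, times in milliseconds.
    REST = -70.0        # resting potential, as in the text
    THRESHOLD = -55.0   # threshold potential, as in the text
    RESET = -75.0       # brief hyperpolarization after each spike (a crude refractory period)
    TAU = 10.0          # membrane time constant: how quickly the potential leaks back to rest
    DT = 1.0            # simulation time step

    def count_spikes(input_prob, n_steps=200, epsp_mv=2.0):
        """Count output spikes when excitatory inputs of epsp_mv millivolts
        arrive with probability input_prob on each time step."""
        v = REST
        spikes = 0
        for _ in range(n_steps):
            v += (REST - v) * (DT / TAU)       # leak back towards rest
            if random.random() < input_prob:   # an input arrives and depolarizes the cell a little
                v += epsp_mv
            if v >= THRESHOLD:                 # all-or-nothing output once threshold is reached
                spikes += 1
                v = RESET
        return spikes

    random.seed(1)
    for prob in (0.1, 0.3, 0.6, 0.9):
        print(f"input probability {prob:.1f}: {count_spikes(prob)} spikes in 200 ms")

Running it typically shows the number of output spikes rising as the input probability rises, even though each individual spike is all-or-nothing; this is the graded effect discussed in the next paragraph.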
Once an action potential has been generated, it will rapidly travel along the cell’s axon, changing membrane permeability as it goes. This active, self-regenerating method of spreading makes the classical action potential a very effective and reliable way to transmit information. If the neurons’ signals were conducted passively, in the way that heat is conducted along a wire, the signals would get weaker and weaker the further they had to go. If you use a long enough poker you can safely stir the red hot embers of a fire without your hand getting burnt. The hotter the fire, the longer the poker you need to use. But if heat were propagated actively, like an action potential, you would have to wear asbestos gloves, however long the poker. The action potential is the same size whether the depolarizing stimulus is only just strong enough to reach threshold or depolarizes well beyond threshold. This all-or-nothing property often leads people to liken action potentials to the digital signals in a computer. But this vastly underestimates the complexity of the nervous system and the potential subtlety of its responses. As we shall see, the propagation of the action potential may be all or nothing, but its effect can be very subtly graded.
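The poker analogy can be given a quantitative flavour with the standard cable-theory expression for purely passive spread. Again, this is an illustrative equation, not something stated in the original text:

    V(x) = V_0 \, e^{-x/\lambda}

A passively conducted signal of initial size V_0 fades exponentially with distance x, at a rate set by the length constant λ (typically a fraction of a millimetre to a few millimetres in real fibres), whereas the actively regenerated action potential arrives at full size however long the axon.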
[Lord Adrian (1889–1977) was a physiologist who initiated single neuron recording methods. He was the Nobel Prize winner in 1932, shared with Sir Charles Sherrington. Adrian pioneered the use of then state-of-the-art electronics to amplify the signals he recorded and display them on an oscilloscope. This was a crucial technological advance that allowed him to monitor activity in single nerve fibres. One of his key findings was that intensity of sensation was related to the frequency of the all-or-nothing action potentials of constant size – so-called ‘frequency coding’, as opposed to ‘intensity coding’. Adrian also studied the sensory homunculus in different species. He reported that in humans and monkeys both the face and the hand have large areas of the sensory cortex devoted to them, whereas in pigs, the greater part of sensory cortex dealing with touch is allocated to the snout. So the richness of sensory representation can be related to the typical needs and activities of the species concerned. Subsequently, Adrian moved from work on the peripheral nervous system to study the electrical activity of the brain itself, opening up new fields of investigation in the study of
epilepsy and other types of brain injury.]

Thursday, November 11, 2010


Brain functions


The surface of the underside of the brain (looking up the string) is much smoother. If we work upwards from where the spinal cord joins the brain, at the brain stem, the first structure is the medulla. This is not just a relay station for incoming and outgoing communications; it also contains nuclei that control basic functions like breathing and heart rate. The brain stem also includes the pons. A variety of motor system connections are routed through the pons, and it includes some of the nuclei that seem to be important in sleep and arousal. Next we reach the midbrain (or mesencephalon). There are important early sensory relays here, particularly for the auditory system. The substantia nigra, which is the critical area lost in Parkinson’s disease patients, is also in this region. The midbrain merges with the thalamus, under which is the hypothalamus (hypo- means ‘under’). The thalamus contains major sensory relays to and from the cortex, but should not be thought of as an exclusively sensory-processing structure; for example, specific nuclei of the thalamus are involved in important functional capacities such as memory. The hypothalamus has major roles in motivation. Hypothalamic damage in one location can lead to gross over-eating (hyperphagia) and obesity, while damage at a different hypothalamic site can result in potentially fatal under-eating. The hypothalamus controls aspects of hormonal function: it can directly control hormone release from the pituitary gland, which lies just beneath the hypothalamus outside the brain itself. Pituitary hormones can themselves control hormone release from other endocrine glands, like the adrenal gland next to the kidneys, whose own hormones can in turn modify both peripheral function and brain function. So the brain and the endocrine system interact.

Further up still, we reach some of the crucial motor system nuclei in the basal ganglia. We also encounter limbic structures, like the hippocampus – crucial for normal memory function – and the amygdala, which appears to play a key role in aspects of emotion, especially fear. Animals with amygdalar damage are less frightened than normal animals by signals of impending shock (LeDoux, 1992). Humans with amygdalar damage cannot recognize facial expressions of emotion, particularly fear and anger (Young et al., 1995), or angry or fearful tones of voice (Scott et al., 1997).


Beyond the hippocampus, which is the simplest example of a cortical layered structure we come to, there are various transitional cortical regions with increasingly complex layered structures, before we reach the neocortex, the most complex of them all. The neocortex has specialized motor areas, sensory processing areas and more general purpose association areas. Within each area there may be further, more specialized, modules. In the visual system, for example, separate modules for colour, form and motion speed up visual processing by handling all these attributes in parallel. This high level of specialization means that damage restricted to particular cortical regions can have very precise effects. For example, people with a condition called prosopagnosia are unable to recognize particular people’s faces, despite other visual abilities remaining quite normal. Sometimes people think of the cortex as the most important part of the brain because it evolved later than other parts, and because of its complexity and its roles in high-level processing and human faculties. But a good deal depends on what you mean by ‘important’. If you ask neuroscientists whether they would prefer to lose a cubic centimetre of cortex or a cubic centimetre of some subcortical region, they would probably choose to give up some cortex. This is because damage to the subcortex tends to be more profoundly disabling. For example, the loss of neurons in the small subcortical region called the substantia nigra results in Parkinson’s disease, which eventually causes almost complete motor disability. The functions of the different areas of the cortex have, until recently, been determined either by experimental studies of monkeys (which have a much more highly developed neocortex than animals like rats) or by neuropsychological studies of the effects of brain damage in clinical patients. The development of functional neuro-imaging methods has given us a new way to study the roles of different brain areas in cognition in healthy humans by allowing us to observe which brain regions are active. Imagine an animal with a simple brain made up of a big blob of neurons. How could such a brain develop, allowing space for extra neurons? It cannot just grow larger. The bigger the blob, the harder it is to sort out all the input and output axons for the cells in the middle. Somehow, all those connections have to find their way through gaps between all the new neurons on the outside of the blob. The alternative solution is to arrange cell bodies in layers. The most complicated structure, the neocortex, is actually made up of six layers of cells (see figure 3.12). This allows all the inputs and outputs to run neatly along in a layer of their own. Fibres divert upwards to contact other cell bodies as needed, facilitated by the cortex being organized into columns. Further development of the brain becomes much easier with this arrangement. You can simply add more columns, or ‘bolt on’ more modules, rather like plugging in a new component on your computer. You would not need to reorganize any pre-existing connections. You can also place cells that need to interact alongside each other, forming cortical modules that minimize inter-cell communication distances. This speeds up communications and saves space. Much the same arrangement is used for laying out printed circuit boards. The wrinkles in the brain, which make it look like an outsized walnut, are all folds in the cortex. 
A valley, where the cortex is folded inwards, is called a sulcus, while a ridge is called a gyrus. This development enables the maximum cortical area to fit into a volume with minimal outside skull dimensions, just like crumpling up a newspaper to fit it into a small wastepaper basket. The volume and surface area of the newspaper are actually unchanged, but it fits into a neater space.

The neurons that make up the human brain are essentially the same as those making up the brains of other animals. So how do we explain our extraordinary capacity for complex, abstract thought? If you were to flatten out a human cortex, it would cover about four pages of A4 paper, a chimpanzee’s would cover a single sheet, and a rat’s would cover little more than a postage stamp. So we have big brains . . . but size is not everything. In mammals, brain size correlates with body size: bigger animals have bigger brains. But this does not make large animals more intelligent than smaller ones. Adaptable, omnivorous animals like rats are a favourite experimental subject for psychologists partly because they so readily learn new behaviours. Their opportunist lifestyle may well lead to greater behavioural flexibility, compared to larger but more specialized animals like the strictly herbivorous rabbit, whose food keeps still and does not need to be outwitted.

Nonetheless, there is something special about human brain size. Our brains are disproportionately large for our body weight, compared to our primate relatives (Jerison, 1985). This overdevelopment is especially marked in the most general purpose regions of the cortex, the association areas (though the cerebellum is disproportionately enlarged, too). It is possible that at least some of this enlargement provides extra processing facilities that support the human capacity for abstract thought. The two halves of our brains have different cognitive processing specialities. In most humans, language processing takes place in the left hemisphere. Damage on this side of the brain can leave people unable to speak (aphasic) [aphasia loss of speech ability], though quite capable of understanding spoken language. Paul Broca (1861) was the first to describe a condition known as Broca’s aphasia and to identify the key area of damage responsible for it. Just a few years later, Wernicke (1874) reported that damage at a different point of the language system in the left hemisphere leaves people with a different kind of speech problem, Wernicke’s aphasia. These patients speak perfectly fluently, but what they say makes no sense, and they do not appear to understand what is said to them.

Other neuropsychological conditions are typically associated with right rather than left hemisphere damage. For example, severe hemi-neglect often results from damage to the right parietal lobe. Patients with hemi-neglect may ignore the entire left half of the world, so that they eat only the food from the right side of their plates, shave only the right side of their face, and, when dressing, pull their trousers on to their right leg only. Some patients will even try to throw their left leg out of bed, since they do not consider it as being their own! Neglect of the right-hand side of the world, resulting from left hemisphere damage, is much rarer. The underlying reasons for this are not yet certain, but it suggests that the right hemisphere might be able to support bilateral spatial attentional processes, whereas the left hemisphere (perhaps because of its own specialized allocation to language processing) can only support unilateral spatial attention. This would mean that when the left hemisphere is damaged, the right takes over processes that would normally depend on the left hemisphere. But when the right hemisphere is damaged, the left presumably continues to support its usual processing of events in the right half of the world, but cannot take over processing of events on the left. The two hemispheres are joined together below the surface by the corpus callosum, a massive fibre pathway. Split brain patients have had their corpus callosum cut, for example to stop the spread of epileptic seizures from one side of the brain to the other. This disconnection can have startling consequences. If a split brain patient sees a word briefly flashed up so that it falls on the part of the eye that is connected to the right hemisphere, then the patient cannot read out the word. This is because the visual information has not reached the left hemisphere, and so cannot be processed properly as language. But it is fascinating to see that if the word is the name of an object, the patient can use their left hand (which is connected to the right hemisphere) to select that object from among a variety of others.



[John Hughlings Jackson (1835–1911) was a co-founder of the famous journal Brain. He is sometimes referred to as the father of British neurology. His wife suffered from epilepsy, and perhaps his most important inferences about brain function derived from his observations of the consistency of the patterns of epileptic seizures. Jackson saw that in at least some patients the first signs of an impending seizure were twitchings of particular muscles. In the case of his wife, the seizures would start at one of her hands, then extend to include the arm, then the shoulder and then her face, eventually including her leg (all on the same side) after which the seizure would end. Jackson deduced that this kind of pattern could occur if the epileptic seizure was always initiated at the same point in the brain, from which it spread to related areas, assuming that each motor region of the brain had its own specialized function. He further suggested that the seizures were caused by electrical discharges in the brain, and that the condition might be treated by surgically removing the epileptic focus. In doing this he played an important role in the advance of neurosurgery.]


NERVOUS SYSTEM
We can study behaviour and thought without necessarily knowing anything at all about the nervous system – or how the behaviour is generated. Cognitive psychologists studying slips of the tongue, for example, may not care whether the brain is made up of rubber bands or of neurons. In fact, if you simply count the cells in the brain, neurons are very much a minority group; most of them are non-neuronal, glial cells. [glial cells non-neuronal cells in the brain that provide ‘support’ for the neurons] But rubber bands are rarer still in there. Nevertheless, our interactions with the world around us depend crucially on the activity of the nervous system. Without it, we not only have no senses and no ability to move; we also have no thoughts, no memories and no emotions. These are the very essence of our selves, yet they can be disastrously changed, and even completely erased, by disorders of the nervous system. We can very effectively treat some psychological disorders simply by using words to change the ways in which patients think. But the only generally available palliatives for other conditions, like Parkinson’s disease or schizophrenia, are drug treatments. And some conditions, such as Alzheimer’s disease, are currently untreatable. The more we learn about how the nervous system operates, the better we can understand how it can go wrong. That, in turn, will increase our chances of finding out how to prevent, or even reverse, psychological disorders. If we do not understand the way that the nervous system works, then we, like our forebears throughout humankind’s history, are confined to a passive role as observers and documenters of the effects of nervous dysfunction.

The nervous system has both central and peripheral components. The central part includes the brain and the spinal cord; the peripheral part includes the nerves through which the central nervous system interacts with the rest of the body. ‘Nerve’ is a familiar word and is used in various ways in ordinary conversation. But in psychology we use it specifically to mean a cord of neuronal axons bundled together passing through the human body. We have probably all had the experience of hitting our ‘funny bone’ – the discomfort is due to the compression of the ulnar nerve. Nerves are typically sensory (afferent) – carrying information to the central nervous system from sensory neurons whose cell bodies are located in the periphery of the body – or motor (efferent) – extending out from the central nervous system to the organs and regulating muscular movement or glandular secretion.

The basic unit of the whole of the nervous system is the neuron. Neurons operate alongside various other types of cells, whose activity can be essential to normal neuronal function. Even in the brain, only about 10 per cent of the cells are neurons. Most are glial cells, which fall into several different classes, each with its own function. There are astrocytes, oligodendrocytes (in the central nervous system), microglia and ependymal cells. (The word ending -cyte means ‘cell’.) Glial cells were once thought of as the structural glue (that is what glia means in Greek) that holds the neurons in place, but their roles are proving to be far more complex. For example, astrocytes, which are the most common class, not only provide physical support to the neurons, but also help to regulate the chemical content of the fluid that surrounds the neurons. Astrocytes wrap closely round some types of synapses (the junctions between neurons) and help to remove glutamate (a neurotransmitter substance) from the synaptic cleft (the gap between neurons meeting at the synapse) via an active pumping system. If the pump fails, the system can become reversed, so that excess glutamate is released back into the synapse, which can be fatal to nearby neurons.

Neurons come in many shapes – or morphologies – which give them their different functions. For example, projection neurons have fibres that connect them to other parts of the nervous system. Even within this category, there are many different morphologies, but all projection neurons share some basic similarities. You can think of the neuron as having three essential components. The heart of the neuron is the cell body, where the cell’s metabolic activities take place. Input from other neurons typically comes via the dendrites. These can be a relatively simple tuft of fine, fibre-like extensions from the cell body, or highly complex branches like the twigs and leaves of a tree. The output of the neuron is transmitted via its axon to the dendrites of other neurons, or other targets such as muscles. Axons can be very long, reaching right down the spinal cord, or so short that it is difficult to tell them apart from the dendrites. Nerve cells with such short axons are called interneurons rather than projection neurons, because all their connections are local. Some neurons have just a single axon, although it may still make contact with a number of different target cells by branching out towards its end. Other cells have axons that are split into quite separate axon collaterals, each of which may go to an entirely different target structure.

PERIPHERAL NERVOUS SYSTEM
Peripheral nerves are just bundles of axons. They appear as white matter, because most mammalian axons have a white myelin sheath around them, which helps to speed up nerve conduction. Although many neurons have cell bodies located in the central nervous system, there are clusters of cell bodies in the peripheral nervous system too. The simplest type of cluster is called a ganglion (plural, ganglia). The sensory division of the peripheral system deals with inputs from receptors sensitive to pressure on your skin, for example. The motor division deals with outputs, or signals, causing muscles to contract or relax. Together, these divisions make up the somatic nervous system, which enables you to interact with your external environment. The autonomic nervous system is the manager of your internal environment. It controls activity in structures like your heart and your gut and some endocrine glands (which secrete regulatory hormones), and it governs sweating and the distribution of blood flow. The autonomic nervous system is itself divided into the sympathetic and parasympathetic nervous systems, which have essentially opposite functions. The sympathetic system prepares you for emergency action. It redirects blood from your skin and your gut to your muscles, raises heart rate, dilates air passages to your lungs and increases sweating. These changes help you to run faster or fight more vigorously, and explain why people sometimes go white when they are really angry. The parasympathetic system calms you down: it slows heart rate, increases blood flow to the gut to facilitate digestion, and so on. Your bodily state in part reflects the balance between these two systems.

CENTRAL NERVOUS SYSTEM
The brain sits at the top of the spinal cord like a knotted end on a string or a walnut on a stick, with a smaller knot at the back (the cerebellum – Latin for ‘little brain’) which plays a key role in making movement smooth and efficient. The spinal cord, made up of both axons and ganglia, gives us some essential reflexes. You can withdraw your hand from a fire before the information from your fingers has reached your brain: the spinal circuitry is complex enough to go it alone. It is also complex enough to contribute to other motor sequences, like those involved in walking. Mammalian brains are made in two halves – or hemispheres –again like a walnut. The brain surface as viewed from the side or above is deeply wrinkled. This outer layer is the cortex (plural cortices), which comes from the Latin word meaning ‘bark of a tree’. What this view hides are the numerous subcortical structures. These process sensory input and relay it to appropriate areas of the cortex, or process motor output before relaying it to the spinal cord and from there to the peripheral nervous system.

But the brain should not be thought of as a sort of cognitive sandwich, with sensory information as the input, motor responses as the output, and cognition as the filling. Brain function is much more highly integrated than that. The motor and sensory systems are interactive, and each can directly modify activity in the other, without having to go through a cognitive intermediary. A cluster of cell bodies in the brain might form a blob, or nucleus (plural, nuclei), or be organized into an extended layer like the cortex. These nuclei are often connected by clusters of axons, called fibre bundles. If you cut into a nucleus, or into the cortex, the exposed surface does not appear white, but grey. The term grey matter, sometimes used colloquially, refers to areas that are composed
primarily of cell bodies rather than axons.


ETHICS IN RESEARCH
Psychology is a science, and science is part of society. It follows that psychological scientists must work within limits imposed by society, including the standards that society sets for behaviour. Psychological researchers are bound by research ethics – a code, or set of rules, that tells them which sorts of behaviour are acceptable when conducting research. These rules relate primarily to avoiding the risk of harm to research participants. One important feature of ethical research is informed consent. [informed consent the ethical principle that research participants should be told enough about a piece of research to be able to decide whether they wish to participate] The participants (or their guardians if they are children) must have the research procedures explained to them so that they can make an informed choice about whether they wish to participate. Any risks of harm to participants must be minimised, and if they cannot be eliminated, they must be justified. Imagine some clinical psychologists develop a new form of therapy to treat a mental illness. Rather than simply using the therapy in their practice, they must first decide how to evaluate the treatment. Suppose that, in reality, the treatment has a slight risk of causing harm to participants. Before the researchers can test the effectiveness of the treatment, they must be confident that the potential benefits heavily outweigh any potential harm. Where research involves animals, their treatment must be humane and meet the standards of animal welfare. Major psychological societies, such as the American Psychological Association and the British Psychological Society, maintain web links that provide details of their ethical codes, and all researchers need to be familiar with these.


UNDERSTANDING OF CORRELATION METHODS
A mistake that researchers make more often than they ought to is to assume that, because two variables are highly correlated, one is responsible for variation in the other. Always remember that correlation does not imply causation. Suppose we conduct a study that reveals a strong positive correlation between the consumption of alcohol and aggressiveness. On this basis it cannot be concluded that alcohol causes aggressiveness. You could equally argue that aggressiveness causes people to drink more, or the relationship may be the product of a third factor, such as upbringing. Perhaps having hostile parents leads people to be aggressive and also to drink more. It is therefore possible that upbringing encourages alcohol consumption and aggressiveness, without each having a direct effect on the other. There are many real-life examples of spurious correlations that have arisen from the influence of a third factor. For example, when researchers found that there was a high correlation between the presence of ‘spongy tar’ in children’s playgrounds and the incidence of polio, they misguidedly inferred that ‘spongy tar’ caused polio. As a result, some schools went to great expense to get rid of it. In fact, spongy tar and polio were both linked to a third factor: excessively high temperature. So it was this that needed to be controlled, not the type of tar in the playground. This inability to draw strict causal inferences (and the associated temptation to do so) is by far the most serious problem associated with both correlational and survey methodology.

Measurement of Correlation
Correlations are usually measured in terms of correlation coefficients. [correlation coefficient a measure of the degree of correspondence or association between two variables that are being studied] The most common of these is the Pearson product–moment correlation, or Pearson’s r. [Pearson’s r the commonly used name for Pearson’s product-moment correlation coefficient] The value of r indicates how strong a correlation is and can vary from −1.00 to 1.00.
As with t-tests, computation of Pearson’s r involves going through a series of standard steps. These allow us to establish whether high scores on one variable are associated with high scores on the other, and if low scores on one variable are associated with low scores on the other.
An r-value of 1.00 indicates a perfect positive correlation, and an r-value of −1.00 indicates a perfect negative correlation. In both these cases, the value of one variable can be predicted precisely for any value of the other variable. An r-value of 0.00 indicates there is no relationship between the variables at all.

[Karl Pearson (1857–1936) graduated from Cambridge University in 1879 but spent most of his career at University College, London. His book The Grammar of Science (1892) was remarkable in that it anticipated some of the ideas of relativity theory. Pearson then became interested in developing mathematical methods for studying the processes of heredity and evolution. He was a major contributor to statistics, building on past techniques and developing new concepts and theories. He defined the term ‘standard deviation’ in 1893. Pearson’s other important contributions include the method of moments, the Pearson system of curves, correlation and the chi-squared test. He was the first Galton Professor of Eugenics at University College, London, holding the chair from 1911 to 1933.]


Judgment of Two Variable Relationship
A lot of what we have discussed so far relates to comparisons between means, which is typically what we do when we use experimental methodology. But in a range of other research situations we are interested in assessing the relationship between two variables. For example, how is height related to weight? How is stress related to heart disease? This type of question can be asked in experiments (what is the relationship between the amount of training and memory?), but is more typically addressed in surveys, where the researcher has multiple values of each variable. Suppose we are working on the concept of attraction, which occurs at many levels. We might have data recording both people’s attraction to their partners and the amount of time they have spent apart from them, our interest lying in whether higher levels of attraction are associated with higher levels of time spent apart, or whether high levels of attraction are associated with lower levels of time spent apart, or whether there is no clear relationship between the two variables. This type of data is described as bivariate, as opposed to univariate. [bivariate the relationship or association between two variables (‘variate’ is another word for variable)] One useful way to set about answering this type of question is to draw a scatterplot – a two-dimensional graph displaying each pair of observations (each participant’s attraction to their partner and the time spent apart). [univariate relating to a single variable] A negative correlation would be obtained when one value decreases as the other increases. Note that the stronger the relationship, the less scattered the various points are from a straight line, and the more confidently we can estimate or predict one variable on the basis of the other. In this example, it becomes easier to estimate from someone’s attraction how much time they have spent apart from their partner,
or to estimate level of attraction from the time spent apart.
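A minimal sketch of the scatterplot idea using matplotlib. The months-apart and attraction figures are invented, and a negative relationship is assumed, corresponding to the second possibility described above.

    import matplotlib.pyplot as plt

    # Hypothetical bivariate data: one (time apart, attraction) pair per participant.
    months_apart = [1, 2, 3, 4, 5, 6, 8, 10, 12]
    attraction = [9, 9, 8, 7, 7, 6, 5, 4, 3]

    plt.scatter(months_apart, attraction)
    plt.xlabel("Time spent apart (months)")
    plt.ylabel("Attraction to partner (rating out of 10)")
    plt.title("Hypothetical negative correlation")
    plt.show()

Because the points fall close to a single downward-sloping line, either variable can be estimated fairly confidently from the other, which is exactly the property described in the paragraph above.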

Wednesday, November 10, 2010


Numerical Interpretation of Results
Two key properties, referred to as descriptive statistics, come into play when we describe a set of data – or the results of our research. [descriptive statistics numerical statements about the properties of data, such as the mean or standard deviation] These are the central tendency (what we usually call the average) and the amount of dispersion – or variation. [central tendency measures of the ‘average’ (most commonly the mean, median and mode), which tell us what constitutes a typical value] Imagine a choreographer selecting a group of dancers for a performance supporting a lead dancer who has already been cast. The choreographer wants the supporting cast to be pretty much the same height as the lead dancer and also pretty much the same height as each other. So the choreographer is interested in the average height (which would need to be about the same as the lead dancer’s height) and the dispersion, or variation, in height (which would need to be close to zero). There are a number of ways in which the choreographer – or the psychologist – can measure central tendency (average) and dispersion. [dispersion measures of dispersion (most commonly range, standard deviation and variance) describe the distance of separate records or data points from each other]

Measures of central tendency
Measures of central tendency give us a typical value for our data. Clearly, ‘typical’ can mean different things. It could mean: the average value; the value associated with the most typical person; or the most common value. In fact, all three are used by researchers to describe central tendency, giving us the following measures (illustrated in the short sketch after this list):
- The mean is the average value (response) calculated by summing all the values and dividing the total by the number of values. [mean the sum of all the scores divided by the total number of scores]
- The median is the value with an equal number of values above and below it. So, if all values are ranked from 1 to N, the median is the ((N + 1)/2)th value if N is odd. If N is even, the median is the mean of the two middle values. [median the middle score of a ranked array – equal to the ((N + 1)/2)th value, where N is the number of scores in the data set]
- The mode is the value that occurs most frequently in a given data set. [mode the most commonly occurring score in a set of data]
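Python’s standard library computes all three measures directly. A minimal sketch; the scores are invented for illustration.

    import statistics

    scores = [3, 5, 5, 6, 7, 8, 8, 8, 10]   # hypothetical data set, N = 9

    print(statistics.mean(scores))     # mean: 60 / 9, approximately 6.67
    print(statistics.median(scores))   # median: the ((N + 1)/2)th = 5th ranked value, which is 7
    print(statistics.mode(scores))     # mode: 8, the most frequently occurring score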

Measures of dispersion
We might also want to describe the typical distance of responses from one another – that is, how tightly they are clustered around the central point. This is typically established using one of two measures. The first and probably most obvious is the range of responses – the difference between the maximum and minimum values. But in fact the most commonly used measure of dispersion is standard deviation (SD). [standard deviation the square root of the sum of squared differences (deviations) between each score and the mean, divided by the number of scores (or by the number of scores minus 1 for a population estimate)] To compute it, take the difference (deviation) between each score and the mean, square each of these deviations, sum them, divide by the number of scores (in fact, by the number of scores minus one if we want a population estimate, as we usually do), and then take the square root of the result. If this sounds complex, do not be too concerned: scientific calculators allow you to compute standard deviations very easily. The square of the standard deviation is called the variance. [variance the mean of the squared differences between a set of scores and the mean of that set of scores; the square of the standard deviation]
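The verbal definition above translates directly into a few lines of code. Here is a minimal Python sketch (not part of the original text) that computes the standard deviation both ways, dividing by N and by N - 1, and compares the results with the standard library functions; the scores are made up.

```python
# Minimal sketch (illustrative only): standard deviation and variance.
from math import sqrt
from statistics import pstdev, stdev, variance

scores = [4, 7, 7, 8, 10, 12, 15]           # made-up scores
m = sum(scores) / len(scores)               # the mean

ss = sum((x - m) ** 2 for x in scores)      # sum of squared deviations from the mean
print(sqrt(ss / len(scores)), pstdev(scores))        # divide by N: descriptive SD
print(sqrt(ss / (len(scores) - 1)), stdev(scores))   # divide by N - 1: population estimate
print(stdev(scores) ** 2, variance(scores))          # the variance is the SD squared
```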

Generalization of Results
Although psychologists often spend a lot of time studying the behaviour of samples, most of the time they want to generalize their results to say something about a whole population – often called the underlying population. Knowing how ten particular people are going to vote in an election may be interesting in itself, but it is even more interesting if it tells us who is likely to win the next election. But how can we make inferences of this sort confidently? By using inferential statistics we can make statements about underlying populations based on detailed knowledge of the sample we study and the nature of random processes. [inferential statistics numerical techniques used to estimate the probability that purely random sampling from an experimental population of interest can yield a sample such as the one obtained in the research study] The key point here is that, while random processes are (as the name tells us) random, in the long run they are highly predictable. Not convinced? Toss a coin. Clearly, there is no way that we can confidently predict whether it is going to come down heads or tails. But if we were to toss the coin fifty times, we could predict, reasonably accurately, that we would get around twenty-five heads. The more tosses we make, the more certain we can be that around about 50 per cent of the tosses will come up heads (and it is this certainty that makes the business of running casinos very profitable).
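A quick way to convince yourself of this long-run predictability is to simulate it. The following minimal Python sketch (not part of the original text) tosses a fair coin ever larger numbers of times and prints the proportion of heads.

```python
# Minimal sketch (illustrative only): single tosses are unpredictable,
# but the long-run proportion of heads is highly predictable.
import random

random.seed(1)                      # fixed seed so the run is reproducible
for n in (10, 100, 1000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)             # the proportion settles ever closer to 0.5
```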


Of course, psychologists do not usually study coin tosses, but exactly the same principles apply to things they do study. For example, the mean IQ is 100 (with an SD of 15), so we know that if we study a large number of people, about 50 per cent will have an IQ greater than 100. So if we get data from 100 people (e.g. a class of psychology students) and find that all of them have IQs greater than 100, we can infer with some confidence that there is something psychologically ‘special’ about this sample. Our inference will take the form of a statement to the effect that the pattern we observe in our sample is ‘unlikely to have arisen as a result of randomly selecting (sampling) people from the population’. In this case, we know this is true, because we know that psychology students are not selected randomly from the population but are selected on the basis of their performance on tests related to IQ. But even if we did not know this, we would be led by the evidence to make an inference of this kind. Inferential statistics allow researchers to quantify the probability that the findings are caused by random influences rather than a ‘real’ effect or process. We do this by comparing the distribution obtained in an empirical investigation with the distribution suggested by statistical theory – in this case the normal distribution. We then make predictions about what the distributions would look like if certain assumptions (regarding the lack of any real effect on the data) were true. If the actual distribution looks very different from the one we expect, then we become more confident that those assumptions are wrong, and there is in fact a real effect or process operating. For example, the distribution of the mean IQ score of groups of people drawn from the population tends to have a particular shape. This is what we mean by the normal distribution. [normal distribution the symmetrical, bell-shaped spread of scores obtained when scores on a variable are randomly distributed around a mean] If a particular set of data does not look as though it fits the (expected) normal distribution, then we would start to wonder if the data really can be assumed to have been drawn at random from the population in question. So if you drew a sample of 100 people from a population and found that their mean IQ was 110, you can be fairly sure that they were not randomly drawn from a population with a mean of 100. Indeed, the normal distribution shows us that the likelihood of an event as extreme as this is less than one in a thousand.
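For readers who want to see where a figure like ‘less than one in a thousand’ comes from, here is a minimal Python sketch (not part of the original text). Assuming IQ is normally distributed with a mean of 100 and an SD of 15, it computes how far a sample mean of 110 (from 100 people) lies from the population mean in standard-error units, and the probability of a sample mean at least that extreme arising from random sampling.

```python
# Minimal sketch (illustrative only): how unlikely is a sample mean of 110?
from math import erfc, sqrt

pop_mean, pop_sd, n = 100, 15, 100      # assumed IQ distribution and sample size
sample_mean = 110

se = pop_sd / sqrt(n)                   # standard error of the mean = 1.5
z = (sample_mean - pop_mean) / se       # about 6.7 standard errors above the mean
p = 0.5 * erfc(z / sqrt(2))             # upper-tail probability under the normal curve
print(z, p)                             # p is far smaller than 1 in 1,000
```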


Interpretation of Results
When we use inferential statistics, we might be in a position to make exact probability statements (as in the coin tossing example), but more usually we have to use a test statistic. Two things influence our judgement about whether a given observation is in any sense remarkable: 1. the information that something is ‘going on’; and 2. the amount of random error in our observations. In the IQ example, information comes from the fact that scores are above the mean, and random error relates to variation in the scores of individual people in the sample. For this reason, the statistics we normally use in psychology contain both an information term and an error term, and express one as a ratio of the other. So the test statistic will yield a high value (suggesting that something remarkable is going on) when there is relatively more information than error, and a low value (suggesting that nothing remarkable is going on) when there is more error than information.
Imagine we gave an IQ test to a class of 30 children and obtained a mean IQ of 120. How do we find out the statistical likelihood that the class mean differs reliably from the expected population mean? In other words, are we dealing here with a class of ‘smart kids’, whose performance has been enhanced above the expected level by some factor or combination of factors? Or is this difference from the population mean of 100 simply due to random variation, such as you might observe if you tossed a coin 30 times, and it came up heads 20 times? A statistical principle known as the law of large numbers tells us that uncertainty is reduced by taking many measurements of the same thing (e.g. making 50 coin tosses rather than one). [law of large numbers the idea that the average outcomes of random processes are more stable and predictable with large samples than with small samples] It means, for example, that although around 9 per cent of the population have IQs over 120, far fewer than 9 per cent of classes of 30 randomly selected students will have a mean IQ over 120. This statistical knowledge makes us more confident that if we do find such a class, this is highly unlikely to be a chance event. It tells us instead that these children are performing considerably higher than might be expected. We can summarize the process here as one of deciding where the sample mean lies in relation to the population mean. If there is a very low chance of sampling that mean from the population, we conclude that the sample is probably not drawn from that population but instead belongs to another population. Perhaps more intelligent students were assigned to this class by the school authorities, or perhaps they came from an area where education funding was especially good. In short, we cannot be sure what the explanation is, but we can be relatively sure that there is something to be explained – and this is the purpose of conducting statistical tests.
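The claim about classes of 30 can be checked with a short simulation. This minimal Python sketch (not part of the original text) draws individual IQs and class means of 30 from a normal distribution with mean 100 and SD 15: roughly 9 per cent of individuals exceed 120, but essentially no randomly composed class of 30 has a mean above 120.

```python
# Minimal sketch (illustrative only): the law of large numbers in action.
import random

random.seed(1)

# Individuals: roughly 9 per cent of simulated people have IQs over 120.
people = [random.gauss(100, 15) for _ in range(100_000)]
print(sum(iq > 120 for iq in people) / len(people))

# Classes of 30: the class *means* almost never exceed 120.
class_means = [sum(random.gauss(100, 15) for _ in range(30)) / 30 for _ in range(10_000)]
print(sum(m > 120 for m in class_means) / len(class_means))
```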

Think back to our ‘memory training study’, in which one group of participants in an experimental condition experiences a new training method and another group in a control condition does not, and then both groups take a memory test. Common sense tells us that we are likely to get two sets of memory scores – one for the experimental condition, one for the control – with different means. But how do we decide whether the difference is big enough to be meaningful? This is where inferential statistics come into play. Appropriate statistical procedures allow us to decide how likely it is that this difference could occur by chance alone. If that likelihood is sufficiently low (typically less than 1 in 20 or 5 per cent), we would reject the null hypothesis (expressed as H0)
that there is no difference between the means and that the manipulation of the independent variable has had no effect. Instead we would conclude that the manipulation of the IV has had a significant impact on the dependent variable – that is, that training does indeed improve memory. This process is typically referred to as significance testing, and this is one of the main approaches to statistical inference. While statistical tests can never tell us whether our results are due to chance, they can guide us in judging whether chance is a plausible explanation.
How does significance testing work in this case – that is, when comparing two means? In essence it comes down to the difference between the means relative to the variation around those means and the number of responses on which the means are based. The statistics that we calculate for comparing means are called t and F statistics. A large t or F statistic means there is a small probability that a difference as big as the one we have obtained could have occurred by randomly selecting two groups from the same population (i.e. it is not likely that the difference is due to chance). If that probability is sufficiently small, we conclude that there probably is a real difference between the means – in other words, that the difference is statistically significant.
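To show the ‘information relative to error’ idea in practice, here is a minimal Python sketch (not part of the original text) of an independent-samples t statistic calculated from two small, made-up sets of memory scores. It is only an illustration of the ratio; in real research the resulting value would be compared against the appropriate t distribution to obtain a probability.

```python
# Minimal sketch (illustrative only): an independent-samples t statistic.
from math import sqrt
from statistics import mean, variance

training = [14, 17, 15, 18, 16, 19, 15, 17]   # made-up scores, experimental group
control  = [13, 15, 12, 16, 14, 13, 15, 14]   # made-up scores, control group

def t_statistic(a, b):
    """Difference between means divided by an estimate of the random variation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var * (1 / na + 1 / nb))

print(t_statistic(training, control))   # larger values are less likely to arise by chance alone
```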

STATISTICAL METHODS IN PSYCHOLOGY

STATISTICAL METHODS IN PSYCHOLOGY
Sampling and Population
You will often hear psychologists talking about samples and populations in relation to statistical analysis of research. What do they mean by these terms? A population is a set of people, things or events that we are interested in because we wish to draw some conclusion about them. The population could consist of all people, or all people with schizophrenia, or all right-handed people, or even just a single person. A sample is a set selected from the population of interest and used to make an inference about the population as a whole. This kind of inference is called a generalization. [generalization related to the concept of external validity, this is the process of making statements about the general population on the basis of research] A sample would normally be a group of people selected from a larger group, but it could also be a sample of behaviour from one person, or even a sample of neurons from a region of the brain (see chapter 3). If we wish to generalize to a population, we need to make sure that the sample is truly representative of the population as a whole. This means that the sample should be similar to the population in terms of relevant characteristics. For example, if we are doing research on the human visual system, then members of our sample group need to have eyesight that is similar to the rest of the human population (as opposed to being, for example, noticeably worse). The easiest and fastest way to achieve this is to draw a random sample (of a reasonable size) from the population [random sample a sample of participants in which each has the same chance of being included, ensured by using random participant selection methods (e.g. drawing lots)].
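As a simple illustration of random sampling, the following minimal Python sketch (not part of the original text) draws 50 members from a hypothetical population of 1,000 so that every member has the same chance of being selected.

```python
# Minimal sketch (illustrative only): drawing a simple random sample.
import random

population = [f"person_{i}" for i in range(1000)]   # hypothetical population
sample = random.sample(population, 50)              # each member has an equal chance of selection
print(sample[:5])
```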

PRODUCTION OF TRUSTWORTHY RESULTS

PRODUCTION OF TRUSTWORTHY RESULTS
Internal validity
We can be confident about the results of psychological research when the methods are valid. An experiment is said to have internal validity when we are confident that the results have occurred for the reasons we have hypothesized, and we can rule out alternative explanations of them. These alternative explanations (or threats to internal validity) [internal validity the extent to which the effect of an independent (manipulated) variable on a dependent (outcome) variable is interpreted correctly] can involve an experimental confound – an unintended manipulation of an independent variable [confound an unintended or accidental manipulation of an independent variable that threatens the validity of an experiment]. The risk of confounds can be reduced by better experimental design. Suppose we conduct a study to look at the effect of crowding on psychological distress by putting 50 people in a crowded room and 50 people in an open field. Having found that the people in the room get more distressed, we may want to conclude that crowding causes distress. But the heat generated by having a lot of people in one room may represent a confound in the study: it may be the heat, not the crowding, that produces the effects on the dependent variable. The experiment could be redesigned to control for the effects of this confound by using air-conditioning to keep the temperature the same in both conditions.

External validity
A study has a high level of external validity when there are no reasons to doubt that the effects obtained would occur again outside the research setting. We might, for example, question a study’s external validity if participants responded in a particular way because they knew that they were taking part in a psychological experiment. They might inadvertently behave in a way that either confirms or undermines what they believe to be the researcher’s hypothesis. In experiments we usually try to deal with this specific potential problem by not telling experimental participants about the hypotheses that we are investigating until after the experiment has finished. [external validity the extent to which a research finding can be generalized to other situations]

Measurement methods in Research

Measurement methods in Research
As with the selection of IVs, the selection of dependent variables is often complicated by practical constraints. For example, if we are investigating the impact of alcohol consumption on road fatalities, we may manipulate the independent variable straightforwardly (by getting experimental groups to consume different quantities of alcohol). But it would be irresponsible (and illegal) to then get the participants to drive down a busy street so that we can count how many pedestrians they knock down! To get round this, we may ask the high alcohol group to consume only a few beverages. But there are two problems with this. First, alcohol may only affect driving behaviour when more than a few beverages are consumed. Second, our dependent variable (number of pedestrians killed) will not be sufficiently sensitive to detect the independent variable’s impact. In other words, we may have good reason to think that alcohol could impair driving performance, but the degree of impairment may not (fortunately!) be so profound as to cause a detectable increase in the number of deaths caused.
To deal with this, we therefore have to select dependent variables that are both relevant to the outcome we have in mind and sensitive to the independent variable. In the case of drink-driving, we may look at participants’ reaction time, because we believe that this is a critical determinant in driving safety and is likely to be a sensitive enough variable to detect an impairment in driving performance due to alcohol. We can then design and carry out a study in the laboratory, measuring the impact of alcohol consumption on reaction time. In our attributional style example, too, it is unlikely that our manipulation of the independent variable will have a dramatic impact on the participants’ depression. So if our dependent variable was the number of participants who need to be treated by a clinical psychologist, our experiment is very unlikely to uncover any effects. To get around this problem, we could administer a depression inventory, in which we ask the participants a battery of questions (e.g. ‘Are you self-confident?’, ‘Do you feel hopeless about the future?’) in order to measure their susceptibility to depression. We could then test our hypothesis by seeing whether scores on the depression inventory revealed a higher susceptibility to depression among participants who had been encouraged to make internal attributions.

The psychologist S.S. Stevens developed a famous distinction between forms of data that psychologists can deal with. The four types he came up with are nominal, ordinal, interval and ratio measures.

[Stanley Smith Stevens (1906–73) made significant contributions to several areas of psychology. He was an expert on the psychophysics of hearing and was interested in measurement and experimental psychology. Stevens set out to redefine psychological measurement by changing the perspective from that of inventing operations (the physical view) to that of classifying scales (a mathematical view). He also discovered that methods such as ‘just noticeable differences’, rating scale categories and paired comparisons produce only ordinal scales. Stevens’ most outstanding contribution was his successful argument that there are different kinds of scales of measurement; he was the first to define and discuss nominal, ordinal, interval and ratio scales.]

Nominal measures
The data collected in this way are in the form of names, which can be categorized but cannot be compared numerically in any way. Examples include genders, countries and personality types.

Ordinal measures
These can be ranked in some meaningful way. Examples are the placings obtained by competitors in a race or an ordered set of categories (e.g. low stress, moderate stress and high stress).

Interval measures
Numerical measures without a true zero point are called interval measures, and cannot be used to form ratios. An example is temperature measured in degrees Celsius. The zero point has been arbitrarily chosen to be the freezing point of water rather than absolute zero (the point at which there is no heat energy at all), and it is simply not true that 40 degrees Celsius is twice as hot as 20 degrees Celsius. Similarly, it would not make sense to say that someone who responded with a ‘6’ on the attribution scale above was twice as much of an externalizer as someone who responded with a ‘3’.

Ratio measures
Full numerical measures with a true zero point are ratio measures, so ratio statements are meaningful: a reaction time of 400 milliseconds really is twice as long as one of 200 milliseconds. Psychologists frequently assume that scores obtained from psychological measurement can be treated as ratio measures. But this assumption is not always justified.

Tuesday, November 9, 2010

Manipulation in Research

Manipulation in Research
In selecting an independent variable for any piece of research, we must first decide what we are interested in. For example, we might be interested in whether attributional style (the way people explain events) affects people’s responses to failure. We might hypothesize that people who tend to blame themselves for failure (i.e. those who internalize failure) are more likely to become depressed than people who blame their failure on other things (i.e. who externalize failure). So the central theoretical variable – the focus of our interest – is the participants’ attributional style. But how can we manipulate this for the purposes of our experiment? Clearly we cannot open up people’s heads and turn a dial that says ‘attributional style’ to maximum or minimum. To get around such obstacles, psychologists usually manipulate the theoretical variable indirectly. They do this by identifying an independent variable that they believe will have a specific impact upon a given mental process, and then check that this is the case. In our example, the researchers may expose participants to failure (e.g. in a test) and then ask some of them to answer questions like ‘Can you explain why you did so much worse than everyone else?’ – questions that encourage the participants to reflect on their own contribution to their performance (i.e. to internalize). They may then ask other participants questions like ‘Do you think the fact that you were not allowed to revise for the test affected your performance?’ – questions that encourage them to reflect on the contribution of other factors to their performance (i.e. to externalize). To be sure that this manipulation has had the desired effect on the theoretical variable, the researchers may then want to perform a manipulation check. [manipulation check a procedure that checks that the manipulation of the independent variable has been successful in changing the causal variable the experimenter wants to manipulate] For example, in the case given above, the researchers might measure whether the ‘internalizing’ question produces greater agreement with a measure such as: ‘How much do you think you were responsible for the test outcome?’ Note also the significant ethical issues relating to this study. The experimental manipulation could have the effect of making some participants more depressed – indeed, that is the hypothesized outcome in the condition where participants are encouraged to internalize their failure.

How to choose the best methods

How to choose the best methods
This is a very complex issue and depends on many factors, not least practical ones – including the amount of time, money and expertise that a researcher has. However, as a general principle, it is worth emphasising that no one method is universally superior. Part of any research psychologist’s role is to make judgements about the appropriateness of a method for investigating the issues at hand. Being a good researcher is not a question of whether you do experiments or surveys: it is more a matter of when and how you do them. In view of the potential limitations of any one method, many researchers consider using multiple research methods to explore the same issue in many different ways. This is the process of triangulation. If consistent results are obtained from a variety of different methods (perhaps from a quantitative experiment, a survey and qualitative case studies), this will tend to justify greater confidence in the findings. For this reason, the need to make methodological choices should be seen as an asset for researchers, rather than a basis for arguments about who has the best methods. The challenge researchers face is to exploit that asset appropriately.

Experimental Vs Survey method

Experimental Vs Survey method
One common, but mistaken, belief is that the difference between surveys and experiments is a question of location, with surveys being conducted in the community and experiments in the laboratory. This is often the case, but not always. Experiments can be conducted outside laboratories, and surveys can be conducted in them. The main differences between experiments and surveys relate to the sorts of questions that each can answer. As we suggested earlier, experiments tend to be concerned with establishing causal relationships between variables, and they achieve this by randomly assigning participants to different treatment conditions. Surveys, on the other hand, tend to be concerned with measuring naturally occurring and enduring relationships between variables. Researchers who use surveys usually want to generalize from the sample data they obtain to a wider population. They do this by using the sample to estimate the characteristics of the population they are interested in. Why choose to carry out a survey rather than an experiment? Two reasons: sometimes we are only interested in observing relationships, and sometimes manipulations simply are not possible. This reasoning is not restricted to psychology. Astronomers or geologists rarely conduct experiments, simply because it is often impossible to manipulate the independent variables of interest (e.g. the position of certain stars or the gravitational force of a planet). Instead they rely largely on the same logic of controlled observation that underpins psychological surveys. But this does not mean that astronomy or geology are unscientific. Surveys can also allow researchers to eliminate some causal links. If there is no relationship (at least in the survey environment) between variables, this allows us to conclude that one does not cause the other. For example, if no relationship is found between age and intelligence, then it is impossible for intelligence to cause age, or vice versa (bearing in mind that a relationship
could be concealed by a third, or background, variable).

Qualitative method

Qualitative method
When researchers report and comment on behaviour, without attempting to quantify it, they are using a qualitative research method. This involves attempts to understand behaviour by doing more than merely converting evidence into numbers. Qualitative methods can include coding, grouping and collecting observations without assigning actual numbers to the observation. So a qualitative analysis of the speed of animals might result in the statement that the cheetah is a fast land animal, and quantitative analysis might involve comparing the maximum speed of animals over (say) 20 metres. To take an example of human behaviour, you probably take a qualitative approach to the friendliness of the people you meet. In other words, you probably judge people as relatively friendly or unfriendly, but you would be unlikely to come up with a number that expresses their friendliness quotient. Qualitative techniques are sometimes used in the initial stages of quantitative research programmes to complement the quantitative techniques, but they are also used by psychologists who challenge conventional approaches to psychological research. This may be because they believe that the conventional methods are inadequate for addressing the richness and complexity of human behaviour. In turn, many mainstream psychologists are critical of qualitative methods.



[Donald Thomas Campbell (1916–96) trained as a social psychologist. He was a master methodologist and is best known for devising the method of quasi-experimentation, a statistics-based approach that allows replication of the effects of true randomization, which is often impossible in the study of human behaviour. Campbell also supported use of qualitative methods, according to the goals and context of the study. He promoted the concept of triangulation – that every method has its limitations, and multiple methods are usually needed to tackle important research questions.]

Case study method

Case study method
Most of the above methods are used for studies involving large numbers of participants. But what if only a few are available? How, for example, would you do research if you were interested in the reading difficulties of people with particular forms of brain damage? To investigate questions like this, researchers often resort to the case study method, which involves intensive analysis of a very small sample. This has particular problems (often with reliability), but some of the most famous studies in psychology have used this method – in particular the work of Freud.

Survey (or correlational) method

Survey (or correlational) method
The survey method is commonly used to identify the naturally occurring patterning of variables in the ‘real world’ rather than to explain those patterns (though often people want to put an explanatory gloss on them). So to examine whether absence makes the heart grow fonder we could conduct a survey to see if people who are separated from their partners because of travelling away from home (group A) say more positive things about their partners than people who never travel away from home without their partners (group B). This might be an interesting exercise, but the validity of any causal statements made on the basis of such findings would be very limited. For example, if we found from our survey that group A said more positive things about their partners when they were traveling than group B, it would be impossible to demonstrate conclusively that absence was the cause of the difference between groups A and B. In other words, while our survey could show us that absence is associated with a fonder heart, it could not conclusively show that absence actually causes the heart to grow fonder. It is quite possible (odd as it may sound) that the sorts of people who travel away from home without their partners are simply those that like their partners more (so fondness makes the heart go absent). Or perhaps both fondness and absence are caused by something else – for example, social class (i.e. being wealthy makes people both fond and absent). In large part, then, surveys rely on methodologies that identify relationships between variables but do not allow us to make conclusive causal inferences.

Quasi-experimental method

Quasi-experimental method
In quasi-experimental studies the independent variable is not (or cannot be) manipulated as such, and so assignment to experimental groups cannot be random. The fact that no manipulation occurs interferes dramatically with our ability to make conclusive causal inferences. Examples of independent variables that cannot be manipulated by an experimenter include gender and age. Obviously experimenters cannot change the gender or age of participants, but they can compare the responses of groups of people with different ages or of different genders. Compared to the experimental method, there is no real control over the independent variable, so we cannot conclude that it is necessarily responsible for any change in the dependent variable. On this basis, as we will see, the quasi-experimental method actually has more in common with survey methodology than with
the experimental method. It has all the weaknesses of the experimental method, but it lacks its main strength: the control over the independent variable that random assignment provides. In practice, it is often conducted in conjunction with the experimental method. For example, in our learning study we might compare the effect of the new training method on both men and women.

Experimental Methods:

Experimental Methods:
One very common research method is to manipulate [manipulation the process of systematically varying an independent variable across different experimental conditions (sometimes referred to as the experimental treatment or intervention)] one or more variables and to examine the effect of this manipulation on an outcome variable. To do this, the researcher examines participants’ responses in the presence and the absence of the manipulation. Experimental control is used to make the different situations identical in every respect except for the presence or absence of the manipulation. Experiments can involve different people in each situation or the same people in different situations [experimental control the method of ensuring that the groups being studied are the same except for the manipulation or treatment under investigation]. People who take part in experiments are called participants, but if you read older research papers they are generally referred to as subjects. Here is an example. To test the effect of a new training method (a manipulation) on memory, we might take 100 people and expose half of them to the new method. For reasons we will discuss in more detail below, we would assign participants to the two groups on a random basis (e.g. by the toss of a coin). We will call the first group the experimental group, as it is subjected to a relevant experimental treatment [experimental group participants in an experiment who are exposed to a particular level of a relevant manipulation or treatment (as distinct from a control group)] . The other half of our participants would not be exposed to the new training method. As they receive no experimental treatment, [treatment the experimental manipulation of the independent variable] they are referred to as a control group (also discussed in more detail below) [control group participants in an experiment who are not subjected to the treatment of interest (as distinct from the experimental group)] . After administering the treatment, we would measure the performance of the two groups on a memory task and then compare the results. The various levels of treatment in an experiment (including the control) are referred to as conditions. [condition a situation in a research study in which participants are all treated the same way]
This experiment has two conditions and a between-subjects design (because the design involves making comparisons between different participants in different conditions). [between-subjects design a research study involving a systematic manipulation of an independent variable with different participants being exposed to different levels of that variable] Note, however, that the same question could also have been addressed in a within-subjects design, [within-subjects design a research design in which the same participants are exposed to different levels of the independent variable] which would involve comparing the memory performance of the same people with and without the new training method. The two basic designs have different strengths and weaknesses, which we will discuss below in relation to issues of experimental control. The different conditions in the experiment make up the independent variable (or IV), [independent variable the treatment variable manipulated in an experiment, or the causal variable believed to be responsible for particular effects or outcomes] sometimes called the treatment variable. A variable is simply something that changes or varies (is not constant). In true experiments, the independent variable is systematically manipulated or varied by the experimenter.
Experiments can (and typically do) have more than one independent variable. Experiments also involve at least one dependent variable (or DV).[ dependent variable the variable in which a researcher is interested in monitoring effects or outcomes] This is an outcome or measurement variable, and it is this variable that the experimenters are interested in observing and which provides them with data. In our last example, the dependent variable is the level of memory performance. Use the initial letter ‘d’ to remember the link between the dependent variable and the data it provides. Control is the basis of experimental design. It involves making different conditions identical in every respect except the treatment (i.e. the independent variable). In a between-subjects experiment, this is achieved by a process of random assignment of participants to the different conditions [random assignment the process of assigning participants to study conditions on a strictly unsystematic basis]. For example, people should be assigned at random (e.g. on the basis of coin tossing), rather than putting, say, the first 50 people in one condition and the second 50 in another. This practice rules out the possibility that there are systematic differences in, say, intelligence, personality or age between the groups.
If there is a difference in results obtained from measuring the dependent variable for each group, and we have equated the groups in every respect by means of random assignment, we can infer that the difference must be due to our manipulation of the independent variable.
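To make random assignment concrete, here is a minimal Python sketch (not part of the original text): 100 hypothetical participants are shuffled and then split into experimental and control groups, rather than simply placing the first 50 into one condition and the second 50 into the other.

```python
# Minimal sketch (illustrative only): random assignment to two conditions.
import random

participants = list(range(1, 101))      # hypothetical participant IDs 1-100
random.shuffle(participants)            # unsystematic ordering

experimental_group = participants[:50]  # exposed to the new training method
control_group = participants[50:]       # receives no treatment
print(len(experimental_group), len(control_group))
```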

Psychological Research Methods

Psychological Research Methods
Psychological research involves four main methods: the (true) experimental method, [experimental method a research method in which one or more independent variables are systematically manipulated and all other potentially influential variables are controlled (i.e. kept constant), in order to assess the impact of manipulated (independent) variables on relevant outcome (dependent) variables] the quasi-experimental method, [quasi-experimental method embodies the same features as the experimental method but does not involve the random assignment of participants to experimental conditions] the survey method (sometimes called the correlational method), [survey method the systematic collection of information about different variables in order to investigate the relationship between them] and the case study method [case study method research method that involves a single participant or small group of participants who are typically studied quite intensively].

How to measure experiments in psychology

How to measure experiments in psychology?
Something that differentiates psychology from other sciences is that the things in which we are interested – mental states and processes – can never be directly observed or measured. You cannot touch or see a mood, a thought, a disposition, a memory or an attitude. You can only observe things that are associated with these phenomena. While this problem does occur in other sciences (such as astronomy), it can often be overcome through technological development (e.g. a better telescope). Psychology has made significant advances too (e.g. measuring skin conductance and brain blood flow), but these techniques still only allow psychologists to study the outcomes of mental activity, or things that are associated with it – never the activity itself. Psychologists have developed three main types of measure to help them examine mental processes and states:
A Behavioural measures These involve observation of particular forms of behaviour in order to make inferences about the psychological phenomena that caused or contributed to them. For example, developmental psychologists (see chapter 9) might observe which toys are approached or avoided by children in a play situation. On the basis of such observations, they might plausibly infer that decisions to approach a toy are determined by the toy’s colourfulness.
B Self-report measures These involve asking people about their thoughts, feelings or reaction to a particular question. Provided that it is possible for the participants to reflect consciously on the relevant thoughts or behaviours, their responses can be used either to supplement other behavioural measures or as data in themselves. So a researcher could ask a six-year-old (but clearly not a six-month-old) ‘Which toys do you like?’ or ‘Did you pick that toy because it was brightly coloured?’
C Physiological measures These involve measuring things that are believed to be associated with particular forms of mental activity. For example, heart rate or galvanic skin response (GSR –a measure of the electrical conductivity of the skin) can serve as measures of anxiety or arousal. In our developmental example, researchers might look at children’s heart rate to see whether they become more excited when particular toys are presented or
taken away.

Role of theory in Research

Role of theory in Research:
Science does not progress simply through the accumulation of independent facts. These facts have to be integrated in terms of theoretical explanations (theories). Theories (theory a coherent framework used to make sense of, and integrate, a number of empirical findings) are statements of why, not just what. They are capable of: 1. accounting for multiple facts, and 2. predicting what might happen in novel situations. The purpose of most psychological research is to test such predictions in the form of hypotheses – i.e. statements of cause and effect that are derived from a given theory and tested by research (hypothesis a statement about the causal relationship between particular phenomena (i.e. A causes B), usually derived from a particular theoretical framework, which is designed to be tested via research investigation) . So theories generally precede experimentation, not vice versa. For example, the statement that absence makes the heart grow fonder does not provide a theoretical framework, but the following statement is distinctly more theory-based: ‘separation from an object causes us to exaggerate an object’s qualities (whether good or bad) because memory distorts reality’. This is because this statement attempts to explain and not just describe the relationship between separation and emotion. Moreover, having made this statement, we can test it by generating hypotheses and doing appropriate research. One hypothesis might be that people with memory disorders will make less extreme judgements of absent loved ones than people without such disorders.

Qualities of good research

Qualities of good research:
As well as being valid and reliable, psychological research needs to be public, cumulative and parsimonious. To become public, research must be published in a reputable scholarly journal. Sometimes, though rarely, it is translated into popular writing, as was the work of Freud, Pavlov, Piaget and Milgram. The likelihood of a piece of psychological research being adopted for popular publication can depend on such things as topicality, shock value or political trends, and its impact may be transitory. In contrast, the criteria for publication in scientific journals are much more clearly laid out, and they provide an enduring record of the key findings that emerge from a particular piece (or programme) of research. Cumulative research builds on and extends existing knowledge and theory. It is not enough just to collect information in a haphazard or random fashion. Instead, research should build on previous insights in a given area. Newton expressed this idea clearly when he observed: ‘if I have been able to see further than others it is because I have stood on the shoulders of giants’. Generally speaking, a piece of psychological research does not have value in isolation, but by virtue of extending or challenging other work in the field.
The cumulative nature of research is often revealed through literature reviews. These are research papers (normally published in reputable scientific journals) that discuss the results of multiple studies by different researchers. In some cases these reviews involve statistical analyses combining the results of many studies. This process is called meta-analysis. Parsimonious research develops explanations of findings that are as simple, economical and efficient as possible. In explaining the results in a given field, psychologists therefore attempt to account for as many different findings as possible using the smallest number of principles. For example, it may be that person A performs better than person B on a test of memory because A was more alert as a consequence of being tested at a different time of day. Or A might have ingested a psychoactive agent before testing took place, whereas B had not. By controlling for the possible influences of time of day, ingested substances and so on, we are left with the most parsimonious explanation for why A and B differ in their level of memory performance.