Approaches to working with offenders have inevitably changed as our understanding of antisocial and criminal behaviour has developed, moving from psychodynamic psychotherapy, through group therapy, to behaviour modification. Yet there are those who see little merit in using treatment to reduce offending. (Hollin, 2001, has documented the struggle between proponents of treatment and advocates of punishment.) But since the mid 1990s there has been a renewed interest in the treatment approach, stimulated by a clutch of studies using meta-analysis.

Using meta-analysis to inform treatment programmes
Meta-analysis allows inspection of the aggregated findings from a group of studies around a common theme. Its use in studies into the effects of offender treatment has had a profound effect on recent practice.

Offender treatment meta-analyses draw the critical distinction between clinical and criminogenic outcome variables. In this context, ‘clinical outcomes’ refers to changes in some dimension of personal functioning, such as psychological adjustment, attitudes or social competence. On the other hand, ‘criminogenic outcomes’ refers specifically to measures concerned with crime, such as self-reported delinquency, official reconvictions and type of offence. As a broad generalization, treatment of offenders (as with other populations) tends to produce beneficial clinical outcomes (Lipsey & Wilson, 1993). But a significant contribution of the meta-analyses has been to highlight influences on criminogenic outcomes (in other words, those characteristics of treatment interventions that produce a reduction in offending). Several meta-analytic studies have sought to identify the practical recommendations that can be taken from this empirical research (see McGuire, 2002, for an overview). The first major conclusion is that there is an overall reduction in reoffending after treatment – in the region of 10 per cent (Lipsey, 1992; Lösel, 1996). The second conclusion is that some interventions have a significantly greater effect than others – the most effective producing more than 20 per cent reduction in reoffending (Lipsey, 1992). As the evidence accumulates, a broad consensus has been reached regarding the characteristics of treatments that impact on offending:
1. Indiscriminate targeting of treatment programmes is counterproductive in reducing recidivism. Medium- to high-risk offenders should be selected and programmes should focus on criminogenic targets: that is, treatments should be concerned with those aspects of the offender’s thinking and behaviour that can be shown to be directly related to their offending.
2. The type of treatment programme is important, with stronger evidence for structured behavioural and multimodal approaches than for less focused approaches. (The term ‘multimodal’ means using a variety of treatment techniques to address a range of targets for change, as discussed below with reference to Aggression Replacement Training.)
3. The most successful studies, while behavioural in nature, include a cognitive component, i.e. they encourage the offender to focus on their attitudes and beliefs.
4. Treatment programmes should be designed to engage high levels of offender responsivity: that is, the style of treatment should engage the offender to make him or her responsive to treatment and, at the same time, be responsive to the needs of different offenders such as juvenile or adult offenders or male and female offenders.
5. Treatment programmes conducted in the community have a stronger effect than residential programmes. While residential programmes can be effective, they should be linked structurally with community-based interventions.
6. The most effective programmes have high treatment integrity, in that they are carried out by trained staff, and treatment initiators are involved in all the operational phases of the treatment programmes.
The translation into practice of these principles derived from meta-analysis has become known as the What Works form of treatment programmes (McGuire, 1995). [What Works generic name given to a recent approach to offender treatment, which is based on findings from meta-analyses of the offender treatment literature]

The possibilities raised by the What Works principles have been recognized in the UK at a government policy level (Vennard, Sugg & Hedderman, 1997) and have significantly influenced work with offenders in prison and on probation. The development of national programmes for working with offenders has become a major initiative, seeking to capitalize on the possibilities raised by What Works (Lipton et al., 2000).
Offending behaviour programmes – an example
Aggression Replacement Training (ART) is an excellent example of a programme approach to working with offenders. [Aggression Replacement Training (ART) research-based programme for working with violent offenders] ART was developed in the USA during the 1980s as a means of working with violent offenders.
This training programme has proved to be an effective way of reducing aggressive behaviour (Goldstein & Glick, 1987; 1996). ART has continued to be developed as the evidence base grows and practice techniques become more refined (Goldstein, Glick & Gibbs, 1998).
ART consists of three components, delivered sequentially, and so would qualify as a multimodal programme:
1. Skillstreaming involves the teaching of skills to replace out-of-control, destructive behaviours with constructive, prosocial behaviours. Social skills are taught in terms of step-by-step instructions for managing critical social situations. For example, offenders might be taught conflict negotiation skills for use in situations where previously they would have used aggression.
2. Anger control training first establishes the individual-specific triggers for anger, then uses the anger management techniques of
(i) enhancing awareness of internal anger cues,
(ii) teaching coping strategies,
(iii) skills training,
(iv) self-instruction and
(v) social problem solving.
Thus, offenders are taught to recognise their own feelings of anger and then helped to develop strategies, using new skills and enhanced self-control, to control anger and hence reduce aggression.
3. Moral reasoning training is concerned with enhancing moral reasoning skills and widening social perspective-taking. This is achieved through self-instruction training, social problem solving and skills training. The focus here is on increasing the offenders’ understanding of the effects of their actions on other people, thereby enhancing the value that young people place on the rights and feelings of others.

[Don A. Andrews (1941– ) is Professor of Psychology at Carleton University, Ottawa. He is arguably the most influential psychologist currently working in the field of offender rehabilitation. At a time when the practice of offender rehabilitation was vigorously challenged, Andrews was one of its staunchest defenders. He is a fierce critic of those criminologists who have questioned the place of psychology in understanding crime and criminal behaviour. A strong advocate of evidence-based practice and theoretical integrity, he developed a risk assessment instrument (the Level of Service Inventory) which is widely used by practitioners in the criminal justice systems of several different countries. His book, with James Bonta, The Psychology of Criminal Conduct, is a fine example of both his forthright style and the outstanding quality of his work.]

The effectiveness of rehabilitation revisited - A research issue
Not surprisingly, there was significant opposition to the notion that ‘nothing works’ in offender rehabilitation. The literature included in the original review was re-examined by several researchers, who all reached different conclusions. Other researchers assembled different sets of relevant studies, which showed (they claimed) that treatment was effective. But still the broadly accepted position in government policy and community practice was that nothing works, and policies for managing offenders became increasingly punitive. (Even Martinson’s later article published in 1979, recanting much of his earlier position, failed to have any impact.) The complex task of making sense of a large body of literature using a narrative review is always liable to lead to disagreement. The development and refinement in the mid 1980s of the technique of meta-analysis presented a more systematic and objective alternative to the narrative review as a means of making sense of the findings of a large body of literature. The largest and most influential meta-analysis of the offender treatment literature to date was conducted by Lipsey (1992).
Design and procedure: The first step in Lipsey’s work was to establish the eligibility criteria for studies to be included in the meta-analysis. There were six criteria used in making decisions about inclusion, ranging from the nature of the outcome variables to the type of research design. The next step was to gather the research studies together using searches of bibliographic databases. Lipsey noted that these searches produced ‘more than 8000 citations’ (p. 89) of potential relevance to the study. Once the individual studies had been collected and passed through the eligibility criteria, they were coded for analysis. For the 443 studies included in the meta-analysis, Lipsey used a 154-item coding scheme, incorporating study characteristics such as type of treatment, research design, length of treatment, type of outcome measure, and so on. Once coded, the data representing the characteristics and findings of the 443 individual studies could be statistically analysed using meta-analytic procedures.
Results and implications: In meta-analysis a key outcome is effect size, which can be calculated in several ways but represents the outcome of the comparison between treatment and no treatment. It is also possible to calculate whether an effect size is statistically significant. So with regard to recidivism, a positive effect size would indicate that treatment reduced offending, while a negative effect size would indicate that treatment increased offending. The magnitude of the effect size indicates the numerical difference in recidivism between treated and untreated offenders. A meta-analysis allows comparisons of the effect size of, say, different treatment types or treatment effects in different settings. Lipsey reported an overall small positive effect size (a statistical measure of the impact of the treatment), so while it would not be true to say that ‘nothing works’, neither could an overwhelmingly strong case be made for treatment. Importantly, meta-analysis also allows researchers to identify the characteristics of ‘high effect’ treatments (those treatments that produce a significantly high reduction in recidivism compared to no treatment). For example, Lipsey’s analysis strongly indicated that structured treatments, generally using cognitive–behavioural methods of treatment, gave greater positive effects in reducing recidivism than treatments based on non-directive counselling. The impact of Lipsey’s work, taken in conjunction with other meta-analyses, can be seen in a large-scale resurgence in methods of offender treatment. This renewed interest has attracted significant government funding under the banner of ‘What Works’, with renewed endeavours in both research and practice.
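The basic logic of a treatment-versus-control effect size can be illustrated in a few lines of Python. This is a deliberately simplified sketch using hypothetical study figures (the difference in reconviction proportions with an approximate z-test), not Lipsey’s actual, more sophisticated effect-size statistics:

```python
import math

def recidivism_effect_size(treated_reoffend, treated_n,
                           control_reoffend, control_n):
    """Effect size as the difference in recidivism proportions
    (control minus treatment, so a positive value favours treatment),
    with an approximate z statistic for the difference."""
    p_treated = treated_reoffend / treated_n
    p_control = control_reoffend / control_n
    effect = p_control - p_treated  # positive => treatment reduced reoffending

    # Pooled proportion gives the standard error of the difference
    p_pool = (treated_reoffend + control_reoffend) / (treated_n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / control_n))
    z = effect / se
    return effect, z

# Hypothetical study: 40 of 100 treated offenders reconvicted,
# against 50 of 100 untreated controls
effect, z = recidivism_effect_size(40, 100, 50, 100)
```

With these hypothetical figures the effect size is 0.10 – a 10 percentage-point reduction in reconviction, of the same order as the overall treatment effect reported in the meta-analyses – although in a single study of this size the difference would not reach conventional statistical significance. A meta-analysis gains its power by aggregating such effect sizes across many studies.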

Lipsey, M.W., 1992, ‘Juvenile delinquency treatment: A meta-analytic inquiry into the variability of effects’ in T.D. Cook, H. Cooper, D.S. Cordray et al. (eds), Meta-analysis for Explanation: A Casebook, New York: Russell Sage Foundation.
Martinson, R., 1979, ‘New findings, new views: A note of caution regarding sentencing reforms’, Hofstra Law Review, 7, 242–58.


The academic relationship between criminology and psychology has not always been harmonious (Hollin, 2002a). Studies of the first criminologists, in the late 1800s, focused on the individual offender, and it was hard to distinguish between criminologists and psychologists. In the 1930s, the focus in mainstream criminology shifted from the individual to society, and psychological theories of criminal behaviour held little sway compared to sociological theories. But since the 1990s, there has been an increasing dialogue between the disciplines as the study of the individual once again becomes a concern in criminology (Lilly, Cullen & Ball, 2001).

THE CAMBRIDGE STUDY - Predicting delinquency
One of the main findings from longitudinal research is that most juvenile crime is ‘adolescence limited’. [longitudinal research type of research design in which data are collected from a group of people, termed a cohort, over a long period of time (typically decades)] In other words, most young offenders ‘grow out’ of crime by the time they are 18 (Moffitt, 1993). But some juveniles (called ‘life-course persistent’ offenders) will continue offending into adulthood (Moffitt, 1993). Developmental criminology attempts to identify the factors that predict longer-term offending, in turn contributing to preventative efforts.

The Cambridge Study in Delinquent Development is an extensive longitudinal study conducted in Great Britain that has generated a wealth of data (Farrington, 2002). [Cambridge Study in Delinquent Development longitudinal study, based at the University of Cambridge, concerned with the development of delinquency and later adult crime] It began in 1961 with a cohort of 411 boys aged eight and nine, and it is still in progress, with over 90 per cent of the sample still alive. The methodology used in the Cambridge Study has involved not only access to official records, but also repeated testing and interviewing of the participants, as well as their parents, peers and schoolteachers. Approximately 20 per cent of the young men involved in the survey were convicted as juveniles, a figure that grew to 40 per cent convicted (excluding minor crimes) by 40 years of age. The official convictions matched reasonably well with self-reported delinquency.
By comparing the worst offenders with the remainder of the cohort, predictive factors began to emerge. These are factors evident during childhood and adolescence that have predictive value with respect to behaviour in later life. The Cambridge Study strongly suggests that the intensity and severity of certain adverse features in early life predict the onset of antisocial behaviour and later criminal behaviour.
Farrington (2002) lists these predictive factors as follows:
1. antisocial behaviour, including troublesomeness in school, dishonesty and aggressiveness;
2. hyperactivity–impulsivity–attention deficit, including poor concentration, restlessness and risk-taking;
3. low intelligence and poor school attainment;
4. family criminality as seen in parents and older siblings;
5. family poverty in terms of low family income, poor housing and large family size; and
6. harsh parenting style, lack of parental supervision, parental conflict and separation from parents.

Other studies have found similar predictors for aggression and violent conduct (Kingston & Prior, 1995). It is also evident that childhood antisocial behaviour and adolescent delinquency are related to other developmental problems. Stattin and Magnusson (1995) found clear relationships between the onset of official delinquency and other educational, behavioural and interpersonal problems. Farrington, Barnes and Lambert (1996) showed that these developmental problems are frequently concentrated in specific families. In their sample of 397 families, half the total convictions in the whole sample were accounted for by 23 families!
The force of the Cambridge Study and other similar research is to suggest that we need prevention strategies to reduce child and adolescent antisocial behaviour (Farrington, 2002). Such strategies might include improving young people’s school achievement and interpersonal skills, improving child-rearing practice and reducing poverty and social exclusion.

[David P. Farrington (1944– ) is currently Professor of Psychological Criminology at the Institute of Criminology at the University of Cambridge. A prolific researcher, he is widely cited for his work (initially with Donald West) on the Cambridge Study in Delinquent Development. He has also published on a range of other topics, including shoplifting, bullying, crime prevention, and methodologies for evaluating criminological interventions. He has the distinction of being the first non-American President of the American Society of Criminology. He is a former chair of The British Psychological Society’s Division of Criminological and Legal Psychology.]

Adult criminals
A longitudinal study allows us to compare adult offenders and non-offenders to discover even more about the pathways to crime. When the cohort in the Cambridge Study reached the age of 18, the chronic offenders had a lifestyle characterized by heavy drinking, sexual promiscuity, drug use and minor crimes (mostly car theft, group violence and vandalism). They were highly unlikely to have any formal qualifications, they held unskilled manual jobs, and had had frequent periods of unemployment. By 32 years of age, the chronic offenders were unlikely to be home-owners, had low-paid jobs, were likely to have physically assaulted their partner, and used a wide range of drugs. As you might expect, they had an extensive history of fines, probation orders and prison sentences. It was clear from their life histories and current circumstances that these men were leading a bleak and socially dysfunctional existence.
The data also point to protective factors. These are factors that appear to balance the negative predictors, so that at times when you would expect offending to occur, it does not. When males show all the predictive signs for a criminal career and yet do not commit offences, Farrington and West (1990) label them ‘good boys from bad backgrounds’. These men were generally shy during adolescence and socially withdrawn as adults. While not involved in crime, they did experience relationship problems with their parents or partners. Forming close relationships in early adulthood also seems to be related to a decrease in offending. In particular, those offenders who married showed a decrease in offending – providing that their partner was not a convicted offender (Farrington & West, 1995).

It would be a mistake to try to construct an exact model of a criminal career from all these data. There are too many unanswered questions for us to be overly confident in predicting the outcome, and simply describing the predictive factors is not the same as explaining how they bring about delinquent behaviour. Thus far, there is no grand theory to explain how the interaction between a young person and his or her environmental circumstances culminates in criminality. However, there are enough positive developments in the extant literature to indicate that this might be feasible in the future, at least with respect to certain probabilities and confidence limits.

Criminal behaviour takes many forms, but there is little doubt that violent acts are a source of great public concern. A recent World Health Report (Krug et al., 2002) referred to violence as ‘a global public health problem’. Contemporary psychological theory characterizes violence in this context in terms of an interaction between the qualities of the individual and characteristics of their environment.

The development of violent behaviour
As with delinquency, the development of violent behaviour can be studied over the lifespan, leading to the formulation of complex models of violent conduct. Nietzel, Hasemann and Lynam (1999) developed a model based on four sequential stages across the lifespan. This is an excellent example of an attempt to integrate social, environmental and individual factors to characterize the key factors underlying violence. At the first stage, there are distal antecedents to violence. These are divided into biological precursors (including genetic transmission and lability of the autonomic nervous system), psychological predispositions (including impulsivity and deficient problem solving) and environmental factors (such as family functioning and the social fabric of the neighbourhood). At the second stage, there are early indicators of violence as the child develops, such as conduct disorder and poor emotional regulation. Third, as the child matures the developmental processes associated with the intensification of violent behaviour come into effect, including school failure, association with delinquent peers, and substance abuse.

Finally, as the adolescent moves into adulthood there is a stage at which maintenance variables come into force, including continued reinforcement for violent conduct, association with criminal peers, and social conditions that provide opportunities for crime.

A psychological profile of violence
Research has also begun to uncover some of the psychological processes characteristic of the violent person. For example, the influential work of Dodge and colleagues has drawn on social information processing [social information processing theoretical model of how we perceive and understand the words and actions of other people] (i.e. how we perceive and understand the words and actions of other people) to seek to understand the psychology of violence. Crick and Dodge (1994; Dodge, 1997) suggested that we follow a sequence of steps when we process social information:
1. encoding social cues;
2. making sense of these cues;
3. a cognitive search for the appropriate response;
4. deciding on the best option for making a response; and
5. making a response.
Dodge proposed that violent behaviour may result from deficits and biases at any of these stages.
Beginning with social perception, there is evidence that aggressive young people search for and encode fewer social cues than their non-aggressive peers (Dodge & Newman, 1981) and pay more attention to cues at the end of an interaction (Crick & Dodge, 1994). This misperception may in turn lead to misattribution of intent, so that the actions of other people are mistakenly seen as hostile or threatening (Akhtar & Bradley, 1991; Crick & Dodge, 1996). Working out how best to respond to a situation is a cognitive ability often referred to as social problem solving. It involves generating feasible courses of action, considering potential alternatives and their likely consequences, and making plans for achieving the desired outcome (Spivack, Platt & Shure, 1976). Studies suggest that violent people show restricted problem-solving ability and consider fewer consequences than non-violent people (Slaby & Guerra, 1988). This sequence of cognitive events culminates in violent behaviour, which the violent person may view as an acceptable, legitimate form of conduct (Slaby & Guerra, 1988).

The role of anger
Cognitions interact with emotions, and anger (particularly dysfunctional anger) is the emotional state most frequently associated with violent behaviour (Blackburn, 1993). Anger may be said to be dysfunctional when it has significantly negative consequences for the individual or for other people (Swaffer & Hollin, 2001). It would be wrong to say that anger is the principal cause of violence, or that all violent offenders are angry, but clearly it is a consideration in understanding violence.
Currently, the most influential theory of anger is Novaco’s (1975). According to Novaco, for someone to become angry, an environmental event must first trigger distinctive patterns of physiological and cognitive arousal. This trigger usually lies in the individual’s perception of the words and actions of another person.

When we become angry, physiological and cognitive processes are kicked into action. Increased autonomic nervous system activity includes a rise in body temperature, perspiration, muscular tension and increased cardiovascular activity. The relevant cognitive processes (Novaco & Welsh, 1989) involve various types of information-processing biases concerned with the encoding and interpretation of triggering cues. For example, attentional cueing is the tendency to see hostility and provocation in the words and actions of other people, while an attribution error occurs when the individual believes that his or her own behaviour is determined by the situation, but that the behaviour of other people is explained by their personality. The progression from anger to violence is associated with the disinhibition of internal control, which can result from factors such as high levels of physiological arousal, the perception that there is little chance of being apprehended or punished, and the perpetrator’s use of drugs or alcohol.

Moral reasoning
There is a long history of research into the relationship between moral reasoning and offending (Palmer, 2003). Gibbs has examined the specific association between moral reasoning and violent behaviour, focusing on the bridge between theories of social information processing and moral development. Gibbs and colleagues suggest that this bridge takes the form of cognitive distortions (Gibbs, 1993; Goldstein, Glick & Gibbs, 1998) by which we rationalize or mislabel our own behaviour.
For example, if I perceive someone else’s actions as having hostile intent, leading me to assault them, my distorted rationalization might be that ‘he was asking for it’. Cognitive distortion is also seen in my biased interpretation of the consequences of my behaviour. So I might say that my victim ‘could have had it worse’ or ‘wasn’t too badly hurt’ or that ‘no real damage was done’ (Gibbs, 1996). These powerful types of distorted thinking are often socially supported and reinforced by the offender’s peer group.

The effectiveness of rehabilitation - A research issue
Historically, there is a longstanding struggle between liberal proponents of rehabilitation of offenders and conservative advocates of punishment for those who commit crimes. While this debate operates on many levels, engaging both moral and philosophical issues, there is an important role for empirical research: do efforts to rehabilitate offenders demonstrably lead to a reduction in offending?
The issue for researchers therefore seems plain: do attempts at rehabilitation work? Arriving at an answer to this question is not so straightforward. How is it possible to make sense of the outcome evidence from studies using different types of treatment, conducted with different offender populations, and carried out in a range of settings?
In the late 1960s, a research team was commissioned in New York state specifically to address the issue of the effectiveness of rehabilitation with offenders. The research team was given the task of conducting a comprehensive review of the effectiveness of rehabilitative efforts in prisons.
Design and procedure: The researchers set about their survey in the traditional way. They conducted a search of the literature, identifying a total of 231 relevant treatment outcome studies appearing between 1945 and 1967. Following traditional review procedures, they then set their criteria for successful and unsuccessful studies and categorized each study according to these criteria. This procedure is sometimes referred to as ‘vote counting’.
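The ‘vote counting’ logic is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of the general procedure, not the reviewers’ actual criteria or tally:

```python
def vote_count(outcomes):
    """Tally studies judged successful against those judged unsuccessful.

    `outcomes` is a sequence of booleans: True if a study met the
    reviewers' criteria for a successful treatment outcome.
    """
    successes = sum(1 for met_criteria in outcomes if met_criteria)
    return successes, len(outcomes) - successes

# Hypothetical tally over ten outcome studies
wins, losses = vote_count([True, False, True, False, False,
                           True, False, False, True, False])
```

Note what this procedure loses: every study counts equally, regardless of its sample size or the magnitude of its effect. This is one reason why meta-analysis, which aggregates effect sizes rather than simple verdicts, later proved the more informative way of summarizing the same literature.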
Results and implications: The dissemination of the findings of government-sponsored applied research is not always straightforward. The researchers presented a 1400-page manuscript to the state committee in the early 1970s, and eventually a book was published giving details of the research (Lipton, Martinson & Wilks, 1975). But the research findings presented in the book were not what created the study’s major impact. In 1974 one member of the research group, Robert Martinson, individually published an article in a general interest journal, pre-empting the book (Martinson, 1974). His general stance was that there is very little evidence that treatment has any significant effect on offending. Martinson’s paper begins by asking ‘What works?’ and concludes with a section ‘Does nothing work?’ (to which he finds the answer, with some caveats, to be affirmative). The notion that ‘nothing works’ took hold in many quarters, including academic researchers, policy-makers in the criminal justice system and the public at large. The impact of the message was significant, as funding was withdrawn from projects aimed at rehabilitation, prisons espoused a custodial rather than rehabilitative role, and theory and practice shifted to punishment, deterrence and ‘just deserts’ for offenders.

Lipton, D.S., Martinson, R., & Wilks, J., 1975, The Effectiveness of Correctional Treatment, New York: Praeger.
Martinson, R., 1974, ‘What works? Questions and answers about prison reform’, Public Interest, 35, 22–54.


Interviews are one of the most common ways of gathering information across a range of settings for a variety of reasons (Memon & Bull, 1999). In the context of crime investigation, there will be interviews with witnesses, suspects and victims, all conducted with various aims, including gathering evidence, cross-checking information and eliciting confessions (Milne & Bull, 1999). Interviewing children has become something of a speciality in its own right (Lamb et al., 1999). The less salubrious aspects of police interviewing have been highlighted by investigators of false confessions, but there are other, more constructive, aspects of the interview process to consider. A technique known as the cognitive interview [cognitive interview method of questioning witnesses, devised for use by the police, based on principles taken from memory research] illustrates the application of psychology to facilitate investigative interviewing. A great deal of the research on eyewitness testimony points to the frailties of memory and questions the reliability of eyewitness evidence. The cognitive interview is an attempt to find a constructive solution to these problems and improve the accuracy of eyewitness recall.

Fisher, McCauley and Geiselman (1994) describe how the original cognitive interview protocol, used by police officers, incorporated four techniques to enhance memory retrieval:
1. Context reinstatement – the witness is encouraged to recollect aspects of the situational context (such as sights and sounds at the time of the event and relevant personal factors, such as how they felt and what they were thinking at the time of the incident).
2. Report everything – the witness engages in perfectly free recall, unconstrained by focused (and potentially leading) questioning, or self-censoring of what is reported. The theory underpinning these two techniques lies in the contextual similarity between encoding and retrieval. So if the process of retrieval from memory can take place in a similar psychological context to that in which the information was encoded, the witness should have facilitated access to stored memories, improving the accuracy and completeness of recall (Fisher et al., 1994).
3. Reverse order – the witness is encouraged to begin their description of an event from different starting points (such as a mid-point), or to start at the end and work backwards to the beginning.
4. Change perspective – witnesses are encouraged to try to give an account of the event from the point of view of another person, such as another witness or the victim.

Techniques 3 and 4 are intended to encourage witnesses to try to use many different paths to retrieve information from memory. If memories are stored as networks of associations, increasing the number of retrieval points should lead to more complete recall of the original event (Fisher et al., 1994). As the research and practice base developed, so the protocols for the cognitive interview expanded to include, for example, a broader range of specific questioning techniques and the use of guided imagery (Fisher & Geiselman, 1992). A body of evaluation studies, conducted in both laboratory and field settings, has accumulated since 1984. According to Milne and Bull (1999), the weight of evidence shows that the cognitive interview elicits more correct (that is, truthful) information than other types of interview. While there are some reservations, the technique is generally well received by police officers and has become widely used. Furthermore, recent research suggests that it is a reliable and helpful technique with child witnesses (Milne & Bull, 2003).

How easy is it to tell when someone is telling lies and seeking to deceive? Kassin (1997) cites several examples taken from police training manuals that suggest suspects’ verbal and non-verbal cues can be read to determine if they are lying. For example, it has been suggested that guilty suspects do not make eye contact, while innocent suspects give clear, concise answers.

It is possible that these general rules are useful, but the empirical evidence suggests that even skilled questioners are not good at detecting deceit simply on the basis of a suspect’s verbal and non-verbal cues (Ekman & O’Sullivan, 1991). Vrij (2000) suggested that most liars are caught because it becomes too difficult to continue to lie, and they have not made sufficient preparation to avoid detection. Vrij lists seven qualities that make a good liar:
1. having a well prepared story;
2. being original in what is said;
3. thinking quickly when the need arises;
4. eloquence in storytelling;
5. having a good memory for what has been said previously;
6. not experiencing emotions such as fear or guilt while lying; and
7. good acting ability.

If verbal and non-verbal cues are hard to read, how does an investigator catch out an individual who possesses all the attributes listed above? One approach is a highly structured analysis of verbal content, known as Statement Validity Assessment (SVA). Originally developed as a clinical tool for analysing children’s statements in cases of sexual abuse (Undeutsch, 1982), SVA consists of three elements:
1. A statement is taken in a structured interview.
2. The content of the statement is judged by the forensic psychologist in a criterion-based content analysis (CBCA). These content criteria are concerned with the general characteristics of the statement (such as whether it has a logical structure), the specific contents of the statement (such as descriptions of events and people), motivation-related content (such as admission of a lack of memory) and offence-specific elements (concerning the fine details of the offence).
3. The CBCA is necessarily subjective, and needs to be evaluated against a standard set of questions set in the ‘validity checklist’ (Raskin & Esplin, 1991). This checklist raises questions about the conclusions drawn from the analysis.
In other words, the content analysis itself is put to the test by systematic consideration of interviewee characteristics.

The interviewee’s psychological and motivational characteristics, the characteristics of the interview and a ‘reality check’ against other forensic evidence are all examined. It is clear that SVA represents an attempt to bring order and rigour to the essentially subjective matter of judging the veracity and reliability of an interviewee’s statement. However, in a review of the substantial evaluative literature with regard to SVA, Vrij (2000) has expressed several reservations about the technique and highlighted areas where questions remain. He concludes that ‘SVA evaluators appear to be able to detect truths and lies more accurately than would be expected by chance’ (p. 153). In other words, while not a perfect technique, SVA does help improve accuracy beyond guesswork and inaccurate beliefs about how to judge accuracy.

If ever a topic generated a great deal of heat and rather less light, offender profiling [offender profiling constructing a picture of an offender’s characteristics from their modus operandi together with the clues left at the crime scene] would be high on the list for most forensic psychologists.

But as our knowledge base increases, it is likely that the technique will become increasingly sophisticated (Ainsworth, 2001; Jackson & Bekerian, 1997). Wrightsman (2001) distinguishes between profiling historical and political figures, profiling likely criminals from crime scene characteristics, and profiling the common characteristics of known offenders. Turvey (2000) draws the distinction between inductive and deductive methods of profiling. Inductive methods rely on the expert skills and knowledge of the profiler – a method often referred to as ‘clinical’ in style. By contrast, deductive methods rely on forensic evidence, such as crime scene characteristics and offence-related empirical data – an approach often referred to as ‘statistical’.
Profiling historical and political figures
Attempts have been made to construct psychological profiles of historical figures (from Jack the Ripper to Adolf Hitler) by systematically gathering and organizing information in an effort to understand their motives and behaviour. Experts will undoubtedly have constructed psychological profiles of Saddam Hussein in order to try to predict his behaviour during the 2003 conflict in Iraq. These types of profile typically rely on specialist knowledge (e.g. military, historical).
Profiling criminals from the crime scene
Way back in the late 1880s, forensic pathologists were trying to link a series of crimes by the similarity of crime scene characteristics, such as the nature of a victim’s wounds. More recently, the American Federal Bureau of Investigation (FBI) pioneered an investigative system based on central features (such as the details of a crime scene and forensic evidence) in order to construct a profile of the psychological and behavioural characteristics of the criminal (Douglas et al., 1986). While forensic evidence can yield many clues, the starting point for the FBI was to use the crime scene to construct a picture of the type of person who committed the offence. This approach yielded various classifications of types of offender associated with their psychological characteristics.
For example, a much used distinction (mainly concerned with serious offenders such as murderers or rapists), incorporated within the FBI framework, is that between ‘organized’ and ‘disorganized’ offenders (Ressler, Burgess & Douglas, 1988). An organized offender will plan the offence, be careful not to leave evidence, and target the victim. The disorganized offender will seemingly offend at random, use a weapon that is discarded near to the scene of the crime, and make few attempts to hide evidence or potential clues. In terms of psychological characteristics, the organized offender is seen as intelligent and socially adjusted, although this apparent normality can mask a psychopathic personality. According to this framework, the disorganized offender is said to be less intelligent and socially isolated, may have mental health problems, and is likely to offend when in a state of panic. The obvious criticisms of such distinctions (and the FBI approach more generally) are that they are inductive, highly subjective and lacking in robust empirical validation.

Profiling common characteristics of known offenders
The third approach to profiling is to look to empirical data, rather than an expert’s opinion, to construct profiles. This approach emphasizes the rigorous gathering of data about the crime from multiple sources (such as geographical location and victim statements), the application of complex statistical analyses to databases of crime scene details (and other forensic evidence), and attempts to build a profile of the offender with theoretical integrity. Adopting this approach, Canter and Heritage (1990) analysed data from over 60 cases of sexual assault and were able to identify over 30 offence characteristics, such as level of violence, use of a weapon, type of assault and use of threats. Statistical analyses were used to search for relationships and patterns between the factors, and to build up characteristic profiles of types of sexual assault. This and other similar studies provide preliminary support for the central premises of offender profiling based on the common characteristics of known offenders.


In law, a confession is exceptionally powerful evidence – an irrefutable admission of guilt. But while most confessions are true, some people have been known to ‘confess’ to a crime they did not commit. Gudjonsson (2003) offers a catalogue of cases in which people have been imprisoned for long periods, or even executed, on the basis of a false confession. In the UK these infamous cases include those of the ‘Guildford Four’ and ‘Birmingham Six’, two court cases from the mid 1970s, in which four and six innocent people respectively received long prison sentences based on evidence that included false confessions. How often such cases arise is impossible to know – matters of guilt and innocence are not always clear-cut, and the discovery of a mistake in sentencing can take years to come to light. Undoubtedly, some such errors never do. Why people make false confessions, another issue raised by Münsterberg (1908), is a very ‘psychological’ question. A distinction has been drawn between two types of false confession – voluntary and coerced. Coerced false confession can be broken down further into two sub-types – coerced–compliant and coerced–internalized false confessions.

A voluntary false confession occurs when, in the absence of any obvious external pressure, an individual presents himself to the police and admits to a crime he did not commit. Kassin and Wrightsman (1985) suggest several possible reasons for this behaviour:
1. the desire for notoriety – it is a feature of many high-profile crimes that substantial numbers of people come forward to confess;
2. the individual may feel guilty about a previous event in his life, and believe he deserves to be punished;
3. inability to distinguish between fact and imagination, so internal thoughts of committing a crime become ‘real’ (this type of behaviour is often associated with major mental disorders such as schizophrenia);
4. the desire to protect someone else, such as a child or partner (this type of false confession can be coerced as well as voluntary).

Gudjonsson (2003) notes revenge as another motive that can lead to a false confession. In one case, a man made a false confession deliberately to waste police time as revenge for what he perceived as his previous wrongful treatment by the police. In contrast to voluntary false confessions, the essential element of a coerced confession is that the individual is persuaded to confess. As Kassin (1997) suggests, to understand coercion within the context of a false confession it is necessary to begin with the process of police interrogation.

The laws relating to the conduct of police interrogation of suspects vary from country to country. But there are some psychological principles that can be applied whenever one person is seeking information from another, irrespective of location. Suspects may spend time isolated in police cells before and during interrogation, an experience that can be frightening and stressful (Irving, 1986). For some, this situation may create psychological distress or exacerbate existing psychological and emotional conditions. Police interrogation manuals from both Britain (Walkley, 1987) and America (Inbau, Reid, & Buckley, 1986) tell us that, from a police perspective, the interrogator must overcome the suspect’s natural resistance to tell the truth, and so must be skilled in the use of strategies to persuade the suspect to confess. These interrogational tactics, based on the social psychology of conformity, obedience and persuasion (see chapter 18), increase the pressure on suspects so that they will fall into line with the interrogator’s view of events. The interrogator will do this by suggesting that they have the power to determine what charge will be brought, whether the suspect will receive bail or be remanded in custody, and whether to involve other people known to the suspect. The interrogator might also use persuasive tactics designed to encourage the suspect to confess, suggesting, for example, that there is evidence proving the case against the suspect, or that accomplices have confessed, or even, as Gudjonsson and MacKeith (1982) noted, by producing dummy files of evidence.

More recently, there have been various legal changes in the rules governing the conduct of interrogations to eliminate dubious practice (Gudjonsson, 2003). There is guarded optimism that the changes are having the desired effect. But in such a highly charged and complex arena, where there are often pressures on the police to solve a high-profile crime, it can be difficult to be certain of how the minutiae of social exchanges during interrogation influence the final outcome.

Gudjonsson and Clark (1986) suggested that a suspect will come to an interrogation with a general cognitive ‘set’ that may be hostile, suspicious or cooperative. This cognitive set (itself related to factors such as intelligence, level of stress and degree of previous experience of police questioning) will influence the suspect’s appraisal of the situation, and so affect the suspect’s strategy for coping with the interrogation.

Gudjonsson and Clark describe two styles of initial coping response:
1. a logical, realistic approach, which seeks actively to deal with the situation and may lead to active resistance (which may weaken as the interrogation progresses) to the interrogator’s persuasion to confess; and
2. a passive, helpless stance, which avoids confrontation with the interrogator, and so reduces stress but may lead to increased susceptibility to the interrogator’s persuasive tactics.
During questioning, the suspect has to recall information, but she must also make some difficult decisions. She has to decide how confident she is in her memories, what answer to give the interrogator (which may not be the same as the suspect’s private knowledge of events) and whether she trusts the interrogator. Resistant suspects are likely to hold onto their own version of the truth, rebutting persuasive attempts to bring them to confess. Coerced suspects may change their version of the truth so as to agree with the interrogator.

Where a false confession ensues, this process of coerced agreement can be seen in two distinct ways:
1. The suspect remains aware that her confession and her private, internal knowledge of the event disagree, but the suspect nevertheless comes to agree with the interrogator.
This is called a coerced–compliant false confession.
2. In some circumstances, the suspect’s internal account of events actually changes to fall into line with the interrogator, so that, both publicly and privately, the suspect comes to agree with the interrogator’s version of events. This is called a coerced–internalized false confession.

Coerced compliance
The notion of compliance has a long history in psychological research (Asch, 1956; Milgram, 1974; see also chapter 18). The compliant suspect copes with the pressures of interrogation by coming to agree with the interrogator (even while knowing that the agreement is incorrect, in the case of the coerced–compliant false confession). This might happen for several reasons: the suspect might wish to please the interrogator, avoid further detention and interrogation, avoid physical harm (real or imagined) or strike a deal with the interrogator that brings some reward for making a confession (Vennard, 1984).

Coerced internalization
The essential element in a coerced–internalized confession is the suspect’s coming to believe that their own memory for events is incorrect and that the police version must therefore be true. Kassin (1997) has drawn the analogy between this type of confession and the phenomenon of false memories. There are perhaps also parallels with the notion of cognitive dissonance (whereby a person comes to change their attitudes to make them more consistent with their behaviour), and the kind of obedience that occurs towards authority figures (discussed in chapters 1 and 18) may well also be relevant here.

Drawing on the psychology of suggestibility (Gheorghiu et al., 1989), Gudjonsson (1987) developed the notion of interrogative suggestibility – the extent to which, during intense questioning, people accept information communicated by the questioner and so change their responses. [interrogative suggestibility the degree to which individuals are inclined to accept as true the type of information that is communicated by the questioner during interrogation] The powerful combination of situational stress, individual factors such as self-perception, intelligence and memory ability, and current psychological state may trigger suggestibility to misleading information on the part of the suspect, and so produce a false confession.


The capacity and fallibility of human memory was one of the first areas of investigation in psychological research. Through careful experimental work, several distinguished scholars, including Hermann Ebbinghaus (1850–1909), began to unravel some of the fundamental properties of memory functioning (Ebbinghaus, 1885/1994). One model that emerged from this early work described the three memory stages of (i) acquisition (when memories are formed), (ii) retention (holding them in storage) and (iii) retrieval (fetching them from storage).
While memory theory has moved on from this basic model, it is still useful in a discussion of eyewitness memory. [eyewitness testimony the evidence given by witnesses to a crime, typically in the form of a verbal account or person identification] Research into the accuracy of eyewitness testimony has focused on initial observation of the incident (acquisition), the period between seeing and recalling (retention) and, finally, giving testimony (retrieval). Researchers have engaged with a wide range of relevant variables over a long period (Goodman et al., 1999; Ross, Read & Toglia, 1994; Sporer, Malpass & Koehnken, 1996), including:
1 social variables, such as the status of the interrogator;
2 situational variables, such as the type of crime;
3 individual variables, such as witness age; and
4 interrogational variables, such as the type of questioning.

The role of the expert witness
As professional chartered or registered psychologists, forensic psychologists are often called upon to provide reports on particular individuals for court hearings. For example, in a legal context, a forensic evaluation may subsequently be used to assist the court in making an appropriate decision regarding family, civil or criminal matters. A forensic psychologist might be called to give expert evidence on the accuracy of eyewitness memory, or the likelihood of a false confession, or the reliability of children as witnesses when subjected to certain questioning procedures.
The evaluation of the client provided by the forensic psychologist will often involve characterizing the relationship between psychological factors and relevant legal issues. For example, what is the forensic psychologist’s best opinion regarding the possible precipitating factors preceding the crime or civil offence? Findings should be clearly communicated and reflect standard psychological practice, including nationally and internationally accepted psychological instruments and norms. Relevant empirical research that is consistent with the psychologist’s conclusions should be noted. Any recommendations that are made (for example, with respect to rehabilitation) must be legally sound, practical and involve services that are widely available in the individual’s local community. Forensic psychologists should be able to defend their conclusions logically. It is especially important that the psychologist uses explanations that can be understood by non-psychologists, such as the judge, barristers and, of course, members of the jury. The relevant issues should therefore be presented clearly and simply, but without ‘dumbing down’. This takes great skill on the part of the forensic psychologist. The conclusions and recommendations of the forensic psychologist should assist the relevant person or agency in reaching a decision, and should not add unnecessary confusion to that process. In addition to having a relevant training and education background, it is therefore critical for psychologists who undertake forensic evaluations to possess excellent assessment and communication skills. They must also have experience and/or a thorough training in completing psychological evaluations in a legal setting so that they will not be ‘fazed’ by the process. Lawyers engaged in cross-examination can be hostile and seek to undermine the credibility of the psychologists’ professional opinions. 
‘Wherever possible, stick to the facts’ is a piece of advice frequently offered to individuals who are presenting in court. Psychologists offering a professional opinion in court are protected by the court and therefore cannot be sued for defamation.
Nevertheless, they should evaluate the core facts of the case in order to reach a professionally informed opinion regarding the psychological issues only. As with any professional, psychologists should not offer opinions outside their area of expertise. For example, they should not speculate on whether a defective mechanism in the workplace may have contributed to the event they have identified; this would be the province of another forensic professional.

Research has also considered the effect of particular types of crime. For example, can witnesses to a violent crime be as accurate as witnesses to a non-violent crime? Controlled experimental studies, typically during which witnesses see videotaped crimes of varying degrees of violence, suggest that violence results in poorer witness accuracy (Clifford & Hollin, 1981). But strangely, field studies of real-life witnesses suggest that those who are exposed to highly violent events can give very accurate testimony (Yuille & Cutshall, 1986). Indeed, adult victims of rape usually give a reasonably accurate account of this extreme personal experience of violence (Koss, Tromp & Tharan, 1995). One possible explanation for this apparent contradiction is that, in a stressful situation such as a violent crime, a witness’s attention may narrow to the central (rather than the peripheral) details of the incident. The theory is that the deployment of attention narrows to central details of the event, such as the criminal’s actions, thereby producing less reliable memory for peripheral detail, such as what colour shirt the criminal was wearing (Clifford & Scott, 1978). When the central detail is a life-threatening weapon, witnesses may pay much more attention to the weapon, to the exclusion of other details. This phenomenon is known as ‘weapon focus’ (Loftus, Loftus & Messo, 1987). It is vital to understand the impact on witness memory of factors such as the type of crime. What is encoded during acquisition is critical because it forms the basis for what is stored in memory and eventually retrieved when giving testimony.

During the retention stage, witness memory may be subject to various influences, such as discussion with other witnesses and exposure to media accounts of the crime, not to mention the fact that memory becomes less accurate over time. So the time interval between acquisition and retrieval is an obvious consideration. Several studies have compared the accuracy of eyewitness face identification over short and long time intervals. Malpass and Devine (1981), for example, chose short (three-day) and long (five-month) intervals. They found, not surprisingly, that after three days there were no false identifications, but after five months the rate of false identifications had risen (table 21.1). Conversely, the rate of correct identifications was initially high but fell significantly at five months. Krafka and Penrod (1985) reported a similar finding with the much shorter time intervals of two hours and 24 hours. The force of the evidence suggests that identification accuracy does decrease with time, although the rates for false and correct identifications may be different.

Finally, during the retrieval stage, factors that potentially influence the accuracy of eyewitness testimony include interview style and the use of aids to recall, such as the photofit and identity parades.

Studies of the impact of leading questions show that even subtle changes in question wording can influence testimony. For example, Loftus and Palmer (1974) asked witnesses to a filmed traffic accident to estimate the speed of the cars when ‘they ___ into each other’: for different groups of witnesses the blank read ‘contacted’, ‘hit’, ‘bumped’, ‘collided’ or ‘smashed’. The witnesses’ estimates of the speed increased according to the level of force implied by the verb contained in the question. In later questioning, those witnesses who had been asked about the car ‘smash’ were more likely to say – mistakenly – that they had seen broken glass. Additional studies have established that misleading information presented to witnesses is more likely to have an influence on peripheral details than central events (Read & Bruce, 1984). Furthermore, it seems that the effects of leading questions such as those used by Loftus and Palmer (1974) are a direct product of the demands of the questioning procedures, rather than the questions leading to permanent changes in memory (Zaragoza, McCloskey & Jarvis, 1987). This last point emphasizes that witnesses can give incorrect replies to questions even though the memory trace (‘retention’) itself has apparently not been distorted.

Narby, Cutler and Penrod (1996) have created three categories of witness-related evidence based on reliability and magnitude of effect:
1. reliable and strong factors that show consistent effects on eyewitness memory (e.g. differences in memory performance between adult and child witnesses; whether the offender wore a disguise, such as a hat; and the length of time, termed ‘exposure duration’, that the witness had to observe the incident);
2. reliable and moderate factors that show effects in some studies but not in others (e.g. the match between the level of confidence a witness has in their memory and how accurate it really is; weapon focus; and crime seriousness); and
3. weak or non-influential factors that have little or no effect on witness accuracy (e.g. witness gender; the personality of the witness; and (within limits) the witness’s level of intelligence).
An issue that is broader than the strength of the evidence concerns its validity when applied to the real world. Do the findings from psychological studies parallel what happens to real crime witnesses? Should research findings be made available to the court to influence real trials? In other words, can psychological studies of eyewitness memory be generalized to real life?
Critics such as Konecni and Ebbesen (1986) and Yuille and Cutshall (1986) note the lack of realism in many experimental studies, such as the use of filmed crimes, and the participants’ awareness of the research aims. The matter boils down to one of control – laboratory studies allow a high degree of control at the expense of realism, while field research is more realistic and ‘ecologically valid’ but prey to a host of influences that reduce control over the variables being measured. This is a problem which has bedevilled many areas of psychological research, but its influence is arguably nowhere more profound than in the field of forensic psychology.

The strongest conclusions that can be validly drawn will most likely be derived from a variety of studies (including laboratory studies, case studies, field studies and archival studies) which employ a broad range of different experimental designs and methodologies (see chapter 2 on ‘triangulation’).

Measuring crime - A research issue
A great deal of research in forensic psychology relies on a measure of crime as a key outcome measure. For example, the evaluation of crime prevention initiatives, innovative police procedures or offender treatment strategies all rely on measuring their impact on crime in order to estimate their effectiveness and make decisions regarding their continued funding.
There are several ways of measuring crime (for example, conducting victim surveys to gain knowledge of local or national estimates of levels of crime, asking known criminals to give self-reports about their offending, or looking to official reconviction records). We know that there will be differences across measurement of crime according to the use of victim reports, self-reports or official figures. But what can we say about variation within a type of measurement? For example, can it be taken for granted that there will be consistency in official records of crime?
A study by Friendship, Thornton, Erikson and Beech (2001) looked at the two main sources of criminal history information held in England and Wales. It is from these sources that researchers take official reconviction figures.

Design and procedure: The research was concerned with two sources of criminal history data:
1. The Offenders Index (OI) is a computerized database containing criminal histories, based on court appearances, of all those convicted of standard offences in England and Wales since 1963.
2. The National Identification System (NIS) is based on police records held both on microfiche and a computer database.
In order to compare these data sources, the researchers took a sample of 134 sexual offenders and compared the data for offence history and reconvictions for this group as recorded on the OI and the NIS.

Results and implications: There were variations between the two data sources in their recording of criminal history variables of the sexual offenders.
This variation, in turn, indicated that the reconviction rates derived from the two data sources differed. Based on the OI, the reconviction rate for the sample was 22 per cent reconviction for general offences and 10 per cent for further sexual offences, but for the NIS, the comparable rates were 25 per cent and 12 per cent, respectively. Also, when a composite measure of reconviction was derived by combining OI and NIS data to give the ‘best’ estimate, the reconviction rates, again for general and sexual offences, were 32 and 13 per cent respectively. Friendship et al. are correct in stating that their findings will be of great benefit to researchers. The use of composite measures of reconviction, based on both the OI and NIS data sources, gives a more complete indication of reconviction than either source used alone. This is essential information, based on empirical study, for researchers whose work may inform both policy-makers and practitioners.
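The logic of the composite measure described above can be sketched in a few lines of code. The sketch below is purely illustrative: the offender IDs and reconviction flags are invented, and do not reproduce Friendship et al.'s (2001) actual data, but they show why a composite of two incomplete record sources (counting an offender as reconvicted if either source records it) can only equal or exceed the rate from either source alone.

```python
# Hypothetical sketch of combining two criminal-record sources into a
# composite reconviction measure. All IDs and figures are invented.

def reconviction_rate(sample, reconvicted_ids):
    """Proportion of the sample that a source records as reconvicted."""
    return len(sample & reconvicted_ids) / len(sample)

# A small invented sample of offender IDs.
sample = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

# Each source misses some cases the other picks up.
oi_reconvicted = {1, 2}       # recorded on the (hypothetical) OI extract
nis_reconvicted = {2, 3}      # recorded on the (hypothetical) NIS extract

oi_rate = reconviction_rate(sample, oi_reconvicted)
nis_rate = reconviction_rate(sample, nis_reconvicted)

# Composite: reconvicted if EITHER source records a reconviction.
composite_rate = reconviction_rate(sample, oi_reconvicted | nis_reconvicted)

print(oi_rate, nis_rate, composite_rate)  # 0.2 0.2 0.3
```

Because the composite is a set union, it necessarily gives a more complete (never lower) estimate than either source alone, which is the pattern Friendship et al. report for the OI and NIS figures.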

Friendship, C., Thornton, D., Erikson, M., & Beech, A., 2001, ‘Reconviction: A critique and comparison of two main data sources in England and Wales’, Legal and Criminological Psychology, 6, 121–9.


Crime is part of our everyday lives. Switch on the television and there will be documentaries about crime, films about crime and crime stories in the news. Pick up a newspaper and there will be coverage of local crimes, and articles about crimes of national and international significance.
Browse in a bookshop and you will probably find a crime section with novels about crime, true crime stories, books about criminals and books written by criminals. Listen to a conversation on the bus or in the pub and there is a good chance that you will hear someone talk about a burglary in their street, or their car being broken into, or a friend’s credit cards being stolen.
From the time that Cain killed Abel, crime has been in the news. For centuries before psychology appeared on the scene, philosophers struggled to understand evil and antisocial acts, while students of jurisprudence wrestled with issues of criminal law and punishment. It was not until the turn of the 1900s that psychology was first applied to understanding criminal behaviour, and forensic psychology did not really emerge as a speciality until the middle of the twentieth century.
But forensic psychology has quickly grown in popularity, aided and abetted by several well-known television series. University postgraduate courses have expanded to include forensic psychology, and there is now a range of professional opportunities for those with the appropriate qualifications.

According to The Concise Oxford English Dictionary, ‘forensic’ means ‘Of, used in, courts of law’. So, strictly speaking, forensic psychology is the application of psychology to matters concerning the court of law.
Wrightsman’s Forensic Psychology takes just this approach in proposing that ‘Forensic psychology is reflected by any application of psychological knowledge or methods to the task facing the legal system’ (2001, p. 2). This correct usage of the term ‘forensic’ is similarly reflected in other texts given specifically to forensic psychology (Gudjonsson & Haward, 1998) or more generally to psychology and law (Bartol & Bartol, 1994; Kapardis, 1997; Stephenson, 1992).
But ‘forensic psychology’ has also come to be used in a much broader sense – when psychology is associated with any topic even remotely related to crime, such as the development of antisocial behaviour, the study of different types of offender, and crime prevention. This improper use of the term ‘forensic’ has, rightly, met with disapproval (Blackburn, 1996), but its use has become widespread.
In considering the topic of forensic psychology in the broad sense it is helpful to distinguish between legal psychology [legal psychology the application of psychology to matters of concern in a court of law] – which can be thought of in terms of Wrightsman’s definition – and criminological psychology [criminological psychology the application of psychology to enrich our understanding of crime and criminal behaviour] – the application of psychological knowledge and methods to the study of crime and criminal behaviour.

The application of psychology to the legal arena took place even as psychology first developed as a university-based academic discipline. In their history of forensic psychology, Bartol and Bartol (1999) note that several eminent figures, such as J. McKeen Cattell (1895), Alfred Binet (1905) and William Stern (1910), conducted studies of the accuracy of memory, drawing parallels with the reliability of real-life eyewitness testimony. Even Sigmund Freud showed an interest in legal psychology, publishing in 1906 a paper titled ‘Psychoanalysis and the ascertaining of truth in courts of law’.

But there is little doubt that the most influential figure of the time was the American-based German psychologist Hugo Münsterberg (1863–1916). A doctoral student of Wilhelm Wundt in Leipzig, Münsterberg met William James at Harvard in 1889, eventually taking a post there in 1897 (Spillmann & Spillmann, 1993). While writing on many areas of psychology, often in a controversial manner (Hale, 1980), Münsterberg’s major contribution to the fledgling discipline of forensic psychology is to be found in his book, published in 1908, On the Witness Stand. He advanced the view that psychology could usefully be applied to enhance understanding of courtroom issues and procedures. In particular, Münsterberg drew attention to the psychologist’s understanding of perception and memory, claiming that psychological knowledge provided insight into the reliability of witness testimony (thereby making the case for the psychologist as expert witness). At the time, Münsterberg’s claims for the practical benefits of psychology in the courtroom drew fierce criticism from the legal profession (Wigmore, 1909). But his writings have stood the test of time in anticipating important areas of research, such as the study of the reliability of evidence, as seen in investigations of eyewitness memory and confessional evidence.

[Hugo Münsterberg (1863–1916) is often referred to as the founding father of forensic psychology. A German psychologist, Münsterberg was invited to America in 1892 by William James to set up a psychological laboratory at Harvard University. Münsterberg’s insistence that psychology could be applied to education, industry, and law was variously applauded as inspired by his supporters, or derided as opportunistic by his critics. During the First World War his political views (as seen in his pro-German sympathies, and a critical stance that he adopted to American involvement in the war) led to his becoming a social and academic outcast.]

[L. R. C. Haward (1920–98) can rightly be acclaimed as the first major figure in British forensic psychology. A clinical psychologist by training, Lionel Haward saw the potential for psychology to inform legal proceedings. He published on the topic of forensic psychology in the 1950s – well before The British Psychological Society formed the Division of Criminological and Legal Psychology (now the Division of Forensic Psychology) – and in 1981 he wrote the classic text Forensic Psychology. Alongside his academic work, he appeared as an expert witness in many cases, including the infamous 1971 obscenity trial of the underground magazine Oz.]


As organization designs change, psychologists have investigated new ways to analyse those organizations. One approach that has caught the attention of many social scientists is to view organizations as ‘cultures’.

Manifestations of culture
Imagine describing to your friends the experience of visiting a distant foreign country. You might talk about the dress, laws, religious beliefs, cultural values and traditions, physical environment, social attitudes, buildings, night life, recreational activities, language, humour, food and rituals of that country. Organizations can also be described in terms of their cultures, including their values, attitudes and beliefs. Manifestations of culture include:
 Hierarchy – e.g. the number of levels of command or management, from the head of the organization to the lowest level employee.
 Pay levels – high or low, whether there is performance-related pay, and what the differentials are between people at different grades.
 Job descriptions – how detailed or restrictive they are, and what aspects they emphasize (e.g. safety, productivity, cost saving or quality).
 Informal practices – e.g. norms such as management and non-management employees sitting at separate tables in the canteen; strictly formal dress, uniforms or casual dress.
 Espoused values and rituals – e.g. an emphasis on cooperation and support vs. cut-and-thrust competition between teams; cards, gifts and parties for those leaving the organization; celebrations at certain times of the calendar or financial year.
 Stories, jokes and jargon – e.g. commonly told stories about a particular personal success or the failings of management; jokes about the sales department; jargon or acronyms (most government departments have a lexicon of acronyms and jargon, which is often impenetrable to outsiders).
 Physical environment – office space, canteens, rest rooms. Are all spaces clean, tidy and comfortable or only the areas on public display? Are there decorations, such as plants and paintings, and adequate employee facilities, such as water fountains?

The meanings of all these aspects of the organization taken together tell us about its underlying culture (Schein, 1992).
There has been particular interest in how to ‘manage’ organizational culture, and considerable resources have been spent trying to create ‘a service culture’ or ‘an open culture’ or ‘a people culture’, to name but three examples.

Understanding culture
Organizational psychologists have adopted three approaches to understanding culture (Martin, 1992): integration, differentiation and fragmentation. These differing approaches suggest that organizational culture is complex and is best understood from a multidimensional perspective.
1. The integration perspective Those who adopt this view believe that a ‘strong culture’ will lead to more effective organizational performance. A strong culture is consistent throughout the organization, and there is organization-wide consensus and clarity. Senior management set the values and develop a mission statement. When this is effectively communicated and implemented via managerial practices, organization-wide consensus is shaped. So employees know what they are supposed to do and agree on the value of doing it. McDonald (1991) described such a culture in the Los Angeles Olympic Organizing Committee. The employees wore attractive uniforms, developed elaborate rituals, introduced brightly coloured stadium decorations, adopted an intense working pace and told many stories about their charismatic leader, which all reinforced an organization-wide commitment around a shared set of values. However, organizational psychologists now believe that culture is more complicated than the integration perspective alone implies.
2. The differentiation perspective This view recognizes that employees or members have differing interests, task responsibilities, backgrounds, experiences and expertise, which means that work attitudes and values, as well as pay and working conditions, will vary throughout the organization. Add the differing social identities due to gender, class and ethnic background, and, according to this perspective, the concept of a unifying culture seems inappropriate. Instead, it is proposed that within the organization there are overlapping and nested sub-cultures, which co-exist in relationships of harmony, conflict or indifference. Van Maanen (1991) found just this differentiation even in the ‘strong culture’ of Disneyland. Food vendors and street cleaners were at the bottom of the status rankings whereas, among ride operators, those responsible for ‘yellow submarines’ and ‘jungle boats’ had high status. Some tension was noted between operators, supervisors and even customers as the different groups interacted. At the same time, supervisors were engaged in an endless struggle to catch operators breaking the rules. According to Van Maanen, the conflict or differentiation perspective offers a more realistic account of organizational culture than the integration perspective.
3. The fragmentation perspective Ambiguity is a defining feature of many organizations. According to the fragmentation perspective, this ambiguity occurs because there simply is no consensus about the meanings, attitudes and values of the organization. Meyerson (1991) demonstrated this approach in a study of a social work organization. Goals were unclear, there was no consensus about appropriate ways to achieve them, and success was hard to define and to assess. In this organization, ambiguity was the salient feature of working life. As one social worker reported: ‘It just seems to me like social workers are always a little bit on the fringe; they’re part of the institution, but they’re not. You know they have to be part of the institution in order to really get what they need for their clients, but basically they’re usually at odds with the institution.’
There is considerable debate about the types of cultures that are associated with organizational effectiveness but some researchers have gathered data from the employees of successful companies on which characteristics they associate with their companies’ success. These include emphases on customer service, quality of goods and services, involvement of employees in decision making, training for employees, teamwork and employee satisfaction.

Why do people get up out of their warm beds to get to work on time on a cold winter morning? Why do they conform to the office dress code? Why do they allow the boss to talk to them in a way they would not permit from others? The explanation goes beyond the simple need for pay – it relates to issues of power and control.

[Characteristics that employees of successful companies associate with their success: typical survey dimensions and items.
 Customer service – strong emphasis on customer service; company provides quality service; customer problems corrected quickly; products/services delivered in a timely fashion.
 Quality – senior management committed to quality service and demonstrates that quality is a top priority; supervisors provide service guidance and set good examples in relation to quality; work group quality is rated; continuous improvement; clear service standards are set; quality is a priority over meeting deadlines and over cost containment.
 Involvement – front-line staff have the authority necessary to meet customers’ needs; encouragement to be innovative and to participate in decisions; sufficient effort to get opinions of staff; management use employees’ good ideas.
 Training – plans for training and development; opportunities for staff to attend training and to improve skills; staff are satisfied with training opportunities; staff have the right training to help them improve; new employees get necessary training.
 Information/knowledge – management gives clear vision/direction; staff have a clear understanding of goals and are informed about issues; departments keep each other informed; enough warning about changes; satisfaction with organizational information.
 Teamwork/cooperation – cooperation to get the job done; management encourages teamwork; workload divided fairly; enough people to do the work; problems in teams corrected quickly.
 Overall satisfaction – high job satisfaction; jobs use skills and abilities; work gives a feeling of accomplishment; satisfaction with, and pride in, the organization; would recommend it as a place to work; high job security; not seriously considering leaving.]
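Survey dimensions like the ones listed above are typically scored by averaging employees’ ratings of the items within each dimension. A minimal sketch, with invented item ratings on a hypothetical 1–5 agreement scale:

```python
# Hypothetical sketch: scoring survey dimensions by averaging item ratings
# (1 = strongly disagree ... 5 = strongly agree). Dimension names echo the
# survey above; the ratings themselves are invented for illustration.

responses = {
    "Customer service": [4, 5, 4, 4],  # e.g. "customer problems corrected quickly"
    "Training":         [3, 2, 3, 3],  # e.g. "staff are satisfied with training opportunities"
    "Teamwork":         [5, 4, 4, 5],  # e.g. "management encourages teamwork"
}

# Mean rating per dimension, reported from strongest to weakest
scores = {dim: sum(items) / len(items) for dim, items in responses.items()}
for dim, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{dim}: {score:.2f}")
```

Averaging like this is the simplest scale-scoring approach; in practice researchers would also check the internal consistency of each dimension before treating the mean as a meaningful score.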

The pursuit of power
‘Power’ can be defined as the probability of someone carrying out their own will, despite resistance (Weber, 1947). It is not usually wielded nakedly in organizations, because doing so creates resentment and resistance. Instead, those in power tend to use influence and persuasion, which is generally effective because we know that they have the power to ultimately achieve their ends. The pursuit of power for its own sake can be very destructive. McClelland (1975) conducted an analysis of people’s needs for power and showed how those with a strong power motive may present themselves well at interview but be a disaster at work, alienating others and reducing the capacity of the organization to achieve cooperative, collaborative, concerted action. This is because they tend to interpret most situations in power terms and act in Machiavellian or manipulative ways to assert or gain power.
Power, according to French and Raven (1959), derives from five sources:
Legitimate power comes from position in the hierarchy and is imposed by authority.
Expert power results from access to knowledge and information, so the computer wizard often gains considerable power in an organization.
Reward power is illustrated by the person who allocates offices, parking spaces, pay rises, equipment or stationery – such people may have considerable power without being in a senior position in the hierarchy.
Coercive power is the power to force others into action or inaction by the threat of punishment, such as delaying the payment of expenses claims.
Referent power is wielded by someone whose persuasiveness, popularity or charisma leads others to accede to his/her wishes or suggestions.
A pluralist view
The power and politics perspective (Pfeffer, 1981) examines the way individuals and groups within organizations compete for resources and other desired ends (e.g. office space, visibility, recognition, promotion). This ‘pluralist’ view regards organizations as made up of a variety of interests and beliefs that should all be heard. It contrasts with the notion that organizations can (with appropriate management) be one ‘happy family’, with everyone in the organization believing in the same ideals as the strong leader. This latter perspective is the ‘unitarist’ view (Burrell & Morgan, 1979). The pluralist perspective is particularly relevant as businesses become more global and our societies become more multiethnic.
Organizations must reflect their societies if they are to be sensitive to the needs and desires of their customers, quite apart from the moral issues of equal opportunities. Organizational psychologists are therefore becoming increasingly concerned with managing a workforce that is diverse in terms of ethnicity, disability, age, culture and gender.

Women at work
A major area of research on power in organizations examines the experiences of women at work. The list of potentially relevant themes (some of which also apply to men) is long, including: bias in selection, placement, performance appraisal and promotion; sexual harassment; obstacles to achievement and advancement; conflict between work and family responsibilities. Other concerns relate to being in a non-traditional (i.e. ‘male’) job and being in the minority (worse still, a ‘token’) as a female manager (Gutek, 1993). A significant problem is stereotyping. The effects reach deep into adult employment, where 52 per cent of employed women work in occupational groups in which more than 60 per cent of their co-workers are women, such as clerical and secretarial work, service work and sales. Similarly, 54 per cent of men work in occupational groups where more than 60 per cent of their co-workers are men, including occupational groups such as managers and administrators, craft and related occupations, plant and machine operatives (Equal Opportunities Commission, 1998).
Women are also vastly over-represented in part-time work, and pregnancy is still (illegally) treated by some employers as a cause for dismissal. In 1998, the UK Equal Opportunities Commission reported that 34 per cent of complainants had been dismissed or threatened with dismissal when they first announced their pregnancy; 28 per cent before going on maternity leave, 18 per cent while on leave, and 3 per cent on their return to the workplace (Equal Opportunities Commission, 1998). Perhaps most revealing of the pervasive discrimination against women in the workplace is the data on pay. The gender gap in average hourly pay of full-time employees, excluding overtime, narrowed between 1998 and 2003 to its lowest value since records began; even so, women’s average hourly pay was still only 82 per cent of men’s. Although women have increased their representation somewhat in the ranks of executives (from 8.9 per cent in 1991 to 18 per cent in 1998), they still account for less than 5 per cent of company directors in the UK (Equal Opportunities Commission, 2004). In the US in 2004, only 8 of the top 500 companies were headed by a woman. One issue, which is much debated, is whether women have different managerial or leadership styles from men. The bulk of the research suggests there are large differences within genders as well as between them, but that women adopt a consistently more democratic and participative style of management than men do (Eagly & Johnson, 1990; Powell, 1993). Some researchers argue that women also have a more ‘transformational’ style, inspiring and encouraging their employees, whereas men tend to use a ‘transactional’ style, punishing and rewarding selectively to achieve the desired task-related behaviours (e.g. Rosener, 1990).

The fierce competition of globalization has led organizations to outsource parts of their operations that other companies can do less expensively. This has led to more insecurity within organizations, and to waves of downsizing as organizations cut jobs that seem to add cost but little value. Our concern here is with the effects of redundancy and unemployment on those who experience these events.
While the beginning of an individual’s experience in an organization is a process of learning new behaviours, the end may be a process of letting go as a result of redundancy. Redundancy can come about because of downsizing. It can also be a result of skills obsolescence, as when e-mail networks reduce the need for an internal post system and the traditional mail coordinator is no longer required. Or it can be a result of outsourcing. For example, school meal services may be contracted out to private catering firms, making ‘dinner assistants’ redundant. Some employees volunteer for redundancy, and are happy to leave the organization with some financial package as compensation. Often, though, redundancy is perceived in terms of loss – loss of income, prestige, status and social identity. Those who are left behind in the organization often experience guilt, and, although they may be willing to work harder, they generally feel more insecure having witnessed the dismissal of colleagues (Daniel, 1972; Hartley et al., 1991). Redundancy has even been compared to bereavement, with associated psychological stages of shock, denial, disbelief and, later, acceptance.
Unemployment usually has very negative psychological consequences. Research from the 1930s to the present day has consistently shown that the unemployed have poorer mental health than comparable groups of employed people. Unemployed people have worse profiles on measures of anxiety, depression, life dissatisfaction, experienced stress, negative self-esteem and hopelessness about the future. They are also more likely to report social isolation and low levels of daily activity. Their physical health is poorer, and they are more likely to attempt and commit suicide (Fryer, 1992; Warr, 1987).
The average psychological wellbeing of school leavers who become unemployed diverges from those who get satisfactory jobs, even when their wellbeing before leaving school is similar. And people who move out of unemployment into satisfactory jobs show sharp improvements in mental health. These findings are striking in their consistency. The same picture emerges across studies, samples, different research groups, countries and over time. Striking, too, is the fact that the psychological effects of unemployment extend to the whole family. In a classic study of a whole village affected by unemployment, the effects were shown to spread across the whole community, lowering its spirit and functioning (Jahoda, Lazarsfeld & Zeisel, 1972).


Work groups, or teams, are increasingly common in organizations. Formal groups are those designated as work groups by the organization. The members of these groups usually have shared task objectives. Examples of these formal groups include health care teams, management groups, mining crews and research and development project groups. Informal work groups are not defined by the organization as functional units, but nevertheless have an impact on organizational behaviour. Examples include friendship and pressure groups.
Group influences on work behaviour
Early studies of organizational behaviour show that work groups profoundly influence individual behaviour. In the 1920s and 1930s, several studies were carried out at Western Electric’s Hawthorne Works in Chicago, USA, to examine the effects of illumination levels on workers’ performance in assembling and inspecting relays used in telephone equipment. The researchers varied the level of illumination and studied the effects on workers’ performance. The results showed that any variation in the level of illumination (down to a level almost the equivalent of moonlight) led to improvements in performance. This effect was explained in terms of the workers’ appreciation of the attention and interest shown in their work by researchers and managers, which manifested itself in better work performance. This effect has come to be known as the Hawthorne effect, and field studies that test methods of intervention in organizations have to demonstrate that positive results are not simply due to this effect (this is somewhat analogous to the ‘placebo effect’).
Further studies in the Hawthorne Works examined the effects of several other factors (such as number and length of rest periods, and hours of work) on the performance of a small group of female workers (see Everyday Psychology for more detail on this phenomenon). The results suggested that the characteristics of the social setting or group are at least as important as the technical aspects of the work in explaining performance (Roethlisberger & Dickson, 1939).

The Hawthorne Effect
The scientific management approach dominated thinking about human performance in organizations in the early part of the twentieth century. It assumed that there was one best way to manage, and that productivity could be maximized by careful study of job content, combined with ergonomic studies, standardized methods of job performance and appropriate selection and training in the precise components required for the job. This approach informed the continued development of assembly line methods in the early twentieth century, best typified in the Ford Motor Company’s approach to vehicle production.
Roethlisberger and Dickson (1939) were inspired by the scientific management approach to investigate the effects of (among other things) illumination levels on workers’ performance in Western Electric’s Hawthorne Works, near Chicago. Their aim was to discover how to optimize the workplace by manipulating factors such as levels of lighting and hours of work, in order to achieve maximum productivity. Two groups of female employees took part in the first element of the investigation, which took place in a relay assembly department.
The control group worked without any changes in the level of illumination in their workroom. In the experimental group the lighting was systematically varied (being sometimes brighter and sometimes dimmer than the standard level of illumination for the control group), and the productivity of the workers was continually monitored. Subsequent investigations examined the effects on productivity of variables such as length of rest pauses, length of the working day and week, and a free lunch. The findings were quite baffling. Both the control group and the experimental group increased their productivity during the study. Regardless of whether illumination levels were increased or decreased, the productivity of the experimental group went up. Even when the illumination was turned so low that the women could barely see what they were doing, productivity went up! The introduction of changed lengths of working hours, weeks and rest pauses had a similar impact. Even the introduction of a free lunch led to improved performance. The results suggested that productivity rose because the women responded favourably to the ‘special attention’ they felt they were getting from the investigators. Knowing they were being studied apparently made them feel important and valued, and they were motivated to do their best, regardless of what changes were introduced.

In a second component of the investigation, conducted in the bank wiring room, members of work groups (this time all men) were observed during their work and interviewed at length at the end of the working day or week. There was no intervention here, since the aim was simply to observe the work process and discover how it could be done more efficiently and productively. The men did not improve their productivity. Quite the contrary – they stopped work before the end of the working day and later told the investigators that they were capable of being much more productive.
It appeared that the men feared the study would lead the company to raise the level of productivity required for the same rate of pay. So they deliberately kept productivity low to ensure they were not required in the future to achieve unreasonable levels of performance. The men had agreed informal rules between themselves about the level of productivity they would achieve, and they maintained this through their cooperation and shared goals. In contrast to the assumptions of the scientific management approach (i.e. that technological and ergonomic factors are the predominant influences on workplace productivity), these investigations reveal the importance of social factors in work performance. In both cases, interpersonal processes played the major role in determining productivity. These findings mark the birth of the ‘human relations’ movement, which drew attention to the importance of workers’ needs, attitudes, social relationships and group memberships in the workplace. It is an orientation that continues to have a major influence on managerial practice today, most notably in the domain of human resource management.
Roethlisberger, F.J., & Dickson, W.J., 1939, Management and the Worker, Cambridge, MA: Harvard University Press.

Types of group and what makes them effective
Sundstrom, De Meuse and Futrell (1990) distinguish four main types of formal work teams:
advice/involvement teams – e.g. committees, review panels, boards, quality circles, employee involvement groups, advisory councils;
production/service groups – e.g. assembly teams, manufacturing crews;
project/development groups – e.g. research groups, planning teams, specialist functional teams, development teams, task forces; and
action/negotiation groups – e.g. entertainment groups, expeditions, negotiating teams, surgery teams, cockpit crews.
In some organizations, groups as a whole may be hired, fired, trained, rewarded and promoted. This trend has developed as organizations have grown and become increasingly complex, demanding that shared experiences and complementary skills are constantly utilized in decision-making processes. Another reason for the dominance of the work team is the belief that the combined efforts of individuals may be better than the aggregate of individual contributions – the principle of synergy.
A good deal of effort is now directed toward understanding the factors that promote group effectiveness, and this has led to the development of models for understanding teams. A typical model combines inputs, processes and outputs. Inputs include (for example) organizational context and group composition; processes include decision making and leadership. Outputs refer to group performance and team member well-being.
This work suggests that, ideally:
teams should have intrinsically interesting tasks to perform (Guzzo & Shea, 1992);
each individual’s role should be essential and unique (Guzzo & Shea, 1992);
each individual should be subject to evaluation and receive clear performance feedback (Pritchard et al., 1988);
the team as a whole should have clear objectives, be subject to evaluation, and receive performance feedback (Poulton & West, 1999); and
the team should frequently reflect on their task objectives, strategies and processes, modifying these as appropriate (West, 1996).

Psychological safety in work teams
The research issue In recent years, there has been a wave of research into teams at work. In particular, researchers seek to understand how the climate in work teams affects their performance. In a study of hospital patient care teams (Edmondson, 1996), there were clear differences between teams in members’ beliefs about the social consequences, or safety, of reporting medication errors (giving the wrong drug to a patient, or giving too little or too much of the right drug). In some teams, nurses openly reported and discussed errors. In other teams, they kept information about errors to themselves. A nurse in one team said, ‘Mistakes are serious, because of the toxicity of the drugs [we use] – so you’re never afraid to tell the Nurse Manager.’ In contrast, a nurse in another team reported, ‘You get put on trial! People get blamed for mistakes . . . you don’t want to have made one.’
In a subsequent study of 51 work teams in a manufacturing company, Edmondson (1999) examined whether psychological safety was evident, and whether it predicted learning in the team (e.g. about how to do the work better and meet customer requirements).
Design and procedure: Edmondson studied teams in Office Design Incorporated, an innovative manufacturer of office furniture with some 5000 employees. There were four types of team in the organization: (i) functional teams, including sales, management and manufacturing teams; (ii) self-managed teams in sales and manufacturing; (iii) time-limited cross-functional product development teams; and (iv) time-limited cross-functional project teams. There were three phases of data collection. The first phase involved preliminary qualitative research, in which Edmondson observed eight team meetings, each of which lasted one to three hours. She also conducted 17 interviews (lasting for about an hour each) with members or observers of these eight teams. The second phase involved a questionnaire survey of 496 members of 53 teams, and two or three managers identified as observers of each team. The survey measured learning behaviour in the team (e.g. ‘we regularly take time out to figure out ways to improve our team’s work process’) and other team characteristics (e.g. team goals, job satisfaction, team task design, internal motivation). Phase 3 involved follow-up qualitative research with the six teams with the lowest level of learning behaviour, and the six with the highest level of learning behaviour.
The objective was to study these teams in more depth and explore differences between high- and low-learning teams. Edmondson reviewed field notes and tapes to construct short case studies describing each team, which were then used to reveal which factors were most closely related to team learning. Customers’ and managers’ ratings of all the teams in the study were used to provide measures of team performance and learning. Results and implications: The study revealed considerable support for the relationship between team psychological safety and team learning behaviour. Team psychological safety was conceptualized as a shared belief among members of a team that it is safe to take interpersonal risks and that team members will not embarrass, reject or punish someone for speaking up (a confidence that stems from mutual respect and trust among team members). Edmondson found that psychological safety predicted team learning and that this, in turn, predicted team performance, as rated by managers outside the teams. For example, team members’ own descriptions illustrated how a climate of safety and supportiveness enabled them to embrace error and make changes in product design as a result of seeking customer feedback. A lack of team safety contributed to reluctance to ask for help, and unwillingness to question team goals for fear of sanctions being imposed by managers. Quantitative analyses provided consistent support for the study’s hypotheses: learning behaviour appeared to mediate the relationship between team psychological safety and team performance (i.e. team safety predicted performance because safety led to learning, which, in turn, led to improved performance).
The findings from Edmondson’s research indicate how team design and leadership enable effective team performance.
By producing a climate of psychological safety, they enable team members to explore errors and difficulties and learn from them. Members then make improvements in their work (i.e. products or services), and this, in turn, leads to improved performance. The theoretical and practical implications of this work point to the importance of team psychological safety as a central concept in understanding team composition, processes and outcomes (such as member mental health, and team performance). At the same time, the results have practical implications for how we can make teams more effective and innovative in the workplace.
Edmondson, A.C., 1996, ‘Learning from mistakes is easier said than done: Group and organizational influences on the detection and correction of human error’, Journal of Applied Behavioral Science, 32 (1), 5–28.

Factors in poor decision making
A principal assumption behind formal work groups is that a group will make better decisions than members working alone. And yet a good deal of research shows that social processes can undermine the effectiveness of group decision making. While group decisions are better than the average of the decisions made individually by group members, experimental groups consistently fall short of the quality of decisions made by their best individual member. The implications of this for board and top management teams are serious. Organizational and social psychologists have therefore devoted considerable effort to identifying the processes that lead to poor group decision making:
Personality factors can affect social behaviour: for example, individual members may be too shy to offer their opinions and knowledge assertively, therefore failing to contribute fully to the group’s store of knowledge (Guzzo & Shea, 1992).
Social conformity effects can cause group members to withhold opinions and information contrary to the majority view, especially an organizationally dominant view (Hackman, 1992; Schlenker, 1980).
Communication skills vary, and some members may be unable to present their views and knowledge successfully, while someone who has mastered ‘impression management’ may disproportionately influence group decisions, even in the absence of expertise (Leary & Kowalski, 1990).
Domination by particular individuals can mean they claim a disproportionate amount of ‘air time’ and argue so vigorously that their own views generally prevail. Interestingly, ‘air time’ and expertise are uncorrelated in groups that perform poorly (Rogelberg, Barnes-Farrell & Lowe, 1992).
Egocentricity might take some individuals to senior positions, but people with this trait tend to be unwilling to consider opinions and knowledge contrary to their own, making for poor communication within the group (Winter, 1973).
Status and hierarchy effects can cause some members’ contributions to be valued and attended to disproportionately. So, when a senior executive is present in a meeting, her views are likely to have an undue influence on the outcome (Hollander, 1958).
Group polarization is the tendency of work groups to make decisions that are more extreme than the average of individual members’ decisions (Myers & Lamm, 1976).
Groupthink – a phenomenon identified by Janis (1982) in his study of policy decisions and fiascos – is when a tightly knit group makes a poor decision because it is more concerned with achieving agreement than with the quality of its decision making. This effect can be especially strong when different departments see themselves as competing with one another or when teams have very strong leaders.
Satisficing – or making minimally acceptable decisions – is another group tendency, and is related to this last point. Observations of group decision-making processes repeatedly show that, rather than generating a range of alternative solutions before selecting the most suitable one, groups tend to identify the first minimally acceptable solution and then search for reasons to accept that decision and reject other possible options (March & Simon, 1958).
Social loafing is the tendency of individuals to put less effort into achieving quality decisions in meetings than they do when individual contributions can be identified and evaluated, their perception being that their contribution is hidden in overall group performance (Latané, Williams & Harkins, 1979).
Diffusion of responsibility can inhibit individuals from taking responsibility for their actions when they are in a group (e.g. Yinon et al., 1982). In this situation, people seem to assume that the group will shoulder responsibility. For example, if there is a crisis involving the functioning of expensive technology, individuals may hold back from tackling the issue on the assumption that others in their team will take responsibility for making the necessary decisions. This can threaten the overall quality of group decisions.
Production-blocking is when individuals are inhibited from both thinking of new ideas and offering them aloud to the group by the competing verbalizations of others (Diehl & Stroebe, 1987). This effect has been shown in the study of brainstorming groups: quantity and often quality of ideas produced by individuals working separately consistently exceeded those produced by a group working together.
The hidden profile is the powerful but unconscious tendency of team members to focus on information that all or most team members already share, and to ignore information that only one or two team members have (even though it may be brought to the attention of the group during decision making and may be crucial) (Stasser, Vaughan & Stewart, 2000).
This catalogue of deficiencies indicates that group decision making within organizations is more complex than is commonly appreciated or understood.

Some useful techniques
Recently, researchers have begun to identify ways of overcoming some of these deficiencies. For example, research on groupthink has revealed that the phenomenon is most likely to occur in groups where a supervisor is particularly dominant, and cohesiveness per se is not the crucial factor. Supervisors can therefore be trained to be facilitative, seeking the contributions of individual members before offering their own perceptions (see West, 1996). Rogelberg, Barnes-Farrell and Lowe (1992) have offered a structured solution called the ‘stepladder technique’. Each group member has thinking time before proposing any decisions, and then pairs of group members present their ideas to each other and discuss their respective opinions before making any decisions. The next step involves pairs of pairs presenting their views to each other. The process continues, with each sub-group’s presentation being followed by time for the group as a whole to discuss the problem and ideas proposed. A final decision is put off until the entire group has presented. Initial evidence suggests that the quality of group decisions made using procedures like this is at least as good as that of decisions made by their best individual members. This is consistent with the finding that fostering disagreement in a vigorous but cooperative way in organizations leads to better decisions (Tjosvold, 1998). Teams can avoid the hidden profile problem by ensuring that members have clearly defined roles so that each is seen as a source of potentially unique and important information, by ensuring that members listen carefully to colleagues’ contributions in decision making, and by ensuring that leaders alert the team to information that is uniquely held by only one or two members.
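The sequencing at the heart of the stepladder technique – individuals, then pairs, then pairs of pairs, until the whole group has presented – can be sketched as a merging schedule. The function below is purely illustrative (it is not part of Rogelberg et al.'s description) and simply makes the round structure explicit:

```python
# Illustrative sketch of the stepladder technique's presentation rounds.
# Member names and the function itself are hypothetical examples.

def stepladder_schedule(members):
    """Return the successive rounds of sub-groups in a stepladder discussion.

    Round 0: every member prepares ideas alone.
    Each later round: sub-groups pair up and present to each other,
    with whole-group discussion after each merge; the final decision
    is deferred until a single group containing everyone remains.
    """
    rounds = [[[m] for m in members]]   # round 0: individuals think alone
    groups = rounds[0]
    while len(groups) > 1:
        merged = []
        for i in range(0, len(groups), 2):
            # Pair adjacent sub-groups; an odd group out carries forward.
            pair = groups[i] + (groups[i + 1] if i + 1 < len(groups) else [])
            merged.append(pair)
        rounds.append(merged)
        groups = merged
    return rounds

for step, groups in enumerate(stepladder_schedule(["A", "B", "C", "D"])):
    print(f"round {step}: {groups}")
```

For a four-person team this yields three rounds: four individuals, two pairs, then the full group – matching the 'pairs of pairs' progression described above.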
Finally, there is some evidence that work groups that take time out to reflect on and appropriately modify their decision-making processes are more effective than those that do not (Maier, 1970; West, 1996, 2004). While organizational psychologists have contributed a great deal to our understanding of how individual performance can be improved, it should be apparent from the issues considered in this section that research on techniques for optimizing group decision making is still in its infancy. Researchers and practitioners in organizational psychology are increasingly exploring how to structure and manage organizations so that team working fulfils its potential (West & Markiewicz, 2003). This requires that organizations devolve decision making to teams, that the various teams work cooperatively across team boundaries, that teams are well led, and that people management processes (sometimes called Human Resource Management systems or HRM) support team working. The challenge is to discover how to transform traditional organizations into team-based organizations.

Most of the research on work groups has been carried out by psychologists. But the study of organizations has attracted attention from the full range of social and economic sciences. In recent years, psychology has begun to play an increasingly large role, particularly in collaboration with other disciplines.

The choice of structures and associated managerial processes that enable an organization to operate effectively is described as organizational design. These structures and processes will largely determine how we experience an organization (Pugh, 1998a, b, c). An army is large, highly structured, very formalized and hierarchical, with clear status and rankings that determine authority structures. Army rules and regulations provide strict decision-making guidelines as well as restrictions on activities. On the other hand, a small firm of consultants, which offers advice to companies on how to select people for job openings, may have an entirely different form. All consultants may have equal say in how the business is run; they may operate as independent practitioners; and there may be few rules and regulations determining their behaviour. There are five interrelated concepts within the overarching theme of organizational design: (i) organization, (ii) design, (iii) structure, (iv) effectiveness and (v) choice.

1. Organization
The concept of organization can refer to a range of types, including businesses, governmental organizations, hospitals, universities, schools, not-for-profit organizations, churches and so on.
2. Design
Design as a concept implies a deliberate effort to find an appropriate and effective organizational form (Daft, 1992). Having the army run like a small consultancy business, with few rules, no hierarchy and lots of independent action, would render it ineffective in a crisis, unable to orchestrate appropriate action. So design also implies a managerial authority to put organizational structure into effect, i.e. to ensure that particular groups of people work together on tasks specified by management.
3. Structure
An organization’s structure consists of its rules and regulations (degree of formalization) and the organizational elements that determine procedures for making decisions (degree of centralization). The military and government departments are examples of highly centralized organizations, whereas decentralized organizations include voluntary organizations and partnerships (Hall, 1992). The trend today is to decentralize decisions as much as possible (though in practice this turns out to be very difficult to achieve), in order to ‘empower’ employees and derive maximum benefit from their knowledge, skills and abilities (Spreitzer, 1995). Structure also includes the degree of specialization – that is, how particular and unique each person’s job is. In some organizations, there is a low degree of specialization and one person may be expected to fill many roles. In a small rural health care team, for example, a nurse may act as receptionist, record keeper, telephonist, computer operator, diagnostician, treatment provider, counsellor and even cleaner. In another organization, people might have highly specialized roles, such as the telesales manager for one specific product line for one particular geographical area.
4. Effectiveness
Organizations are designed to be effective, but defining ‘effectiveness’ is not easy (Cameron, 1986). For a car manufacturer, being effective might mean maximizing productivity and profitability. But there may be other dimensions of effectiveness that serve these ends, too, such as a high level of innovation and creativity in product design, a satisfied workforce strongly committed to learning new skills, reducing waste to improve operating efficiency, and ensuring high quality standards for the product or service that is offered. One influential analysis identifies two core but complementary dimensions – (i) internal vs. external orientation and (ii) flexibility vs. control. To be effective, organizations must focus on the internal environment (safety, rules and regulations) as well as on the external environment (customers, the actions of competitors, government regulations), but they will do so with different degrees of relative emphasis. Organizations will also tend to be predominantly either controlling or flexible in relation to the internal and external environment. Internal control means bureaucracy and rules and regulations. Internal flexibility means developing staff and giving them autonomy to work their own way. External control involves focusing on meeting customer requirements and productivity goals. External flexibility implies a concern with innovation, and adapting the organization to the outside world.
The model, developed by Quinn and Rohrbaugh (1983), suggests that organizational effectiveness must be achieved in four domains:
human relations (internal flexibility)
goals (external control)
internal processes (internal control)
innovation (external flexibility)
Yet these four domains represent underlying conflict (Woodman & Pasmore, 1991) between internal and external orientations of organizational activity, and between control and flexibility (e.g. tightly defining employees’ roles as against encouraging them to develop or ‘grow’ their jobs). How organizations resolve these dilemmas determines both organizational strategies and effectiveness. It is an interesting exercise to apply the analysis to organizations you are personally familiar with, and then to decide whether you consider that the predominant orientation of the organization is external or internal, and whether the emphasis is on flexibility or control.
5. Choice
Finally, there is the concept of choice. Structures and processes do not simply evolve. They are a consequence of managerial choices, external factors (e.g. safety issues, or government legislation on equal opportunities) and stakeholder pressures (such as shareholders demanding bigger returns on their investments, or employees pressuring for better working conditions).
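For the exercise of classifying an organization you know, the four Quinn and Rohrbaugh quadrants amount to a simple two-way lookup. The table and function below are a hypothetical illustration of that mapping, not something drawn from the original model's presentation:

```python
# The four effectiveness domains as a lookup keyed by
# (orientation, emphasis). Purely illustrative.
DOMAINS = {
    ("internal", "flexibility"): "human relations",
    ("external", "control"): "goals",
    ("internal", "control"): "internal processes",
    ("external", "flexibility"): "innovation",
}

def classify(orientation: str, emphasis: str) -> str:
    """Return the effectiveness domain for a judged orientation/emphasis pair."""
    return DOMAINS[(orientation, emphasis)]

# e.g. a rule-bound bureaucracy judged as internal orientation + control:
print(classify("internal", "control"))  # -> internal processes
```

Judging where an organization sits on each of the two dimensions thus locates it in one of the four domains, though real organizations must attend to all four to some degree.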

Derek Pugh (1930– ) inaugurated and led the Aston Research programme, a major series of studies on the structure, functioning and performance of organizations, and the effects on the attitudes and behaviour of groups and individuals within them (Pugh, 1998a, b, c). This programme began at the University of Aston and later continued at the London Business School and other centres throughout the world. Pugh contributed to the development of the new discipline of Organizational Behaviour in business schools, and he was appointed the first British Professor of the subject at the London Business School in 1970.

The downsizing trend
A critical element of organization design is size, or number of employees (Hall, 1977). The experience of working in large organizations (for example a major oil company such as BP) is very different from working in a smaller organization (such as a research institute which employs about 40 people). Until the 1980s, the general trend was for organizations to grow, but now reductions in size are more common. This is partly because the spread of information technology, the development of networked computers and the evolution of the personal computer have all enabled networks of smaller organizations to collaborate. So nowadays call centres are replacing bank tellers and airline reservation staff. Organizations are also creating flatter, team-based and less centralized structures with fewer levels of management. And there is a trend towards outsourcing (or contracting out) certain non-core organizational services, such as catering, cleaning or computer maintenance, thereby reducing the need for a large labour force within an organization.