
Wednesday, December 15, 2010

Seeing in colour

So far, we have looked at how the retina responds to rapid spatial changes in illumination. But it also selectively signals temporal changes, such as occur when there is a flash of lightning or (more usefully) when a tiger suddenly jumps out from behind a tree or moves against a background of long grass, so breaking its camouflage. There are various mechanisms involved in processes of adaptation to static scenes (see chapter 8). Perhaps the best-known form of adaptation occurs when we enter a dark room on a sunny day. At first we cannot see anything, but after a few minutes we begin to notice objects in the room, visible in the faint light there. This phenomenon occurs because our receptors become more sensitive when they are not stimulated for a while, and also because there is a change from cone vision to rod vision. You may have noticed that in a faint light all objects appear to have no colour, quite unlike our daylight vision. This is because there is only one type of rod but three different types of cone, and the cones have a dual function: they encode the amount of light present, but they also encode colour, since they are maximally sensitive to different wavelengths in the visible spectrum. It is important to realize that the outputs of different cone types must be compared, since the output of a single cone cannot unambiguously encode wavelength. Suppose you have a cone maximally sensitive to light which has a wavelength of 565 nm. By using an electrode to measure the output of this cone (do not try this at home!), suppose you find that the cone is producing a ‘medium’ level of output. Can you deduce that the cone is being stimulated by a ‘medium’ amount of light whose wavelength is 565 nm? No – because precisely the same response would arise from stimulation by a larger quantity of light of a slightly different wavelength – say 600 nm. This is because cones do not respond in an ‘all or none’ manner to light of a given wavelength. Instead, they show a graded response profile. 
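This ambiguity of a single cone's output can be sketched in a few lines of Python. The Gaussian sensitivity curve and its width below are illustrative assumptions, not real photoreceptor data; the point is only that one output value confounds wavelength and intensity.

```python
import math

def cone_response(wavelength_nm, intensity, peak_nm=565.0, width_nm=80.0):
    """Toy graded response of a single cone: a Gaussian sensitivity
    curve (an illustrative assumption, not measured data) scaled by
    the intensity of the stimulating light."""
    sensitivity = math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)
    return intensity * sensitivity

# A 'medium' response to light at the peak wavelength...
r1 = cone_response(565, intensity=0.5)

# ...is exactly matched by a brighter light at 600 nm, so the cone's
# output alone cannot tell us which wavelength was present.
sens_600 = math.exp(-((600 - 565) / 80.0) ** 2)
r2 = cone_response(600, intensity=0.5 / sens_600)

print(abs(r1 - r2) < 1e-9)  # True: identical cone output
```

The same reasoning holds for any single receptor type with a graded response, which is why comparing cone types is unavoidable.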
We have three different types of cone in the retina. They are sometimes called ‘red’, ‘green’ and ‘blue’ cones. More strictly, we refer to these cones as ‘L’, ‘M’ and ‘S’ cones, reflecting the fact that L cones respond most to long-wavelength light, M cones to medium-wavelength light, and S cones to short-wavelength light. So the output of a single cone is fundamentally ambiguous, and for a meaningful colour sensation to arise, we must know how much one cone type responds compared to another cone type. This is achieved through chromatic opponency, [chromatic opponency a system of encoding colour information, originating in retinal ganglion cells, into red–green, yellow–blue and luminance signals; so, for example, a red–green neuron will increase its firing rate if stimulated by a red light, and decrease it if stimulated by a green light] a process that also explains why we can see four ‘pure’ colours – red, yellow, green and blue – even though there are only three kinds of cone. The ‘yellow’ sensation arises when L and M cones receive equal stimulation. Their combined output is then compared to that of the S cones. If L+M is much greater than S, we see yellow, and if less, we see blue. If L+M is about the same as S, we see white. Cone responses to natural scenes can be measured using a special kind of camera, first constructed by Parraga, Troscianko and Tolhurst (2002), which produces different cone responses for each point in the scene (or pixel in the image). Parraga et al. found that the red–green system is suited to encoding not just the colour properties of images of red fruit on green leaves, but also the spatial properties of such images for a foraging primate. We know that the receptive fields for colour are different from the receptive fields for luminance. Specifically, they lack the ‘centre-surround’ structure, so the centre is effectively as big as the whole receptive field. As a result, we are less sensitive to fine detail in colour than in luminance. 
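The opponent comparisons just described can be sketched as a short Python function. The unit weights below are illustrative assumptions, not measured retinal weightings; only the signs of the comparisons matter for the argument.

```python
def opponent_channels(L, M, S):
    """Sketch of chromatic opponency with made-up unit weights
    (real retinal weightings differ): red-green compares L to M,
    yellow-blue compares the L+M average to S, and luminance
    pools L and M."""
    red_green = L - M
    yellow_blue = (L + M) / 2 - S
    luminance = L + M
    return red_green, yellow_blue, luminance

# Equal L and M stimulation with little S drive: red-green is
# balanced and yellow-blue is positive, i.e. 'yellow'.
print(opponent_channels(0.8, 0.8, 0.1))

# Mostly S stimulation: yellow-blue goes negative, i.e. 'blue'.
print(opponent_channels(0.1, 0.1, 0.8))
```

When all three comparisons come out near zero (L+M roughly matching S, and L matching M), the sketch corresponds to the 'white' case in the text.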
Early photographs were only in black and white, but the technique of using watercolours to paint the various objects (e.g. blue sky) on top of the photograph became quite popular. The interesting point is that the paint only needed to be added in approximately the right areas – some creeping across object boundaries did not seem to matter. About fifty years later, the inventors of colour TV rediscovered this fact. The trick is to find a way of transmitting as little information as possible. So only a sharp luminance image is transmitted. The two chrominance (colour) images are transmitted in blurred form, which means that less information needs to be transmitted without a perceived loss of picture quality (Troscianko, 1987). The main consequence of this ‘labour-saving’ trick in the brain is that the optic nerve can contain relatively few neurons. The optic nerve conveys the action potentials generated by the retina to other parts of the brain, principally the primary visual cortex, [primary visual cortex a region at the back of the visual cortex to which the optic nerves project, and which carries out an initial analysis of the information conveyed by the optic nerves] also known as Area V1, where the information is then analysed and distributed further to other visual areas.
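The colour-TV trick can be illustrated with a toy Python sketch: keep luminance at full resolution, but store the two chrominance signals at half resolution. The luma weights are the familiar broadcast-style ones; treating a 2×2 average as "blurring" is a simplifying assumption for illustration.

```python
def split_luma_chroma(img):
    """Toy version of the colour-TV economy: full-resolution luminance,
    half-resolution chrominance. `img` is a list of rows of (r, g, b)
    tuples with values in [0, 1]."""
    luma = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in img]
    # Subsample chrominance: one (blue-diff, red-diff) pair per 2x2 block.
    chroma = []
    for y in range(0, len(img), 2):
        crow = []
        for x in range(0, len(img[0]), 2):
            block = [img[y + dy][x + dx]
                     for dy in (0, 1) for dx in (0, 1)
                     if y + dy < len(img) and x + dx < len(img[0])]
            r = sum(p[0] for p in block) / len(block)
            g = sum(p[1] for p in block) / len(block)
            b = sum(p[2] for p in block) / len(block)
            yv = 0.299 * r + 0.587 * g + 0.114 * b
            crow.append((b - yv, r - yv))
        chroma.append(crow)
    return luma, chroma

# A 4x4 image keeps 16 luma samples but only 4 chroma pairs.
img = [[(1.0, 0.0, 0.0)] * 4 for _ in range(4)]
luma, chroma = split_luma_chroma(img)
print(len(luma) * len(luma[0]), len(chroma) * len(chroma[0]))  # 16 4
```

The same economy, applied by the retina, is what lets the optic nerve get away with comparatively few fibres carrying colour information.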


[John Lythgoe (1937–92), a biologist at Bristol University, studied the relationship between an animal’s sense organs and visual apparatus, its surroundings, and the tasks it has to perform within those surroundings. Lythgoe’s main research was on fish living at different depths of water, since the depth of water affects the wavelength composition of daylight reaching that point. He found a marked relationship between where the fish lived and what their cones were like. With the publication of his book The Ecology of Vision in 1979, his research founded a flourishing new research discipline called the ‘ecology of vision’.]
