
Wednesday, December 15, 2010

HOW DO WE SEE

We know that light travels in straight lines. It therefore makes sense for a biological transducer of light to preserve information about the direction from which a particular ray of light has come. In fact, this consideration alone accounts for a large swathe of visual evolution. As creatures have become larger and therefore begun to travel further, they have developed an ever greater need to know about things that are far away – so eyes have developed an increasing ability to preserve spatial information from incident light. Where is each ray coming from? To achieve this, there must be some means of letting the light strike a particular photoreceptor – the name given to the smallest unit that transduces light. If a given photoreceptor [photoreceptor: a cell (rod or cone) in the retina that transforms light energy into action potentials] always receives light coming from a given direction, then the directional information inherent in light can be preserved.

Pinhole cameras and the need for a lens

The simplest way to illustrate light transduction is to make a pinhole camera – a box with a small hole in it. From the geometry of rays travelling in straight lines, it is possible to see that a given place on the rear surface of the pinhole camera will only receive light from one direction. Of course, this is only true until you move the camera. But even then, the relative positional information is usually preserved – if something is next to something else out there, its ray will be next to its neighbour’s ray on the back surface of the camera. One drawback of a pinhole camera is that the image (the collection of points of light on the back of the box) is very dim, and can only be seen in very bright, sunny conditions. If you make the pinhole bigger to let more light through, the image becomes fuzzy, or blurred, because more than one direction of incident light can land on a given point on the back surface.
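The pinhole geometry described above can be sketched in a few lines. This is my own illustration, not from the text: an ideal pinhole sits at the origin, and a ray from a source at (x, y, z) in front of the box travels in a straight line through the hole to a back plane a distance d behind it. Each back-plane point therefore sees exactly one direction, and neighbouring sources land at neighbouring points (the image is inverted, but relative position is preserved).

```python
# Sketch (hypothetical helper, not from the text): where a ray from a point
# source lands on the back plane of an ideal pinhole camera. The pinhole is
# at the origin; the back plane is at depth d behind it. By similar
# triangles, a ray from (x, y, z) with z > 0 lands at (-x*d/z, -y*d/z).

def pinhole_project(x, y, z, d):
    """Back-plane landing point of a ray from source (x, y, z), z > 0."""
    return (-x * d / z, -y * d / z)

# Two sources side by side, 100 units in front of a box of depth 1:
left = pinhole_project(-10, 0, 100, 1)
right = pinhole_project(10, 0, 100, 1)
print(left, right)  # (0.1, 0.0) (-0.1, 0.0) -- inverted, but still adjacent
```

Note that making the hole bigger breaks the one-point-one-direction mapping this function assumes, which is exactly the blur the text describes.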
With this fuzziness we begin to lose the ability to encode direction. The solution is to place a lens over the now-enlarged hole. The lens refracts (bends) the light so that the sharpness of the image is preserved even if the pinhole is large. Add film and you have a normal camera. The same construction in your head is called an eye. Nature evolved lenses millions of years ago; we then reinvented them in Renaissance Italy in about the 16th century. Possibly the earliest description of the human eye as containing a lens was given by the Arab scholar Abu-’Ali Al-Hasan Ibn Al-Haytham, often abbreviated to Al Hazen, in the eleventh century AD. Al-Haytham was born in Basra – now an Iraqi town, which has had a sadly turbulent history recently.
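The text does not give the formula, but the standard thin-lens equation from optics, 1/f = 1/d_o + 1/d_i, captures what the lens is doing: rays leaving a point at object distance d_o are bent so that they reconverge to a single sharp point at image distance d_i. A minimal sketch (the 17 mm figure is only a rough, commonly quoted effective focal length for the human eye, not a value from the text):

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i, so d_i = 1 / (1/f - 1/d_o).
# A lens of focal length f brings rays from an object at distance d_o
# back together at image distance d_i -- restoring the sharpness that a
# large pinhole alone would lose.

def image_distance(f, d_o):
    """Image distance for a thin lens of focal length f, object at d_o."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

# An object at 30 cm seen through a 10 cm lens focuses at 15 cm:
print(image_distance(10.0, 30.0))  # 15.0

# For a very distant object, the image forms at roughly the focal length,
# which is why a camera (or an eye) has a back plane about f behind the lens:
print(image_distance(17.0, 1e6))  # ~17.0
```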
[Abu-’Ali Al-Hasan Ibn Al-Haytham (965–1040) often abbreviated to Al Hazen, was born in Basra, Iraq. He studied the properties of the eye and light at a time when European science was singularly lacking in progress. He is remembered for the discovery that the eye has a lens which forms an image of the visual world on the retina at the back of the eyeball.]


Looking at the eye in detail

In human vision, there are two types of photoreceptor, called rods and cones. Rods are cells that only work at low levels of illumination, at night; cones are cells that give us our vision at normal daylight levels of illumination. [rods: cells in the retina that transform light energy into action potentials and are only active at low light levels (e.g. at night)] There is only one kind of rod, but three different kinds of cone, [cones: cells in the retina that transform light energy into action potentials, different kinds responding preferentially to different wavelengths] each responding preferentially to a different range of wavelengths of light – the basis of colour vision. Light entering the eye is refracted by the cornea and lens, eventually falling on the retina. When a ray of light hits a photoreceptor (a rod or a cone), it sets up a photochemical reaction, which alters the electrical potential inside the photoreceptor. This, in turn, produces a change in the firing rate of the neuron connected to that photoreceptor. There are four types of neuron in the retina – horizontal, bipolar, amacrine and ganglion cells.

Now we meet with a problem: there are about 100 million photoreceptors but only about one million neurons in the optic nerve. Nobody really knows why, but the most persuasive argument is that if the optic nerve were thick, the eye could not move! How can all the important information be squeezed into these few neurons? The only way is to discard a lot of redundant information. Think about how you would give instructions for someone to find your home. It is usually a waste of time to describe exactly how far they need to walk in a straight line. Instead, you might say, ‘turn left, then right, then second left’. What you are doing is noting the points of change in the route information. The retina does pretty much the same thing. It signals the points of change in the image – i.e.
the places where intensity or colour alter – and ignores regions where no changes occur, such as a blank uniform surface.

Figure 7.11 shows how each retinal ganglion cell has a receptive field – a particular part of the visual world. If you change the amount of light in this field, you will produce a change in the cell’s activity. A neuron only changes its firing rate when there is an abrupt change in the amount of light falling on the receptive field – for example, at the boundary between a white object and a dark background. The retina contains many such receptive fields in any one location, so there is a large degree of overlap between them. They are smallest in the area called the fovea, [fovea: the central five degrees or so of human vision, particularly the central, high-acuity part of this area (about one degree in diameter)] the high-acuity part of which occupies approximately the central one degree of the visual field. This is the part of the retina that receives light rays from the direction you are looking in. Since a receptive field cannot distinguish between different locations within it, the smaller the receptive field, the finer the spatial detail that can be resolved. So the fovea is able to resolve the finest detail. To convince yourself of this, try looking at the opposite page out of the corner of your eye and then try to read it. If you cannot do so, it is because the receptive fields in the periphery of your retina are larger and incapable of resolving the small print.
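The "signal only the points of change" strategy can be illustrated with a toy sketch. This is my own simplification, not the retina's actual circuitry: scan a row of intensity samples and report only the positions where the value differs from its neighbour, discarding the uniform stretches in between.

```python
# Toy change detector (hypothetical, far simpler than a real ganglion
# cell): keep only the indices where intensity differs from the previous
# sample, discarding uniform regions.

def change_points(intensities):
    """Indices where the intensity differs from the previous sample."""
    return [i for i in range(1, len(intensities))
            if intensities[i] != intensities[i - 1]]

# A bright bar on a dark background: only the two edges are signalled,
# so eight samples compress to two numbers.
row = [0, 0, 0, 9, 9, 9, 0, 0]
print(change_points(row))  # [3, 6]
```

A blank uniform surface yields an empty list, which is the point of the compression argument: most of the 100 million receptor signals are redundant, and the million optic-nerve fibres need only carry the changes.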



[Thomas Young (1773–1829) was a physicist who postulated that there are only three different kinds of photoreceptor in the retina, even though we can distinguish thousands of different colours. The basis of this argument was that to have thousands of different photoreceptors would compromise the acuity of the eye, since acuity is determined by the distance to the nearest neighbour of the same type. Later, Hermann von Helmholtz added the physiological basis of this argument. Thomas Young also studied the mechanical properties of materials, defining a number later known as Young’s Modulus to describe how stretchable a material is. In Young’s day, there was no distinction between the subjects we now call physics, psychology and physiology.]
