RESEARCH INTO THE FIRST
This chapter is an account of the development of my interest in neurobiology and it also begins to develop the main theme.
My first serious attention to the brain problem was aroused by an article in the Scientific American, in about 1967, concerning holography. Holography was originally described by Dennis Gabor in about 1948, before the invention of lasers; more recently it has come into its own as a result of that invention. The article described a holographic system of photography without lenses, using a laser. The interesting thing about this system is that the information stored on the photographic plate is not of the usual point to point variety. Instead, the information about the intensity of illumination of a single point on the object photographed is stored in a diffuse pattern all over the photographic plate, and the same is true of all of the points on the object. In spite of this mixing up of all of the information it is possible to reconstruct a three dimensional image by a fairly simple method. It is also possible to store several pictures one on top of another and retrieve each one separately.
During my studies in physiology for the Primary Fellowship examination of the Royal College of Surgeons of England, I became aware of Karl Lashley's work on rats ("In Search of the Engram"). He was trying to determine where the rat's brain stored its memory. He trained rats to negotiate a maze in search of a piece of food, then removed pieces of the brain in order to determine whether or not the memory trace had been stored there. He found that it was very difficult to destroy this memory, and he concluded that memory storage was diffuse. He was not altogether correct, as we now know that certain very specific memories are stored in very definite places, but he was in general correct in saying that memory storage is a diffuse process. I was intrigued to find a diffuse information storage system with a simple recovery method, and I began to think about whether the brain could be organized in a similar way.
My conclusion is that this is indeed the case. In order to analyze this problem it is necessary to have a clear picture of how holography works.
Refer to Figure #2. (The Set Up For Holography) A parallel beam of laser light falls on the object and is reflected onto a photographic plate. Another part of the beam falls onto a mirror and is also reflected onto the plate. This is referred to as the "Reference beam". One might well expect that this would simply result in a uniformly fogged plate but this is not what happens. In fact you get an interference pattern between the two sets of reflections and the result is a highly structured image. You might need a microscope to see it but even then you can see no resemblance whatsoever to the object. In order to recover the information from the developed plate it is necessary to illuminate it with laser light at the exact same angle as you originally had. You then get a three dimensional image of the object.
Holograms have some very interesting properties. You can even store several holograms one on top of another on the same plate and retrieve the images separately, so long as you use a different angle for the laser beam each time. In addition to this you can still recover images if you break off a small piece of the plate and use only that. The image remains unbroken though you do lose some resolution. There are no missing chunks. You just have "Graceful degradation" (Which is what Lashley found in the rat's brain when he removed pieces of it). In an ordinary photograph there is a point to point correspondence between every point on the object and a point on the plate. With a hologram this is not the case. The record on the plate is quite diffuse and every part of the object is recorded all over the hologram.
The usual explanation as to how the image is recovered is that there is a wavefront reconstruction. The idea is that when you reverse the process by putting the laser behind the developed plate the various wavelets of the light retrace their steps backwards in the exact same way as they originally came when the hologram was made. You get optical interference between the various wavelets but the retraced pathways are a carbon copy of what happened before, traced in the opposite direction.
There is an alternative way of visualizing what happens and it turns out that this is much more helpful when one starts to compare this process with what happens in the brain. The key to understanding this is at first to look at what happens to just one point on the object. The laser light reflected from this point towards the plate can be thought of as a cone of homogeneous and coherent rays where all wavelets are in step. It is not difficult to see that the interference pattern this makes with a parallel reference beam is a series of concentric circles, or ellipses if the plate is tilted. What we get on the plate from a number of points from the object is a system of overlapping ellipses, each group being centered differently. I was already aware of a phenomenon described by Fresnel which is complementary to this. He inscribed a series of finely drawn concentric circles on glass. This acted like a lens and focused a parallel beam to a single point (See Figure #3: The Fresnel Zone plate). You also get focusing if the beam strikes the plate obliquely. This occurs because of the diffraction of light and the optical interference of one part of the beam with another. This system is in use in cameras and other contemporary optical apparatus in the form of "Fresnel" lenses. It is not difficult to see that ellipses would do just as well as circles. In holography each point is reproduced in its appropriate place by the focusing effect of each individual system of ellipses.
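The zone-plate picture above can be made concrete with a small numerical sketch. In the paraxial approximation, the phase difference at radius r between a plane reference beam and the spherical wave from a single object point at distance z is about k·r²/(2z), so the recorded intensity oscillates as a cosine of r², giving bright rings whose radii grow as the square root of 1, 2, 3, and so on. The wavelength and distance below are illustrative values, not taken from the text.

```python
import math

# Interference between a plane reference beam and the cone of rays from one
# object point, in the paraxial approximation. Recorded intensity:
#   I(r) = 2 + 2*cos(k*r^2 / (2z))
# Bright rings appear where the phase is a multiple of 2*pi, i.e. at radii
# proportional to sqrt(n) -- exactly the Fresnel zone plate pattern.
wavelength = 0.5e-6   # 500 nm laser light (illustrative value)
z = 0.1               # object point 10 cm from the plate (illustrative value)
k = 2 * math.pi / wavelength

def intensity(r):
    phase = k * r * r / (2 * z)
    return 2 + 2 * math.cos(phase)

# Sample the intensity along a radius and locate the bright rings.
rs = [i * 1e-7 for i in range(10000)]
vals = [intensity(r) for r in rs]
maxima = [rs[i] for i in range(1, len(rs) - 1)
          if vals[i] > vals[i - 1] and vals[i] >= vals[i + 1]]

# Ring radii relative to the first ring: expect sqrt(1), sqrt(2), sqrt(3), ...
ratios = [m / maxima[0] for m in maxima[:3]]
```

Each system of rings (or ellipses, for an oblique plate) then acts as a little lens, refocusing light back to its own point on reconstruction.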
It would also be possible to use a cone of rays from a single point as a reference beam instead of a parallel beam; you still get a pattern of ellipses. An extension of this is to use many points, in the form of another picture on a lantern slide, as the reference beam. Each one of those points would reproduce the recorded picture quite accurately, and all of the other points would reproduce the same picture in the same place. In this case the presentation of one picture becomes the key to recovery of its twin, which was recorded at the same time with the laser at the same angle. A further extension of this is to use part of an image as the reference beam for recovery of the whole image: illuminate the plate with part of an image and the remainder pops up. You can do all of this with holograms. This forms the basis of an associative memory. Maybe the brain is organized in a similar way. We have many memories stored one on top of another. One mental image can be recovered by presentation of another of two images originally presented together, or the rest of one original picture can be recovered by using another part of it as the cue. I found the description of an apparatus for scanning documents holographically for a key word, by David Redman in Science Journal, Feb. 1968, based on work by Vander Lugt and P. J. Van Heerden. (See Figure #4.) This was being used for scanning the literature for material containing some key word like "Mirror", so that one could pick out literature of interest. A hologram was made of the document and a hologram of the key word was used to identify it. If the key word was there, a dot appeared revealing its position on one half of a page and the whole document appeared on the other half. This was the manifestation of an associative memory. I have also come across the description of just such a scheme of recovery, using only a part of a photographic image as the key, in an article by Yaser S. Abu-Mostafa and Demetri Psaltis in the Scientific American (probably 1987), complete with photographs. A part of a single photograph pulled out the one complete original picture from a mixture of four superimposed pictures.
All of this is very relevant to what may be going on in the brain. The parallel to this would be the storage of many sensory complexes each derived from an individual experience, one on top of another over a considerable area of cerebral cortex. We substitute cerebral cortex for the photographic plate, the hologram. For something analogous to be taking place in the brain one must look for some kind of interference phenomenon such as "Brain waves" which could interfere with each other. An essential requirement is that an element of one pattern must have either positive or negative values of some parameter and therefore be able to cancel out another element so as to mimic the phenomenon of optical interference. It turns out that such a property does exist. It is inhibition. Lateral inhibition is a basic brain mechanism. Cancellation does occur.
As I began to search the literature, I soon found that the holographic idea was not altogether new. The idea of interference between traveling brain waves had already been put forward by Beurle (Beurle, R. L., Phil. Trans. Roy. Soc. London, Ser. B, 240, 55, 1956). Sherrington had ideas about cerebral integration but he offered no explanation as to how it happened. A holographic model was proposed by H. C. Longuet-Higgins (Nature 217, 104, 1968), but his model depended upon resonators, not interference. Karl Pribram examined holography in detail, comparing the properties of the hologram with those of the brain, but he did not offer an explanation as to how it worked. He invited me to give a seminar at his neurosurgical and neurobiological unit at Stanford Medical Center after my first communication to Nature.
I do not think that the brain works in quite the way that Beurle envisaged. It is a mistake to look for critical timing and interference between what are essentially digital pulses, not waves. (See figure #5: Record of electrical response of a stimulated nerve). The height of the pulses is constant and the signal is frequency modulated in the sense that the strength of the signal is represented by the number of pulses in a given time. (i.e. the frequency.) What we have is a mixed system which is partly digital and partly analog. At the synapse we have analog. A number of pulses produce packets of neurotransmitter. Packets of excitatory chemicals are mixed with packets of inhibitory chemicals, resulting in some cancellation and that is where the interference occurs. The packets of excitatory and inhibitory neurotransmitters are summated. When we reach a certain threshold the neuron fires and this is digital. In fact the synapses are analog and the neuron is digital. There are exceptions to this rule. For example, some of the retinal neurons do not fire and they are analog.
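The analog-synapse, digital-neuron scheme just described can be sketched in a few lines. This is a minimal illustration, not the author's formal model: the weights and threshold are assumed values, and the synapse simply sums transmitter packets arithmetically, excitatory packets adding and inhibitory packets subtracting, with the neuron firing in all-or-nothing fashion once the sum reaches threshold.

```python
# Analog stage: packets of excitatory and inhibitory transmitter summate
# arithmetically (subtraction models the cancellation, i.e. the "interference").
# Digital stage: the neuron fires only when the sum crosses a threshold.

THRESHOLD = 5.0   # assumed firing threshold, in arbitrary packet units

def synaptic_sum(excitatory_pulses, inhibitory_pulses):
    """Analog summation of transmitter packets at the synapse."""
    return 1.0 * excitatory_pulses - 1.0 * inhibitory_pulses

def neuron_fires(excitatory_pulses, inhibitory_pulses):
    """All-or-nothing (digital) output of the neuron."""
    return synaptic_sum(excitatory_pulses, inhibitory_pulses) >= THRESHOLD

# Eight excitatory packets alone reach threshold and fire the cell;
# four inhibitory packets arriving at the same time cancel enough
# excitation to keep the cell silent.
```

The point of the sketch is that cancellation, the analog of optical interference, happens in the analog stage before the digital decision is taken.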
This preliminary view of the model of brain function is not essentially a dynamic pattern of interference by traveling brain waves such as that envisaged by Beurle or Sherrington; it is more static in nature. The storage area for the memory comprises a considerable area of higher cortex, corresponding to the photographic plate in holography. We might take an area which would light up on an MRI scan, including some specific higher cortical areas, the hippocampus, the thalamus and the amygdalae. Messages would be coming from primary sensory cortex relative to the senses in operation (Occipital, temporal, olfactory etc.). This would initially be in topographic form, as in a photograph. In the higher cortex the information would be in distributed form. There would be an irregular pattern of excitation all over the memory storage area, but between the excited areas there would be inhibited areas as a result of lateral inhibition. Here we are thinking of the higher cortex as corresponding to the photographic plate, the hologram, with positive and negative values of nervous excitation instead of the positive and negative values of electric field. This pattern would be formed by the superposition of the patterns created by many individual neurons firing at the lower level. What you get at the upper cortical level, after the inevitable mix up of signals, is a pattern whereby some cortical cells are inhibited and some are excited. Each lower neuron would produce its own unique pattern of excitation and inhibition over the higher cortex. These would not produce neat patterns of ellipses (As we had in holography); the pattern of excitation and inhibition in the higher cortex arising from a particular neuron firing would be quite irregular, but unique.
Here we are equating a single point on the original photographic image with the firing of a single neuron in the primary sense area of the brain (Taking vision as the modality and holography as the model). The pattern produced would nevertheless be quite unique to that particular neuron, and just as effective as in holography. The combined pattern of excitation and inhibition would be formed by the firing of many neurons from the lower level, with the individual synaptic patterns of each neuron superimposed. (See Figure #6.)
The pattern would be maintained during sensory stimulation and, acting as in holography, would call forth memories encoded in similar patterns. All of those properties which we found with holograms would apply here. Whenever part of the pattern coincided with part of a pattern from a previous experience, the whole of that experience would be called up in the appropriate sensory cortex and would enter consciousness. We would have an associative memory. The argument is that if it works in holography, then it can work in the brain. It does not matter at all that the parameters are different (Negative electric field or nervous inhibition), so long as they can have positive and negative values. Nor does it matter that the pattern produced by a particular neuron firing should be irregular rather than a set of ellipses. In the face of cerebral damage we would have graceful degradation, as Lashley found.
This was the position when my first communication to Nature was made in 1968. I was able to define some of the necessary mathematical properties of a decoding network but not how these were created. This model of brain function and memory recovery was published in Nature Feb. 1968 Vol. 217 No. 5130 pp.781-782 under Chopping P. T. "Holographic Model of Temporal Recall." Shortly after this I was invited to give a seminar at Karl Pribram's research unit at Stanford Medical Center, and later at Norman Geschwind's Psychiatric unit at Harvard and J.Lettvin's unit at M.I.T. This was done in Sept. 1969. By that time I had been able to make further progress, so we can go on from there.
The representation at the level of the primary visual cortex would correspond to the object in photography, and it would be in topographic form. It is a fact of observation, using MRI on monkeys, that information in the occipital region is stored in topographic form, at least to a certain extent. It is reasonable to assume that what appears in the occipital cortex (Or in other sensory cortex) is what appears in consciousness. Signals would go upward to higher cortex, trigger off an association, and that association would be sent down again to primary sensory cortex in the various sensory modalities, bringing out the memory associated with the stimulus. This would again be in topographic form and would enter consciousness. This in turn would be propagated upwards and would give rise to a train of thought, as signals continued to bounce back and forth between lower and higher areas, bringing forth a series of associations. The picture would keep on changing, because when the first signal bounces back it also reaches other sensory modalities, pulling out something from memory there. There would also be some change because of imperfections in the system, resulting in changes in the sequence.
What is wrong with this, my original view, is that it would give a speed of mental thought that is much faster than what we actually have. We probably hold a single thought for 300 milliseconds or so.
According to my more recent view, what actually happens is that we have a positive feedback loop; oscillations occur around the loop and go on for about a third of a second, holding the image, until something breaks the cycle. Then we are able to go on to another thought, which is again held for about a third of a second by the persistence of other feedback loops. We know that the persistence of vision is such that a 40 or 50 cycle flicker is perceived as a continuous light, so it is no surprise that the oscillations should run at some 40 cycles per second. At that rate we get some 12 to 15 oscillations to a single thought. We now know that electrical oscillations at about 40 or 50 cycles per second have actually been observed. Francis Crick has said that these are associated with conscious thought: they disappear in anaesthesia and in non-REM sleep. It will be necessary to explain how the cycle of oscillations is started up and how it is broken after some 300 milliseconds, but this must wait until we have worked out the nature of the reciprocal network which gives rise to the oscillations.
In the optical analog we have a wavefront reconstruction whereby the light waves can exactly retrace their original pathways. If nerves could conduct backwards, reconstruction in the brain could be readily explained by this method, but there are problems with this idea because it is contrary to the usual neuron doctrine, which calls for forward conduction in the CNS. However, there are examples of transmission via antidromic "Graded potentials" using "Cable" properties in the horizontal cells of the retina, so the idea has to be considered. According to "Cable" theory, when the area of depolarization associated with an antidromic (backward) nervous pulse comes to a junction, it encounters a greater aggregate area of cross section. This results in an increase in capacitance, which in turn reduces the voltage of the pulse. The result is that we are no longer at firing level and the pulse is extinguished. It would be possible for backward transmission to continue if another pulse were to arrive at the junction at the exact same time, allowing the voltage to be maintained across the junction. Such exact timing is in principle quite possible and we cannot immediately rule out the idea. However, it is a bit far-fetched, and the idea of widespread antidromic conduction proved to be very unpopular with neurobiologists. An alternative system had to be found.
The idea of using a separate reciprocal pathway seems much more reasonable, particularly as an extensive reciprocal network is already known to exist. Let us say that we actually do have a reciprocal network which correctly unscrambles all of the information, so that we have the signal back in its original form. If the gain in the circuit exceeds unity, we would have an oscillatory condition, because we are feeding a similar but slightly amplified signal right back into the input of the "Amplifier". (We would have to consider the system to be an amplifier if there is a gain in the signal.) It would be necessary to restrict the gain in some way in order to maintain stability. Any reciprocal pathway would have to have a large amount of negative feedback distributed generally, using inhibitory neurons, otherwise it would be quite unstable and continuous oscillations would occur. We do in fact have inhibitory fibers. An essential part of the model is that oscillations do in fact occur for a limited time, locally in the active circuit. We would require an automatic mechanism to restrict the gain and keep it close to unity most of the time. This would not be difficult to do: automatic volume control is a part of every commercial radio and television set, and it is achieved by the use of negative feedback. In the CNS this would be achieved by the negative feedback of inhibitory neurons. There would be very many feedback circuits bordering on oscillation, so the gain would have to be generally restricted; however, oscillation would be allowed to continue in the circuit which was active. After a short time the gain in the active loop would have to be reduced so that oscillations would cease there, and then they would start in some other place. We will examine shortly the way in which the gain in the active circuit is reduced.
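The loop-gain argument can be illustrated with a toy simulation. This is a hedged sketch, not a claim about actual cortical dynamics: the gain and feedback rate are arbitrary assumed numbers. A loop with gain just above unity builds up its oscillation amplitude; slow negative feedback (the analog of a radio set's automatic volume control) then pulls the gain back below unity and the activity dies away, freeing the circuit for the next "thought".

```python
# Toy model of a feedback loop with automatic gain control.
# Each step is one trip around the loop; amplitude is multiplied by the
# current gain, and negative feedback reduces the gain in proportion to
# the amplitude reached.

def run_loop(initial_gain=1.05, agc_rate=0.005, cycles=40):
    amplitude, gain = 1.0, initial_gain
    history = []
    for _ in range(cycles):
        amplitude *= gain              # positive feedback around the loop
        gain -= agc_rate * amplitude   # negative feedback trims the gain
        history.append(amplitude)
    return history

history = run_loop()
peak_cycle = history.index(max(history))
```

With these assumed numbers the amplitude grows for roughly a dozen cycles and then decays, which is at least qualitatively the behavior required of the model: limited bursts of oscillation rather than continuous ringing.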
We would have to have a fairly well tailored reciprocal network which would unscramble the scrambling, in order to have the same effect as a wavefront reconstruction. It is not immediately obvious what the required properties of this network should be and this now has to be worked out. Evolution is able to shape major connections by bundles of nerves between different parts of the brain but there are simply not enough genes to program the exact anatomy of hundreds of thousands of multiple neural connections and the strength of every synapse. We have to explain how the right strength of all these connections would be made by some kind of learning process. The first thing to do is to look at the simplest possible reciprocal network which is able to reconstruct the input and to solve that. We can then approach the more complex situation.
We will first analyze a simple two channel network where there is just one lateral branch on each side connecting to the other channel. We make this inhibitory, as lateral inhibition is a common phenomenon. We also make the assumption that nervous transmission across a synapse is linear. This is not unreasonable, because we do have packets of excitatory neurotransmitter adding up, and the inhibitory transmitter could well be neutralizing these in an arithmetic fashion, that is, by subtraction. This may not be strictly true but it is a good starting point. Most likely the transfer function is more accurately described as sigmoid in shape, but we would be working on the linear portion of the response curve most of the time. This mode of activity will be the first order effect. We will be summing the number of pulses arriving at the synapse.
For the simplest basic network see the diagram in Figure #7 (Basic two fiber network). We have two lines of communication, A & B. The upper half of the diagram shows a simple scrambling network: we have one lateral branch on each side which links with the other line. From an algebraic point of view we will regard the strengths of these two side branches as being defined and known. The lateral branches have been made inhibitory to represent lateral inhibition, and as this usually means a different kind of fiber we show an internuncial (Intermediate) neuron which would have a different neurotransmitter such as GABA (Gamma-aminobutyric acid). We can ignore this refinement for the moment. Figure #8 shows an apparatus which is an electrical analog of this (Omitting the internuncial neurons).
In order to unscramble the information we need another pair of side branches in the bottom half, and these would have to be of exactly the right strength to do the unscrambling; this is what has to be worked out. With this simple basic network it is fairly easy to see what the required strength should be, either by simple algebra or just by inspection of the diagram. If the upper side branch is inhibitory, the lower one must be excitatory, and just strong enough to balance out the effect of the inhibitory input higher up. We can calculate the required strengths of the lateral connections in the restorative part of the network by using two simultaneous equations. This is soluble, as we have two equations and two unknowns.
(In the diagram there are two recurrent side branches, just as a reminder that the network is reciprocal. As already mentioned, the effect of these can be instability, which occurs if the positive feedback is high.)
At my seminars I demonstrated a piece of electronic hardware which imitated this two fiber network. It did the required scrambling, followed by unscrambling. See Figure #8 (Electronic Version of the Basic Network). I constructed this using transistor pulse generators to simulate firing neurons. I used rectifiers with capacitors to summate the output, with a firing threshold sensitive to voltage; this represented a synapse. I was able to show that with appropriate adjustment it was possible to unscramble the two signals completely. However, one had to use non zero signals for inhibition to work. What my two fiber machine was doing is describable by a simple arithmetic process. With three scrambled fibers it becomes more complex: you have six lateral fibers, six equations and six unknowns. (See Figure #9.) This is still soluble. I had intended to make a four fiber analog in order to represent the Aplysia eye, which has four receptors. This would require twelve side branches for a full network, implying very large numbers for a full CNS network. This never got constructed. In practice we would have to deal with a very large number of side branches, as the CNS is a neural net. Nevertheless, in principle, provided that we have the same number of simultaneous equations as we have unknowns, the network should still be soluble, even if there are a thousand unknowns. This is more difficult, and theoretically it would have to be tackled by matrix algebra, which is a convenient way of handling a large number of equations. The important point is that we would still have a defined solution: the required strengths of the reciprocal side branches would be defined by a mathematical process, the inversion of the matrix.
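The two fiber case can be written out as a short sketch. The side-branch strength w is an assumed illustrative value; the scrambling stage subtracts a fraction w of each channel from the other (lateral inhibition), and the restorative stage applies the inverse matrix, which for two channels can be written down directly, just as the simultaneous equations above imply.

```python
# Two channel scramble/unscramble, treated algebraically.
# Scrambling matrix: [[1, -w], [-w, 1]]; its inverse supplies the required
# excitatory side-branch strengths for the reciprocal (restorative) network.

w = 0.4  # strength of each inhibitory side branch (assumed value, 0 < w < 1)

def scramble(a, b):
    """Upper half of Figure #7: each channel inhibits the other."""
    return a - w * b, b - w * a

def unscramble(x, y):
    """Lower half: excitatory branches of strength w/(1 - w*w) plus a
    gain correction 1/(1 - w*w) restore the original signals exactly."""
    d = 1 - w * w
    return (x + w * y) / d, (y + w * x) / d

a, b = 3.0, 5.0
x, y = scramble(a, b)        # mixed signals on the two fibers
a2, b2 = unscramble(x, y)    # originals recovered
```

For N fibers the same logic holds with an N-by-N matrix: as long as the scrambling matrix is invertible, the required reciprocal strengths are defined by its inverse, which is the "defined solution" referred to above.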
In the biological case the equations are not solved by an algebraic process. This will be examined when we come to the consideration of the properties of artificial and biological neural nets. Artificial neural nets are able to solve any equation where the output is a function of the input; in other words when there exists a unique solution. This process is not handled algebraically.
A valid objection might be that we would not have a side branch going from every fiber connecting to every other one and that would take away the argument that there is a defined solution. We will meet this objection shortly. Artificial neural nets do not have mutual connections between every cell but it is known that neural nets have tremendous power. It is probably not desirable to have an exact solution anyway. It would be better to have some change of signal. A partial solution would be sufficient to perform the holographic recall.
The objection is met by the following argument. In the biological case the mixup is not entirely random. During development, nerve fibers are laid down in layers in a highly ordered fashion, so that groups of fibers are laid down successively in layers on the waiting groups of afferent neurons. The lateral fibers may not stray very far away so that the spread is to some degree local and mix up is incomplete. Consequently "Sort Out" does not have to be very widespread. It would only be necessary to have a rich supply of lateral fibers to other neurons which are making connections nearby at about the same time.
This incomplete mix up is probably basic to the development and structure of the CNS. It would achieve the objective of supplying sufficient redundancy to allow graceful degradation in the case of damage, rather than catastrophic failure. At the same time it could supply sufficient dispersion of the information to make the holographic recall possible. A further consequence of this incomplete mix up is that we have fairly well defined areas of cerebral cortex performing specific tasks. This is basic to the architecture of the CNS. In the fetus, as development proceeds, the CNS becomes vastly over-connected and many connections are ultimately removed. This is because during the learning process many of them which are not necessary for efficient function are underused. Consequently they are not reinforced by the Hebbian process. (More later about Donald Hebb.)
I will now make a digression which is not immediately in line with this train of thought but which is relevant to it, as it concerns neural nets. After that I will return to this problem. I brought this material up at my seminars in 1969 at Stanford, Harvard and MIT. A very good paper presented later by Westlake in April 1970 (Kybernetik 7, 129-153) gives an extensive analysis of a holographic system using the Fresnel zone plate idea, and also the idea that pattern recognition is achieved in the visual system by the use of Fourier transforms. I had already put this view forward, with some further details, in my unpublished communications at the seminars in September 1969. The author was kind enough to send me a copy of his paper.
It is important to realize that the visual system is able to recognize a particular shape instantly, wherever the object appears in the visual field and whatever its size. For this to be possible it is clear that some kind of transform has to be used somewhere in the visual system, and we might have to decide exactly where the transform fits in and at what point we have the information in topographic form. The idea which I presented in 1969 was that in the visual system independence of position in the visual field depended upon the formation of Fourier transforms in the visual cortex, followed by dropping out the lowest spatial frequencies. The lowest frequencies (Essentially DC) represent the position of the object in the visual field, and that is certainly what we have to get rid of. Pattern recognition is easy enough to devise if a picture is presented in register, such as a series of numbers in a check reading machine; with the eye it is necessary to recognize an object anywhere in the visual field. It is first necessary to explain in general terms what Fourier transforms are, in a relatively non-algebraic way.
Fourier showed that any periodic wave, of any shape whatever, could be reproduced by summing a series of simple sine waves which are harmonics (Multiples) of the fundamental wave, so long as they were of the correct phase and amplitude. This idea can be extended to apply to a single cycle of the fundamental wave: in other words, a graph of any shape can be simulated by the addition of a series of simple sine waves superimposed. See Figure #10. Suppose that you draw a straight line across a picture and express the density of the picture along that line as a graph. Then you could reproduce that density variation along that line of the picture by summing a series of sinusoidal waves. I proposed that you could represent the entire picture in two dimensions by superimposing a series of sinusoidal grids of various widths and orientations. This would be a Fourier transform.
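Fourier's idea can be demonstrated with a few lines of arithmetic. The classic example is a square wave, which is built up from its odd sine harmonics with amplitudes falling off as 1/n; the more harmonics you sum, the more closely the result hugs the target shape. The square wave is my choice of example here, any periodic shape would serve.

```python
import math

# Partial sums of the Fourier series of a square wave:
#   f(x) = (4/pi) * [ sin(x) + sin(3x)/3 + sin(5x)/5 + ... ]
# Each added harmonic brings the sum closer to the flat-topped square shape.

def square_wave_partial_sum(x, n_harmonics):
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1                               # odd harmonics only
        total += (4 / math.pi) * math.sin(n * x) / n
    return total

# At x = pi/2 the true square wave equals 1; partial sums converge toward it.
coarse = square_wave_partial_sum(math.pi / 2, 3)    # 3 harmonics: rough
fine = square_wave_partial_sum(math.pi / 2, 200)    # 200 harmonics: close
```

This is the one-dimensional version of the claim in the text; the two-dimensional version simply replaces sine waves with sinusoidal grids of various widths and orientations.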
The grid sensitive units in the visual cortex would be formed by linking together in parallel a number of the famous line sensitive neurons discovered by Hubel and Wiesel. Grids of various different spacings could be represented by pairs of grid sensitive neurons, each at a slightly different angle to the other. To convince yourself of this possibility, hold two ordinary hair combs in front of one another at a slight angle and you will see a wider grid. This is formed by a Moiré pattern, which is due to the overlap of the spatial frequencies in the two combs - a kind of interference phenomenon. We certainly do have grid sensitive cells in our own visual system, as can be shown by optical fatigue experiments (See New Scientist, Jan. 16 1973). Fergus Campbell reported from Cambridge that staring for a considerable time at a grid of a particular orientation produced an after image which showed evidence of fatigue in grid sensitive neurons. This confirms my idea. Further evidence of grid sensitive neurons was found by Hartline, who used bright grids flashed in front of the eyes; that also produced fatigue, with persistence of the pattern for a short time, as is shown by staring at a blank background. Independence of size might be obtained by making a second Fourier transform, or by making some other kind of transform, possibly a Fourier transform using the logarithm of the spatial frequencies. I am not a sufficiently good mathematician to say what transform is required in order to cover change of size as well as translational independence, but there must be one, because the visual system has it. In the biological case the Fourier transform is not complete and perfect but only performs what is useful to the animal. In the human case it does not cover inversion and only covers modest angles of tilt. This implies that some of the elements required for a complete Fourier transform are missing.
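The translational independence discussed above rests on a well known property of the Fourier transform that is easy to demonstrate: translating a pattern changes only the phases of its transform, not the magnitudes, so a recognizer working on the magnitude spectrum is automatically position-independent. The sketch below uses a one-dimensional toy pattern and a hand-written discrete Fourier transform; a real visual system would of course work in two dimensions.

```python
import cmath

# Discrete Fourier transform magnitudes of a 1-D "image".
# Shifting the pattern along the line leaves every magnitude unchanged;
# positional information lives entirely in the (discarded) phases.

def dft_magnitudes(signal):
    n = len(signal)
    mags = []
    for f in range(n):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

pattern = [0, 0, 1, 2, 1, 0, 0, 0]
shifted = pattern[3:] + pattern[:3]   # same shape, different position

m1 = dft_magnitudes(pattern)
m2 = dft_magnitudes(shifted)          # identical magnitude spectrum
```

This demonstrates the underlying mathematical fact; whether the cortex discards positional information by dropping low frequencies, by discarding phase, or by some other means is exactly the open question raised in the text.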
During the learning process we develop only what we need and are actually going to use. This is illustrated by the well-known experiment in which kittens were restrained by collars round the neck, making it impossible for them to see horizontal lines. They consequently became defective in the development of certain laterally connecting fibers in the visual cortex.
In the discussion of the holographic memory we continually refer to primary sensory cortex. The outcome of this diversion is that, in the case of visual cortex, we might instead have to relate the memory to an area which follows the Fourier transform, so that a recalled scene is independent of position in the field. I am not clear about this. The visual cortex is exceedingly complex, and there are some twenty visual maps in different places dealing with different aspects of vision.
This was approximately my position in 1969. In 1994 I submitted a paper developing my ideas further, both to Science and to Nature, but it was not published. It is not necessary to read it, as I have incorporated all of its ideas into the present text; I include it for historical reasons, in order to claim a degree of priority. It had to be compressed too much to meet the editors' requirements and is therefore more difficult to follow - that is probably why they did not like it. I have expanded those ideas into the present text and extended them further. The article is given in the appendix. There were minimal differences between the two submissions (one to Science and one to Nature); some typos have been corrected.
I will now continue my main theme, recapitulating a little. I have presented a picture of a holographic record in higher cortical neurons, written as a pattern of cells (or maybe just synapses), some of which have been inhibited and some of which have had their excitability enhanced. This would have been effected to some degree by Hebbian processes (more about this later). When a sensory experience produces a corresponding pattern of inhibition and excitation, then for periods of some 300 milliseconds we have oscillatory loops communicating simultaneously with primary sensory cortex in various areas, producing a sensory experience in the mind. Those loops oscillate because the relevant feedback pattern unscrambles the channel to a particular primary sensory cortical area, giving rise to positive feedback and oscillation. The nature of the feedback to sensory cortex and its unscrambling function has been explained. When the reciprocal network has been fully formed we get a positive feedback situation in very many loops, because we are feeding the output straight back to the original neurons. The result is very many potentially oscillatory loops. Oscillation occurs if the loop gain is more than unity. We assume that there is a great deal of negative feedback in general, so that there should be stability; the gain in the various loops has to be held close to unity. If we are just above that level, oscillation occurs; just below it, oscillation ceases. At any time when recall is being effected, there will be oscillatory loops connecting any active neuron in the lower sensory area with the higher cortical memory area. One can envisage an active oscillatory loop passing through the lower neuron, but there will also be a "fuzzy" surround of supporting pathways, each of which by itself would fall short of oscillation, connecting to a number of upper neurons. See Figure #11.
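The knife-edge behaviour around unity gain can be shown in a toy calculation (this is an illustration of the general feedback principle, not a model of any particular circuit): each round trip of the loop multiplies the circulating signal by the loop gain, so a gain just above one builds up and a gain just below one dies away.

```python
def loop_amplitude(gain, n_cycles, a0=1.0):
    """Amplitude of a signal after n round trips of a feedback loop:
    each pass multiplies the circulating signal by the loop gain."""
    history = []
    a = a0
    for _ in range(n_cycles):
        a *= gain
        history.append(a)
    return history

# Gain just above unity: the oscillation is sustained and grows.
# Gain just below unity: the oscillation is extinguished.
print(loop_amplitude(1.05, 12)[-1])
print(loop_amplitude(0.95, 12)[-1])
```

This is why the gain must be held close to unity by negative feedback: a few per cent either way decides whether a loop is active or silent.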
It is now necessary to explain how the oscillations become extinguished. We need a process which will reduce the gain in a loop fairly quickly, so that another loop can become active. It turns out that habituation will do the trick. Habituation is an effect which temporarily reduces the excitability of a neuron immediately after it has been active. Quite a small amount of habituation reduces the gain to just below unity, and the oscillations stop. This may take a little time, and in our model we have taken it to be about 300 milliseconds. Synapses have Hebbian properties and they also show habituation. We can assume that in mammals habituation occurs in a way very similar to that described by Kandel in the marine snail Aplysia (Ref. 7 - E. R. Kandel, Cellular Basis of Behavior, Freeman, 1976). He found that, for a single synapse, habituation goes to about 30 per cent in ten seconds. If we have to go round a loop comprising three synapses, that translates to a loop gain falling to about 3 per cent in ten seconds (0.3 x 0.3 x 0.3 = 0.027), or perhaps to about 90 per cent within a third of a second - enough to extinguish oscillation. Over four synapses it would easily be enough. We do not know how many synapses are involved, but it is entirely possible that loop gain could be reduced by the few per cent necessary within 300 milliseconds. (The fall would be logarithmic, not linear; you can juggle the figures and it is feasible.) It is part of the theory that oscillations in such a loop do occur during conscious thought and that they last for about 300 milliseconds, corresponding to the time for an individual thought. The frequency would probably be about 40-50 Hz; such oscillations are known to occur. This corresponds to about 12 to 15 oscillations before the loop gain falls to extinction level. It might be necessary to have some lateral coordination between neurons in order to synchronize the changing of loops.
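The arithmetic above can be checked directly: the habituation factors multiply, one per synapse on the way round the loop, and a 40-50 Hz oscillation lasting 300 milliseconds yields 12 to 15 cycles.

```python
def loop_gain_fraction(per_synapse_fraction, n_synapses):
    """Residual loop gain when every synapse in the loop has habituated
    to the given fraction of its original strength: the per-synapse
    factors multiply once per synapse around the loop."""
    return per_synapse_fraction ** n_synapses

# Kandel's figure for a single Aplysia synapse: habituation to ~30% in 10 s.
three_synapse_loop = loop_gain_fraction(0.3, 3)   # ~0.027, i.e. about 3%

# Cycles of a 40-50 Hz oscillation that fit into a 300 ms thought:
cycles_low = round(40 * 0.300)    # 12
cycles_high = round(50 * 0.300)   # 15
print(three_synapse_loop, cycles_low, cycles_high)
```

The point of the sketch is only that the numbers are of the right order; the actual number of synapses per loop, and the true time course of habituation, are unknown.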
We now have to explain how a learning process might set up the correct reciprocal network. Initially the connections are random, or nearly so, and when the brain begins to function its signals are at first random too. However, there will be feedback pathways here and there which tend just a little toward the oscillatory condition. The neurons in such a loop will fire a little more often than the others, and as a result that pathway will be written in a little more strongly by a Hebbian process. What Donald Hebb originally proposed was that when neurons fire, a pre- and post-synaptic process builds in a tendency to fire at a lower threshold at a later time. Kandel has shown that Hebbian rules are obeyed in the synapses of Aplysia (the marine snail), and we certainly expect the same in mammals. When we get actual oscillations, the oscillatory loop will be written in very firmly by Hebbian rules. As a pathway approaches the oscillatory condition there is a greater tendency to fire, and its neurons fire more frequently than a random neuron, resulting in post-synaptic potentiation. Here we have a kind of neural Darwinism, though not quite the same as the neural Darwinism described by Gerald Edelman (Neural Darwinism, Gerald M. Edelman - Oxford University Press 1989). The sensitivity of the synapses in a potential loop would gradually increase over time by a kind of natural selection. Starting with random connections before the training and initially random use, there would be a tendency to fire more readily whenever the signals hit a pathway which was closer to the positive feedback condition. Firing slightly more often, the feedback loop would build up over time by a process analogous to natural selection, or survival of the fittest connection. We have arrived at a picture of brain function explaining the holographic model and showing how the unscrambling is achieved by a reciprocal network.
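The selection process described above can be caricatured in a few lines. In this toy sketch (the pathway gains, the increment size, and the firing rule are all invented for illustration, not taken from the theory's biology), pathways fire with probability proportional to how close they are to the positive feedback condition, and each firing strengthens the pathway that fired - a Hebbian rich-get-richer dynamic in which an initially slight advantage compounds:

```python
import random

def hebbian_selection(initial_gains, steps=1000, increment=0.01, seed=0):
    """Toy 'survival of the fittest connection': at each step one pathway
    fires, chosen with probability proportional to its current gain, and
    the pathway that fired receives a small Hebbian increment."""
    rng = random.Random(seed)
    gains = list(initial_gains)
    for _ in range(steps):
        r = rng.uniform(0.0, sum(gains))
        cumulative = 0.0
        for i, g in enumerate(gains):
            cumulative += g
            if r <= cumulative:
                gains[i] += increment   # the pathway that fired is written in
                break
    return gains

# Three nearly equal random starting pathways: small initial differences
# are amplified as firing and strengthening reinforce each other.
print(hebbian_selection([1.00, 1.02, 1.01]))
```

No teacher is needed: the statistics of firing alone pick out and entrench the pathways nearest to oscillation, which is the self-programming claimed for the reciprocal network.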
The properties of the unscrambling network have been defined and we have explained how it could be self-programmed. The result of this self-programming is that the brain is a "Topographic Form Analyzer". We must now examine artificial neural nets and see what they can do, comparing them with the brain.