How the Brain Evolved Language


Figure 2.9. A six-celled brain: bilateral symmetry with inhibition and proprioceptive feedback. (a) Sensory circuit. (b) Motor circuit.

drove the right side of the body and that the motor command system formed a loop through R_i. When proprioceptive signals from the fish's right side told R_i that the fin stroke was complete, R_i inhibited L_e, conserving energy and allowing R_e to activate its stroke on the left side of the fish's body. Thus, the vertebrate swam through the water, rhythmically swinging its tail from side to side, the quickest predator in the Ordovician sea.


The Communicating Cell
In the last chapter we reviewed some of the problems one-celled life solved in its struggle to survive. We saw that, in order to get ahead in life, the primeval organism needed to grow and to move. This led to multicelled creatures with the new problem of intercellular communication. To get ahead, the multicelled organism's left cell had to know what the right cell was doing. Thus, the evolutionary differentiation of cells leads us to the origin of mind: knowing what one is doing.
For muscle cells to work together, they must be coordinated. This implies that some cell, or group of cells, must take charge and communicate an order to muscle cells, many of which are relatively distant. By what structure and process could one cell communicate with another cell over a distance? In the modern neuron, it is the axon that makes this possible. The axon is a long, thin, tubular extension of the cell's membrane that carries electrical intercellular communications. There is no clear fossil record of how the axon evolved, but as we saw in chapter 2, an obvious candidate prototype was the Mastigophora's flagellum: nature, having once invented a distal extension of the cell, did not have to reinvent it. Nature only had to remember it somewhere in DNA and then adapt it to create a new cell type, the neuron.
Since the functions of brain cells were a mystery to early anatomists, different neurons were first named by the shapes of their cell bodies, and in the first half of the twentieth century, a menagerie of “pyramidal” and “spherical,” “stellate” and “bipolar,” and “spiny” and “smooth” neurons was collected under the microscope. But whatever their body shape, all neurons have long axons.

The large pyramidal cells of cortex (from the Latin for “rind”) stain particularly well and became the early objects of microscopic study. Cerebral cortex is a sheetlike fabric of neurons about 4–5 mm thick. In microscopic cross sections of this fabric like figure 3.1, early researchers could see the apical dendrites of pyramidal cells rising high above their cell bodies, finally spreading out in treelike “arborizations,” while lesser, basal dendrites spread out from the bottom. Below the cell body, axons could be seen descending below the cortical sheet (a, b, c, and d in figure 3.1). But where do these axons go? They quickly outrun the microscope's field of view. If one could follow figure 3.1 several frames to the right or the left, it would become clear that many of the axons rise again into the cortical sheet, where they connect with the dendrites of other cortical neurons. But it remains almost impossible to know exactly which neurons connect with which other neurons.

Figure 3.1. The laminae of cerebral cortex. (Lorente de Nó 1943. Reprinted by permission of Oxford University Press.)
This problem becomes even worse if one looks very closely at some of the pyramidal cells in figure 3.1. There it can be seen that axon collaterals branch off and radiate from the main axon. So to learn the connections of any neuron, we must trace not just one axon but thousands of axon collaterals. That there are thousands of axon collaterals for every main axon is graphically demonstrated in figure 3.2, a drawing from an electron micrograph of an average neuron's cell body. Each bump on the cell body is the synaptic connection of some axon collateral. If the average neuron receives thousands of connections, as in figure 3.2, then it follows that an average neuron must also send out thousands of axon collaterals, each originating from a single inconspicuous output fiber.

Figure 3.2. Competition for synaptic sites is intense. (Poritsky 1969. Reprinted by permission of John Wiley and Sons.)
To put the problem in further perspective, imagine that a largish pyramidal cell 150 µm in diameter were actually a large tree with a trunk 1 m in diameter. Then the tree's apical dendrite would rise 50 m above the ground, and its basal dendrite “root system” would be 30 m in diameter. All of this relates nicely to the proportions of a large tree. But the main axon collaterals would run a distance of 1 km! Making the problem worse still, many main axon collaterals descend and run this distance beneath the cortex in a great labyrinth of “white matter,” a tangled mass of trillions of other axons, a cortical “underground” (a, b, c, and d in figure 3.1). Now imagine the task of excavating and tracing a single axon. Sometimes, the axons form bundles, called fascicles, which, like a rope, can more easily be traced from brain region to brain region. In this way we know, for example, that a bundle of axons (the arcuate fasciculus) connects Broca's area and Wernicke's area. Only within the last decade has science finally begun to accurately trace detailed pathways from neuron to neuron using radioactive and viral tracing techniques, and even so the problem is daunting.
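To see what the analogy implies about the real cell, one can simply run the scaling backward. The short sketch below (a few lines of Python, using only the numbers quoted above) converts the analogy's tree dimensions back into actual ones:

```python
# Map the tree analogy back onto the real cell. Only the numbers quoted in
# the text are used; the scale factor turns a 150 um cell body into a 1 m trunk.

SCALE = 1.0 / 150e-6   # metres of "tree" per metre of neuron (~6,667x)

for name, scaled_m in [("apical dendrite", 50.0),
                       ("basal 'root system'", 30.0),
                       ("main axon collaterals", 1000.0)]:
    actual_mm = (scaled_m / SCALE) * 1000.0
    print(f"{name:22s} {scaled_m:6.0f} m in the analogy = {actual_mm:6.1f} mm in the brain")
```

On this reckoning the 1 km of collaterals corresponds to roughly 15 cm of real axon threading through the white matter, which is one way of seeing why tracing even a single one is so daunting.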
It is no wonder that throughout the twentieth century, researchers by and large ignored the axon collaterals and instead focused their ever-more-powerful microscopes on ever-smaller neural structures in an ever-narrowing field of view. We shall do the same in this chapter. We shall focus on details of neural structure. But in the rest of this book, our main task will involve understanding the broad view, the connectivity patterns of axon collaterals.
The Neuron Membrane 
Toward the end of chapter 2, we saw that the early neuron had to develop a cable to communicate its messages to a muscle cell, and we identified the axon as that cable. But how exactly does a message travel down the axonal cable? Since axons are hollow, one might imagine chemicals diffusing or even (à la Descartes) being pumped through the axon's interior. A moment's reflection on the tree analogy and the 1 km axon should convince us that this would be a hopelessly slow mechanism. And blessed with the hindsight of twentieth-century science (and Galvani's eighteenth-century observations), we know that the nervous signal is electrical. Where could the first neuron have gotten the idea of electrical communication?
Thinking back to chapter 2, recall the membrane feeding frenzy. There we postulated that the “mouths” of the protozoan cell membrane could communicate via the ionic charges of sodium (Na+) and chloride (Cl−) in the primordial soup. The evolution of nervous signaling through recruitment of such an ionic feeding signal is quite speculative, but there is no longer anything speculative about the role of such ions in the propagation of nervous signals. In 1963 Hodgkin and Huxley received the Nobel Prize for defining the role of Na+ and K+ in nerve signal propagation along the giant axon of the squid.
Not all nervous systems evolved exactly like the human one. Mollusks developed along a rather different line, and squid evolved axons up to 500 µm in diameter, some 100 times larger than a comparable vertebrate axon. A series of twentieth-century studies based on these giant squid axons established many essential facts about how nerve cells transmit signals along the axon. Into such large axons, Hodgkin and Huxley were able to insert microelectrodes and micropipettes to measure differences in charge and chemical concentrations inside and outside the axon. Living cells tend to be negatively charged, and in most neurons this charge is usually expressed as an internal, negative, “resting-level” charge on the order of –70 millivolts, relative to the surrounding plasma. This reflects an unequal distribution of ions across the membrane; in particular, the concentration of positively charged sodium ions is kept much lower inside the cell than outside.
Figure 3.3 schematically depicts a segment of an axon membrane and the process Hodgkin and Huxley discovered. In the figure, when a sodium gate is opened, positively charged Na+ ions are drawn through the membrane into the cell interior. This local voltage drop (a “shock,” or depolarization) causes adjacent sodium gates to open, and the sodium influx moves along the axon membrane. This simple Na+ chain reaction is the basis of the “nervous” signal, but it has its limits. If sodium is allowed to flow into the axon unchecked, the membrane potential will soon reach 0 mV. Once this happens (and actually well before this happens), another signal cannot be generated until the excess sodium is somehow pumped out of the axon and the original –70 mV resting-level charge is restored. In fact, cells do have membrane “waste” pumps which remove excess sodium and other waste from the cell interior, but these pumps are metabolic and operate much more slowly than the electrical forces of ions and the nerve signal.
Figure 3.3. Sodium and potassium gates. (Eccles 1977. Reprinted by permission of McGraw-Hill Book Company.)

The successful neuron could not wait around for these bilge pumps. The nerve cell membrane therefore evolved a separate set of potassium gates to control the influx of sodium. As figure 3.3 illustrates, when an initial Na+ influx locally depolarizes the axon membrane, Na+ rushes in. But in the neuron, the same ionic forces cause potassium gates to open. Positively charged potassium ions (K+) leave the cell, repolarizing the membrane and closing the open Na+ gates. Potassium being heavier than sodium, we may imagine the potassium gates and ions as being relatively ponderous and slow. Thus, the K+ current is exquisitely timed to stop the chain reaction just after the sodium charge has begun to propagate down the axon but before an influx of Na+ floods and discharges the membrane unnecessarily. The time course of these events is plotted in figure 3.4.
On an oscilloscope, as in the figure, this brief membrane depolarization shows up as V, a brief voltage spike, or action potential. After the spike, metabolic pumps must still restore K+ and Na+ to their original levels, but this is now a minimal task, since only enough Na+ and K+ was transported across the membrane to generate a single spike. In the meantime, the –70 mV resting-level charge (–64 mV in the particular cell measured in figure 3.4) can be speedily restored, and a new Na+ pulse can be generated and propagated. Most central nervous system (CNS) neurons have a “refractory time” on the order of 2.5 ms. During this time, the neuron cannot generate another spike. This means that most neurons can generate spikes as frequently as 400 times per second, but there is substantial variation. Renshaw interneurons,2 for example, have been found to fire up to 1,600 times per second.

Figure 3.4. Sodium and potassium currents create and limit the duration of signal “spikes.” (Eccles 1977. Reprinted by permission of McGraw-Hill Book Company.)
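The time course plotted in figure 3.4 can be caricatured in a few lines of code. The sketch below is emphatically not the Hodgkin-Huxley model: the conductances, threshold, and gate timings are invented numbers, chosen only to reproduce the qualitative story told above (a fast Na+ influx once threshold is crossed, a slower and longer K+ efflux that repolarizes the membrane and briefly overshoots the resting level, and a refractory pause of a couple of milliseconds before another spike is possible).

```python
# A caricature of the spike in figure 3.4, NOT the real Hodgkin-Huxley model.
# All constants are invented; only the qualitative behavior follows the text.

def simulate(duration_ms=8.0, dt=0.01, stim_at=1.0):
    E_REST, E_NA, E_K = -70.0, 55.0, -90.0   # rest and ionic reversal levels, mV
    THRESHOLD = -55.0
    g_leak, g_na_max, g_k_max = 0.5, 5.0, 2.0
    v = E_REST
    na_open = (0.0, 0.0)     # (start, end) of the Na+ gates' open window, ms
    k_open = (0.0, 0.0)      # K+ gates open a little later and stay open longer
    trace = []
    for i in range(int(duration_ms / dt)):
        t = i * dt
        if abs(t - stim_at) < dt / 2:        # a brief external "shock"
            v = THRESHOLD + 5.0
        refractory = t < k_open[1]
        if v >= THRESHOLD and not refractory and not (na_open[0] <= t < na_open[1]):
            na_open = (t, t + 0.5)           # brief Na+ influx
            k_open = (t + 0.3, t + 2.8)      # slower, longer K+ efflux = refractory pause
        g_na = g_na_max if na_open[0] <= t < na_open[1] else 0.0
        g_k = g_k_max if k_open[0] <= t < k_open[1] else 0.0
        v += (g_na * (E_NA - v) + g_k * (E_K - v) + g_leak * (E_REST - v)) * dt
        trace.append((t, v))
    return trace

if __name__ == "__main__":
    for t, v in simulate()[::50]:            # sample the trace every 0.5 ms
        print(f"{t:4.1f} ms  {v:7.1f} mV")
```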
By implanting microelectrodes at two points along the axon in figure 3.3, it is possible to electronically measure the speed at which a spike propagates. As a rule of thumb, nerve signals travel at a rate of 1 m/s, but a typical speed for a 30 µm invertebrate axon might be 5 m/s. The speed increases with the diameter of the axon. This explains why the squid, as it evolved to be larger, evolved a giant axon to signal faster over larger and larger distances. With a 500 µm axon, the squid's nervous signal travels at 20 m/s. A little math, however, will show that the squid's solution is something of an evolutionary dead end. There are limits to growth. The speed of transmission is proportional to the square root of the axon diameter, but the axonal volume which the cell metabolism must support grows with the square of the diameter. The squid was caught in a game of diminishing returns. Vertebrates found a better solution.
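The “little math” is worth doing explicitly. The sketch below assumes, as the text states, that conduction speed grows with the square root of the diameter while the axoplasm to be maintained grows with its square, and it is calibrated on the text's own data point of a 30 µm axon conducting at 5 m/s:

```python
# Diminishing returns of simply growing a fatter axon.
# Calibrated on the text's data point: a 30 um axon conducts at 5 m/s.

def unmyelinated_speed_m_s(diameter_um):
    return 5.0 * (diameter_um / 30.0) ** 0.5       # speed ~ sqrt(diameter)

def relative_metabolic_load(diameter_um):
    return (diameter_um / 30.0) ** 2               # axoplasm per unit length ~ diameter^2

for d in (30, 120, 500):
    print(f"{d:3d} um axon: {unmyelinated_speed_m_s(d):5.1f} m/s "
          f"at {relative_metabolic_load(d):6.1f}x the metabolic load")
```

Quadrupling the speed to the squid's 20 m/s costs nearly three hundred times the axoplasm, which is the game of diminishing returns described above.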
Myelin 
Vertebrates evolved a type of cell that wraps itself around an axon. In the brain, these cells are called oligodendrocytes, whereas elsewhere, they are called Schwann cells, but in both cases the resultant axon wrapping is called a myelin sheath or, simply, myelin. Between the sheaths of successive oligodendrocytes, the axon is exposed at “nodes of Ranvier” (figure 3.5). When a sodium influx depolarizes the extracellular fluid at such an exposed node, adjacent sodium gates cannot be opened, because they are under the myelin sheath. Instead, the ionic influence is exerted on the next node of Ranvier, opening sodium gates there. As a result, nervous impulses “jump” from node to node at electrical speeds which are not limited by the mechanical opening and closing of local membrane gates. This process is therefore sometimes called saltatory conduction, from the Latin saltare < salire, “to leap.” So while a 500 µm giant squid axon labors to achieve speeds of 20 m/s, a large, 5 µm, myelinated vertebrate axon can achieve speeds of 120 m/s. As a bonus, the myelinated neuron also needs far fewer ion pumps to restore ionic balance after each pulse and so uses less metabolic energy. The myelin sheath also provides structural support for the axon, allowing it to be thin and more energy-efficient without loss of strength.

Figure 3.5. Myelin. (Eccles 1977. Reprinted by permission of McGraw-Hill Book Company.)
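Using only the two figures quoted above, a 500 µm unmyelinated squid axon at 20 m/s and a 5 µm myelinated axon at 120 m/s, the payoff is easy to quantify. The comparison below assumes, as before, that the metabolic load scales with the cross-sectional area of axoplasm; the labels are just names for the text's numbers:

```python
# Compare the squid's solution with the vertebrate's, using the figures in the text.
squid = {"diameter_um": 500, "speed_m_s": 20}       # unmyelinated giant axon
vertebrate = {"diameter_um": 5, "speed_m_s": 120}   # myelinated axon

speed_gain = vertebrate["speed_m_s"] / squid["speed_m_s"]
# axoplasm per unit length scales with the square of the diameter
axoplasm_saving = (squid["diameter_um"] / vertebrate["diameter_um"]) ** 2

print(f"{speed_gain:.0f}x faster on 1/{axoplasm_saving:.0f} of the axoplasm per unit length")
# -> 6x faster on 1/10000 of the axoplasm per unit length
```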
Figure 3.6 illustrates how successive myelin cells envelop an axon. Myelin is white in appearance, so it is the myelin sheaths which give the mass of axons beneath the cortex the name “white matter.” Inflammation and/or degeneration of myelin sheaths cause major neural pathways to slow nearly to a halt. This is the debilitating disease known as multiple sclerosis.

Figure 3.6. Oligodendrocytes and Schwann cells wrap axons in myelin. (Joseph 1993. Reprinted by permission of Plenum Press.)
Thresholds, or Why Neurons Are (Roughly) Spherical 
Although I have explained how the nervous signal came to be, and how it came to be fast, I have not explained how a nerve signal, being just barely sustained along the surface of a 3 µm axon, could depolarize a 150 µm cell body.

It can't. So where the axon terminates on another cell, the presynaptic axon terminal swells, forming a larger synaptic terminal, or “knob.” The postsynaptic cell also swells at the contact site, forming a spine (figure 3.7). Now the impedance of the axon more nearly matches the impedance of the postsynaptic membrane. When the charge does successfully cross the synapse onto the postsynaptic membrane, the spine funnels the charge onto the postsynaptic cell body.
Yet it is not funneled directly onto the cell body. As figures 3.7 and 3.2 illustrate, the spine is but the smallest branch in an arborization of dendrites, which resembles nothing so much as our fractal fern in figure 2.4. Thus, a single spike never depolarizes a postsynaptic cell body by itself, but if one small dendritic branch is depolarized just as it meets another small, depolarized branch, then the combined charge of the two together can depolarize a dendritic limb. And if that depolarized limb should meet another depolarized limb, then that limb will be depolarized, and so on, until the “trunk” of the dendritic tree builds a charge sufficient to begin depolarizing the cell at the “north pole” of the cell's spherical body.

In order for the depolarizing current to reach the axon, a larger and larger band of depolarization must spread out from the north pole—until the depolarization finally reaches the cell's equator. From there to the south pole, the depolarization spreads to an ever-decreasing area of membrane. Thus, the equator defines a threshold, a degree of membrane depolarization which must be exceeded if depolarization is ever to reach and propagate along the axon. Once exceeded, however, a coherent charge, limited by the dynamics illustrated in figure 3.4, is delivered to the axon.

Figure 3.7. Photomicrograph of a synapse. A highlighted dendritic spine protrudes downward from right-center, with a highlighted axon terminal knob synapsing from above. (Llinás and Hillman 1969. Reprinted by permission of the American Medical Association.)
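The branch-by-branch summation described above is easy to mimic in code. In the toy dendritic tree below, each branch adds the depolarization arriving at its own spines to whatever its sub-branches deliver, and the cell responds only if the total reaching the soma clears a threshold; the tree shape, the millivolt values, and the threshold are all invented for illustration.

```python
# A toy dendritic tree summing spine inputs toward the soma.
from dataclasses import dataclass
from typing import List

@dataclass
class Branch:
    children: List["Branch"]
    spine_input_mv: float = 0.0      # depolarization arriving at this branch's own spines

    def depolarization(self) -> float:
        return self.spine_input_mv + sum(c.depolarization() for c in self.children)

EQUATOR_THRESHOLD_MV = 15.0          # invented threshold at the cell's "equator"

# one trunk with a quiet branch and a limb carrying two depolarized branches
tree = Branch(children=[
    Branch(children=[], spine_input_mv=4.0),
    Branch(children=[
        Branch(children=[], spine_input_mv=5.0),
        Branch(children=[], spine_input_mv=7.0),
    ]),
])

total = tree.depolarization()
verdict = "volley of spikes" if total >= EQUATOR_THRESHOLD_MV else "no spike"
print(f"{total:.1f} mV reaches the soma -> {verdict}")
```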
A generation of researchers was led astray along with von Neumann (1958) in concluding that since the spike was a discrete, thresholded, “all-or-nothing” event, the nervous system itself was a discrete, binary system like a computer. If we compare the surface area of the axon to the surface area of the cell body, we realize that the thin axon cannot carry away the entire charge of the cell's southern hemisphere in one spike. Instead, a volley of spikes is normally released, one after another. The frequency of spikes in this volley varies in proportion to the charge on the cell body, up to the limit imposed by the refractory period of the cell membrane, and this variable spiking frequency carries much more information than a simple 1 or 0.
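In other words, a neuron's output behaves more like an analog rate code than a binary digit. A one-function sketch makes the point; the linear gain is an invented placeholder, and only the 2.5 ms ceiling (hence the 400 spikes-per-second maximum) comes from the text:

```python
# Rate coding: each spike is all-or-nothing, but the volley's frequency
# varies with the charge on the cell body, up to the refractory ceiling.

REFRACTORY_MS = 2.5
MAX_RATE_HZ = 1000.0 / REFRACTORY_MS     # = 400 spikes per second

def firing_rate_hz(charge_mv: float, gain: float = 10.0) -> float:
    return min(max(charge_mv, 0.0) * gain, MAX_RATE_HZ)

for mv in (0, 5, 20, 60, 200):
    print(f"{mv:3d} mV on the cell body -> {firing_rate_hz(mv):5.0f} spikes/s")
```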
Traveling away from the cell body, the spike eventually brings us back to 
the problem Golgi tried to explain to Ramón y Cajal (chapter 1). If the cell 
membranes are not connected, how can the charge be transferred across this 
physical gap, called a synapse, onto the postsynaptic membrane? 
The Synapse 
At its end, each axon collateral abuts (but is not physiologically attached to) another cell's membrane. This junction is called a synapse. Even after solving the impedance mismatch problem with a system of terminal knobs, spines, and dendrites, there remains the biological hurdle of communicating the nervous signal from the presynaptic cell to the postsynaptic cell, across a synaptic gap. The quickest way to pass a nervous impulse would be to simply pass the ionic chain reaction directly onto the postsynaptic cell membrane. In fact, “gap junctions” are commonly found in submammalian species like electric fishes. A few are even found in the human nervous system, but they are very rare. After all, if Na+ flows into the presynaptic terminal knob when it is depolarized, then there will be that much less Na+ left in the synapse to depolarize the postsynaptic membrane. Instead, at virtually all synapses, specialized chemical effluents from the presynaptic axon terminal—neurotransmitters—open specialized pores in the postsynaptic cell membrane, and the opening of these gates reinitiates depolarization. But this is a slow process. As Sherrington demonstrated in 1906 (incidentally disproving Golgi's continuous-network hypothesis and winning Ramón y Cajal's side of the argument), it decreases the speed of the neural signal by a factor of ten. If the race of life goes to the quick, how did the chemical synapse survive?
As it happens, in the mid-1930s electrical engineers began to model neural circuits rather as if they were Golgian radio circuits. In a famous paper McCulloch and Pitts (1943) offered networks like those in figure 3.8. Like the last of these, which features a reverberatory loop, such networks had many interesting properties, but they had one glaring pathology: when the current is switched off or interrupted in such a network, all memories are lost.

Figure 3.8. A McCulloch-Pitts neural network. (McCulloch and Pitts 1943. Reprinted by permission of Elsevier Science Ltd.)
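The pathology is easy to reproduce. Below is a McCulloch-Pitts-style reverberatory loop in miniature, two binary threshold units exciting each other in a ring (a simplification of the circuits in figure 3.8): a single input pulse circulates indefinitely, but the moment activity is interrupted, the “memory” is gone for good.

```python
# A tiny reverberatory loop of two binary threshold units.

def step(state, external_input):
    a, b = state
    # each unit fires iff its single excitatory input was active on the last step
    return (1 if (b or external_input) else 0, 1 if a else 0)

state = (0, 0)
for t in range(12):
    pulse = 1 if t == 0 else 0      # a single brief input pulse at t = 0
    if t == 6:                      # "switch the current off" at t = 6
        state = (0, 0)
    else:
        state = step(state, pulse)
    print(t, state)
# The loop reverberates (1,0),(0,1),(1,0),... after the pulse, then sits at
# (0,0) forever once interrupted: a trace with no long-term memory.
```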
Donald Hebb noted that “such a trace [a McCulloch-Pitts reverberatory loop] would be unstable” (Hebb 1949, 61). If the current were turned off, the thought would be lost. Such Golgian “gap junction” synapses have no mechanism for long-term memory. Where then might long-term memory be found in a real brain? Hebb went on to speculate that long-term memory (LTM) must therefore reside at the (chemical) synapses, so that “when the axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. . . . A reverberatory trace might cooperate with the structural change and carry the memory until the growth change is made” (1949, 61–62).
Thus, Hebb described synaptic learning as a physiological associative process. Since associative learning had been extensively developed as a psychological concept (Ebbinghaus [1913] 1964), Hebb's criticism of the McCulloch-Pitts model found a wide and receptive audience among behaviorists. The Hebbian theory joined both learning and long-term memory at the synapse, and soon microscopic evidence was found which tended to support his conjecture. It was found that disused synapses tended to atrophy, so the inverse seemed plausible: used synapses would hypertrophy—they would grow. Enlarged axon terminals would have more neurotransmitter and would therefore engender larger depolarizations in their postsynaptic spines, which, being larger also, would pass larger nervous impulses onward. This makes for a very clear picture of LTM, and indeed, we will use hypertrophied presynaptic knobs to represent LTM in the diagrams which follow throughout this book. However, the actual synaptic mechanisms which underlie long-term memory are more complicated, and more wonderful. For a better understanding of the modern theory of synaptic learning and memory, we must look more closely at neurotransmitters.
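Hebb's conjecture reduces to a one-line update rule: strengthen the synapse whenever the presynaptic and postsynaptic cells fire together. The sketch below is that rule and nothing more; the learning rate and the spike trains are invented for illustration.

```python
# Hebb's rule in caricature: co-activity strengthens the synapse.

def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    """Strengthen the synapse whenever pre- and postsynaptic cells fire together."""
    return weight + learning_rate if (pre_active and post_active) else weight

weight = 0.2
pre_spikes  = [1, 1, 0, 1, 1, 0, 1]
post_spikes = [1, 0, 0, 1, 1, 0, 1]   # B often (not always) fires when A does

for pre, post in zip(pre_spikes, post_spikes):
    weight = hebbian_update(weight, pre, post)
print(f"synaptic weight after training: {weight:.1f}")   # grew from 0.2 to 0.6
```

This is, of course, the rule only in caricature; as the rest of the chapter explains, the real synaptic machinery is more complicated.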
Neurotransmitters and Membrane Receptors 
I have already noted that axon terminals form “buds” or “knobs” which increase the contact area between the otherwise very fine axon and the much larger target cell body. Each of these knobs is like a small branch office of the main neuron cell body. Within each knob is an almost complete set of mitochondria to power the branch office and organelles to manufacture locally essential chemicals. All that is really lacking to make the knob a self-sufficient neuron is self-replicating DNA and the cell nucleus—a minor difference, since mature neurons seem to reproduce rarely. Inside the synaptic knob, small vesicles of neurotransmitter (the small bubbles in figure 3.7) accumulate along the membrane adjacent to the synaptic cleft, ready for release. When the membrane depolarizes, the vesicles empty neurotransmitter into the synaptic cleft.
The number of neurotransmitters identified has grown substantially since Loewi first identified acetylcholine as a neurotransmitter in 1921. There are now about a dozen known primary neurotransmitters and several dozen more secondary messengers. But what has proved more complex still is the variety of neurotransmitter receptors. When an axon terminal releases neurotransmitter, the neurotransmitter itself does not penetrate the postsynaptic membrane. Rather, it attaches to receptor molecules in the postsynaptic membrane. These receptors in turn open channels for ions to flow through the postsynaptic membrane. There are several and often many different receptors for each neurotransmitter. Apparently, a mutation that changed the form of a neurotransmitter would have broadly systemic and probably catastrophic consequences, but mutations in the structure of receptors are more local, more modest, and have enabled more nuanced adaptation.
Over a dozen different receptors have been identified for the neuromuscular transmitter acetylcholine, and over twenty different receptors in seven distinct families (5-HT1–7) have been identified for the neurotransmitter serotonin (also known as 5-HT, 5-hydroxytryptamine). Some receptors admit cations (Na+ and Ca2+), depolarizing and exciting the postsynaptic cell. Other receptors admit anions (principally Cl−), hyperpolarizing and inhibiting the postsynaptic cell. Thus, one should perhaps no longer speak of excitatory and inhibitory neurotransmitters, since many can be either, depending upon which type of receptor the neurotransmitter binds to. This is especially true of neurotransmitters that bind to G-protein-coupled receptors: notably dopamine, serotonin, and the adrenergic transmitter noradrenaline.3 In such synapses, regulatory G-proteins and enzymes like adenylate cyclase, which produces the second messenger cAMP (adenosine 3',5'-cyclic monophosphate), further modulate membrane polarization. As we shall see, these have various and complex effects upon the CNS, and they have received considerable recent attention. Drugs like Ritalin® (widely prescribed for attention deficit disorder) and Prozac® (widely prescribed for depression and anxiety) affect the adrenergic and serotonergic systems, respectively. Dopamine deficiency has been isolated as a cause of Parkinson's disease, and Gilman and Rodbell were awarded the 1994 Nobel Prize for initially elucidating the function of G-proteins in neurobiochemical signaling.
The other major group of neurotransmitters comprises those that effect fast signaling directly through ligand-gated ion channels. Fortunately, these neurotransmitters tend to be more uniformly excitatory or inhibitory, and for our minimal anatomies, we need focus on only two: glutamate (and its chemical cousin aspartate) and gamma-aminobutyric acid (GABA).
Glutamate is an excitatory neurotransmitter. It is produced by major brain 
cells like the pyramidal cells of neocortex, and it induces depolarization in its 
target membrane, as described above. In the main excitatory case, glutamate 
is released from the presynaptic axon terminal and attaches to a receptor gate 
on the postsynaptic membrane. 
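The moral of the last two paragraphs fits in a small lookup table: the sign of the postsynaptic effect belongs to the receptor, not to the transmitter alone. The glutamate entries follow the text; GABA's inhibitory role and the two serotonin entries are standard facts added here only for illustration (5-HT3 is a ligand-gated cation channel, 5-HT1A a G-protein-coupled receptor), to show one transmitter producing opposite effects.

```python
# The receptor, not the transmitter, determines the sign of the synapse.
EFFECT_BY_RECEPTOR = {
    ("glutamate", "NMDA"):     "depolarize (excite)",
    ("glutamate", "non-NMDA"): "depolarize (excite)",
    ("GABA",      "GABA-A"):   "hyperpolarize (inhibit)",
    ("serotonin", "5-HT3"):    "depolarize (excite)",      # ligand-gated cation channel
    ("serotonin", "5-HT1A"):   "hyperpolarize (inhibit)",  # G-protein-coupled
}

def postsynaptic_effect(transmitter: str, receptor: str) -> str:
    return EFFECT_BY_RECEPTOR.get((transmitter, receptor), "unknown pairing")

print(postsynaptic_effect("serotonin", "5-HT3"))    # same transmitter...
print(postsynaptic_effect("serotonin", "5-HT1A"))   # ...opposite effect
```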
In 1973, Bliss and Lømo, using microelectrodes, studied the “evoked postsynaptic potential” response of hippocampal pyramidal synapses to repeated
stimulation. For the first few intermittent stimulations, a modest response was 
obtained. But after a few of these stimulations, the synapse began to generate 
a bigger and bigger response to the same stimulation. The synapse seemed to 
learn. Moreover, this learning effect persisted. Called long-term potentiation 
(LTP), this effect is now the leading physiological explanation of long-term 
memory. In figure 3.9, the effects of LTP are graphed. First, several strong, 
“tetanizing” stimuli are applied to a postsynaptic membrane until t = 0. For a 
long time thereafter, milder stimuli continue to elicit an elevated response. 
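A cartoon version of the Bliss and Lømo result: in the sketch below, only strong “tetanizing” stimulation crosses a potentiation threshold and ratchets up the synaptic weight, after which the very same mild test pulse evokes a persistently larger response. All numbers are invented; only the shape of the effect follows figure 3.9.

```python
# A toy long-term potentiation (LTP) experiment.
class Synapse:
    def __init__(self):
        self.weight = 1.0

    def stimulate(self, stimulus):
        if stimulus >= 5.0:             # only strong, "tetanizing" input potentiates
            self.weight *= 1.3          # and the change persists (LTP)
        return self.weight * stimulus   # evoked postsynaptic response

syn = Synapse()
print("test pulse before tetanus:", syn.stimulate(1.0))            # 1.0
for _ in range(4):                                                  # tetanizing stimuli
    syn.stimulate(10.0)
print("test pulse after tetanus: ", round(syn.stimulate(1.0), 2))   # 2.86, and it stays elevated
```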
As it turns out, glutamate attaches to two distinct types of receptors: N-methyl-D-aspartate (NMDA) sites and non-NMDA sites.4 At first, in “normal” transmission of the nerve signal, glutamate opens the non-NMDA gates, allowing Na+ to enter the postsynaptic cell and reinitiating our familiar Na+/K+ chain reaction.
