What’s in the brain?
[This is a precis of a graduate course in Speech Pathology I gave recently at NYU called The Science and Neurology of Language.]
Given our observation of speech, what can we conclude is represented in the brain? So far we’ve found: representations (units), strings (concatenations of units), categories (hierarchical structures) and functions (rules) operating on all of these.
We can observe from speech that language is structured in small units that are joined together to form larger units. We observe this joining at three levels: the phonemic, morphemic and syntactic.
The joining can sometimes be simply sequential (Markovian), as is often the case with strings of phonemes, the morpheme sequence prefix-root-derivational suffix-inflection, and the sequence of words in the noun phrase (determiner-adjectives-noun). Such strings may be joined together one unit at a time in sequence, like beads on a thread.
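A purely sequential joining of this kind can be sketched as a tiny finite automaton. The states and toy lexicon below are my illustrative assumptions, not anything from the text; the point is only that each word is accepted on the basis of the immediately preceding state, bead by bead.

```python
# A minimal finite-automaton sketch (illustrative assumptions throughout):
# accepts noun-phrase strings of the Markovian shape determiner-adjectives-noun.

TRANSITIONS = {
    ("START", "Det"): "AFTER_DET",
    ("AFTER_DET", "Adj"): "AFTER_DET",   # any number of adjectives in sequence
    ("AFTER_DET", "N"): "DONE",
}

LEXICON = {"the": "Det", "a": "Det", "quiet": "Adj", "hushed": "Adj", "audience": "N"}

def accepts(words):
    """True if the word string is a well-formed Det-Adj*-N sequence."""
    state = "START"
    for w in words:
        state = TRANSITIONS.get((state, LEXICON.get(w)))
        if state is None:
            return False
    return state == "DONE"

print(accepts(["the", "quiet", "audience"]))   # True
print(accepts(["quiet", "the", "audience"]))   # False
```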
But we also see that some joinings involve long-distance relationships that are not Markovian. The joining of certain prefixes depends on the grammatical category of the stem, and the category of the stem depends on the suffix; so the ability of certain prefixes to join to a stem depends not on the morpheme immediately following the prefix but on a suffix which may be many morphemes distant. E.g., re- can be attached to verbs, including verbs derived from adjectives or nouns, but typically not to the adjectives or nouns themselves:
rebright, rewhite (simple adjectives)
are not possible English words but
rebrighten, rewhiten (as in “rebrightens and rewhitens your wash”)
are; even more perspicuously, there is no English noun
pronounced with [s] (a rehouse??), but there is a verb
pronounced with a [z], built on the verb to house (as in “we must house the homeless,” “we must rehouse Katrina victims”)
Similarly, to unmove is semantically impossible, because un- attached to a verb has the meaning of reversal of the verbal action (“unlock the door”); but unmoved (“this performance left me unmoved”) is possible, because un- added to an adjective means not, and past/passive participles are adjectives (observe their position in “an unmoved audience”/“a hushed audience,” “the audience was unmoved/hushed” — it’s the same position as the adjective in “a quiet audience,” “the audience was quiet”).
These attachments are not Markovian chains. For this kind of relationship a machine of greater power is needed, a machine that can accept grammatical-categories-of-units as well as mere units. We can observe this machine at work in syntax as well as in morphology.
Aside from these two machine structures, the Markovian and the categorial, we find in language functions that operate on the units. We see this in phonology at the level of the phoneme with its allophones, and at the level of morphology with the allomorphs of the plural for nouns — [s,z,əz] — and past tense for verbs — [t,d,əd]. At the level of syntax, we find a movement function that operates on a relative pronoun in object position placing it at the head of the relative clause. (Using i to indicate the element moved and the position from which it is moved: “The book thati you gave me __i, was boring”).
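The plural allomorph rule is simple enough to state as a function over the noun’s final sound. A rough sketch, with simplified phone classes as my assumptions (and IPA symbols standing in for a real phonemic transcription):

```python
# Sketch of the plural allomorph rule [s, z, əz], keyed to the noun's
# final sound; the phone classes below are simplified assumptions.

SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}   # bus, buzz, bush, church, judge
VOICELESS = {"p", "t", "k", "f", "θ"}           # cap, cat, book, cliff, myth

def plural_allomorph(final_sound):
    if final_sound in SIBILANTS:
        return "əz"     # bus -> buses
    if final_sound in VOICELESS:
        return "s"      # cat -> cats
    return "z"          # voiced non-sibilant: dog -> dogs

print(plural_allomorph("t"))  # s   (cats)
print(plural_allomorph("g"))  # z   (dogs)
print(plural_allomorph("s"))  # əz  (buses)
```

The past-tense allomorphs [t,d,əd] follow the same logic with the alveolar stops [t,d] in place of the sibilants.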
So, we can conclude that the brain must accommodate the following:
A. REPRESENTATIONS of
1. phonemes (possibly abstract and underspecified)
2. morphemes encoded with their meaning, their grammatical category or category changing property in the case of derivational suffixes, their attachment preferences and their syntactic reflections
3. words with their meaning, their grammatical category and their combinatorial preferences (“sneeze” takes no object, “hit” takes one, “give” takes two)
B. SEQUENTIAL STRING STRUCTURES of
1. phonemes (strings of phonemes within morphemes)
2. morphemes (the sequence prefix-root-derivational suffix-inflection)
3. words (the sequence of words in the noun phrase: determiner-adjectives-noun)
C. HIERARCHICAL CATEGORICAL STRUCTURES of
2. morphemes (attachments, like re- and un-, conditioned by the grammatical category of the stem)
3. words (syntactic structures built on grammatical categories)
(we haven’t looked at any examples of long-distance relationships among phonemes)
D. FUNCTIONS (RULES) governing
1. phonemes (e.g., the allophones of a phoneme)
2. morphemes (e.g., the allomorphs of the plural [s,z,əz] and past tense [t,d,əd])
3. syntactic structures (e.g., movement of wh- words in relative clauses)
So the brain must store representations of units at three levels and must be able to handle sequential and hierarchical structures and functional rules. We’ve looked at the machine type that accepts sequences (finite automata) and the machine type that accepts category hierarchies (push-down automata). Transformational functions require a Turing Machine.
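The extra power a push-down machine adds over a finite automaton can be seen on the schematic language aⁿbⁿ, the skeletal shape of a nested (center-embedded) dependency: each opening element must be matched by a closing one. A single stack suffices, while a finite automaton cannot accept this language, since it would need a distinct state for every possible depth of nesting. A minimal sketch (the example language is my illustration, not from the text):

```python
# Push-down sketch: one stack suffices to accept the non-Markovian
# pattern a^n b^n (n >= 1), the schematic shape of nested dependencies.
# No finite automaton can accept this language.

def accepts_anbn(s):
    stack = []
    i = 0
    while i < len(s) and s[i] == "a":   # push one symbol per 'a'
        stack.append("a")
        i += 1
    while i < len(s) and s[i] == "b":   # pop one symbol per 'b'
        if not stack:
            return False                # more b's than a's
        stack.pop()
        i += 1
    # accept only if the whole string was consumed and every 'a' was matched
    return i == len(s) and not stack and len(s) > 0

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
```

A transformational (movement) function goes further still: it rewrites one structure into another rather than merely accepting strings, which is why the text places it at the Turing-machine level.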
That’s what’s in the brain: a finite automaton, a push-down machine and a Turing machine, each dedicated to some area of language structure.
All this generative brain ability we have figured out just from circumstantial evidence of speech behavior.
There remains the question, “Where in the brain are these located, how are the representations stored, and where or how are the structures and functions implemented?” Chomsky has no answer to this except his uncontroversial claim that they are stored in your brain somewhere or somehow, not in your big toe. Lieberman, Sidtis and Bever, among others, all have ideas about where these are to be found in the brain. They present them as anti-Chomskian, but since Chomsky has no views on location, this seems a bit unfair. It would be more accurate to describe their views as anti-Geschwindian or anti-classical: it’s not a simple story of Broca’s area for syntax and production, Wernicke’s area for semantics and reception. The revelations of generative grammar tell us what to look for, not where we’ll find it.