Müller, Horst M., & Kutas, Marta (1996). What's in a name? Electrophysiological differences between spoken nouns, proper names, and one's own name. NeuroReport 8, 221-225.

What's in a name? Electrophysiological differences between spoken nouns, proper names, and one's own name

Horst M. Müller 1,3,CA and Marta Kutas1,2
Departments of 1Cognitive Science and 2Neurosciences, University of California at San Diego,
9500 Gilman Drive, La Jolla, CA 92093-0515, USA;

3AG Experimentelle Neurolinguistik, Fakultät für Linguistik und Literaturwissenschaft,
Universität Bielefeld, Postfach 100131, 33501 Bielefeld, Germany

3,CA Corresponding Author and Address

Running title:  What's in a name?

Key words: 

Auditory; Spoken language processing; Event-related potential (ERP); Proper names; Nouns; Lexicon; Word categories

Correspondence address:

Horst M. Müller
Universität Bielefeld
Fakultät für Linguistik und Literaturwissenschaft
AG Experimentelle Neurolinguistik
Postfach 100131
33501 Bielefeld

e-mail: mueller@cogsci.ucsd.edu

Tel:   +49-521-106-6928
Fax: +49-521-106-2996



To investigate the neural processing of different word categories, we recorded event-related potentials (ERPs) from 32 individuals listening to sentences beginning with either a proper name (first name), the subject's own name, or a common noun. Names and nouns both elicited ERP waveforms with the same early componentry, but the N1 and P2 components were larger for proper names than for common nouns. The ERPs to the subject's own name also had a large N1/P2 plus a prominent negativity at parieto-central sites peaking around 400 ms and a late positivity between 500 and 800 ms over left lateral-frontal sites. These findings are consistent with differential processing of people's first names within the category of nouns.



A major issue in cognitive science concerns how phonological, lexical, and conceptual knowledge about words is represented in the human brain. Clues to the relevant linguistic categories have been sought in patients with brain lesions, and in processing differences between various word categories measured with positron emission tomography (PET) and electrical brain activity. One useful tool in studying the physiology of language comprehension, including the processing of word categories, is the analysis of event-related brain potentials (ERPs) (e.g. ref. 1).

In the present study we compare the comprehension of different lexical subclasses occurring as first words in naturally spoken sentences. From a linguistic point of view one might expect differential processing even within a single grammatical category like nouns, e.g. common nouns versus proper names. The class of nomina has been divided into common nouns (nomina appellativa), like handkerchief, table or desk, and proper names (nomina propria), like Peter, Baxter or Rocky Mountains. This classification has found support in linguistics2 and in the philosophy of language.3 The grammatical analysis of proper names, which can be traced to Plato's Cratylus, started with the Stoic grammarians, who introduced a distinct linguistic category for proper names: "Onoma", which Dionysius subsequently differentiated into three: "name", "noun", and "subject". This linguistic classification remained unchanged for almost 2000 years.4 Within the last few decades the special characteristics of proper names and their potentially unique role in cognition have been well articulated by the discipline of onomastics.5

Findings from neuropsychology and biology also support a distinction between proper names (PN) and common nouns (CN). For example, there are reports of patients with specific brain damage (aphasics) who are selectively impaired in their ability to use either PNs6,7 or CNs, or in some cases even very specific categories of nouns, such as tools or fruits, while other categories remain relatively intact.8 Nonetheless, it remains unclear whether there exists a biological basis for this distinction between PNs and CNs. It was to that end that we compared brain responses to spoken sentences starting with either people's first names or a common noun.


Subjects and Methods

Subjects: Thirty-two UCSD students (12 women and 20 men, 23.5 ± 3.9 years) received either course credit or $5 per hour for participating. All participants were native English speakers; 29 were right-handed and three left-handed according to the Edinburgh Handedness Inventory. All participants had normal hearing with no threshold differences between the ears, as tested by air-conduction pure-tone audiometry.

Procedure: Participants were seated in a recliner in a sound-proofed booth and looked at one of several fixation points against a dark background. After a short practice run, sentences were presented over speakers with an inter-sentence interval of about 3.5 s. While their EEG was being recorded, participants listened to a total of 216 sentences with different syntactic structures; of these, a random 44 began with either a common noun (CN) or a proper name (PN), and one began with the subject's own name (ON).

Stimuli: All sentences were spoken in a young man's voice at a normal speaking rate, with normal pitch and intonation. After digitizing (16 bit, 44.1 kHz) and computer editing, the sentences were presented via computer, hi-fi amplifier, and two loudspeakers placed 220 cm in front of the listeners. Stimulus intensity ranged between 50 and 55 dB SPL, corresponding to a relatively quiet conversation. The articulatory duration of the CNs was 404 ± 109 ms, of the PNs 285 ± 91 ms, and of the ON 497 ± 106 ms. To test for a possible influence of extreme kinds of intonation, the word help was presented twice: once as a common noun at the beginning of a sentence (length = 149 ms), and once as a single exclamation HELP! (194 ms), spoken more loudly, with greater stress, and with more affect.

ERP recording and analysis: Recordings were made from seventeen tin electrodes embedded in an elastic cap at F3, F4, F7, F8, Fz, Cz, Pz, T5, T6, O1, O2, and at six sites bilaterally approximately over Broca's area, Wernicke's area, and auditory cortex. These were referred to a balanced non-cephalic reference placed at the sterno-clavicular junction and on top of the seventh cervical vertebra. The electrooculogram was recorded using four electrodes placed supraorbitally and laterally around both eyes. The EEG was filtered with a bandpass of 0.01-100 Hz, with a time constant of 8 s, and continuously digitized at a rate of 250 Hz. For analysis, 10.6% of the data were rejected due to artifacts. The rejection rate was 15.6% for the ON, 3.1% for the word help, and 9.4% for the shouted HELP!.
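The passband and digitization described above can be mimicked offline with a zero-phase digital filter. The following is a minimal sketch, not the authors' pipeline; the function `bandpass_eeg` and its defaults are our own, with the parameters taken from the text:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_eeg(eeg, fs=250.0, low=0.01, high=100.0, order=2):
    """Zero-phase Butterworth band-pass applied to a 1-D EEG trace,
    mirroring the 0.01-100 Hz passband and 250 Hz sampling rate above.
    (Hypothetical helper, for illustration only.)"""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    # sosfiltfilt filters forward and backward, so no phase distortion
    return sosfiltfilt(sos, eeg)
```

A 10 Hz component, well inside the passband, emerges essentially unchanged, while very slow drifts and frequencies above 100 Hz are attenuated.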

Analyses of variance (ANOVAs) were performed on the mean ERP amplitudes in a 20 ms time window around 125 ms (N1) and around 225 ms (P2), with the within-subject variables of word category (PN vs. CN) and electrode (17 sites). The N1, P2 and N400 data were analysed in two ways, with a subject and an item analysis. Violations of sphericity were adjusted with the Huynh-Feldt correction.

For the subject's own name, mean amplitude values in a 20 ms time window around 400 ms were also submitted to Student's t-test.
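The dependent measure in these tests, the mean amplitude in a 20 ms window centred on a component peak, can be sketched as follows (a minimal illustration with a hypothetical `mean_amplitude` helper, not the authors' analysis code):

```python
import numpy as np

def mean_amplitude(epoch, srate, center_ms, half_width_ms=10.0):
    """Mean voltage of a single-channel ERP epoch in a window of
    2*half_width_ms centred on center_ms; sample 0 = word onset."""
    lo = int(round((center_ms - half_width_ms) / 1000.0 * srate))
    hi = int(round((center_ms + half_width_ms) / 1000.0 * srate))
    return float(np.mean(epoch[lo:hi + 1]))

# e.g. the N1 measure at 250 Hz: mean_amplitude(epoch, 250, 125)
# and the P2 measure:            mean_amplitude(epoch, 250, 225)
```

One such value per subject (or per item) and condition would then enter the ANOVAs described above.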



Our results reveal an amplitude difference in the ERPs to the first words of naturally spoken sentences, as a function of whether they began with a PN or a CN, as early as 125 ms after the onset of articulation. As can be seen in Figure 1, both the N1 and P2 components are larger for the proper names. The P2 amplitude was reliably larger for names than for nouns, both in the subject analysis [word type x electrode interaction, F(1,16) = 2.95, p < 0.024] and in the item analysis [word type x electrode interaction, F(1,16) = 19.56, p < 0.001]. The larger P2 for names was evident at all but the lateral temporal (T5/T6) and occipital (O1/O2) sites.

The amplitude difference for the N1 was significant in the item analysis [word type x electrode interaction, F(1,16) = 3.87, p < 0.032]. In the subject analysis the difference in N1 amplitude showed only a tendency in the word type x electrode interaction [F(1,16) = 1.51, p < 0.19]. Looking at the central sites, where auditory ERPs are largest, we find a reliable difference between names and nouns, e.g. at the vertex (p < 0.05). The ERPs to the two word types did not differ from each other in latency.

FIG. 1. Grand average ERPs (N = 32) at the vertex (Cz) elicited by 22 different sentences beginning with a proper name (dotted line) and 22 beginning with a common noun (solid line). In this and all following figures, negative is plotted up.
In principle, proper names and common nouns could differ systematically along a number of dimensions, any of which could be the cause of an ERP difference between them. To test the possible effects of articulatory differences in loudness and pronunciation on the ERPs, we contrasted the responses to the word help spoken normally and shouted. Similarly, we contrasted the ERPs to proper names starting with plosives (e.g. Paul) versus fricatives (e.g. Fred). Neither comparison revealed any reliable effect on the shape, amplitude, or latency of the N1 or P2 component (Figure 2).

FIG. 2. Grand average ERPs at the vertex (Cz) to the naturally spoken versus shouted word "help" on the left, and to proper names starting with a plosive versus a fricative phoneme on the right.

Like other names, the subject's own name (ON) elicited a large N1 relative to common nouns. In addition, the ERP to the ON contained a prominent negativity over centro-parietal sites, peaking around 400 ms (Figure 3), and a later positivity largest over left fronto-temporal sites (Figure 4).

FIG. 3. A) Compared at a parieto-central site (Pz) are the first 900 ms of the ERPs elicited by sentences beginning with the subject's first name (bold line) versus the average across 22 sentences beginning with 22 different proper names (dotted line). B) Same as in A, except that the proper names are shown not as an average but as separate ERPs to the 22 different proper names (one solid line per name) versus the ERP to the sentence beginning with the subject's own name (bold line).

FIG. 4. ERPs elicited by the subject's own name (dashed lines) and other names (solid lines) show a hemispheric asymmetry for a late positivity (hatched), comparing lateral frontal sites (F7 vs. F8).

Because each subject's own name was presented only once in a recording session, whereas subjects were exposed to 22 other proper names, one concern may be differences in signal-to-noise ratio. We therefore analyzed the data in two different ways, and both showed essentially no overlap between the ERPs to the subject's own name and other proper names in the region of the N400. In one analysis, using one-tailed t-tests on independent measures, the mean amplitude at Pz between 390 and 410 ms for the grand average across 27 ONs (5 lost due to artifacts) was contrasted with the same measure for the grand average of each of 10 different permutations of 27 randomly chosen proper names (one for each subject); the p-values ranged between 0.075 and 0.0003, mean p < 0.016. In a second analysis, using one-tailed independent t-tests, we compared this measure for the grand average across 27 ONs against the grand average of each of the 22 different proper names; the p-values ranged between 0.0168 and 0.0003, mean p < 0.0047. Finally, we also compared this measure for the various proper names, one at a time. Of the 40 different contrasts, only one pair (Jill vs. Mary) differed significantly. Thus the responses to the different names did not reliably differ from each other, while they were markedly different from the response to each subject's own name.
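The first of these analyses, contrasting the own-name measure against repeated random draws of other proper names, can be sketched as follows. This is illustrative only; `perm_contrast` and the toy amplitudes in the comment are our own, not the authors' data or code:

```python
import numpy as np

def perm_contrast(on_amps, pn_amps, n_sets=10, set_size=None, seed=0):
    """For each of n_sets random draws from the pool of other-name
    amplitudes (mirroring the 10 permutations of 27 randomly chosen
    proper names), return mean(own-name) - mean(drawn set)."""
    rng = np.random.default_rng(seed)
    set_size = set_size or len(on_amps)  # default: match the ON sample size
    return np.array([
        on_amps.mean() - rng.choice(pn_amps, set_size, replace=False).mean()
        for _ in range(n_sets)
    ])

# e.g. perm_contrast(on_amps, pn_amps) with 27 own-name amplitudes
# and a pool of other-name amplitudes yields 10 mean differences
```

Each draw's measure would then be submitted to an independent one-tailed t-test against the own-name measure, as described above.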



The ERPs to the first words of spoken sentences were essentially identical in shape whether they began with a proper name or a common noun, indicating a basic equivalence of the two in this experiment. The articulatory durations of our PN and CN stimuli ranged from 169 to 535 ms and from 212 to 658 ms, respectively. Nonetheless, the ERPs are quite similar in morphology, with the exception of the larger N1/P2 amplitudes.

The primary differences in peak amplitudes occurred at 125 and 225 ms; at these points only one or two phonemes of any given word have been articulated. To test the possibility that there might have been some noticeable physical difference between names and nouns, we conducted a behavioral gating study. Participants were, in fact, above chance in guessing whether a stimulus was a common noun or a proper name after listening to only its first 120 ms. Guessing performance was even higher when listeners were exposed to the first 200 ms. This makes our finding of early ERP differences during sentence comprehension less surprising. Previous gating studies have likewise concluded that listeners need to hear only 200-250 ms of a word to repeat it ("shadowing").9 With two phonemes recognized, the number of English words compatible with the input is reduced to approximately 40.9 However, we can assume that coarticulatory and prosodic features provide even more constraints on what the actual word is. In this study we find that the initial 120 ms of English words provide sufficient information to reveal whether a sound is the beginning of a noun or a name.

As shown in Figure 1, words of very different lengths elicit almost identical ERP wave shapes, indicating that merely looking at a word's duration and physical length is not the right way to determine its recognition point. Any given spoken word is composed of linguistic entities which define that word. In principle, the recognition points for these various sub-word entities could be used to time-lock responses for ERP averaging. However, there is no consensus as to what the fundamental linguistic entity in comprehending speech is. Among the alternatives that have been proposed are the whole word (e.g. logogens), context-sensitive spectra, acoustic-phonetic sound sequences, phonemes, phonemic features, phones and allophones, specific engrams, phonemic representations, and syllables.10 Various hypotheses implicate specific feature patterns in word recognition and specific time points at which recognition takes place. Whatever the basic unit, our findings demonstrate that word comprehension is not critically dependent on actual word length. One practical consequence is that it may not always be necessary to match the articulatory lengths of words used as stimuli in speech comprehension experiments.

One explanation for the early N1/P2 differences in the responses to CNs and PNs may be a physical difference in articulation. For example, in anticipation of the first word in a sentence, a speaker might unintentionally provide cues, via intonation for instance, as to what the subject of the sentence is; these may differ for common nouns and proper names. Another possibility is a difference in their inherent phonetic features. It is known that first names reflect acoustic and aesthetic concerns and that certain phoneme combinations are chosen for onomatopoeic reasons. For example, the phoneme [X] is found more often in CNs than in PNs.11 There are also differences in intonation, as in Turkish, where the endings of CNs are stressed more often than those of place names.11

The lack of a difference between the word "help" normally spoken versus shouted indicates that the differences in N1/P2 amplitudes are not simply due to loudness or emphasis. Thus, it seems unlikely that an attention-related effect due to an unexpected change of intonation could account for the observed N1/P2 difference. Likewise, we find that the effect was not due to a simple physical difference in the initial phoneme (plosive vs. fricative), as a within-category division of PNs on this basis did not yield a reliable N1/P2 difference. This suggests to us that the ERP differences we observed were based on word category membership, even if the word category may have been correlated with different inherent or speaker-imposed phonetic features.

Proper names vs. the subject's own name: Because of its importance and frequency of occurrence in daily life, one's own name can be seen as one of the most over-trained linguistic expressions. A slightly faster reaction to this stimulus is to be expected and may be the cause of the earlier N1. The ON is the most ideal proper name, as no other PN provides a greater absence of conceptual meaning for a listener. Without further experimentation it is unclear what the functional significance of the prominent negative peak at 400 ms is. The late left hemisphere positivity may reflect the subject's surprise at hearing their own name within the experiment. In an auditory oddball paradigm, the subject's own name did elicit a large P300 compared with other words.12 In another study, wherein people read sentences such as "My name is ...." one word at a time, assumed names did not elicit an N380, whereas both false (unexpected) names and their own did.13 These results were discussed in the context of N400s to semantically unexpected sentence endings.

Interdisciplinary context: These electrophysiological differences between PNs and CNs are consistent with findings in other disciplines. In linguistics, CNs like "table" or "desk" have a meaning which is assumed to be more or less the same across individuals ("vagueness of language"); each CN stands for a stereotypic concept. Any given object can be a more or less typical member of such a concept and may possess features belonging to two or more concepts; for example, a given object may have some features of both the "table" and the "desk" concept. By contrast, PNs have no conceptual meaning; they are only paired associates. It is not possible to be more or less a "Peter". According to Frege,3 PNs have meaning only as "reference" but not as "sense", which excludes the existence of any attribute or semantic connotation. Of course, for any given speaker there are exceptional PNs that may assume a kind of meaning (e.g. the Judas of our group, or the Mother Theresa of our city). The distinction between PNs and CNs is still under debate.

Within biology, the use of unique signals to recognize individuals of the same species has an evolutionary advantage and is not restricted to humans. The use of names, albeit not necessarily acoustic ones, is widespread in animals,14 for courtship, rearing of offspring, and so on. In a bird colony, for example, a returning parent must identify its chick among thousands of others by auditory and visual features, even if the chick has changed location. In this example the acoustic signal can be taken as the name of the individual. Because it has an evolutionary advantage, signal use for identifying particular individuals has a longer phylogenetic history than language. Even though humans have replaced individual signals with linguistic ones, there might remain a physiological difference in processing signals that stand for particular individuals as opposed to those that stand for categories of objects.

There is also neuropsychological evidence for processing- and memory-related differences between various word categories. Findings with aphasic patients led Caramazza and Hillis15 to argue for different processing of nouns and verbs. Also, proper names are especially vulnerable to memory problems, such as recalling a familiar name or learning a new one.16 While recalling familiar names is disproportionately impaired in the elderly, individuals of all ages routinely may recognize a person and recall his or her occupation but not the name. The tip-of-the-tongue phenomenon, where semantic information is almost accessible but phonological information is not, is more common for PNs than for CNs, at least in the elderly.16 Furthermore, after one encounter with a person, their PN proves more difficult to remember than other biographical information about them.17 Even when a surname is used as both a PN and a CN (e.g. Baker), it is harder to recall when it serves as a name (John Baker).18 Some aphasic individuals show very selective impairments in retrieving certain groups of PNs, e.g. states or persons,6,7,19 or certain groups of CNs, e.g. tools or fruits.8,20 Other studies show evidence of distinct anatomical loci for such subcategories of CNs.15,19,21,22



There is evidence for EEG differences for higher-level syntactic structures like phrases,23-25 for open-class vs. closed-class words,1,26,27 for nouns vs. verbs,28 as well as for abstract vs. concrete nouns.29 Nonetheless, it is not yet clear what the relevant word categories are in terms of neural processing and localization. Our findings support the proposal that proper names, especially people's own names, are processed in some sense differently than common nouns. Whether our ERP data reflect a difference between names and nouns in the extent to which they capture attention, arouse an emotion, evoke a memory, or reside in different anatomical loci, they offer a physiological grounding for proposed linguistic and evolutionary distinctions between proper names and common nouns.



The research was supported by the DFG (Mu 797/2) to H.M.M. and by the NICHD (HD22614) and NIA (AG08313) to M.K.



1. Kutas M and Van Petten CK. Psycholinguistics electrified: event-related brain potential investigations. In: Gernsbacher MA, ed. Handbook of Psycholinguistics. San Diego: Academic Press, 1994: 83-143.

2. Carroll JM. Linguistics 21, 341-371 (1983).

3. Frege G. On sense and nominatum. In: Feigl H and Sellars W, eds. Readings in Philosophical Analysis. New York: Appleton Century Crofts, 1949: 85-102.

4. Algeo J. On Defining the Proper Name. Gainesville: University of Florida Press, 1973.

5. Eichler E, Hilty G, Löffler H et al. eds. Namenforschung: Ein internationales Handbuch zur Onomastik, Berlin: de Gruyter, 1995.

6. Hittmair-Delazer M, Denes G, Semenza C et al. Neuropsychologia 32, 465-476 (1994).

7. Lucchelli F and De Renzi E. Cortex 28, 221-230 (1992).

8. McNeil JE, Cipolotti L and Warrington EK. Neuropsychologia 32, 193-208 (1994).

9. Marslen-Wilson WD. Cognition 25, 71-102 (1987).

10. Kent RD. Auditory processing of speech. In: Katz J, Stecker NA and Henderson D, eds. Central Auditory Processing. St. Louis: Mosby, 1992: 93-103.

11. Mangold M. Phonologie der Namen: Aussprache. In: Eichler E, Hilty G, Löffler H et al, eds. Namenforschung: Ein internationales Handbuch zur Onomastik. Berlin: de Gruyter, 1995: 409-414.

12. Berlad I and Pratt H. Electroencephalogr Clin Neurophysiol 96, 472-474 (1995).

13. Fischler I, Jin YS, Boaz TL et al. Brain Lang 30, 245-262 (1987).

14. Hediger H. Experientia 32, 1357-1488 (1976).

15. Caramazza A and Hillis AE. Nature 349, 788-790 (1991).

16. Cohen G and Burke DM. Memory 1, 249-263 (1993).

17. Cohen G and Faulkner D. Brit J Develop Psychol 4, 187-197 (1986).

18. Brennen T. Memory 1, 409-431 (1993).

19. Semenza C and Zettin M. Nature 342, 678-679 (1989).

20. Warrington EK and McCarthy RA. Brain 110, 1273-1296 (1987).

21. Damasio AR and Tranel D. Proc Natl Acad Sci USA 90, 4957-4960 (1993).

22. Damasio H, Grabowski TJ, Tranel D et al. Nature 380, 499-505 (1996).

23. Friederici AD, Pfeifer E and Hahne A. Cogn Brain Res 1, 183-192 (1993).

24. Connolly JF and Phillips NA. J Cogn Neurosci 6, 256-266 (1994).

25. Müller HM, King JW and Kutas M. Cogn Brain Res (accepted).

26. Neville HJ, Mills DL and Lawson DS. Cerebral Cortex 2, 244-258 (1992).

27. Pulvermüller F, Lutzenberger W and Birbaumer N. Electroencephalogr Clin Neurophysiol. 94, 357-370 (1995).

28. Brown WS, Lehmann D and Marsh JT. Brain Lang 11, 340-353 (1980).

29. Weiss S and Rappelsberger P. Neurosci Lett 209, 17-20 (1996).