Psycholinguistics
Psycholinguistics or psychology of language is the study
of the psychological
and neurobiological
factors that enable humans
to acquire, use, comprehend, and produce language.
Initial forays into psycholinguistics were largely philosophical ventures, due
mainly to a lack of cohesive data on how the human brain functioned. Modern
research makes use of biology, neuroscience,
cognitive science, linguistics,
and information theory to study
how the brain processes language. There are a number of subdisciplines with
non-invasive techniques for studying the neurological workings of the brain;
for example, neurolinguistics has
become a field in its own right.
Psycholinguistics
covers the cognitive processes that make it possible to generate a grammatical
and meaningful sentence out of vocabulary
and grammatical
structures, as well as the processes that make it possible to
understand utterances, words, text, etc. Developmental
psycholinguistics studies children's ability to learn language.
Areas of study
Psycholinguistics
is an interdisciplinary field. Hence, it is studied by researchers from a
variety of different backgrounds, such as psychology,
cognitive science, linguistics,
and speech and language
pathology. Psycholinguists study many different topics, but these
topics can generally be divided into answering the following questions: (1) how
do children acquire language (language acquisition)? (2) how do people process
and comprehend language (language comprehension)? (3) how do people produce
language (language production)? and (4) how do adults acquire a new language
(second language acquisition)?
Subdivisions
in psycholinguistics are also made based on the different components that make
up human language.
Linguistics-related
areas:
- Phonetics and phonology are concerned with the study of speech sounds. Within psycholinguistics, research focuses on how the brain processes and understands these sounds.
- Morphology is the study of word structures, especially the relationships between related words (such as dog and dogs) and the formation of words based on rules (such as plural formation).
- Syntax is the study of the patterns which dictate how words are combined to form sentences.
- Semantics deals with the meaning of words and sentences. Where syntax is concerned with the formal structure of sentences, semantics deals with the actual meaning of sentences.
- Pragmatics is concerned with the role of context in the interpretation of meaning.
A
researcher interested in language comprehension may study word
recognition during reading to examine the
processes involved in the extraction of orthographic,
morphological, phonological,
and semantic
information from patterns in printed text. A researcher interested in language
production might study how words are prepared to be spoken starting from the
conceptual or semantic level. Developmental psycholinguists study infants'
and children's ability to learn and process language.[1]
Theories
In
this section, some influential theories are discussed for each of the
fundamental questions listed in the section above.
Language acquisition
There
are essentially two schools of thought as to how children acquire or learn
language, and there is still much debate as to which theory is the correct one.
The first theory states that all language must be learned by the child. The
second view states that the abstract system of language cannot be learned, but
that humans possess an innate language faculty, or an access to what has been
called universal grammar. The view
that language must be learned was especially popular before 1960 and is well
represented by the mentalistic theories of Jean
Piaget and the empiricist Rudolf
Carnap. Likewise, the school of psychology known as behaviorism
(see Verbal Behavior (1957) by B.F.
Skinner) puts forth the point of view that language is a behavior shaped
by conditioned response; hence, it is learned.
The
innatist perspective began with Noam
Chomsky's highly critical review of Skinner's book in 1959.[2]
This review helped to start what has been termed "the cognitive revolution" in
psychology. Chomsky posited that humans possess a special, innate ability for
language and that complex syntactic features, such as recursion,
are "hard-wired" in the brain. These abilities are thought to be
beyond the grasp of the most intelligent and social non-humans. According to
Chomsky, children acquiring a language have a vast search space to explore
among all possible human grammars, yet at the time there was no evidence that
children receive sufficient input to learn all the rules of their language (see
poverty of the stimulus).
Hence, there must be some other innate mechanism that endows a language ability
to humans. Such a language faculty is, according to the innatist theory, what
defines human language and makes it different from even the most sophisticated
forms of animal communication.
The fields of linguistics and psycholinguistics have since been defined by
reactions to Chomsky, pro and con. The pro view still holds that the human ability to use
language (specifically the ability to use recursion) is qualitatively different
from any sort of animal ability.[3]
This ability may have resulted from a favorable mutation or from an adaptation
of skills evolved for other purposes. The view that language can be learned has
had a recent resurgence inspired by emergentism.
This view challenges the "innate" view as scientifically
unfalsifiable; that is to say, it cannot be tested. With the amount of computer
power increasing since the 1980s, researchers have been able to simulate
language acquisition using neural network models.[4]
These models provide evidence that there may, in fact, be sufficient
information contained in the input to learn language, even syntax. If this is
true, then an innate mechanism is no longer necessary to explain language
acquisition.
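The neural-network finding above can be made concrete with a deliberately tiny sketch. The toy grammar, the single-layer softmax learner, and every parameter below are assumptions chosen for illustration only; actual simulations in this literature use far richer architectures and naturalistic input. The point is simply that the learner, given nothing innate about grammar, recovers distributional regularities from exposure alone:

```python
import math
import random

# A toy corpus generated by a tiny grammar: DET NOUN VERB.
# Grammar, vocabulary, and all parameters are illustrative assumptions.
random.seed(0)
dets, nouns, verbs = ["the", "a"], ["dog", "cat"], ["runs", "sleeps"]
corpus = [[random.choice(dets), random.choice(nouns), random.choice(verbs)]
          for _ in range(500)]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Single-layer softmax network: predict the next word from the current one.
W = [[0.0] * V for _ in range(V)]  # W[output word][input word]
lr = 0.1
for _ in range(30):                # epochs of exposure to the input
    for sent in corpus:
        for cur, nxt in zip(sent, sent[1:]):
            logits = [W[o][idx[cur]] for o in range(V)]
            m = max(logits)
            exps = [math.exp(l - m) for l in logits]
            z = sum(exps)
            probs = [e / z for e in exps]
            for o in range(V):     # cross-entropy gradient step
                target = 1.0 if o == idx[nxt] else 0.0
                W[o][idx[cur]] -= lr * (probs[o] - target)

def next_word_probs(word):
    """Network's probability distribution over the next word."""
    logits = [W[o][idx[word]] for o in range(V)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return {w: exps[idx[w]] / z for w in vocab}

p = next_word_probs("the")
# From input alone, the network has learned that nouns follow determiners.
assert p["dog"] + p["cat"] > 0.9
```

Nothing about "noun" or "determiner" was built in; the categories emerge from the statistics of the input, which is the shape of the emergentist argument.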
Language comprehension
One
question in the realm of language comprehension is how people understand
sentences as they read (also known as sentence processing).
Experimental research has spawned a number of theories about the architecture
and mechanisms of sentence comprehension. Typically these theories are
concerned with what types of information contained in the sentence the reader
can use to build meaning, and at what point in reading that information
becomes available to the reader. Issues such as "modular" versus
"interactive" processing have been theoretical divides in the field.
A
modular view of sentence processing assumes that the stages involved in reading
a sentence function independently in separate modules. These modules have
limited interaction with one another. For example, one influential theory of sentence
processing, the garden-path theory[5],
states that syntactic analysis takes place first. Under this theory, as the
reader is reading a sentence, he or she creates the simplest structure possible
in order to minimize effort and cognitive load. This is done without any input
from semantic analysis or context-dependent information. Hence, in the sentence
"The evidence examined by the lawyer turned out to be unreliable," by
the time the reader gets to the word "examined" he or she has
committed to a reading of the sentence in which the evidence is examining
something because it is the simplest parse. This commitment is made despite the
fact that it results in an implausible situation; we know from experience that
evidence can rarely, if ever, examine something. Under this "syntax
first" theory, semantic information is processed at a later stage. It is
only later that the reader will recognize that he or she needs to revise the
initial parse into one in which "the evidence" is being examined. In
this example, readers typically recognize their misparse by the time they reach
"by the lawyer" and must go back and re-parse the sentence.[6]
This reanalysis is costly and contributes to slower reading times.
In
contrast to a modular account, an interactive theory of sentence processing,
such as a constraint-based lexical approach,[7]
assumes that all available information contained within a sentence can be
processed at any time. Under an interactive account, for example, the semantics
of a sentence (such as plausibility) can come into play early on in order to
help determine the structure of a sentence. Hence, in the sentence above, the
reader would be able to make use of plausibility information in order to assume
that "the evidence" is being examined instead of doing the examining.
There are data to support both modular and interactive accounts; which account
is the correct one is still up for debate.
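The "simplest parse first, reanalyze at disambiguation" logic of the modular account can be caricatured in a few lines. This is a toy illustration only, not an implementation of the garden-path model: the verb list and the trigger rules below are invented for this single example sentence:

```python
# Toy caricature of "syntax-first" parsing (illustrative assumption, not
# the actual theory): commit to the simplest analysis, revise only when
# later input rules it out.
AMBIGUOUS_VERBS = {"examined"}  # readable as past tense or passive participle

def incremental_parse(words):
    analysis, reanalyses = None, 0
    for w in words:
        if w in AMBIGUOUS_VERBS and analysis is None:
            analysis = "main-verb"         # simplest parse, chosen first
        elif w == "by" and analysis == "main-verb":
            analysis = "reduced-relative"  # disambiguation forces reanalysis
            reanalyses += 1
    return analysis, reanalyses

sent = "the evidence examined by the lawyer turned out to be unreliable".split()
print(incremental_parse(sent))  # -> ('reduced-relative', 1)
```

The nonzero reanalysis count stands in for the measurable cost (slower reading times) that the garden-path theory predicts at the disambiguating region; an interactive account would instead let plausibility block the main-verb commitment from the start.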
Methodologies
Behavioral tasks
Many
of the experiments conducted in psycholinguistics, especially earlier on, are
behavioral in nature. In these types of studies, subjects are presented with
linguistic stimuli and asked to perform an action. For example, they may be
asked to make a judgment about a word (lexical decision), reproduce
the stimulus, or name a visually presented word aloud. Reaction times to
respond to the stimuli (usually on the order of milliseconds) and the proportion of
correct responses are the most often employed measures of performance in
behavioral tasks. Such experiments often take advantage of priming effects, whereby a
"priming" word or phrase appearing in the experiment can speed up the
lexical decision for a related "target" word later.[8]
As
an example of how behavioral methods can be used in psycholinguistics research,
Fischler (1977) investigated word encoding using the lexical decision task. She
asked participants to make decisions about whether two strings of letters were
English words. Sometimes the strings would be actual English words requiring a
"yes" response, and other times they would be nonwords requiring a
"no" response. A subset of the licit words were related semantically
(e.g., cat-dog) while others were unrelated (e.g., bread-stem). Fischler found
that related word pairs were responded to faster than unrelated word pairs.
This facilitation suggests that semantic relatedness can aid word encoding.[9]
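The logic of such a priming analysis can be sketched with hypothetical numbers. The reaction times below are invented for illustration and are not Fischler's data; only the direction of the effect mirrors the finding described above:

```python
from statistics import mean

# Hypothetical reaction times (ms) in a primed lexical decision task.
# These values are invented for illustration, not real experimental data.
related   = [512, 498, 505, 520, 489]   # e.g., cat -> DOG
unrelated = [561, 540, 555, 570, 549]   # e.g., bread -> STEM

# The priming effect is the mean RT difference between conditions:
# a positive value means related targets were responded to faster.
priming_effect = mean(unrelated) - mean(related)
print(f"Priming effect: {priming_effect:.0f} ms")
```

In practice, researchers would test such a difference statistically (e.g., with a t-test across subjects and items) rather than comparing raw means.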
Eye-movements
Recently,
eye
tracking has been used to study online language processing.
Beginning with Rayner (1978),[10] eye movements during reading were
established as an important and informative measure. Later, Tanenhaus et al. (1995)[11]
used the visual-world paradigm to study the cognitive processes related to
spoken language. Assuming that eye movements are closely linked to the current
focus of attention, language processing can be studied by monitoring eye
movements while a subject is presented auditorily with linguistic input.
Language production errors
The analysis of systematic errors in the speech, writing, and typing of
language as it is produced can provide evidence of the processes that
generated it.
Neuroimaging
Until
the recent advent of non-invasive medical
techniques, brain surgery was the preferred way for language researchers to
discover how language works in the brain. For example, severing the corpus
callosum (the bundle of nerves that connects the two hemispheres of
the brain) was at one time a treatment for some forms of epilepsy.
Researchers could then study the ways in which the comprehension and production
of language were affected by such drastic surgery. Where an illness made brain
surgery necessary, language researchers had an opportunity to pursue their
research.
Non-invasive techniques now include brain imaging by positron emission tomography
(PET); functional magnetic
resonance imaging (fMRI); event-related potentials
(ERPs) in electroencephalography (EEG)
and magnetoencephalography (MEG);
and transcranial magnetic
stimulation (TMS). Brain imaging techniques vary in their spatial
and temporal resolutions (fMRI has a spatial resolution of a few thousand neurons per
voxel, and ERP has millisecond temporal accuracy). Each type of methodology presents a
set of advantages and disadvantages for studying a particular problem in
psycholinguistics.
Computational modeling
Computational
modeling—e.g. the DRC model of reading and word recognition proposed by
Coltheart and colleagues[12]—is
another methodology. It refers to the practice of setting up cognitive models
in the form of executable computer programs. Such programs are useful because
they require theorists to be explicit in their hypotheses and because they can
be used to generate accurate predictions for theoretical models that are so complex
that they render discursive analysis
unreliable. Another example of computational modeling is McClelland
and Elman's
TRACE model of speech
perception.[13]
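To make the idea of an executable cognitive model concrete, here is a minimal interactive-activation sketch in the spirit of (but far simpler than) TRACE. The lexicon, the phoneme coding, and the excitation and inhibition parameters are all invented for illustration:

```python
# Minimal interactive-activation sketch (illustrative, not a TRACE
# reimplementation): phonemes excite matching word units, and word
# units compete with one another via lateral inhibition.
words = {"cat": ["k", "a", "t"], "cap": ["k", "a", "p"], "dog": ["d", "o", "g"]}
act = {w: 0.0 for w in words}  # activation level of each word unit

def step(phoneme, pos, excite=0.2, inhibit=0.05):
    # Bottom-up support: words matching the phoneme at this position gain activation.
    for w, phs in words.items():
        if pos < len(phs) and phs[pos] == phoneme:
            act[w] += excite
    # Lateral inhibition: each word is suppressed by its competitors' total activation.
    total = sum(act.values())
    for w in words:
        act[w] = max(0.0, act[w] - inhibit * (total - act[w]))

for pos, ph in enumerate(["k", "a", "t"]):  # hear "cat" one phoneme at a time
    step(ph, pos)

best = max(act, key=act.get)
print(best, act)  # "cat" wins; "cap" stays partially active as a competitor
```

Running the model makes its hypotheses explicit and testable: for instance, it predicts that the cohort competitor "cap" remains partially active midway through the word, the kind of graded prediction that visual-world eye-tracking studies can evaluate.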