FIFTY YEARS AGO, if you met a linguist at a dinner party and asked him how he did his research, his answer would have involved technology no more sophisticated than a good notebook, a tape recorder, and a No. 2 pencil.

These days, if you run into a University of Chicago linguist and ask the same question, you are likely to get a rather different answer. Responses might involve tools like high-speed cameras, eye trackers, visual recognition technology, and EEG machines.

This is not your grandfather’s linguistics.

A generation ago, linguists who wanted to get at questions of meaning, usage, and linguistic change generally had one-on-one interactions with their subjects, and asked them for judgments about words, sounds, and meanings. UChicago linguists were especially known for analyzing patterns of language change and recognized for their detailed studies of foreign languages.

While linguists remain preoccupied with grammar, meaning, social and historical change, and language acquisition, the discipline has grown to encompass ideas from computer science and cognitive psychology, giving rise to new methodologies such as experimental and computational linguistics.

At Chicago, several members of the Department of Linguistics faculty have incorporated the field’s latest approaches—and tools—into their research. Computational linguists like John Goldsmith, Edward Carson Waller Distinguished Service Professor in Linguistics and Computer Science, and Gregory Kobele, Neubauer Family Assistant Professor in Linguistics, use computer programs to model different components of human language. Phonologist Alan Yu, associate professor in linguistics, runs experiments that investigate how sounds change over time. Professor Anastasia Giannakidou, a semanticist, collaborates with faculty in psychology to study sentence building in sign languages.

“Traditional methods are still the core of the department, but we’re adding on these other elements in a complementary way,” says Professor Chris Kennedy, the Linguistics Department chair. “I do see it as enhancement of what we’ve already got. You want to build on traditional strengths.”

The P-600 Effect

The so-called generative revolution spearheaded by Noam Chomsky in the 1950s and 60s popularized the notion that grammar is, at a deep level, algorithmic, explains Jason Riggle, assistant professor in linguistics. “The idea is that there is a finite recipe or set of rules that generates the infinite complexity we see,” he says.

“That means the system that we ultimately want to explain is not the one that is English or Cantonese—it’s the one that underlies English or Cantonese,” Kennedy adds.

Today, linguists continue to look across many languages in hopes of discovering the complicated processes by which language is implemented in and executed by the brain. Computational linguists like Goldsmith, for example, have used computer programs to simulate the ways the brain can learn language. Goldsmith has developed a program called Linguistica that can take a sample of any language and learn its morphological structure. Experimental linguists like Ming Xiang, an assistant professor in linguistics, favor other strategies such as monitoring the brain’s electrical activity with an EEG machine.
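
Linguistica’s real machinery rests on minimum description length criteria and is considerably more elaborate, but a toy Python sketch can convey the flavor of this kind of unsupervised morphology learning. Given nothing but a word list, it looks for stems that recur with several different endings, grouped into what Goldsmith calls “signatures.” Every name and threshold below is illustrative, not Linguistica’s actual code.

```python
from collections import defaultdict

def candidate_splits(word, min_stem=3):
    """Yield every (stem, suffix) split of a word, including the empty suffix."""
    for i in range(min_stem, len(word) + 1):
        yield word[:i], word[i:]

def learn_signatures(words, min_suffixes=2):
    """Group stems by the set of suffixes they appear with.

    A "signature" like (-e, -ed, -ing, -s) shared by many stems is
    weak evidence that those endings are real morphemes of the language.
    This is only a sketch of the idea, not Goldsmith's algorithm.
    """
    stem_to_suffixes = defaultdict(set)
    for word in set(words):
        for stem, suffix in candidate_splits(word):
            stem_to_suffixes[stem].add(suffix)

    signatures = defaultdict(set)
    for stem, suffixes in stem_to_suffixes.items():
        if len(suffixes) >= min_suffixes:
            signatures[tuple(sorted(suffixes))].add(stem)
    return signatures

sample = ["jump", "jumps", "jumped", "jumping",
          "walk", "walks", "walked", "walking", "linguist"]
for sig, stems in learn_signatures(sample).items():
    if len(stems) > 1:  # keep only signatures shared by two or more stems
        print(sorted(stems), "->", sig)
```

Run on the tiny sample above, the sketch groups "jump" and "walk" under the shared signature ('', 'ed', 'ing', 's'), which is exactly the sort of regularity a morphology learner wants to surface from raw text.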

Xiang, who received her PhD from Michigan State University and did postdoctoral research at Harvard and the University of Maryland, trained in theoretical linguistics before turning to experimental methods. She’s grown especially interested in how the brain processes sentences. Her research has unlocked some surprising truths about commonalities among languages that on the surface seem quite different.

Some of her recent research focuses on wh-questions, the questions built around interrogative words like who, what, when, where, and which. In English, wh-words are in a sentence-initial position, which means they are generally far away from their verb ("What did he buy?"). In Chinese, the word order is the same as in the corresponding non-question, keeping wh-words close to their verb ("He bought what?"). Because of the difference in word order, one might assume that English speakers and Chinese speakers would use different strategies to process these sentences. But Xiang believes this is not the case.

Using an EEG machine, which measures electrical signals generated by neurons firing in the brain, Xiang studied how Chinese speakers responded to wh-words. She focused on P-600, a burst of electrical activity in the brain that has been shown to correlate with the association of wh-words and their verbs in English.

Xiang presented her Chinese-speaking subjects with two sentences, one with a wh-word and one without. In both sentences, because of the Chinese word order, the verb and its object were right next to each other (for example, “John wondered Mary went to see which pop star” vs. “John thought Mary went to see that pop star”). The sentences containing wh-words caused a much larger P-600 effect—even though the verb was close to its object in both sentences.
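
In practice, an effect like this falls out of simple averaging over many trials. Here is a rough numpy sketch of the comparison, with invented array shapes, random placeholder data, and a nominal 500–800 ms window standing in for the P-600; a real ERP pipeline would add filtering, artifact rejection, and proper statistics.

```python
import numpy as np

# Hypothetical EEG data: trials x timepoints, epoched from stimulus
# onset at 500 Hz, in microvolts. One array per sentence condition.
SFREQ = 500                                # samples per second (assumed)
wh_trials = np.random.randn(80, 600)       # sentences with a wh-word
control_trials = np.random.randn(80, 600)  # matched sentences without one

def erp(trials):
    """Average over trials to get the event-related potential."""
    return trials.mean(axis=0)

def mean_amplitude(waveform, start_ms, end_ms, sfreq=SFREQ):
    """Mean voltage inside a time window, e.g. 500-800 ms for the P-600."""
    lo = int(start_ms / 1000 * sfreq)
    hi = int(end_ms / 1000 * sfreq)
    return waveform[lo:hi].mean()

p600_wh = mean_amplitude(erp(wh_trials), 500, 800)
p600_control = mean_amplitude(erp(control_trials), 500, 800)
print(f"P-600 effect (wh minus control): {p600_wh - p600_control:.2f} uV")
```

A reliably larger amplitude in the wh-word condition, averaged across subjects and trials, is the kind of difference the experiment measures.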


This suggests that the brain processes the Chinese and English sentences containing wh-words in a similar way, despite the difference in word order. In other words, there seems to be a similar mental strategy for comprehension that extends across languages.

The discovery of such cross-linguistic mental processes is crucial, Xiang says. “The fundamental similarities are what linguistics is going after. What are the basic, primitive units that can combine to produce all the languages?”

For Xiang, it’s satisfying to observe and measure phenomena she once studied through a more theoretical lens. Still, “I feel fortunate I came from a traditional training, because without that foundation, I couldn’t explore how things are implemented in the brain.”

Fingerspelling Bee

Like Xiang, Jason Riggle is trying to reveal the hidden similarities in languages that seem to make different demands on users. With colleagues at the Toyota Technological Institute at Chicago and Purdue University, he’s turned his attention to American Sign Language (ASL). Riggle, a graduate of UCLA and director of the Chicago Language Modeling Lab, hopes that his research on ASL can shed light on the process behind language change.

In a pilot study, Riggle brought native signers—including a third-generation deaf signer—into his lab for a marathon fingerspelling session. (In ASL, fingerspelling is used for emphasis and for words that do not have a sign.) The subjects sat before a high-speed camera, spelling a series of words that flashed onto a screen in front of them.

When Riggle’s team reviewed the six hours of footage they had collected, they were immediately struck by the errors the signers made. Because both typing and ASL involve the fingers, they had expected to see the kinds of errors commonly observed in typing—reversing or omitting letters, for instance. Instead, Riggle says, they saw anomalies more commonly observed in speech.

In speech, we are constantly readjusting our tongue and teeth in order to speak quickly and fluidly. As we articulate the word “warmth,” for instance, we very often unconsciously produce a puff of air that sounds like a “p” between the “m” and “th.” This “p” is the accidental result of preparing to articulate the next sound in the word. In linguistics, this is known as “coarticulation.”

Riggle observed the same phenomenon among the signers he studied. For example, when they encountered a word with an “i”—a letter articulated with the pinky finger, which is used relatively rarely in ASL—the signers’ pinkies would begin to drift up. When asked to spell an unfamiliar word like “Felkelni,” a town in Ireland, one of the signers rendered it “Felkeklini.”

This might look like a careless mistake, but in a larger sense “the hand is doing a smart thing,” Riggle says. “It’s getting the pinky up because it’s going to be needed later. So the things we’re calling ‘errors’ are motivated by something we see all over the place in language—that is, getting the articulators that aren’t currently needed into the position they need to be in later.” The fingers are trying to take the easiest, most efficient path from Point A to Point B.

Over time, these optimizing strategies can become ingrained in a language, contributing to long-term changes. Studying signers offers powerful insight into language change, Riggle says, because “it allows us to pull apart what is an accidental property of having a mouth, and what is a deep property of language.”

Understanding fingerspelling anomalies yields more than theoretical insight. Riggle and several Toyota colleagues who study computational vision hope to build a computer system that can recognize ASL. By identifying how and when common errors occur, they can teach the computer to anticipate mistakes. If they get a computer to recognize hand shapes, “they might be able to scale up to other kinds of learning from visual cues,” says Kennedy.


Riggle’s ASL research, which borrows ideas from computational, experimental, and traditional theoretical linguistics, is “very clearly at the intersection of several disciplines,” he says. “It involves bringing very new tools to bear on very old questions—and bringing old expertise to new questions. It’s the kind of thing we’re seeing more and more these days.”

The “Forbidden Experiment”

It isn’t just the tools that have changed. It’s also the kinds of data to which linguists have access. The twenty-first century is the era of “Big Data,” says Goldsmith. “Science is changing because of the amount of data we have to deal with.” Computer analysis has made it possible for researchers in many fields to look at much larger swaths of information and extract meaningful patterns from it, and linguists are no exception.

“A lot of the nature of linguistic inquiry 100 years ago was driven pragmatically by the kind of data you had access to,” Riggle explains. It simply wasn’t possible to follow large numbers of people around and record their day-to-day speech for any length of time.

With more and more text and multimedia available online, it’s easier for linguists to study language as it is actually spoken. Increasingly, researchers are seeking out large chunks of raw data that they can analyze—“naturally occurring experiments,” Riggle says.

Two graduate students in Riggle’s lab have embraced this idea of naturally occurring experiments. Max Bane and Morgan Sonderegger, along with Peter Graff at MIT, have been mining the unlikeliest of sources for data—reality television.

“It’s the forbidden experiment,” Bane says. Linguists would love to know what happens when you isolate people in an enclosed space for weeks at a time and record their every utterance, but it’s hardly something a human-subjects board would allow. Fortunately, the contestants on the television show Big Brother did it for them.

Sonderegger, Bane, and Graff examined the changes in speech among the Big Brother participants. They focused on Voice Onset Time (VOT), the brief lag between the release of certain consonants and the moment the vocal cords begin to vibrate. It’s variation in VOT that makes “p” sound different from “b,” and “k” sound different from “g.” Longer VOT makes “p”s sound especially “bursty” and clear.

The team charted each contestant’s average VOT over the course of the season, using statistical methods to control for things like word frequency that might skew the average.
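
The published analysis is more careful than this, but a small sketch shows the basic idea of controlling for a confound: regress VOT on log word frequency (frequent words tend to be produced with shorter VOTs) and then average the residuals per contestant. All the numbers and names here are invented for illustration.

```python
import numpy as np

# Hypothetical measurements: one row per word token.
# vot in milliseconds, freq = corpus frequency of the word, speaker id.
vot = np.array([62.0, 55.0, 71.0, 48.0, 66.0, 53.0])
freq = np.array([120, 900, 40, 2000, 85, 1500])
speaker = np.array(["A", "A", "B", "B", "A", "B"])

# A raw average would penalize whoever happened to say more common
# words. Fit a least-squares line of VOT against log frequency, then
# work with the residuals: what remains after the frequency effect
# is removed.
X = np.column_stack([np.ones_like(vot), np.log(freq)])
coef, *_ = np.linalg.lstsq(X, vot, rcond=None)
residuals = vot - X @ coef

for s in np.unique(speaker):
    adjusted = residuals[speaker == s].mean()
    print(f"speaker {s}: frequency-adjusted VOT deviation = {adjusted:+.1f} ms")
```

Tracking those adjusted averages week by week, rather than raw ones, is what makes it possible to see whether speakers are genuinely drifting toward or away from one another.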

Their preliminary results, published in the Proceedings of the 46th Annual Meeting of the Chicago Linguistic Society, suggest that the contestants’ phonetics changed in response to social pressures. During periods of tension in the house, the contestants’ average VOT began to shift wildly—some people’s average VOT increased in response to the tension, while others’ decreased. And by the season’s end, the contestants’ average VOTs began to converge. In the absence of social pressures, contestants began to sound more like one another.

“We know language and sounds change over time, and certain social groups get associated with certain sounds,” Sonderegger says. “The implicit assumption is that moment-to-moment interaction leads, in the long term, to some change in the community. But there was no link between interaction and community-level change. We thought this research was a way to connect them.”

It’s the kind of experiment you never would have seen 50 years ago. Fortunately, Riggle isn’t fazed by all the changes to his field. “If the field is not building new tools that suggest new questions that make you build more new tools, then it’s not going anywhere,” he says. “If I’m not obsolete in 30 years, I’m not doing a good job now.”

