Author: School of Humanities, Arts, and Social Sciences
For decades, MIT has been widely held to have one of the best linguistics programs in the world. But what is linguistics and what does it teach us about human language? To learn more about the ways linguists help make a better world, SHASS Communications recently spoke with David Pesetsky, the Ferrari P. Ward Professor of Modern Languages and Linguistics at MIT. A Margaret MacVicar Faculty Fellow (MIT’s highest undergraduate teaching award), Pesetsky focuses his research on syntax and the implications of syntactic theory for language acquisition, semantics, phonology, and morphology (word structure). He is a fellow of the American Association for the Advancement of Science and a fellow of the Linguistic Society of America.
In collaboration with Pesetsky, SHASS Communications also developed a companion piece to his interview, titled “The Building Blocks of Linguistics.” This brisk overview of basic information about the field includes entries such as: “Make Your Own Personal Dialect Map,” “Know Your Linguistics Subfields,” and “Top 10 Ways Linguists Help Make a Better World.”
Q: Linguistics, the science of language, is often a challenging discipline for those outside the field to understand. Can you comment on why that might be?
A: Linguistics is the field that tries to figure out how human language works — for example: how the languages of the world differ, how they are the same, and why; how children acquire language; how languages change over time and why; how we produce and understand language in real time; and how language is processed by the brain.
These are all very challenging questions, and the linguistic ideas and hypotheses about them are sometimes intricate and highly structured. Still, I doubt that linguistics is intrinsically more daunting than other fields explored at MIT — though it is certainly just as exciting.
The problems that linguists face in communicating about our discipline mostly arise, I think, from the absence of any foundational teaching about linguistics in our elementary and middle schools. This means that the most basic facts about language — including the building blocks of language and how they combine — remain unknown, even to most well-educated people.
While it’s a challenge for scholars in other major fields to explain cutting-edge discoveries to others, they don’t typically have to start by explaining first principles. A biologist or astronomer speaking to educated adults, for example, can assume they know that the heart pumps blood and that the Earth goes around the sun.
Linguistics has its own equivalent facts, among them how speech sounds are produced by the vocal tract and how the words of a sentence are organized hierarchically. Our research builds on these fundamentals when phonologists study the complex ways in which languages organize their speech sounds, for example, or when semanticists and syntacticians (like me) study how the structure of a sentence constrains its meaning.
Unlike our physicist or biologist colleagues, however, we really have to start from scratch each time we discuss our work. That is a challenge that we will continue to face for a while yet, I fear. But there is one silver lining: watching the eyes of our students and colleagues grow wide with excitement when they do learn what’s been going on in their own use of language — in their own linguistic heads — all these years. This reliable phenomenon makes 24.900, MIT’s very popular introductory linguistics undergraduate class, one of my favorite classes to teach. (24.900 is also available via MIT OpenCourseWare.)
Q: Can you describe the kinds of questions linguistic scholars explore and why they are important?
A: Linguists study the puzzles of human language from just about every possible angle — its form, its meanings, sound, gesture, change over time, acquisition by children, processing by the brain, role in social interaction, and much more. Here at MIT Linguistics, our research tends to focus on the structural aspects of language, the logic by which its inner workings are organized.
Our methodologies are diverse. Many of us work closely with speakers of other languages not only to learn about the languages themselves, but also to test hypotheses about language in general. There are also active programs of laboratory research in our department, on language acquisition in children, the online processing of semantics and syntax, phonetics, and more.
My own current work focuses on a fact about language that looks like the most minor of details — until you learn that the very same fact shows up in language after language, all around the globe!
The fact is the strange, obligatory shrinkage in the size of a clause when its subject is extracted to another position in the sentence. In English, for example, the subordinating conjunction “that” — which is normally used to introduce a sentence embedded in a larger sentence (linguists call it a “complementizer”) — is omitted when the subject is questioned.
For example, we say “Who are you sure will smile?” not “Who are you sure that will smile?”
Something very similar happens in languages all over the globe. We find it in Bùlì, for example, a language of Ghana; in dialects of Arabic; and in the Mayan language Kaqchikel. Adding to the significance of this finding: MIT alumnus Colin Phillips PhD ’96 has shown that, in English at least, this pattern is acquired by children without any statistically usable evidence for it in the speech they hear around them.
A phenomenon like this one, found all over the globe and clearly not directly learned from experience, cannot be an accident — but must be a by-product of some deeper general property of the human language faculty, and of the human mind. I am now developing and testing a hypothesis about what this deeper property might be.
This example also points to one reason linguistics research is exciting. Language is the defining property of our species, and to understand how language works is to better understand ourselves. Linguistic research sheds light on many dimensions of the human experience.
And yet, for all the great advances that my field has made, there are so many fundamental aspects of the human language capacity that we do not properly understand yet. I do not believe that genuine progress can be made on a whole host of language-related problems until we broaden and deepen our understanding of how language works — whether the problem is teaching computers to understand us, teaching children to read, or figuring out the most effective way to learn a second language.
Q: What is the historical relationship between research in linguistics and artificial intelligence (AI), and what roles might linguistics scholarship play in the next era of AI research?
A: The relation between linguistic research and language-related research on AI has been less close than one might expect. One reason might be the different goals of the scholars involved. Historically, the questions about language viewed as most urgent by linguists and AI researchers have not been the same. Consequently, language-related AI has tended to favor end-runs around the findings of linguistics concerning how human language works.
In recent years, however, the tide has been turning, and one sees more and more interaction and collaboration between the two domains of research, including here at MIT. Under the aegis of the MIT Quest for Intelligence, for example, I’ve been meeting regularly with a colleague from Electrical Engineering and Computer Science and a colleague from Brain and Cognitive Sciences to explore ways in which research on syntax can inform machine learning for languages that lack extensive bodies of textual material — a precondition for training existing kinds of systems.
A child acquiring language does so without the aid of the thousands of annotated sentences that machine systems require. An intriguing question, then, is whether we can build machines with some of the capabilities of human children, machines that might not need such aids.
I am looking forward to seeing what progress we can make together.
Story prepared by MIT SHASS Communications