I just read Steven Weinberg’s “Can science explain everything? Anything?” in his Lake Views. Weinberg is a Nobel-prize-winning physicist who writes frequently on science in, among other venues, the NY Review of Books. Here he addresses a basic problem for the philosophy of science: what is an explanation?
Many contend, including some scientists, that science only describes. What purports to be an explanation, they say, is just more description or more general description or broader and more comprehensive description. Others, more contentious still, claim that science should only describe; that any pretense to get beyond description, underneath the phenomena, inside the phenomena, anything but just the phenomena, is philosophy maybe, but not science.
There are, for example, linguists who believe that science must limit itself to the statistical occurrences among words. To them, there is no fixed grammar, only statistical correlations among words. The study of language is the study of those statistical relations only. Positivists, empiricists and behaviorists hold to this purity in their research. It’s admirably spare. For them, language is nothing but the stream of sound and its meaning.
At the other extreme are the linguists who find that the statistical correlations are just the descriptive first step. Discovering the underlying machinery that generates the correlations is the scientific goal. They are looking for the explanatory story, the explanation of the correlations. The empiricists believe that such explanations are unjustified chimeras. Weinberg asks, as well: which explains which, the correlations or the discovered machinery? Do Newton’s laws explain Kepler, or does Kepler explain Newton? Newton derived his laws from Kepler, yet we think Newton explained Kepler. What’s going on?
The romantic scientist dreams of explaining, telling a long and complicated story, full of surprises and apparent digressions that turn out to be essential, a plot that brings all the characters and twists into one simple conclusion. Satisfaction and surprise, that’s what the romantic scientist promises.
I’m a romantic. So I’m going to try my hand at making simplicity out of this troublesome conflict over explanation.
Suppose you’ve got before you a mysterious machine with a screen and a keyboard. When you tap a key a letter appears on the screen. Each keystroke brings a different letter on the screen. What is this thing before you?
The strict empirical positivist will answer with an investigation into the correlation between the keys and the letters. This key to the far left brings up an “a,” the one to its right brings up an “s.” When he’s done, you’ve got a complete inductive account of the keyboard.
Does the “a” key always bring up the letter “a” on the screen? Induction can’t go so far as to prove it, but that is the inductive hypothesis. And if it happens that the “a” fails, then the inductive hypothesis is falsified and the inductive account comes to nothing more than a statistical probability.
Now, you’re already way ahead of my story. You want to break open the mysterious machine, look at its parts, see how it functions and give an account of why and how the keys relate to what appears on the screen. Why stop at the statistical correlation of the mere phenomena? Don’t we want an explanation of that correlation?
Is that an explanation? Or is it just more description — a deeper description or a description of more stuff related to the correlation? How can science be anything more than just description?
Well, there is a difference between mere description of phenomena and a description that explains. Suppose you’ve figured out the machine and how it appears to work. And suppose now that the “a” stroke suddenly fails to bring up the letter on the screen. Has your hypothesis about the machine failed? Not at all.
When your computer keyboard doesn’t respond, you don’t come to the conclusion that you were wrong all the while about computers: the keys aren’t designed to bring up letters, that’s just a statistical likelihood. Sometimes it works, sometimes it doesn’t. There’s no more to be said.
No; you don’t stop there. When your keyboard doesn’t work, you think: either the keyboard is broken, or the connection is loose, or the software has a bug or the processor has got a virus, or — you know there’s an explanation in that machine. You know where to look. If worse comes to worst, you know where to go for help at the Apple Store. You see, the statistical probability is minimally informative, too minimal to be qualitatively useful. It’s not explanatory. It doesn’t tell you why. It doesn’t tell you anything when the inductive hypothesis fails. You have no reply to its falsification.
When you’ve explained the machine (described how and why the keys work) and the key fails, your hypothesis doesn’t fail at all. If the key fails, your hypothesis now has an opportunity for counterfactual support. Because according to your hypothesis, if a key fails, there must be a failure in the mechanism’s hardware or software. This is the moment of experimentation. You look to find the mechanical failure. If you find it, then you have additional support for your hypothesis.
And you can experiment further. If you can predict how each mechanical piece works, you can fool around with the mechanism and predict how those changes will change the operations. If you succeed, you’ve got more counterfactual support. Remove this piece, no “a” on the screen. Replace the piece, restore the “a.”
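That counterfactual experiment can be sketched as a toy model. Everything here (the wiring table, the class and method names) is my own invented illustration, not anything from Weinberg; the point is only that a hypothesis about a hidden mechanism predicts what removing and replacing a piece will do:

```python
# Toy model of the mysterious machine: keys fire through a hidden
# "mechanism" (a key-to-letter wiring table). All names are invented
# for illustration.

class Machine:
    def __init__(self):
        # The hidden wiring hypothesized to generate the key/letter
        # correlations the empiricist merely tabulates.
        self.wiring = {"a_key": "a", "s_key": "s"}

    def press(self, key):
        # A key brings up its letter only if its wire is intact;
        # None models a blank screen.
        return self.wiring.get(key)

m = Machine()
assert m.press("a_key") == "a"   # the observed correlation

# Remove this piece: no "a" on the screen.
wire = m.wiring.pop("a_key")
assert m.press("a_key") is None

# Replace the piece: restore the "a". The successful prediction is
# counterfactual support for the mechanism, not just the correlation.
m.wiring["a_key"] = wire
assert m.press("a_key") == "a"
```

The bare correlation (“pressing ‘a’ usually shows ‘a’”) makes no prediction about either manipulation; the mechanical hypothesis predicts both.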
Getting back to the linguist. The positivist, behaviorist linguist objects to the use of made-up sentences that are a hallmark of generative linguists. If language is just the stream of speech — he’s pounding his fist on this one — how can anything be learnt by inventing experimental sentences that have never been said?
Believe it or not, there are schools of linguistics that hold this purism. No experiments. English is speech spoken among those who understand English. The data of English are only utterances from those speakers.
(Note that the empiricist has a chicken and egg problem. How does he know who the English speakers are? But I think even generativists have to face this one too.)
For starters, the empiricist ignores that comprehension of English is just as much a part of English as speech is. Comprehension may not be the same faculty, but it is patently a part of English and it is closely related to the speech faculty, since people who can’t speak English generally can’t understand it either. That correlation is more than just a coincidence.
And if comprehension is part of English, then the comprehension or lack of comprehension of experimental sentences is a datum of the language, even if the sentence has never been spoken. So there is nothing unscientific in making up experimental sentences. At least they tell us something about comprehension. But not least, if comprehension is integral to the language, they tell us about the structure of the language, its speech indirectly, as well as comprehension directly.
What’s more, when you’ve analysed the grammar through the use of counterfactually supportive experimental sentences that define the language, laying out its boundaries, then you can say when an utterance is a fumbled sentence, or an unfinished sentence, or a sentence interrupted midway and returned to. The empiricist, relying only on speech, can at best identify statistical aberrations. The generativist can say with confidence: that sentence was half finished; it’s not reflective of English as English speakers know and understand it.
That’s just the beginning of explanatory power. But you have to look behind and beyond the correlations of the phenomena. You have to look at what it is that’s generating those correlations. When you’re done, of course you’ve got another description — a description of a generative machine. But that generative machine does something new for your phenomenal correlations. What had been statistical aberrations in a pure but naive view of phenomena now are counterfactual support for what’s really going on, a reality that was not apparent, but which is now both apparent itself and apparent in the workings of the phenomenon.
In linguistics it’s generally not possible to open up the machine and look at it. Linguists usually figure out the machine by experimenting with sentences (see the entry “Syntax for the uncertain” below), not by opening up the skull as you might open up a computer to see what’s inside.