It’s easy to mistake paradoxical sentences for liar paradoxes. “If this sentence is true, then it is false,” is a liar paradox. Suppose the sentence is true: then the antecedent is true, the consequent (“it is false”) is false, so the implication as a whole is false, and the sentence must be false. So the assumption that the sentence is true yields a contradiction, and the antecedent must not be true. But suppose the sentence is false: then the antecedent is false, the implication as a whole is true, and the sentence must be true after all. False if true, and true if false: a liar paradox.
“If this sentence is false, then it is true,” however, is not a liar paradox. If it is false, then the antecedent is true, the consequent is false, the implication fails, and the whole is false, just as assumed. If the sentence is true, then the antecedent is false, the implication holds, and the sentence is true, again as assumed. That’s not a paradox; it’s just a sentence the truth of which cannot be determined. It’s like the sentence, “This sentence is true.” Is it true or false? How could you tell?
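The reasoning above can be checked mechanically. Here is a minimal sketch (assuming classical two-valued material implication): treat each sentence as claiming that its own truth value equals the value of its conditional, and search for truth-value assignments consistent with that claim.

```python
def implies(a, b):
    """Classical material implication: a -> b."""
    return (not a) or b

def consistent_values(schema):
    """Truth values v where what the sentence claims evaluates to v itself."""
    return [v for v in (True, False) if schema(v) == v]

# "If this sentence is true, then it is false": S claims (S -> not S)
liar = consistent_values(lambda s: implies(s, not s))

# "If this sentence is false, then it is true": S claims (not S -> S)
indeterminate = consistent_values(lambda s: implies(not s, s))

print(liar)           # no consistent value: a paradox
print(indeterminate)  # both values consistent: undetermined
```

An empty result means no assignment is consistent (a paradox); two results mean either assignment works (indeterminacy), just as with “This sentence is true,” whose schema is simply the identity.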
Similarly, “The sentence I am now writing is true,” is indeterminate, and “The sentence I am now writing is false” is provably a liar paradox, although one could ask of these two sentences “true or false of what?” The deductive proof that yields a liar paradox of the latter is a reductio: assume the sentence is true and you deduce that it is false; assume it’s false and you deduce it’s true. So if it’s true, it’s false, and vice versa. But if you ask “true of what?” then you’re asking for an empirical answer — does the sentence correspond to something, in this case to its own truth? Is truth a thing that can be pointed to? If truth is a correspondence with something, we’re stuck in an infinite recursion. So these sentences, on the one hand, lead to a questioning of the correspondence theory. But they also lead to a questioning of the validity of deductive reductio argumentation, not unlike the questioning of the reductios that led Cantor to multiple levels of infinities, and the intuitionist rejection of the reductio in favor of proof by demonstration. Several directions from here: you can say these sentences don’t correspond to anything; or correspondence is not complete; or correspondence, even with its incompleteness, is a better option than reductios that lead to liar paradoxes.
“I’m very witty!” someone wrote in a comment box in response to the criticism “You have no wit.”
It’s supposed to be well established that commodity prices vary inversely with interest rates. Interest rates are as low as they can be, and luxury housing prices in NYC are high, for example, and the stock market is flying too. But the rest of the economy is not wildly inflated. Why?
Easy money (low interest rates) flows into commodity inventories (we saw that leading up to the Arab Spring), on the one hand, and on the other, it curbs extraction of new resources and commodities because the low interest rates reduce their monetization, or so the theory goes. Yves Smith posted on it back in 2008:
Here are the originals:
That easy money (low interest rates) leads to inflation has been orthodoxy at least since Friedman. Volcker’s program successfully curbed inflation by raising interest rates, causing a recession; Bernanke’s opposite strategy took us out of recession by allowing inflation. But the orthodoxy also has a specific reflex in commodity prices. QE2 caused a global hike in food prices as investors left the dollar for commodities, and that hike helped spark the Arab Spring. Not exactly what Bernanke anticipated. His response was to wash his hands of it: other nations have to deal with their own inflation, he quipped, cynically, I thought.
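The commodity side of this can be sketched with the standard storage-arbitrage story (the numbers below are hypothetical, not drawn from the posts mentioned above): an investor holds inventory only when the expected price appreciation beats the interest cost of the money tied up, so cheap money favors hoarding and bids up spot prices, while dear money flushes inventories out.

```python
def hold_inventory(spot, expected_next, rate, storage_cost=0.0):
    """Storage arbitrage: hold a unit of the commodity for one period
    when expected appreciation exceeds interest plus storage costs."""
    carry_cost = spot * rate + storage_cost
    return (expected_next - spot) > carry_cost

# Hypothetical commodity: spot price $100, expected to fetch $104 next period.
print(hold_inventory(100.0, 104.0, rate=0.01))  # True: gain 4 > carry 1, hoard
print(hold_inventory(100.0, 104.0, rate=0.08))  # False: gain 4 < carry 8, sell
```

The same comparison, run in reverse by a producer, is the extraction half of the theory: at low rates the proceeds of extracting and selling now earn little, so the resource is worth more left in the ground.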
Believers in deities often claim that because secularism is deterministic, it has no room for free will and therefore has no concept of personal responsibility or morality. But I don’t see how free will entails moral responsibility, and I don’t see that responsibility entails free will.
To take the first implication direction: free will is an incoherent notion. If there is no motive or source of a decision, then decisions are not tied to an integral self — they’re just random decisions that don’t belong to anyone. If a decision is motivated by some determinant, then the decision isn’t free. To put it in a theological context: either god made you who you are, and she is responsible for every decision thereafter, or your decisions are random and not anchored in a self. So free will doesn’t entail responsibility. It entails no responsibility, because it entails no self. End of story.
From the other direction of entailment: the individual can hold herself responsible just to flatter herself for believing she’s an integral self. And what do you know, that’s exactly how we all feel. Responsibility is an illusion that works. You don’t need free will, only the illusion of self.
The peculiarity of the classical notion of possibility is that it has a relation to the actual world as well as a relation to the irreal world of conditions counter to the actual and to the epistemic world of certainty and uncertainty. Łukasiewicz’ notion of possibility seems to apply only to uncertainty — it seems essentially epistemic.
So here’s the lay of the land, as I see it, between these two modal programs:
[This post has been updated for clarity.] Łukasiewicz supposed that the actual is necessary (if I have no coins in my pocket, then it is not possible that I do have a coin there) and that possible implies possibly not. I want to contest both of these. There is good reason to distinguish the actual from the necessary — that the earth revolves around the sun is an actual fact, but that the sun is the center of the solar system is necessary (on the grounds that “solar” means “sun”). But if the earth does revolve around the sun, and it’s not possible therefore that it doesn’t revolve around the sun, then isn’t the earth’s revolution around the sun necessary (Łukasiewicz’ not-possibly-not)? So hasn’t he leveled a useful distinction?
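The leveling can be made concrete with Łukasiewicz’s three-valued tables. A sketch over the Ł3 values 1 (true), 0.5 (indeterminate), 0 (false), using the usual definition of possibility as ¬p → p and necessity as its dual; the function names and layout here are mine, not Łukasiewicz’s notation.

```python
# Lukasiewicz three-valued logic (L3): 1 = true, 0.5 = indeterminate, 0 = false
def neg(a):
    return 1 - a

def implies(a, b):
    # L3 implication: fully true unless the antecedent exceeds the consequent
    return min(1, 1 - a + b)

def possibly(a):
    # possibility defined as: not-a implies a
    return implies(neg(a), a)

def necessarily(a):
    # necessity as the dual: not possibly not-a
    return neg(possibly(neg(a)))

for v in (1, 0.5, 0):
    print(v, possibly(v), necessarily(v))
```

On these tables anything actually true (value 1) comes out necessary as well, which is exactly the leveling of the actual and the necessary complained of above; and “possible” sits together with “possibly not” only at the indeterminate value 0.5.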
In the latest New Yorker, Steven Pinker quotes his defense of the dictionary: “it is not just a matter of opinion that there is no such word misunderestimate, that the citizens of modern Greece are Greeks and not Grecians, and that divisive policies Balkanize rather than vulcanize societies.” Given that language is always in flux, on what principled ground can these be judged? Misunderestimate is redundant, but what of it? Language is full of redundancy, and if some underestimations are benign then maybe misunderestimating is not exactly redundant. If Grecians becomes current, then Greeks will be an anachronism; same with vulcanize. Stranger things have happened to English.
So what’s the purpose of a dictionary? Shouldn’t it be a source of scholarly information — about who uses Balkanize, vulcanize, Grecians and misunderestimating and why, and how their use came to be? When did scholarly information include prescriptions on use? Shouldn’t that be left to the newly informed reader?
Whether you choose to avoid the intensified “same exact” for fear of someone (like another letter-writer in the same exact issue of the New Yorker) thinking that you are unthinking, that’s a choice you make between using language as if it were logical and systematic (which it can and maybe should be at points) and using it expressively (which it can and should be too). Most such complaints against illogical use are little more than gotchas meant to show off one’s linguistic or logical acuity, admirable in itself but at least as deplorable for its smug, nit-picking derisiveness.
After all, ain’t has its place for effective use, and so has “Have you finished your homework, yet?” (also from that same letter-writer, J. A. F. Hopkins — note the many names). I monitor my own use, but I have garnered not a few enemies for it. Not everyone loves a pedant, and some resent them deeply.
I do not embrace loss of linguistic distinctions. I regularly hear “it begs the question” meaning the uninteresting “it leads to another question,” and it’s been years since I’ve heard anyone use “beg the question” in its old sense of “that’s not an answer but a circuitous restatement of the problem” which was always such a clever rebuttal. But before I’d conclude that English is dying, I’d want to understand better exactly why the changes occur. In this case, it is not that speakers are losing the ability to recognize empty circular reasoning. Begging the question was a rare form belonging to philosophical discourse. The change has not been a loss of the expression, but a popularization. People outside of philosophy are using it, and they use it for their purpose. Within the philosophical community, the expression still thrives exactly as it was and no doubt with the same frequency.
Language is an accommodation to communication, for the interchange of information and for socialization. That’s what’s interesting in language — not that the language is abused, but why: what conditions of the language system allow for those changes, and what pressures on expression drive them. Same exact is the familiar case of hyperbole that gave us terribly good and awesome and the British brilliant for “very useful.” And brilliant is itself a dead metaphor.
A twenty-something friend regularly writes “could of and would of” although he had an expensive education and fancies himself a writer, no less. His excuse is, “language is always changing.” But that’s clearly not relevant: he would never write, “I could certainly of, but ofn’t, and you would definitely not of, and in fact you ofn’t.” So his language hasn’t changed, he’s just chosen to spell the word in one grammatical position as a different word. One response is: what an idiot — can’t he see that his own usage is inconsistent? But the interesting response is: what is it that hides his inconsistency from him? It’s not that he’s incapable of thinking about the use of “have.” Anyone can do that once it’s pointed out. It’s that this “have” is not really a verb at all. That’s where it gets interesting — asking, not judging.
Now what do people think they mean when they say “I could care less”? — especially since “I couldn’t care less” is so incisive and expressive.
Hayek became something of a hero to Libertarians for presenting the strongest possible arguments against government intervention in the free market and in particular against government redistribution of wealth. But even in The Road to Serfdom he advocates for a social safety net. Recently a few bloggers have tried to make sense of this apparent contradiction, including the estimable Matt Yglesias.
Kevin Vallier thinks that there’s a difference between welfare conducted by an administration and welfare conducted by law, but since laws are created by administrations, it’s hard to see the line between the two, and I haven’t noticed that Hayek himself drew it. On the other hand, Vallier does object to Hayek’s frequent slippery slope arguments as running together the conceptual with the empirical. “Slippery slope” understates the character of Hayek’s approach. He doesn’t object to a mere conceptual possibility opening a possible pathway; he argues that the slopes are necessarily slippery. At several points in The Road to Serfdom he provides compelling arguments that any least gesture towards government intervention in redistribution, for example, leads inevitably to totalitarianism. There is in his response an element of defensive alarmism, evident not only in his general concern but, commensurately, in the extremity of his argument.
A defensive call serves a useful purpose, even if it isn’t always a coherent program. That’s also consonant with his views on uncertainty, which undermine any settled program. So I’m with Yglesias on this. Besides, it’s easy to accuse Hayek of inconsistency when he himself rejected consistency as a program.