Dramatizing morality

April 13, 2011

At the start of Mikhalkov’s Burnt by the Sun, the Great Hero of the Stalinist army appears to a platoon of soldiers as himself, barely clothed — he’s just run naked out of his bath, throwing on his pants to save the local peasants from some Stalinist ukase-from-afar. The soldiers don’t recognize him until he grabs a soldier’s army cap and mugs his famous profile for him. It’s the adorable hero: genuine and authentic as well as modest, courageous and good-humored, saving everyone from disaster through his simple honest speech, born of his unalloyed dedication to his country, his people and the army he believes defends them.

In a moment, the villain of the piece will appear not as himself, but in disguise, lying, playing, charming, insinuating, seducing — he’s an artist, a musician, a far cry from an honest, simple soldier of simple skills and simple sentiments. He saves nothing and no one. He comes to serve himself, regardless of the harm he causes.

Yet who can resist his art? He has them all dancing, like a marionette master. And always dressed, the devilish dandy, even when swimming, even in the bath! His past is riddled with all sorts of whoring jobs, serving any local master, and always playing a double role — a triple role, really: serving his local boss, serving the Stalinist government as its agent, and, of course, serving himself. No integrity, no authentic public self, no honor, no dignity; all show, all artifice.

This portrayal of the villain as a brilliant, manipulative artist appealed to me most in this movie. The indictment of the artist is close to my heart. The artist seems to me exactly that: a manipulator, a charlatan, a self-promoter who seduces you to love and adore him, despite your not at all knowing who he really is.

And that is, perhaps, all he is. In place of the 19th-century notion of the artist bringing gifts of the gods, redeeming vile reality, justifying it and comprehending it, here are discovered the vilest motives: seduction, deceit, distraction and distortion.

Stephen Hawking’s latest

April 9, 2011

In The Grand Design, Hawking seems to take pains to set the record straight on his view of creation — no need for a prime mover, no intelligent design, no anthropic principle. None of that surprises me, since arguments for prime movers are logically inconsistent, if there’s a design here Mickey Mouse could have done better, and I can’t make coherent head or tail of the anthropic principle. What did surprise me was Hawking’s trashing of Aristotle on a scale with Bishop Tempier’s. He berates him for applying rationality where he should have observed, a peculiarly Procrustean habit of philosophers. In the context of rejecting prime movers and teleological causes (like the anthropic principle and intelligent design), it’s not surprising that Hawking is critical of Aristotle, who relied on both. But it seems a bit extreme. At one point he even says, without any further comment, “Philosophy is dead.”

I’m an Aristotle fan, not for his physics, but for his logic. A good logician doesn’t necessarily make a good empirical scientist and philosophers have always been weak on knowledge. Their motto should be akin to “those who can’t do, teach” — “They who don’t know anything, philosophize.” Perfect for logic, a science without content.

Everything and more

April 9, 2011

Btw, Wallace’s book on infinity, Everything and More, is an excellent, lucid treatment of the problems within mathematics (and, by implication, within scientific theory in general) and its application to the real world.

There are limits at the boundaries not only of mathematics, but also of logic — not only modal logics but even simple first-order logic. Reminds me of a Kantian remark Russell made somewhere to the effect that our descriptions of phenomena only approach the phenomena from our descriptive perspective. The things themselves remain utterly mysterious. Worse, our descriptive apparatus is limited. Even we ourselves, as phenomena (pace Schopenhauer), cannot gain access to ourselves beyond our own descriptive apparatus — language, sensibility, logic and science. We can, at best, observe behavior and derive a few conclusions. Schopenhauer, prior to Darwin and armed only with Eastern religion, mistook that behavior for the thing in itself, when, really, it was just a character of evolutionary survival, not of the entire cosmos of phenomena or noumena. With sufficient scientific research, we approach explanation of both sentient and non-sentient behavior … but substance itself? What can you say about the limits of knowledge? Is there a something beyond it or not? I’ve never been impressed with Wittgenstein’s cavalier gnome “It’s not a something, but it’s not a nothing either.” Well, so what? I think Rumsfeld said it better. We can have no access to it, and we don’t even know if there’s a there there.

Which brings me back to the notion of explanation in the sciences: the theory of evolution has explanatory value for psychology (despite Fodor’s just complaint that it is, at this point, merely post hoc and not predictive) because it is a theory independent of emotions or sensibility. It is not just a statistical account of emotions under conditions (the behaviorist model). It is a theory of species development in general. I think Fodor is right that it is post hoc and unpredictive, but it still has explanatory power, just post hoc. Maybe that’s the best place to rest on Fodor’s complaint: natural selection is explanatory but not yet predictive.

Wallace’s solution

April 9, 2011

I’m a little uncomfortable reading Wallace’s book since it was a youthful work not intended for publication, was never published while he lived, and is being published now without his permission. And he left no later comments on it and can’t respond to critics.

Wallace’s solution depends on a scope difference in an alethic tensed modality. Using Taylor’s example: “if the battle occurred, then the admiral must have ordered it” is ambiguous between

1. if the battle occurred, then yesterday it was the case that the admiral must have ordered it

2. if the battle occurred, then it must have been the case that the admiral ordered it

Wallace admits that (1) entails fatalism, but points out that (2) doesn’t. According to (1), the admiral must have ordered the battle, and so had no choice. In (2) the admiral ordered it, but not under the duress, as it were, of necessity (must). He had a choice — he might have contemplated several possible worlds in which he orders and several in which he declines to order. It’s just that none of the worlds in which he declines turns out to have been real. That is, yesterday’s world in which the battle was ordered turns out to have been the only possible world.
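
To see how the two readings come apart, here is a minimal toy model in Python. It is only my own sketch under simple branching-time assumptions, not Wallace’s or Taylor’s formal apparatus: histories are day-by-day sequences of events, and a proposition counts as “necessary” at a given day if it holds in every history still open when that day begins.

# A toy branching-time model (my own illustration) of readings (1) and (2).
HISTORIES = [
    ("order",    "battle"),      # what actually happened
    ("no_order", "no_battle"),   # a branch that was open before the order
]
ACTUAL = HISTORIES[0]

def open_at(day):
    """Histories whose course before `day` matches the actual one,
    i.e. the branches still open when that day begins."""
    return [h for h in HISTORIES if h[:day] == ACTUAL[:day]]

def necessarily(prop, day):
    """prop holds in every history still open at `day`."""
    return all(prop(h) for h in open_at(day))

admiral_ordered = lambda h: h[0] == "order"

# Reading (1): yesterday it was the case that the admiral must order.
# Necessity is evaluated at day 0, when both branches were still open.
print(necessarily(admiral_ordered, day=0))   # False: he had a choice

# Reading (2): it must (now) have been the case that the admiral ordered.
# Necessity is evaluated at day 1, when the past is already fixed.
print(necessarily(admiral_ordered, day=1))   # True: the past cannot now be otherwise

On this toy picture, reading (2) comes out true merely because the past is now fixed, while reading (1), the fatalistic one, comes out false; that is just the gap Wallace needs.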

But if that’s the only possible world, why is it the only one? Wallace seems to show successfully that the answer cannot lie in the logic alone.

Suppose you are at the moment of ordering. That moment excludes any moment in which you decline to order. That moment includes only moments in which you order the battle. The difference seems to be between whether you have free will and whether you have freedom. Wallace draws a nice distinction between fatalism and a kind of post hoc determinism.

Is this a difference without a distinction? If the admiral knows that the moment determines his order (he has no freedom), what does it serve him to have free will? Nothing in the real world. But that accords with our experience: no matter how we plan for the world, the consequences are beyond our ability to control.

The utilitarian/consequentialist effects of determinism and fatalism are equally discouraging. But the entailments for (Kantian) moral sensibility are completely distinct. Determinism is consistent with holding moral sentiments; fatalism is not, and that’s why even philosophers spurn it.

On the other hand, while Wallace has found a distinction, I’m not sure that it is telling against Taylor’s view. Relations of necessity among physical effects depend on circumstances, and these are explicit in Taylor’s assumptions. If there is no world in which the order for battle is not given, then the only possible worlds are those in which he chooses to order it. That entails a strange paradox: he is free to choose, but he can only choose one option, not the alternate choice.

How the logic of implication works entirely depends on how you set up the modal system — its axiomata or its inferential rules or both — and its consistency. What makes a system meaningful, assuming it’s consistent, is its usefulness or accuracy. Wallace uses our natural language notions of “it couldn’t have happened” and “it can’t have happened.” That’s good for his system, but not telling against Taylor, since Taylor is specifically using logic against natural language notions which, he is attempting to show, are wrong. And on the other hand,  Wallace’s distinction seems to violate our linguistic, and maybe real-world, understanding of “free.” It may be that the logical syntax should include an inference from

must yesterday order

to

yesterday must order

or it may be that the inference should be dealt with in the semantics, in the model — in any world in which there is only one option, there is no free choice.

Fear of fatalism

April 8, 2011

Apparently the literature on Taylor’s fatalist argument — the motivation for Wallace’s book (see immediately previous post) — does not include the anepistemic solution I sketched. I’m guessing it’s because everyone is afraid to admit fatalism; everyone wants to believe in free will, and so insists on it. (Believing it and insisting on it are distinct: I insist you have no freedom, but I bet you believe you have. I’ll make a big deal out of that in a moment.)

Having no freedom does not entail making no choices. Freedom ranges over your choices, and your choices depend on your knowledge. If you don’t know the underlying determinations of your choices, those choices will appear to be determined by whatever you do know about. You seem to be making free choices, even though they are not in fact free.

This is not eliminativism, btw. It’s possible to insist that there is no autonomous self and no free will, and still insist that you have a mind and an awareness and your mind contains knowledge of which your mind is aware — or maybe your mind is that awareness of, among other things, that knowledge. Just because I think free will and autonomous self are fictions I am not compelled to give up the mind, knowledge and awareness. Just don’t ask me what awareness is or what role it plays in choice. I’m sure it plays a role, but how, I’m not sure, and having an account is not required for insisting that there is one.

Once you accept determinism, the response to Taylor’s argument is quite simple: the assertion under the modality of real time and its denial in a modality of knowledge don’t contradict. You can know that you are not free and also not know the conditions under which you will choose. So you can insist that no one is free, but still, not knowing the determinations of your choices, you can believe in the fiction of your will. Since you can’t know the sources of your choices, you may believe in any source, including yourself. You can attribute your choices to the devil or the demi-urge or a deity. If you have a sense of personal integrity, you’ll believe that the source is you, because believing in the fiction of you is all you have to be proud of. And those feelings like pride are sui generis. They may be determined by your genetic nature or your cultivated nurture, and you may question them and doubt them, but it’s always you questioning, doubting and feeling. The self has a dual nature: it’s a real melange of sensibilities and thoughts, yet not autonomous. No one has given a good account of it. That’s the appeal of the eliminativists and logical behaviorists and the Wittgensteinian behaviorists. They get on without one.

The formulation in the previous post might be amended to

~K(T) => K~K(T)

Not only do we not know the future, but we know that we don’t know. That Rumsfeldian modality suffices to absolve us from fatalism. However deterministic the world is, we, with our limited knowledge, know that we can’t know its determinations. That leaves us with our limited knowledge, so regardless of the facts of the future, we are not in a position to assert anything about it with certainty: the contradictory of K~K(T) is not ~T, but ~K~K(T); the contradictory of ~K(T) is K(T), and there is no entailment from ~K(T) in the present to K(T) or ~K(T) in the future — people change their minds or forget from time to time.
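
As an aside, a two-world toy model, my own sketch rather than anything in the post, shows how ~K(T) and K~K(T) can hold together: just let the agent be unable to tell a world where T is true from one where it is false.

# A toy two-world epistemic model (my own illustration) of the Rumsfeldian
# iteration: we don't know T, and we know that we don't know it.
WORLDS = {"w_T": True, "w_notT": False}    # truth value of T at each world
ACCESS = {"w_T": {"w_T", "w_notT"},        # from either world the agent
          "w_notT": {"w_T", "w_notT"}}     # cannot rule the other out

def K(prop):
    """Known at w: true at every world the agent can't distinguish from w."""
    return lambda w: all(prop(v) for v in ACCESS[w])

T = lambda w: WORLDS[w]
neg = lambda prop: (lambda w: not prop(w))

here = "w_T"                  # suppose T is in fact true
print(K(T)(here))             # False: ~K(T), we don't know T
print(K(neg(K(T)))(here))     # True: K~K(T), we know that we don't know T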

Lukasiewicz, bivalence and the future

April 7, 2011

Just now looking at David Foster Wallace’s Fate, Time and Language, I’m puzzled by Lukasiewicz’s argument, quoted in the text, that statements about the future cannot be true or false at the moment when they are stated. It seems obvious to me that any statement about the future must be true or false; it’s just that we don’t know its truth value at the moment (except for necessary truths, and for inconsistent statements, which may be deemed false and, if contradictory, plainly false).

~K(p) does not imply (p) or (~p).

Not knowing the truth value of a statement means that our epistemic certainty about it has a degree of probability <1. But that doesn’t imply that the proposition itself has a certainty <1. The proposition itself has a probability of either 1 or 0. Why would anyone conflate the epistemic with the realis assertive?

Am I missing something? The probability of a belief for a determinist depends on the known circumstances. Those known circumstances often do not suffice for certainty.

The issue for Lukasiewicz lies in the way we speak about possibility. If I say, “I will be at your place tonight,” even I can’t say for sure that I really will get there — I could get run over, I could get distracted by a friend. So we venture to say that it’s possible I’ll get there, and, likewise, it’s possible that I won’t. Using P for “possible” and T for “I’ll get there tonight”:

P(T) & P~(T)

When the future arrives, we’ll know which of the conjuncts is true. If we’re not determinists, there’s no problem. But if we’re determinists, then one of these conjuncts is necessarily true, and the other necessarily false: necessity is interchangeable with “not possibly not,” and “necessarily not” is interchangeable with “not possibly”:

N(T) = ~P~(T)

N~(T) = ~P(T)

but if T turns out true, then the statement made before the future arrived, combined with the necessity we now know to hold, yields a contradiction:

N(T) & P~(T) =

~P~(T) & P~(T) =

N(T) & ~N(T)

and if T turns out to be false

N~(T) & P(T) =

N~(T) & ~N~(T) =

~P(T) & P(T)

Now, if we are not determinists, there’s no problem: the future isn’t necessary, so the truth value at the future doesn’t contradict any assertion in the past. So non-determinists can assert that propositions about the future have distinct possibilities. But if we buy into determinism, we can’t assign distinct possibilities to propositions about the future. So Lukasiewicz offered to abandon bivalence: statements about the future are neither true nor false, but somewhere in between.

But all that’s ignoring the epistemic context of our assertions of possibility. The correct formulation of our assertions, if we are determinists-in-ignorance, is:

B(P(T) & P~(T))

“I believe that possibly T and possibly not T” or alternatively

B(P(T)) & B(P~(T))

“I believe possibly T and I believe possibly not T”

Believing possibly T or possibly not T is in no way inconsistent with T or ~T or N(T) or N~(T).

B(P(T) & P~(T)) & N(T)

is consistent, as is

B(P(T) & P~(T)) & N~(T)

A simpler formulation uses the anepistemic mode

~K(T)

“I don’t know T for sure” which itself implies

~K~(T)

“I don’t know ~T for sure” and therefore

~K(T) & ~K(~T)

(I’m leaving out for the moment the possibility that ~K(T) can mean “I don’t know of T,” which allows for three possibilities: I don’t know that T is true, I don’t know that T is false, I don’t know of T at all)

These are also consistent with either of T or ~T or their modal necessary versions. There are no contradictions here:

~K(T) & ~K(~T) & N(T)

~K(T) & ~K(~T) & N~(T)
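
A small sketch can make the consistency claim concrete. It is my own toy model, not anything in Wallace or Taylor: N quantifies over the determinist’s metaphysically possible worlds (here just one), while K quantifies over the worlds the speaker cannot rule out (here two).

# A toy model (my own illustration) checking that ~K(T) & ~K(~T) & N(T)
# are jointly satisfiable once N and K range over different sets of worlds.
T_VALUE = {"w_T": True, "w_notT": False}   # truth value of T at each world

METAPHYSICAL = {"w_T"}                     # the determinist's only genuinely possible world
EPISTEMIC = {"w_T", "w_notT"}              # worlds the speaker cannot rule out

def N(prop):   # necessarily: true at every metaphysically possible world
    return all(prop(w) for w in METAPHYSICAL)

def K(prop):   # known: true at every epistemically possible world
    return all(prop(w) for w in EPISTEMIC)

T = lambda w: T_VALUE[w]
notT = lambda w: not T_VALUE[w]

print(N(T))                                   # True: T is necessary
print(not K(T), not K(notT))                  # True True: neither T nor ~T is known
print((not K(T)) and (not K(notT)) and N(T))  # True: the three hold together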

The implication is that “I might not be there tonight” means just that I don’t know whether I’ll be there or not — it means exactly the same as “I might be there tonight.”

Elsewhere I’ve given evidence of the equivalence:
???I might go but I will
???I might go and I will
???I might go but I won’t
???I might go and I won’t
???I might not go but I will
???I might not go and I will
???I might not go but I won’t
???I might not go and I won’t

Unless the speaker has had a change of mind mid-utterance, these sentences are semantically incoherent. It is uncontroversial that the second conjuncts assert certainty of intention, so, presumably, the incoherence lies in the uncertainty of the first conjunct. Since the same certainties clash with or without the negation, the implication is that “might” and “might not” bear the same uncertainty: “I might not go” implies “I might go,” and both can be cashed out as the anepistemic

~K(G)

~K(~G)

~K(G) & ~K(~G)

but not  ~K(G & ~G) unless we’re very contrary, since we all know

K~(G&~G)

and we know that we know it, too.

A theory for semantic drift

April 1, 2011

Okay, here’s a theory to account for some of the variety in semantic drift, the gradual change of a word’s meaning.

Going back to old Saussure, the sign has two sides, the physical shape (for speech, this is the sound of the word) and what the word means. You’d think that semantic shift would only happen on the meaning side, but semantics plays on both sides of the sign because both sides relate to other signs in the language.

Let’s expand Saussure’s duality a bit with Frege’s distinction between sense (something like idea) and reference (the real world objects determined by the idea). The meaning of the sign can shift if the idea drifts, expanding (losing information) or contracting (becoming informationally richer) or just replacing some information with new information. The many pressures or inclinations on idea drift have been well observed and studied in the literature.

The physical sound-shape side of the sign can be the source of meaning shift as well, odd as that may seem. Why would an arbitrary sound have an effect on meaning? Well, for example, sound shapes in English that bear strong resemblance to Greek or Latin words tend to be treated as more serious and formal than monosyllabic Anglo-Saxon words. That seriousness affects their semantic value. Ask a class of students which endeavor is more fun and which more serious, athletics or sports, and you will get a 90% agreement that sports are a less formal activity, although if you then ask whether they denote the same set of activities, you’ll get 100% agreement that they do (and a puzzled class of students).

Speakers have a blind spot in their linguistic capacity. They have great trouble distinguishing between the word and its meaning. There’s nothing surprising about it: we do a lot of our thinking in words, so the two — thought and word — incline to meld into one another.

So it is also not surprising that the formality of a word can influence its semantic shifts. Our social attitude to a word’s sound-place in the language is part of its meaning.

So far we’ve got the following aspects of meaning:

– idea

– social attitude to the idea and relation of the idea to other ideas

– attitude to the sound shape and the relation of the sound shape to other sound shapes

– reference.
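
To keep these aspects straight, here is a toy sketch in Python. It is purely my own illustration, with a made-up miniature world and features, not a claim about any linguistic formalism: the idea is a bundle of features, the reference is whatever in the world fits those features, and the attitudes hang off the sign separately.

# A toy sketch (my own illustration) of the aspects of meaning listed above:
# idea (sense), attitude to the idea, attitude to the sound shape, and reference.
from dataclasses import dataclass

@dataclass
class Sign:
    sound_shape: str                 # the signifier
    idea: frozenset                  # the sense: a bundle of features
    sound_attitude: str = "neutral"  # social attitude to the sound shape itself
    idea_attitude: str = "neutral"   # social attitude to the idea and its objects

WORLD = {  # a miniature world of things and their features
    "whiskers": frozenset({"feline", "domestic"}),
    "lion":     frozenset({"feline", "wild"}),
    "rover":    frozenset({"canine", "domestic"}),
}

def reference(sign):
    """The extension: everything in the world whose features include the idea."""
    return {name for name, feats in WORLD.items() if sign.idea <= feats}

cat = Sign("cat", frozenset({"feline", "domestic"}))
print(reference(cat))               # {'whiskers'}

# Idea drift (broadening): the idea loses information, so the reference grows.
cat_broadened = Sign("cat", frozenset({"feline"}))
print(reference(cat_broadened))     # {'whiskers', 'lion'} (set order may vary)

# Sound-shape attitude: "athletics" and "sports" can share an idea, and so a
# reference, while the Greco-Latinate sound shape carries the more formal feel.
athletics = Sign("athletics", frozenset({"physical", "competitive"}), sound_attitude="formal")
sports = Sign("sports", frozenset({"physical", "competitive"}), sound_attitude="casual")
print(athletics.idea == sports.idea)   # True: same idea, different social feel

Nothing in the sketch yet models the reference side changing under a fixed idea, which is the case taken up next.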

Less observed is the possibility of drift caused by the reference.

The set of real-world objects referred to by a word is determined by the idea. The idea of the word “cat” determines the set of felines. When the idea is employed colloquially as in “hep cat,” the idea determines that “cat” denotes the set of counterculturally acceptable males. Because the referent depends on the idea, drift in the idea has received most scientific attention. But the referent, even though fixed by the idea, can be the source of semantic shift as well, because social attitudes to things are not fixed.

“Democracy” denotes a specific set of government types, but that set has not always been valued as it is today. In the U.S., “democracy” is viewed as almost synonymous with “just” and “right.” It wasn’t always and it need not always be.

Consider the denotation of “woman.” The boundaries of the set haven’t changed, and the properties that make a human a woman haven’t changed, but social attitudes have and the social place of women has. Surely these changes, which relate to the referent, not to the idea or the sound shape, have shifted the meaning.

“God” is another word that has surely shifted through changing attitudes to its referent, though discussing it presents the difficulty of dealing with an elusive referent.

Consider “computer.” Rapid technological advances have altered the set almost beyond recognition, from a room-sized machine exclusive to universities and military labs to the Palm Pilot. That’s a reference change that directly shifts the idea.

One reason linguists don’t spend much effort observing reference-based shifts is that those shifts depend on non-linguistic phenomena, and so don’t instruct much about the nature of language itself. It would be useful, however, to learn the extent to which reference shift can be tolerated by a word, and the broader linguistic effects of reference shift.

Then there’s metaphor. In a separate post I mentioned that metaphors trade on a few specifics of an analogy. They can dilute information as in

– the foot of the mountain

now means just the bottom — the toes, heel and arch are lost.

This is all just a start, but the point here is that semantic shift can occur on either side of the sign, the signifier or the signified, and, within the signified, from the idea or from the reference it determines.

I’m going to stop here for now, but there’s more in the first post on this blog, which gives a bunch of surprising facts and more surprising dynamics about semantic drift in gender words. The big surprise there is that euphemism frequently causes its opposite, pejoration.

Precis of time travel to the past

March 31, 2011

To summarize: objectors to time-travel-to-the-past say that the traveler would change the past, and that would contradict the past and the present through the chain of causality. Therefore time-travel-to-the-past cannot be.

But I show that their conclusion is not necessary, because they have ignored a hidden assumption that is itself incoherent (part of their view).

Time travel to the past

March 30, 2011

Is it possible? Well, I don’t think it’s logically impossible at all.

Objectors to time travel point out that going back in time would change the course of the present which would change the traveler herself. If she went back, for example, and accidentally killed her infant self, what would happen to the traveler? Similarly for any alteration induced by the traveler’s presence, given the multiplying effects of chaos.

Seems this objection trades on a mistaken view of time and of time travel. We usually think that time is on a kind of line

A>——–>B

from A to B and thence to the future.  And time travel is viewed as some person, say, Jay, removing himself from the line and going back to a previous position on the line. So infant Jay at time A grows in time to point B and then decides to time-travel by going back to point A.

But that’s to assume that time A is a kind of permanent place to visit, and to assume that Jay can get out of time and go back to that same place. Both are mistaken. When Jay decides to time travel, Jay is not going back on a line, but, with respect to his personal “timeline”, is going “forward” or continuing in his own time of his life. In this progress of his life, he encounters himself as an infant. He manages to kill that infant. In the future of Jay-the-traveler there will be no youthful Jay, just Jay-the-traveler. But there’s no reason to believe that Jay-the-traveler can’t continue himself. In fact, you’d expect he would. Why wouldn’t he? He’s already there, and time and causality only move forward.

A>—-B>—-C/A>—–D>—-

Baby J>—–Grown J>——Grown J kills baby J>——Grown J continues

By point C/A, it’s too late for baby Jay’s death to affect grown Jay causally. He’s already there and he’s moving forward in one and only one world.

The going-back view of time traveling implies a kind of dual time with parallel worlds that are somehow related causally. If you go back and kill your infant self, then, tracing cause to effect through time, you could never have lived to travel and kill your infant self; hence time travel to the past could be argued to be logically impossible (on the going-back view).

That’s actually a reductio ad absurdum. Suppose time travel to the past were in fact possible. The above objection would say that the possible is logically impossible. That is itself a contradiction. So objectors conclude from logic alone that time travel is impossible regardless of the facts. That’s asking logic to determine facts that are independent of logic.

The reductio misses because there are two premises, not just the one premise “time travel to the past is possible.” The other premise that the reductio can rule out is “there is a causal chain from the reprised time to the new present.” And there’s no reason to believe that.

If time travel to the past is impossible, logic isn’t the reason.  The reason, more likely, is that the past doesn’t exist at all. It’s a fiction of memory. There’s no place “the past” that perdures. Time is just change and change is just the things that were that aren’t anymore. Time, as it were, moves only forward, just as motion is always forward, whichever direction it is going. Walking back home at night is moving forward, just in a different direction from the direction you took in the morning. Flipping backward on a video is moving forward, even if it looks funny. The only remnant of the past is its causal consequences for the present and future. Otherwise it isn’t. But if we could come to a place of our past, there would be no contradiction in influencing it (to use Horwich’s important distinction between influencing the past and changing the past), partly because influencing it would only influence the one world in which we are, and partly since there’s no ‘real’ past there to change anyway. We’d just be moving forward with a difference like the world of the movie Groundhog Day. But we’ll never get there, because it’s not there.

Social illusions, freedom, autonomy, authenticity

March 25, 2011

The Times the other day had an interesting piece about free will: when people are persuaded that their actions are deterministic, they give rein to their desires irrespective of ethics. People who believe themselves to be free agents tend to curb their selfish inclinations in consideration of the consequences for others.

It’s a wonderful support for the notion of moral realism and moral universalism: as soon as people believe they are moral agents, they incline towards universal principles (see “Jesse Prinz at Philosophy Now,” a couple of posts below). It’s not conclusive — there might be cultural pressures — but it makes a great test for other cultures. It turns morality into an empirical question, which really is kind of wonderful.

The piece goes on to wonder whether people actually are moral agents — are we free? It seems odd to me that this is still a question. On the one hand, if you reject determinism, you still can’t give an account of freedom. Suppose your choices originate from yourself. So what is that self? If there’s a motivation behind it, then it’s not free. If it has no motivation, then it’s just mere randomness, not a coherent self.

On the other hand, if you accept determinism, there’s no reason to reject selfhood and responsibility. Just because it’s an illusion doesn’t mean you can’t believe it and hold to it, and allow yourself to be treated as if it were real — for the simple reason that you believe it and insist that others believe it too.

Surely we all by now know that the self is an illusion. It isn’t integral, it is moved by unconscious motives, it shifts according to context and emotion, it is deceived by motives that are hidden from itself.

But it’s a useful illusion. The question of agency is one of personal dignity. We accept responsibility in order to maintain the fiction that we have integrity and dignity. Otherwise how would we take credit for our accomplishments? I helped that family — I get to congratulate myself. I wrote that book — I’m proud of myself. I fixed up that chair — how clever I am! My friends like me — me for me, not for some robot. It’s all foolishness, but a very pleasant foolishness.

It’s a sham but one we cherish. And it seems to be determined for us. We all have it as individuals. But it’s also convenient socially. It’s the basis of criminal law and punishment and an integument of social, business, academic, interpersonal interaction.

We don’t hold to it categorically. The criminally insane are not held responsible. We fudge on our own self-identity. We are always in a twilight between the illusion of integrity and succumbing to selfish interests, aware or unaware. The whole point of the illusion of free will and agency is a kind of self-flattery. It is itself a selfish interest, but with a difference. It’s about human dignity, which is well beyond mere selfishness. It’s noble, even if completely false. And its nobility only emerges in traditional, universalist morality.

The selfhood that brags about its great accomplishments, however delightful to ourselves, is, after all, repulsive to everyone else.