
An Ethical Metaquestion, or a meta-Ethical Question.


robert j kolker

Jan 15, 1993, 9:40:28 AM
We know that in the realm of formal systems and computation theory, there
are recursively undecidable (formal) statements and recursively insoluble
problems.

Question: Is there a similar situation in the domain of Ethical theory
(Objectivist or otherwise)? Are there Ethical problems that do not admit
of a solution? Since Ethics is not generally self-referential, it may not
be as easy to answer this question as it is in the realm of formal logic
and mathematics.

Also, there is a question of possible means to attain ends. I am thinking of
an analog to the proposition that an angle trisection is not possible with
Platonic ruler/compass constructions. Do similar problems exist in Ethics?

Please ponder and responder (bad verse!)

Conan the Libertarian

--
"If you can't love the Constitution, then at least hate the Government"

Mikhail Zeleny

Jan 20, 1993, 3:01:29 AM
Please pardon my TeX, and observe the follow-up. The dollar sign is a
mathematical delimiter. This article has been written in collaboration
with Erin Zhu. At this point, our sole claim to possible originality is
constituted by the case of an "apposite God". The rest is derived from
well-known literature. Enjoy.

In article <C0y97...@ccu.umanitoba.ca>
fe...@ccu.umanitoba.ca (Michael Feld) writes:

>In article <C0wGr...@world.std.com>
>r...@world.std.com (robert j kolker) writes:

RJK:


>>We know that in the realm of formal systems and computation theory, there
>>are recursively undecidable (formal) statements and recursively insoluble
>>problems.
>>
>>Question: Is there a similar situation in the domain of Ethical theory
>>(Objectivist or otherwise)? Are there Ethical problems that do not admit
>>of a solution? Since Ethics is not generally self-referential, it may not
>>be as easy to answer this question as it is in the realm of formal logic

MF:
>I don't know of any genuine paradoxes in ethics; and, of course, many
>meta-ethicists (none of them Objectivists) deny that there are ANY
>normative truths in ethics; but two sorts of puzzle come to mind:
>
>1) dilemmas; one rule dictates, say, truthfulness, while the other
>dictates kindness, and it is physically impossible to obey the two
>simultaneously
>
>2) quasi-practical-paradoxes: suppose (for the sake of discussion; to
>humour me) that utilitarianism is true; and suppose further (as might
>well be true) that promulgating that doctrine (or adopting it
>consciously) might result in sub-optimal outcomes ...

\paragraph{}
A classic ethical paradox is implicit in a story told of Protagoras
and Euathlus. It is said that they agreed that Protagoras would
instruct Euathlus in the art of rhetoric, in exchange for a certain
consideration, to be paid by the student, {\it if and only if} he won
his first law case. But upon completing his training, Euathlus shied
away from taking on any law cases. After some time, Protagoras sued
his former pupil for the fee, presenting the following argument to the
court.
\begin{prose}
{\it Protagoras:} If I win this case, then Euathlus has to pay me by
virtue of your verdict. On the other hand, if he wins the case, then
he will have won his first case, hence he has to pay me in virtue of
our agreement. In either case, he has to pay me; hence, he is obliged
to pay me.
\end{prose}
In his turn, Euathlus replied as follows.
\begin{prose}
{\it Euathlus:} If I win this case, then by virtue of your verdict, I
don't have to pay Protagoras. On the other hand, if he wins the case,
then I will not yet have won my first case, and hence do not have to
pay in virtue of our agreement. In either case, I do not have to pay
Protagoras; hence, I am not obliged to pay him.
\end{prose}
The implications of this problem for deontic logic are adequately
described in the literature;\footnote{See Lennart \AAqvist, {\sl Introduction
to Deontic Logic and the Theory of Normative Systems}, Napoli:
Bibliopolis, 1987, and the references therein.} for another version of a
genuine deontic paradox one might consider the Prisoners' Dilemma, or the
less well-known Newcomb's Paradox.
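The clash of norms can be made explicit; the following formalization is
ours, not part of the traditional telling. Write $V$ for ``the court
finds for Protagoras'' and $O$ for ``Euathlus is obliged to pay.'' The
court's verdict norm is $O \leftrightarrow V$, while the contract norm,
given that the present suit is Euathlus's first case, is $O
\leftrightarrow \neg V$. Protagoras appeals to the first norm under $V$
and to the second under $\neg V$; Euathlus does precisely the reverse.
Taken jointly, the two norms entail $V \leftrightarrow \neg V$, which is
why no verdict can satisfy both.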

\paragraph{}
Suppose that you are confronted with a choice. There are two boxes
before you, $A$ and $B$. You may either open both boxes, or else just
open $B$. You may keep what is inside any box you open, but you may not
keep what is inside any box you do not open. The situation is as
follows.

A very powerful being, who has been invariably accurate in his
predictions about your behavior in the past, has already acted in the
following way:
\begin{itemize}
\item[i] He has put \$1,000 in box $A$.
\item[ii] If he has predicted that you will open just box $B$, he has in
addition put \$1,000,000 in box $B$.
\item[iii] If he has predicted that you will open both boxes, he has put
nothing in box $B$.
\end{itemize}

There are two principles which can be used to resolve this in the most
``rational'' way possible. One, that of acting so as to
{\it maximize expected utility}, MEU for short, says that one should
rationally choose the action with the greatest expected utility, in this
case, the most money. The other, the {\it dominance principle}, or DP
for short, says that one should choose an action that leaves one no
worse off than any alternative under every outcome, and strictly better
off under at least one.

If one acts according to MEU, one reasons as follows: let the probability
of this Predictor being correct this time around (which is, by his prior
record, likely to be very high indeed) be some variable, say $h$. In the
event that you open both boxes, the utility of that action is the amount
you stand to receive multiplied by the probability of your getting that
amount, which is $\$1{,}000\,h + \$1{,}001{,}000\,(1-h)$ in this instance. In
the event that you open only box $B$, the utility, calculated in the same
manner, is $\$1{,}000{,}000\,h$. Thus if one has any faith at all in the
abilities of the Predictor, one would stand to win far more by only
opening box $B$.

However, if one acts according to DP, one notices that whether the
Predictor has or has not placed money in box $B$, choosing to open both
boxes yields exactly \$1,000 more than opening box $B$ alone: one is
never left worse off than under the other action, and is better off
under every outcome. Hence, reason dictates that you open both boxes,
contrary to the seemingly equally reasonable conclusion arrived at
according to the alternative decision principle
MEU.\footnote{See the discussion in Richard M.\thinspace Sainsbury's {\sl
Paradoxes}, Cambridge: Cambridge University Press, 1988.}
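To put rough numbers on the divergence (ours, purely for illustration):
for $h = 0.99$ we have
$$EU({\rm both}) = \$1{,}000 \times 0.99 + \$1{,}001{,}000 \times 0.01
= \$11{,}000,$$
$$EU(B) = \$1{,}000{,}000 \times 0.99 = \$990{,}000,$$
so MEU recommends opening only $B$ by a wide margin. Indeed, setting
$1{,}000{,}000\,h > 1{,}000\,h + 1{,}001{,}000\,(1-h)$ and solving gives
$h > 1{,}001{,}000/2{,}000{,}000 = 0.5005$: MEU favors opening only $B$
for any Predictor performing even marginally better than chance.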

\paragraph{}
Now for another conundrum. As is well known, God helps those who help
themselves, which renders God's help rather superfluous. Now, let us
consider an {\it apposite God}, one who is committed to helping exactly
those who do {\it not} help themselves. We bear in mind that, unlike
Russell's barber, who is free to shrug off as impossible his duty to
shave those, and only those, who do not shave themselves, the
perfection of God requires that he actually do everything he is
committed to do. Is our God, then, under an obligation to help Himself?
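In symbols (the formalization is ours): let $H(x,y)$ abbreviate ``$x$
helps $y$,'' and let $g$ denote the apposite God. His commitment reads
$$\forall x\,(H(g,x) \leftrightarrow \neg H(x,x)),$$
whence, instantiating $x$ by $g$,
$$H(g,g) \leftrightarrow \neg H(g,g).$$
The barber escapes the analogous contradiction by failing to exist;
divine perfection forecloses that escape here.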

\paragraph{} Up to this point, we have been considering paradoxes,
that is, contradictions arrived at by reasoning from not only the laws
of logic but also certain pragmatic rules, such as the MEU and DP
principles employed above, or factual suppositions about the nature of
Godliness. However, even if one is strictly limited to logical
considerations, contradictions still emerge as deontic antinomies.
For example, consider a deontic logic $T$ containing Robinson's
arithmetic $Q$, whose relevant properties are these: its constants
are $0$, $S$, $+$, and $\cdot$; all of its valid sentences are
true; it is finitely axiomatizable; and all one-place recursive
functions of natural numbers are functionally numerable. Suppose further
that, for all sentences $\phi$ and $\psi$ in $T$, we have
\begin{itemize}
\item[i] $\vdash_T N[\phi] \rightarrow \phi$,
\item[ii] $\vdash_T N[N[\phi] \rightarrow \phi]$,
\item[iii] $\vdash_T N[\phi \rightarrow \psi]
\rightarrow (N[\phi] \rightarrow N[\psi])$,
\item[iv] $\vdash_T N[\phi]$ if $\phi$ is a logical axiom,
\end{itemize}
where $N$, to be taken as a {\it syntactical} necessity operator, is
any formula whose only free variable is $u$, and, for any sentence
$\phi$, $N[\phi]$ is the sentence $N[\Delta_{nr(\phi)}]$,
where $\Delta_{nr(\phi)}$ is the name of $\phi$, arrived at by
prefixing $0$ with $nr(\phi)$ occurrences of $S$, where $nr(\phi)$ is the
G\"odel number of $\phi$.

Then the theory $T$ is inconsistent.\footnote{See Richard Montague,
``Syntactical treatment of modality'', reprinted in {\sl Formal
Philosophy}, New Haven: Yale University Press, 1974.} Thus, any deontic
logic which, per our requirements, is an extension of the weak modal
logic $S1$,\footnote{A list of such systems will be found in \AAqvist,
{\it op.\thinspace cit.}} with a syntactical deontic necessity operator,
is logically inconsistent. The moral of this story is that the
Literalist legal and ethical systems, so popular with both Protestant
Scripture interpreters and certain U.S. law scholars, are
self-contradictory.
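For the record, here is a compressed sketch of how the inconsistency
arises (the Knower paradox of Kaplan and Montague); we suppress the
arithmetical bookkeeping. By the diagonal lemma, available in any
extension of $Q$, there is a sentence $\delta$ with
$$\vdash_T \delta \leftrightarrow \neg N[\delta].$$
By (i), $\vdash_T N[\delta] \rightarrow \delta$, hence $\vdash_T
N[\delta] \rightarrow \neg N[\delta]$, hence $\vdash_T \neg N[\delta]$,
and so, by the diagonal equivalence, $\vdash_T \delta$. But this little
derivation can itself be reproduced under $N$, using (ii) for its
reflection step and (iii) with (iv) for the logic, yielding $\vdash_T
N[\delta]$; contradiction.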

>--
>Michael Feld | E-mail: <fe...@ccu.umanitoba.ca>
>Dept. of Philosophy | FAX: (204) 261-0021
>University of Manitoba | Voice: (204) 474-9136
>Winnipeg, MB, R3T 2M8, Canada

cordially,
mikhail zel...@husc.harvard.edu
"Women's backsides are as monotonous as men's minds."



gam...@uxa.cso.uiuc.edu

Jan 24, 1993, 11:35:51 PM
In article <C0y97...@ccu.umanitoba.ca> fe...@ccu.umanitoba.ca (Michael Feld) writes:
>In article <C0wGr...@world.std.com> r...@world.std.com (robert j kolker) writes:
>>We know that in the realm of formal systems and computation theory, there
>>are recursively undecidable (formal) statements and recursively insoluble
>>problems.
>>
>>Question: Is there a similar situation in the domain of Ethical theory
>>(Objectivist or otherwise)? Are there Ethical problems that do not admit
>>of a solution? Since Ethics is not generally self-referential, it may not
>>be as easy to answer this question as it is in the realm of formal logic
>
>I don't know of any genuine paradoxes in ethics; and, of course, many
>meta-ethicists (none of them Objectivists) deny that there are ANY
>normative truths in ethics; but two sorts of puzzle come to mind:
>
>1) dilemmas; one rule dictates, say, truthfulness, while the other
>dictates kindness, and it is physically impossible to obey the two
>simultaneously
>
>2) quasi-practical-paradoxes: suppose (for the sake of discussion; to
>humour me) that utilitarianism is true; and suppose further (as might
>well be true) that promulgating that doctrine (or adopting it
>consciously) might result in sub-optimal outcomes ...

I would say that there are no insoluble problems in ethics; there may be
situations in which there are several alternatives from which one may
choose, but, in accordance with a rational hierarchy of values, one choice
will doubtless be the best. If it comes to the point where there are
two (or more) choices which are equally good, and you must choose one of
them, then there can be no blame for not choosing the other one. (Because
ethics must be contextual, I'm omitting any specific examples.)

The two "puzzles" offered above do not apply to Objectivism. The first
mentions "rules"; Objectivism states that ethics is NOT a set of rules, but
is derived from rational value-judgements within a certain context. The
second is only a puzzle if one assumes utilitarianism is true. I think
it's a nice reductio ad absurdum of utilitarianism. :)

By the way, there was another follow-up to this article by Mikhail Zeleny
(which I did not follow up on, due to its length and the fact that it
would take me forever to edit using vi). He offered several paradoxes:

The Protagoras Paradox: I'd say the fault lies in Protagoras' agreement
to the terms of his instruction; he asked for that one. The problem is
not with Protagoras, but with his student, who is basically attempting to
renege on his agreement. The fault, therefore, lies with his student.

Newcomb's Paradox (the one about the two boxes and the Predictor): First off,
this isn't an ethical problem. Second, it presupposes the existence of
a "Predictor" who is omniscient. But A is A, a thing is what it is, has a
specific, delimited identity, and therefore the Predictor cannot exist.
And thus passes away the paradox. (So you should take both boxes! :) )

The "God Paradox": This one presupposes both the existence of a God and that
the maxim "God helps those who help themselves" is true. It should be
obvious by the outcome of Zeleny's discussion that neither assumption is
true.

As an aside, I am thinking of a taped lecture by Dr. Leonard Peikoff which
I once heard, in which someone asked him "What is the Objectivist stand
on symbolic logic?" (I am doing this from memory, but I'm pretty sure I
got it right.) Peikoff replied: "The same as the Objectivist stand on
numerology. . .all I can say is that if you want to play a particularly
foolish game, then do so at home, and don't tell anybody about it. And
especially don't go around teaching it as philosophy."


------------------------------------------------

Benjamin W. Lagow
Grad Res Asst, Dept. of Materials Science and Engineering
University of Illinois at Urbana-Champaign
