
How I introduce probability to my class


Bill Jefferys

Dec 3, 2019, 9:30:03 PM
to talk-o...@moderators.isc.org
I’ve been lurking here, and have noticed that there’s been a lot of talk about probability. Unfortunately, a lot has been written rather sloppily, so that the actual workings of probability theory have not been clarified but have actually been obscured. It’s clear that a lot of folks understand some basic issues, but they haven’t been written down carefully enough so that the real issues are clear.

By way of background, I have been teaching probability and statistics at the undergraduate and graduate level for a number of decades, and although I am now retired from teaching formal classes I am still associated with the statistics program at the University of Vermont and still work with our students and faculty there.

I do not want to get into arguments, name-calling and snide remarks, which I’ve seen all too much of on talk.origins. My reasons for posting this are to clarify issues that I think haven’t been expressed clearly. I will certainly be glad to explain anything I write that isn’t clear, but I refuse to get into name-calling and other nastiness. At my age, I really have little appetite for that kind of thing.

Probability Theory: The Logic Of Science, by the late E. T. Jaynes, is probably the best free source readily available for what I’m going to talk about. The first few chapters of the book describe in detail the basics of probability theory and how, in the Bayesian context that I will adopt here, these principles can be used in practice. I highly recommend this source. The completed chapters of Prof. Jaynes’ book, which was still in progress when he passed away, can be downloaded here:

https://home.fnal.gov/~paterno/probability/jaynesbook.html

To explain the problems I’ve been having with the discussion here, I want to describe how I used my first class day in a course on Bayesian Inference and Decision Theory, which I taught many times at two universities. It was an honors college course for freshmen or sophomores (depending on the university) and was taught to a group of very intelligent students whose majors ran the gamut, some in the sciences, a number of pre-med students, liberal arts majors, and once even a dance major. It was taught as a seminar with the students and myself seated around a group of tables arranged in a square so that everyone could see everyone else. Typically the class would have 15-20 students.

On the first day of class (after everyone had introduced themselves) I would conduct the following experiment: I would tell them that I had a coin in my pocket and that I would in a moment take it out of my pocket and toss it. I asked them what the probability was (in their opinion) that the coin would come up heads. Universally they would answer, correctly, that it was 50%.

Then I would reach into my pocket, pull out the coin, toss it so that it landed on the floor and quickly, before anyone (including myself) saw how it came up, put my foot on it. I then asked what the probability was that it came up heads. The students would universally say, 50%.

I then would uncover the coin briefly and out of sight of the students note how it came up. I then would put my foot on it again, and say, “I know how the coin has fallen. What do you say the probability is that it is heads?” This question usually produced a difference of opinion amongst the students. Most still said it was 50%, but some would say that, since someone (me) knew how it actually came up, probability no longer applied, and would say that the coin had come up either heads or tails, but they just didn’t know which.

Actually, either answer is a reasonable one, although the two reflect different interpretations of probability:

https://en.wikipedia.org/wiki/Probability_interpretations

The first one, that it was still 50%, reflects a Bayesian point of view that probability is a way of describing one’s personal (subjective) uncertainty as to the truth of a proposition (in this case, the student’s belief that the coin shows heads) given what one knows:

https://en.wikipedia.org/wiki/Bayesian_inference

The second one, that it was either heads or tails but that probability no longer applies, reflects a frequentist point of view in which probability statements apply to a sequence of identical events (in this case, a lot of coin tosses) but that once a given event has been instantiated, it no longer makes sense to talk about the probability of that particular event:

https://en.wikipedia.org/wiki/Frequentist_inference


Now, I’m not certain of this but I’m guessing that it may be that Dr. Kleinman’s point of view is basically frequentist, in which case it may have something to do with the differences he’s been having with others on the list. Perhaps he can elaborate (or tell me that my guess is wrong). But there are problems with how others here have been explaining their point of view, which I’ll get to in a bit. In any case, if there were students in my class that took the point of view that probability no longer applies, I would spend some time discussing this as well as explaining the Bayesian point of view, and saying that the Bayesian point of view is the one we would be using during the semester. I would point out to them that, for example, it would still make sense for one student in the class to bet at even odds with another student that the coin came up heads, even though the professor knows how the coin came up, and that the ability to make bets at reasonable odds is part of the Bayesian view of probability.

As an important aside, I want to point out that under the Bayesian view of probability, it is perfectly reasonable to talk about the probability of events that have already happened (or not happened), or even about events that are not in any way the result of a random process like a sequence of identical coin tosses. The Bayesian use of probability, which does use the standard rules of ordinary probability theory, can and does apply to unique events. What it does is give us a way of talking about how certain or uncertain we are about the truth or falsity of those events or facts, given what we know, that is, given the information we use to make that evaluation.

Examples of the sorts of things we can sensibly discuss using the Bayesian approach would be the probability that a person on trial for murder is in fact the murderer, after we hear the evidence at trial; the probability that a particular physical theory like general relativity is a correct description of nature; or the probability that a particular horse will win the Kentucky Derby. All of these are unique events, some of them already a fact (if unknown to us…the person on trial for murder, for example, knows for sure whether he is innocent or guilty, and of course the laws of physics are what they are).

And in the case of the Kentucky Derby, people actually do assess the probability of the truth or falsity of the proposition that a particular horse will win a particular race, by either placing a bet (at particular odds that reflect the probability, in their view, that the horse bet on will win) or not placing such a bet (if in their view the odds offered would make the bet unfair to them given their assessment of the probability of the horse winning).
Even physicists have been known to make and later pay off bets on the truth or falsity of a physical theory, and those bets can be translated into probabilities (e.g., a bet at even odds corresponds to a probability of 0.5 that the proposition is true; at 3:1 odds, depending on which side of the bet you take, it corresponds to a probability of 0.25 or 0.75). It turns out that the correct way to update probabilities on unique propositions like these as new data becomes available is prescribed by standard probability theory, as used by Bayesians.
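The odds-to-probability bookkeeping above can be sketched in a few lines of Python (the function and its name are mine, invented for illustration, not part of any library):

```python
from fractions import Fraction

def odds_to_probability(for_, against):
    """Convert betting odds into the implied probability of a proposition.

    A fair bet at odds against:for_ against H corresponds to
    P(H) = for_ / (for_ + against).
    """
    return Fraction(for_, for_ + against)

# A bet at even (1:1) odds implies probability 1/2.
print(odds_to_probability(1, 1))   # 1/2
# At 3:1 odds, depending on which side of the bet you take,
# the implied probability is 1/4 or 3/4.
print(odds_to_probability(1, 3))   # 1/4
print(odds_to_probability(3, 1))   # 3/4
```

Using exact fractions rather than floats keeps the correspondence between odds and probabilities exact.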

Back to my experiment with my students.

Next, I tell the students what I saw when I looked at the coin; let’s say it was heads, and if so I will tell them I saw heads. I then ask them what the probability is that the coin is heads (and here we are talking about their subjective probability, each student’s individual degree of belief). At this point there’s a dilemma: The students realize that I might be lying! So they have to make some estimate, on the first day of class, of the chance that the professor they have just met is lying, perhaps to make a pedagogical point. (In fact I always tell the truth here, but the students have no way of knowing this.) So the students in general are unwilling to say that the probability is 100% that it’s heads. Generally they give me some credit for truthfulness but they don’t go all the way. They might typically say 90%.

I then invite a student to look at the coin. I uncover it by moving my foot, the student peeks, and I cover it again. I then ask the student to state what she saw. In almost every case the student would say what I said…and the other students will increase their estimate of the probability, but not necessarily to 100%. One time a student contradicted my statement; a very smart guy, he is now a tenured professor at a major university with a doctoral degree from one of the best statistics programs in the world.

Back to the experiment.

I then invite the students to look at the coin themselves (easy to do in a small seminar class), and everyone then agrees that it came up heads, so their probability that it came up heads (given what they now know) is 100%.

I then ask what the probability is that the coin will come up heads if I toss it again. The students say 50%. I then pass the coin around and ask them to examine it, and they discover that the coin has two heads (or in about half the classes, two tails). Even though it is true as I told them in the beginning that the probability of heads was 50%, I did not tell them that the reason for that is that I had two coins in my pocket, one with two heads, the other with two tails, and that I drew one of them at random. So it wasn’t the tossing of the coin that was the random process, it was the random choice of which coin to toss! This then gave me an additional opportunity to discuss the Bayesian point of view that probability is a way of describing your subjective uncertainty about the truth of a proposition, given the evidence you have available at the time you make that probability assessment.
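The punchline — that the randomness lives in the choice of coin, not in the toss — can be checked with a small simulation (a sketch; the setup is modeled exactly as described above):

```python
import random

def toss_trick_coin(rng):
    """One run of the classroom experiment: pick one of two coins at
    random (one double-headed, one double-tailed), then 'toss' it.
    The toss itself is deterministic; the randomness is entirely in
    the choice of which coin to toss."""
    coin = rng.choice(["HH", "TT"])
    return "H" if coin == "HH" else "T"

rng = random.Random(42)  # fixed seed so the sketch is reproducible
tosses = [toss_trick_coin(rng) for _ in range(100_000)]
freq_heads = tosses.count("H") / len(tosses)
print(round(freq_heads, 2))  # close to 0.5, just as the students were told
```

So the long-run frequency of heads really is 50%, even though no individual toss is uncertain once the coin is chosen.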

OK, this is a long story, how does it bear on the discussions of probability in this newsgroup?

First, I want to say that all probability statements are conditional on the background information you assume. By this I mean that if you want to make a statement in probability theory, it’s not enough to ask “What’s the probability of X?” You also have to state the background information B that is either known or assumed as part of that probability assessment. As that background information changes, the probability assessments will also change, since for the Bayesian they are the subjective estimates of those probabilities given the background information.

So we should always be making that explicit, and in standard probability theory we do it by writing:

P(X | B) = The probability of X given that the background information B is true.

So let’s see how this goes in terms of the experiment I conducted in my first class of the semester.

Initially, the background information B is that there is a 50% chance the coin will come up Heads:

P(H | B) = 0.5

I then toss the coin and put my foot on it, call that information T (for the coin has been Tossed). The new background information is B1 = T & B (where & means logical ‘and’). But still,

P(H | B1) = 0.5

I then look at the coin and tell the students that I know how it came up. Call this new information P (for Professor knows). The new background information that the students have is B2 = P & B1. I have different background information, but we aren’t talking about my subjective probability, we are asking about the students’ subjective probability. At this point students who initially said “either heads or tails but probability doesn’t apply” are required to adopt a Bayesian point of view and talk about their subjective uncertainty, which hasn’t changed. So

P(H | B2) = 0.5

I then tell the students that the coin is Heads, and they have to assess the probability that I’m telling the truth. They do it somehow (theoretically they should do this by applying Bayes’ theorem, but since they haven’t learned about that at this point they have to do it seat-of-the-pants). The new information is S, the professor Says that the coin is Heads. The new background information is B3 = S & B2.

P(H | B3) = 0.9 (say).

There will be a similar situation if one of the students looks at the coin and agrees that it was Heads:

P(H | B4) = 0.95 (say) [Again, not for the student that looked, but for the others in the class]

Finally everyone looks and observes for themselves data H that the coin is heads: B5 = H & B4 so

P(H | B5) = P(H | H & B4) = 1, which follows because for any H and C, P(H | H & C) = 1
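For readers who want to see the B3 step done with Bayes’ theorem rather than seat-of-the-pants, here is a sketch. The model of the professor is an assumption of mine, not stated above: with probability p_truthful he reports what he saw, otherwise he reports the opposite.

```python
def posterior_heads_given_report(prior_heads, p_truthful):
    """Bayes' theorem for the step where the professor reports 'Heads'.

    Assumed model: with probability p_truthful the professor reports
    what he saw; otherwise he reports the opposite.
    """
    # Likelihoods of the report 'Heads' under each hypothesis:
    p_report_given_h = p_truthful          # coin is heads and he tells the truth
    p_report_given_t = 1.0 - p_truthful    # coin is tails and he lies
    # Divisor P(S | B2) by the law of total probability:
    p_report = (p_report_given_h * prior_heads
                + p_report_given_t * (1.0 - prior_heads))
    return p_report_given_h * prior_heads / p_report

# With a 50% prior and 90% credit for truthfulness, the posterior
# works out to 0.9, matching the students' intuitive answer.
print(posterior_heads_given_report(0.5, 0.9))  # approximately 0.9
```

Under this model, with a symmetric 50% prior, the posterior equals the assumed truthfulness probability, which is a tidy way to see where the students’ “90%” comes from.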

OK, so this is how my class experiment went and this is how I analyzed it in terms of Bayesian probability. It also illustrates my beef with some folks who have been saying things like “once we observe that X is true, the probability that X is true is 1, it’s no longer 0.5, because conditional, blah blah blah.” The problem is that a statement like this is ambiguous, because it doesn’t make proper use (or really any use) of the notation that’s been developed to make it clear exactly what you are saying when you make a statement using conditional probability, what is conditional on what. And the reason for that is that many people have just not been clear about background information they are assuming when they’ve been making these statements.

In particular in my experiment, P(H | B) = 0.5, even after the student has looked at the coin and determined that it came up heads. The probability P(H | B) has as its background information only that the coin, when flipped, has probability 0.5 of coming up heads. After you have looked at the coin and determined that it came up heads, P(H | B5) = 1 for sure, but that is not the same as P(H | B), which is still 0.5 and is not changed by anything that happened subsequently, because the background information B≠B5.

Proper use of conditioning on background information makes this very clear. But the wordy statements that have been made here haven’t been clear and that may be why there is confusion on this point.

To illustrate how important this is, I want to mention a case where failure to take careful account of precisely this issue led to incorrect conclusions published in the literature. A paper was published a number of years ago that demonstrates how careless use of probability theory, in particular its failure to condition correctly on known data, can lead to faulty conclusions. The argument the author made was that the Bayesian machinery does not allow one to use “old data”, that is, data that you already knew, in order to use Bayes’ theorem to update a prior to get an updated posterior. But because of the need for careful conditioning, the argument given and the conclusion were wrong.

Reminder: Bayes’ theorem is the basic tool used in Bayesian inference. It derives straightforwardly from the multiplication rule for probabilities (here X will be data, H a hypothesis under test, and B the background information):

P(H & X | B) = P(H | X & B)P(X | B) = P(X | H & B)P(H | B) [Multiplication rule used twice here.]

Note that the multiplication rule does not say that P(X & H | B) = P(X | B)P(H | B). This is only valid if X and H are independent under B, which is in general not the case.

Now, by dividing this equation through by P(X | B) (assuming this is not zero) we get Bayes’ theorem:

P(H | X & B) = P(X | H & B)P(H | B)/P(X | B).

Here I have explicitly noted the background information B. Leaving B out can be dangerous, as we will see below.

The divisor, P(X | B), is correctly computed by summation over the available hypotheses, which I will write for simplicity here as H and ¬H (not-H):

P(X | B) = P(X & H | B) + P(X & ¬H | B) = P(X | H & B)P(H | B) + P(X | ¬H & B)P(¬H | B)
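As a quick numeric sanity check of the multiplication rule, the summation formula, and Bayes’ theorem, here is a sketch with arbitrary illustrative numbers (they come from nowhere in particular; any consistent assignment would do):

```python
from fractions import Fraction as F

# An illustrative assignment of probabilities under background B:
p_h      = F(3, 10)   # P(H | B)
p_x_h    = F(4, 5)    # P(X | H & B)
p_x_noth = F(1, 5)    # P(X | ~H & B)

# Divisor by summation over the hypotheses (law of total probability):
p_x = p_x_h * p_h + p_x_noth * (1 - p_h)
print(p_x)    # 19/50

# Bayes' theorem:
p_h_x = p_x_h * p_h / p_x
print(p_h_x)  # 12/19

# The multiplication rule does NOT factor: P(X & H | B) differs from
# P(X | B) P(H | B) here, because X and H are dependent under B.
assert p_x_h * p_h != p_x * p_h
```

Note that P(X | B) came out of the summation formula; at no point was its value simply assumed, which is exactly the discipline the flawed argument below fails to follow.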

OK, with this background, the philosopher’s argument went this way: Since the data X are known and “old”, according to the author (who left out B) it follows that

P(X) = 1 (??)

Note the failure to specify the background information that X is known, which is why I flag this equation as questionable. Normally one would calculate P(X) by summing over all hypotheses, using the formula I gave just above for P(X | B). One would never just assume the value of P(X) as this argument does.

It therefore follows from this suspect assumption and standard probability theory that P(X | H) = 1 for any H.

Let P(H) be the prior on hypothesis H. Then Bayes’ theorem would state, under these questionable assumptions, that the posterior probability on H, given X, is

P(H | X) = P(X | H)P(H)/P(X) = P(H)

That is, according to this argument if you plug the data into Bayes’ theorem, the posterior is equal to the prior and you haven’t learned anything.

This is clearly wrong and stems from the incorrect notion that P(X) = 1 if X is known data. The correct way to write this is by explicitly conditioning on the known data as part of the background information, that is, P(X | X & B) = 1, where for pedantic completeness I’ve included B, the additional background information that is independent of X. Then you get the following (correct) proof of something (but what?):

P(X | X & B) = 1

And from Bayes’ theorem (where, crucially, you have to condition the prior probability of H on X to write Bayes’ rule correctly):

P(H | X & X & B) = P(X | H & X & B)P(H | X & B)/P(X | X & B) = P(H | X & B)

and you haven’t learned anything.

But this isn’t a proof that you can’t learn from old data, it’s a proof that you can’t use the same piece of data X twice, because to write Bayes’ rule correctly, the prior on H has to be P(H | X & B), that is, it is the probability of H given that X is true, so your second attempt to use the data X doesn’t change the posterior probability of H. [Of course, this also follows trivially because logically X & X = X.] You can still determine P(H | X & B) by applying Bayes’ rule with everything on the right hand side unconditioned on X:

P(H | X & B) = P(X | H & B)P(H | B)/P(X | B)

where it is no longer incorrectly assumed that P(X)=1 [or P(X | B)=1], so that you can do the calculation without a problem arising due to the suspect assumption that P(X)=1 for old data. Instead, one would use the summation formula above by summing over all hypotheses to evaluate P(X | B).
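The contrast between a proper first use of X and an attempted second use of the same X can be seen numerically. This is a sketch with illustrative likelihood values of my own choosing:

```python
from fractions import Fraction as F

def bayes_update(prior_h, lik_x_h, lik_x_noth):
    """One application of Bayes' rule: P(H | X & B) from P(H | B),
    with the divisor computed by summation over H and ~H."""
    p_x = lik_x_h * prior_h + lik_x_noth * (1 - prior_h)
    return lik_x_h * prior_h / p_x

prior = F(1, 2)
# First (proper) use of the data X: the posterior moves.
posterior = bayes_update(prior, F(9, 10), F(1, 10))
print(posterior)       # 9/10

# Second attempted use of the same X: once X is in the background,
# P(X | H & X & B) = P(X | ~H & X & B) = 1, so the update is a no-op.
posterior_again = bayes_update(posterior, F(1, 1), F(1, 1))
print(posterior_again)  # 9/10 -- unchanged
```

The first update demonstrates that you certainly can learn from data X; the second demonstrates only that you cannot learn from the same X twice.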

The thing that worries me about some of the things that have been written here is that it looks like some people are making the same mistake that we see in this paper: That once data X have been observed, the probability of X is 1. But that is just failure to use conditioning carefully and properly. The correct statement is that whether or not data X have been observed, the probability of X given X is 1. That is entirely different. That’s just P(X | X) = 1, which is the probability calculus version of the logical tautology that for any X, X implies X (in logic notation, X → X) i.e., “If X is true, then X is true”. As the Jaynes book makes clear, from the point of view of probability theory that he describes, probability theory is (up to an isomorphism) the unique extension of standard logic to the case where truth values can take on any value on the interval [0,1]. But read the first few chapters of the book, which explains it very well.

The correct statement is that P(X | X) = 1, but that P(X) (unconditioned on X) can be anything, it is whatever you would think that probability to be without using the information that X is true. And when you use Bayes’ theorem, you should always calculate P(X) by summation over all hypotheses as described above.

Bill Jefferys

Mark Isaak

Dec 4, 2019, 1:50:03 AM
to talk-o...@moderators.isc.org
On 12/3/19 6:28 PM, Bill Jefferys wrote:
> [snip long, excellent discussion of probabilities]

You left one important question unanswered: Where do you get two-headed
(and/or two-tailed) coins?

--
Mark Isaak eciton (at) curioustaxonomy (dot) net
"Omnia disce. Videbis postea nihil esse superfluum."
- Hugh of St. Victor

André G. Isaak

Dec 4, 2019, 2:40:02 AM
to talk-o...@moderators.isc.org
On 2019-12-03 11:46 p.m., Mark Isaak wrote:
> On 12/3/19 6:28 PM, Bill Jefferys wrote:
>> [snip long, excellent discussion of probabilities]
>
> You left one important question unanswered: Where do you get two-headed
> (and/or two-tailed) coins?

Here is a four-headed coin:

<https://www.deamoneta.com/auctions/view/106/114>

--
To email remove 'invalid' & replace 'gm' with well known Google mail
service.

jillery

Dec 4, 2019, 5:10:03 AM
to talk-o...@moderators.isc.org
On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys
<billje...@gmail.com> wrote:

First, I want to welcome you and your contributions to T.O. It's rare
for a first-time poster to submit something as comprehensive as you
did.

Second, as one of those who said "once we observe that X is true, the
probability that X is true is 1, it's no longer 0.5, because
conditional, blah blah blah". I want to thank you for pointing out
why such a statement is technically incorrect, and for identifying a
formal way to state that case more precisely.

Having said that, I point out that the individual you identified as a
possible frequentist, Dr. Kleinman, is also guilty of expressing his
arguments at least as imprecisely and informally.

As a first-time poster to T.O., you may be unfamiliar with why
arguments involving probabilities appear in Talk.Origins. Some
posters, including Dr. Kleinman, express the opinion that certain
scientific theories are false. They base their opinions in part on
probability calculations which they claim prove certain events are too
unlikely to have happened within the lifetime of the known universe.

In my opinion, such opinions are not just wrong, but wrong-headed.
However unlikely their calculations say an event is, it's unreasonable to
handwave away an event that happened, and it's unreasonable to claim
that unlikely events don't happen.

I accept your statements below, that in order to discuss conditional
probabilities precisely, one must make explicit the background
information of those conditions. However, my impression is, for
informal venues like T.O., and for the purpose of refuting the kind of
claims I described above, a formal specification of that information
as you describe below, is not only unnecessary but likely obfuscating.
Instead, a verbal description is sufficient.

My understanding is, probabilities range from 0 to 1, inclusive, so 1
is just as much a valid probability as 0.5. So when I say "the
probability that X happened, and is known to have happened, is one",
my statement is at least as precise as their claim, that "X is too
improbable".

In the specific cases involving Dr. Kleinman, my statement is to
refute his claim that species could not have evolved because
probability. As much as I value your contribution, and as much as I
recognize the technical merits of your post, my impression is,
explicitly including these things wouldn't address the actual issue
under discussion.

If you are interested in becoming familiar with the actual claims
discussed in T.O., the following is as good an introduction as any:

<http://www.talkorigins.org/faqs/abioprob/abioprob.html>

<http://www.talkorigins.org/faqs/abioprob/borelfaq.html>

<http://www.talkorigins.org/faqs/thermo/probability.html>

<http://www.talkreason.org/articles/chanceprob.cfm>

I hope other posters will provide additional links.
>In particular in my experiment, P(H | B) = 0.5, even after the student has looked at the coin and determined that it came up heads. The probability P(H | B) has as its background information only that the coin, when flipped, has probability 0.5 of coming up heads. After you have looked at the coin and determined that it came up heads, P(H | B5) = 1 for sure, but that is not the same as P(H | B), which is still 0.5 and is not changed by anything that happened subsequently, because the background information B≠B5.
>The thing that worries me about some of the things that have been written here is that it looks like some people are making the same mistake that we see in this paper: That once data X have been observed, the probability of X is 1. But that is just failure to use conditioning carefully and properly. The correct statement is that whether or not data X have been observed, the probability of X given X is 1. That is entirely different. That’s just P(X | X) = 1, which is the probability calculus version of the logical tautology that for any X, X implies X (in logic notation, X → X) i.e., “If X is true, then X is true”. As the Jaynes book makes clear, from the point of view of probability theory that he describes, probability theory is (up to an isomorphism) the unique extension of standard logic to the case where truth values can take on any value on the interval [0,1]. But read the first few chapters of the book, which explains it very well.
>
>The correct statement is that P(X | X) = 1, but that P(X) (unconditioned on X) can be anything, it is whatever you would think that probability to be without using the information that X is true. And when you use Bayes’ theorem, you should always calculate P(X) by summation over all hypotheses as described above.
>
>Bill Jefferys

--
I disapprove of what you say, but I will defend to the death your right to say it.

Evelyn Beatrice Hall
Attributed to Voltaire

Oxyaena

Dec 4, 2019, 6:40:02 AM
to talk-o...@moderators.isc.org
On 12/4/2019 5:05 AM, jillery wrote:
[snip]
>
> If you are interested in becoming familiar with the actual claims
> discussed in T.O., the following is as good an introduction as any:
>
> <http://www.talkorigins.org/faqs/abioprob/abioprob.html>
>
> <http://www.talkorigins.org/faqs/abioprob/borelfaq.html>
>
> <http://www.talkorigins.org/faqs/thermo/probability.html>
>
> <http://www.talkreason.org/articles/chanceprob.cfm>
>
> I hope other posters will provide additional links.

http://www.talkreason.org/perakm/Sewell.htm

http://www.talkreason.org/Mark's%20sites/Mark's%20perakm%20site/members.cox.net/perakm/probabilities.htm

http://www.talkreason.org/articles/probabilities.cfm

https://peradectes.wordpress.com/2019/03/30/improbable-things-happen/ -
(shameless self promotion warning)




--
"I would rather be the son of an ape than be descended from a man afraid
to face the truth." - TH Huxley

https://peradectes.wordpress.com/

RonO

Dec 4, 2019, 7:00:02 AM
to talk-o...@moderators.isc.org
On 12/4/2019 12:46 AM, Mark Isaak wrote:
> On 12/3/19 6:28 PM, Bill Jefferys wrote:
>> [snip long, excellent discussion of probabilities]
>
> You left one important question unanswered: Where do you get two-headed
> (and/or two-tailed) coins?
>

Magic shops used to sell them, but my guess is that not too many places
have the weird types of shops the Los Angeles area had, but someone was
making them to sell in those types of shops.

Ron Okimoto

RonO

Dec 4, 2019, 7:10:03 AM
to talk-o...@moderators.isc.org
On 12/3/2019 8:28 PM, Bill Jefferys wrote:
> [snip]
> Even physicists have been known to make and later pay off bets on the truth or falsity of a physical theory, and those bets can be translated into probabilities (e.g., a bet at even odds corresponds to a probability of 0.5 that the proposition is true; at 3:1 odds, depending on which side of the bet you take, it corresponds to a probability of 0.25 or 0.75). It turns out that the correct way to update probabilities on unique propositions like these as new data becomes available is prescribed by standard probability theory, as used by Bayesians.
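The odds arithmetic in the quoted paragraph can be captured in a few lines. This is a hypothetical helper written purely for illustration, not anything from the thread:

```python
def odds_to_probability(stake_for, stake_against):
    """Implied probability that a proposition is true, given a bet
    at stake_against:stake_for odds on its truth."""
    return stake_for / (stake_for + stake_against)

# A bet at even (1:1) odds corresponds to a probability of 0.5.
print(odds_to_probability(1, 1))  # 0.5
# At 3:1 odds, the two sides of the bet imply 0.25 and 0.75.
print(odds_to_probability(1, 3))  # 0.25
print(odds_to_probability(3, 1))  # 0.75
```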

Kleinman was using the product rule incorrectly, but not because the
event had already happened. The event he was trying to calculate the
probability for was still in progress. The issue is that one mutation
can occur before the second one, and once the first mutation has
happened it does not have to happen again in that cell lineage. The
basic biology has to be factored in: in each subsequent generation
there would be a different probability that the two mutations would
occur in that lineage, depending on how many cells in the lineage
already had the first mutation and how many of them reproduce to make
the next generation. It is more complicated than that, but the product
rule did not apply to the issue he was talking about. Kleinman wanted
to multiply the probability of each mutation occurring, but that is
obviously not the way to do the calculation when the two mutations do
not have to happen at the same time in a cell lineage.

It was not just because the first mutation had already happened. It was
due to how biological evolution actually works (life builds on what came
before), and the basic biology of reproduction.
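Okimoto's point, that the chance of the second mutation depends on how many cells in the lineage already carry the first, can be illustrated with a toy calculation. Everything here is hypothetical (an invented mutation rate, and a lineage that simply doubles each generation with no selection or death), so this is a sketch of the reasoning, not a model of any real experiment:

```python
MU = 1e-8    # hypothetical per-cell, per-generation probability of mutation B
GENS = 20    # hypothetical generations after mutation A first appears

# Naive product rule: both mutations in the same cell at the same time.
naive = MU * MU

# Sequential picture: the single A-carrying cell doubles each generation,
# and every A-carrying cell is a fresh chance for B to occur.
p_miss = 1.0   # probability B has not yet occurred anywhere in the A lineage
carriers = 1
for _ in range(GENS):
    carriers *= 2                      # the A-carrying lineage grows
    p_miss *= (1.0 - MU) ** carriers   # B misses every carrier this generation
sequential = 1.0 - p_miss

print(f"product rule (simultaneous): {naive:.1e}")      # ~1e-16
print(f"sequential, growing lineage: {sequential:.3f}") # ~0.02
```

Even in this crude sketch the sequential probability is many orders of magnitude larger than the simultaneous product, because each generation of A-carriers adds new opportunities for B.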

Ron Okimoto

Bill Rogers

Dec 4, 2019, 7:30:04 AM
to talk-o...@moderators.isc.org
Thanks. That was great. I think you nailed it. The "talking past each other" business was coming from the fact that one side was using a frequentist interpretation and the other a Bayesian interpretation without really being explicit about it (or maybe not understanding that there was a difference in the first place).

Also great to see a fellow Vermonter here.

Bill Jefferys

Dec 4, 2019, 8:20:03 AM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 1:50:03 AM UTC-5, Mark Isaak wrote:
> On 12/3/19 6:28 PM, Bill Jefferys wrote:
> > [snip long, excellent discussion of probabilities]
>
> You left one important question unanswered: Where do you get two-headed
> (and/or two-tailed) coins?

Magic shop.

Bill

Bill Jefferys

Dec 4, 2019, 8:20:03 AM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 7:30:04 AM UTC-5, Bill Rogers wrote:

>
> Thanks. That was great. I think you nailed it. The "talking past each other" business was coming from the fact that one side was using a frequentist interpretation and the other a Bayesian interpretation without really being explicit about it (or maybe not understanding that there was a difference in the first place).
>
> Also great to see a fellow Vermonter here.

Where are you in VT? We live in Fayston, at a family place my parents bought over 60 years ago. Maybe we can get together!

Bill



Alan Kleinman MD PhD

Dec 4, 2019, 8:35:03 AM
to talk-o...@moderators.isc.org
On Tuesday, December 3, 2019 at 6:30:03 PM UTC-8, Bill Jefferys wrote:
> I’ve been lurking here, and have noticed that there’s been a lot of talk about probability. Unfortunately, a lot has been written rather sloppily, so that the actual workings of probability theory have not been clarified but have actually been obscured. It’s clear that a lot of folks understand some basic issues, but they haven’t been written down carefully enough so that the real issues are clear.
>
> By way of background, I have been teaching probability and statistics at the undergraduate and graduate level for a number of decades, and although I am now retired from teaching formal classes I am still associated with the statistics program at the University of Vermont and still work with our students and faculty there.
>
> I do not want to get into arguments, name-calling and snide remarks, which I’ve seen all too much of on talk.origins. My reasons for posting this are to clarify issues that I think haven’t been expressed clearly. I will certainly be glad to explain anything I write that isn’t clear, but I refuse to get into name-calling and other nastiness. At my age, I really have little appetite for that kind of thing.
>
> Probability Theory: The Logic Of Science, by the late E. T. Jaynes is probably the best free source readily available for what I’m going to talk about. The first few chapters of the book describe in detail the basics of probability theory and how, in the Bayesian context that I will adopt here, these principles can be used in practice. I highly recommend this source. The completed chapters of Prof. Jaynes’ book, which was still in progress when he passed away, can be downloaded here:
>
> https://home.fnal.gov/~paterno/probability/jaynesbook.html
>
> To explain the problems I’ve been having with the discussion here, I want to describe how I used my first class day in a course on Bayesian Inference and Decision Theory, which I taught many times at two universities. It was an honors college course for freshmen or sophomores (depending on the university) and was taught to a group of very intelligent students whose majors ran the gamut, some in the sciences, a number of pre-med students, liberal arts majors, and once even a dance major. It was taught as a seminar with the students and myself seated around a group of tables arranged in a square so that everyone could see everyone else. Typically the class would have 15-20 students.
>
> On the first day of class (after everyone had introduced themselves) I would conduct the following experiment: I would tell them that I had a coin in my pocket and that I would in a moment take it out of my pocket and toss it. I asked them what the probability was (in their opinion) that the coin would come up heads. Universally they would answer, correctly, that it was 50%.
>
> Then I would reach into my pocket, pull out the coin, toss it so that it landed on the floor and quickly, before anyone (including myself) saw how it came up, put my foot on it. I then asked what the probability was that it came up heads. The students would universally say, 50%.
>
> I then would uncover the coin briefly and out of sight of the students note how it came up. I then would put my foot on it again, and say, “I know how the coin has fallen. What do you say the probability is that it is heads?” This question usually produced a difference of opinion amongst the students. Most still said it was 50%, but some would say that, since someone (me) knew how it actually came up, probability no longer applied, and would say that the coin had come up either heads or tails, but they just didn’t know which.
>
> Actually, either answer is a reasonable one, although they reflect different interpretations of probability:
>
> https://en.wikipedia.org/wiki/Probability_interpretations
>
> The first one, that it was still 50%, reflects a Bayesian point of view that probability is a way of describing one's personal (subjective) uncertainty as to the truth of a proposition (in this case, the student’s belief that the coin shows heads) given what one knows:
>
> https://en.wikipedia.org/wiki/Bayesian_inference
>
> The second one, that it was either heads or tails but that probability no longer applies, reflects a frequentist point of view in which probability statements apply to a sequence of identical events (in this case, a lot of coin tosses) but that once a given event has been instantiated, it no longer makes sense to talk about the probability of that particular event:
>
> https://en.wikipedia.org/wiki/Frequentist_inference
>
>
> Now, I’m not certain of this but I’m guessing that it may be that Dr. Kleinman’s point of view is basically frequentist, in which case it may have something to do with the differences he’s been having with others on the list. Perhaps he can elaborate (or tell me that my guess is wrong). But there are problems with how others here have been explaining their point of view, which I’ll get to in a bit. In any case, if there were students in my class that took the point of view that probability no longer applies, I would spend some time discussing this as well as explaining the Bayesian point of view, and saying that the Bayesian point of view is the one we would be using during the semester. I would point out to them that, for example, it would still make sense for one student in the class to bet at even odds with another student that the coin came up heads, even though the professor knows how the coin came up, and that the ability to make bets at reasonable odds is part of the Bayesian view of probability.
I read the "frequentist page" and I would not consider myself a "frequentist" based on this quote from the page:
"In a frequentist approach to inference, unknown parameters are often, but not always, treated as having fixed but unknown values that are not capable of being treated as random variates in any sense, and hence there is no way that probabilities can be associated with them. In contrast, a Bayesian approach to inference does allow probabilities to be associated with unknown parameters, where these probabilities can sometimes have a frequency probability interpretation as well as a Bayesian one. The Bayesian approach allows these probabilities to have an interpretation as representing the scientist's belief that given values of the parameter are true "
Based on the above statement, I would think Bayesian would be more appropriate. However, the frequentist concept can also be useful, as demonstrated by this quote from the same link:
"Frequentist inference is a type of statistical inference that draws conclusions from sample data by emphasizing the frequency or proportion of the data."
I find it useful to think of a probability as the ratio of the number of successes in a series of random trials to the total number of trials.
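Kleinman's ratio-of-successes picture is easy to demonstrate by simulation. A minimal sketch, assuming a fair coin and using a fixed seed for reproducibility (Python's standard `random` module, chosen purely for illustration):

```python
import random

random.seed(42)          # fixed seed so the run is reproducible
TRIALS = 100_000

# Count "successes" (heads) in a long series of simulated fair-coin tosses.
heads = sum(random.random() < 0.5 for _ in range(TRIALS))

# The ratio of successes to trials approximates the probability, 0.5.
print(heads / TRIALS)
```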
>
> As an important aside, I want to point out that under the Bayesian view of probability, it is perfectly reasonable to talk about the probability of events that have already happened (or not happened), or even about events that are not in any way the result of a random process like that of a sequence of identical coin tosses. The Bayesian use of probability, which does use the standard rules of ordinary probability theory, can and does apply to unique events. What it does is to give us a way of talking about how certain or uncertain we are about the truth or falsity of those events or facts, given what we know, given information we use to make that evaluation. Examples of the sorts of things we can sensibly discuss using the Bayesian approach would be the probability that a person on trial for murder is in fact the murderer, after we hear the evidence at trial; the probability that a particular physical theory like general relativity is a correct description of nature; or the probability that a particular horse will win the Kentucky Derby. All of these are unique events, some of them already a fact (if unknown to us…the person on trial for murder, for example, knows for sure if he is innocent or guilty, and of course the laws of physics are what they are). And for example, in the case of the Kentucky Derby, people actually do assess the probability of the truth or falsity of the proposition that a particular horse will win a particular race by either placing a bet (at particular odds that reflect the probability, in their view, that the horse bet on will win) or not placing such a bet (if in their view the odds offered would make the bet unfair to them given their assessment of the probability of the horse winning). 
> Even physicists have been known to make and later pay off bets on the truth or falsity of a physical theory, and those bets can be translated into probabilities (e.g., a bet at even odds corresponds to a probability of 0.5 that the proposition is true; at 3:1 odds, depending on which side of the bet you take, it corresponds to a probability of 0.25 or 0.75). It turns out that the correct way to update probabilities on unique propositions like these as new data becomes available is prescribed by standard probability theory, as used by Bayesians.
>
> Back to my experiment with my students.
>
> Next, I tell the students what I saw when I looked at the coin; let’s say it was heads, and if so I will tell them I saw heads. I then ask them what the probability is that the coin is heads (and here we are talking about their subjective probability, each student’s individual degree of belief). At this point there’s a dilemma: The students realize that I might be lying! So, they have to make some estimate, on the first day of class, that the professor that they have just met might be lying, perhaps to make a pedagogical point. (In fact I always tell the truth here, but the students have no way of knowing this). So the students in general are unwilling to say that the probability is 100% that it’s heads. Generally they give me some credit for truthfulness but they don’t go all the way. They might typically say 90%.
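The students' typical 90% answer in the quoted paragraph can be reproduced with Bayes' theorem, assuming (hypothetically) that the professor reports truthfully with probability 0.9 and otherwise reports the opposite face. With a 50% prior on heads, the posterior works out to exactly the assumed truthfulness:

```python
def posterior_heads(prior_heads, p_truthful):
    """P(coin is heads | professor says 'heads'), assuming a lying
    professor would report the opposite of what he actually saw."""
    p_says_heads_if_heads = p_truthful        # truthful report
    p_says_heads_if_tails = 1.0 - p_truthful  # lie about a tails
    numerator = prior_heads * p_says_heads_if_heads
    denominator = numerator + (1.0 - prior_heads) * p_says_heads_if_tails
    return numerator / denominator

# 50% prior, professor trusted 90% of the time -> posterior 0.9.
print(posterior_heads(0.5, 0.9))  # 0.9
```

That the posterior equals the assumed truthfulness is a quirk of the uniform 50% prior; with any other prior the two numbers would differ.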
What is the probability that you are lying?
So tell us, how would you compute the probability of a beneficial mutation B occurring in a member of the population conditional on that member already having the beneficial mutation A?
>
> Bill Jefferys


Alan Kleinman MD PhD

Dec 4, 2019, 8:40:03 AM
to talk-o...@moderators.isc.org
Use whatever view you want: calculate the probability of beneficial mutation B occurring in some member of a population conditional on that member already having beneficial mutation A. Then explain to us how natural selection changes that calculation.

Bill Jefferys

Dec 4, 2019, 9:25:03 AM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 8:35:03 AM UTC-5, Alan Kleinman MD PhD wrote:

[Big snip]

> I read the "frequentist page" and I would not consider myself a "frequentist" based on this quote from the page:
> "In a frequentist approach to inference, unknown parameters are often, but not always, treated as having fixed but unknown values that are not capable of being treated as random variates in any sense, and hence there is no way that probabilities can be associated with them. In contrast, a Bayesian approach to inference does allow probabilities to be associated with unknown parameters, where these probabilities can sometimes have a frequency probability interpretation as well as a Bayesian one. The Bayesian approach allows these probabilities to have an interpretation as representing the scientist's belief that given values of the parameter are true "

> Based on the above statement, I would think Bayesian would be more appropriate. However, the frequentist concept can also be useful, as demonstrated by this quote from the same link:

> "Frequentist inference is a type of statistical inference that draws conclusions from sample data by emphasizing the frequency or proportion of the data."

> I find it useful to think of a probability as the ratio of the number of successes in a series of random trials to the total number of trials.

This is an essentially frequentist point of view. If there is only one trial (e.g., a horse race), or actually no trial at all (e.g., deciding the probability of guilt given the evidence in a murder case), then this definition is not useful. Of course, Bayesians can and do use successes in a large number of trials as evidence, but they would use it by applying Bayes' theorem in a systematic way.

But I'm assuming from your earlier comment that you might consider Bayesian inference to be reasonable as well, so it appears my guess may not be correct.

...

> So tell us, how would you compute the probability of a beneficial mutation B occurring in a member of the population conditional on that member already having the beneficial mutation A?

I'm not in that fight. My purpose here is just to clarify some issues about probability that I believe have been obscured rather than clarified.

Bill


Bill Rogers

Dec 4, 2019, 9:40:03 AM
to talk-o...@moderators.isc.org
Looks like you have a bit of a commute to UVM. I'm retired in the Northeast Kingdom, in Greensboro. The nearest I get to Fayston is hiking Camel's Hump every few years.

Alan Kleinman MD PhD

Dec 4, 2019, 9:40:04 AM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 6:25:03 AM UTC-8, Bill Jefferys wrote:
> On Wednesday, December 4, 2019 at 8:35:03 AM UTC-5, Alan Kleinman MD PhD wrote:
>
> [Big snip]
>
> > I read the "frequentist page" and I would not consider myself a "frequentist" based on this quote from the page:
> > "In a frequentist approach to inference, unknown parameters are often, but not always, treated as having fixed but unknown values that are not capable of being treated as random variates in any sense, and hence there is no way that probabilities can be associated with them. In contrast, a Bayesian approach to inference does allow probabilities to be associated with unknown parameters, where these probabilities can sometimes have a frequency probability interpretation as well as a Bayesian one. The Bayesian approach allows these probabilities to have an interpretation as representing the scientist's belief that given values of the parameter are true "
>
> > Based on the above statement, I would think Bayesian would be more appropriate. However, the frequentist concept can also be useful, as demonstrated by this quote from the same link:
>
> > "Frequentist inference is a type of statistical inference that draws conclusions from sample data by emphasizing the frequency or proportion of the data."
>
> > I find it useful to think of a probability as the ratio of the number of successes in a series of random trials to the total number of trials.
>
> This is an essentially frequentist point of view. If there is only one trial (e.g., a horse race), or actually no trial at all (e.g., deciding the probability of guilt given the evidence in a murder case), then this definition is not useful. Of course, Bayesians can and do use successes in a large number of trials as evidence, but they would use it by applying Bayes' theorem in a systematic way.
>
> But I'm assuming from your earlier comment that you might consider Bayesian inference to be reasonable as well, so it appears my guess may not be correct.
Correct.
>
> ...
>
> > So tell us, how would you compute the probability of a beneficial mutation B occurring in a member of the population conditional on that member already having the beneficial mutation A?
>
> I'm not in that fight. My purpose here is just to clarify some issues about probability that I believe have been obscured rather than clarified.
That's ok, I've already done that computation. And good luck trying to clarify probability theory to those who refuse to learn these mathematical principles and rules because they don't fit with their world view. Charles Brenner didn't have much success either and his work with probability theory helps solve cold case murders.
>
> Bill


Bill Jefferys

Dec 4, 2019, 9:55:03 AM
to talk-o...@moderators.isc.org
Yes, it's about an hour each way, and it's the main reason why I stopped classroom teaching. But Greensboro is even farther from us than Burlington...Not that Vermont is all that big.

But you are very lucky to live in Greensboro. You can watch the kids learn to be circus performers at Circus Smirkus, and are just one town away from Bread and Puppet Theater! Of course, the Mad River Valley has its attractions too.

Bill

Bill Jefferys

Dec 4, 2019, 10:30:04 AM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
> On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:

[Snip]

I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.

I'll post some reminiscences at the "Lay of the Land on talk.origins" thread that Phil Nichols recently started (Hi, Phil!). I'll probably do this in the next few days. I'm going to keep my participation minimal, though. I only have so much time.

Bill

André G. Isaak

Dec 4, 2019, 10:40:03 AM
to talk-o...@moderators.isc.org
On 2019-12-04 8:27 a.m., Bill Jefferys wrote:
> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>> On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>
> [Snip]
>
> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.

Actually, T.O is still on usenet. Google Groups simply provides
web-based access to it.

André

Alan Kleinman MD PhD

Dec 4, 2019, 10:55:03 AM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 7:30:04 AM UTC-8, Bill Jefferys wrote:
> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
> > On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>
> [Snip]
>
> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
jillery's warmth is superficial. As soon as you start correctly explaining the principles of probability theory, she'll start saying your answers to her questions are non sequiturs and she'll ask you what the probability of your existence is. Try explaining to jillery the difference between a probability and an outcome.

Bill Jefferys

Dec 4, 2019, 11:10:03 AM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 10:40:03 AM UTC-5, André G. Isaak wrote:
> On 2019-12-04 8:27 a.m., Bill Jefferys wrote:
> > On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
> >> On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
> >
> > [Snip]
> >
> > I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>
> Actually, T.O is still on usenet. Google Groups simply provides
> web-based access to it.
>
> André

In email Jillery mentioned this, but I have no idea how to access it directly.

Bill

Oxyaena

Dec 4, 2019, 12:10:03 PM
to talk-o...@moderators.isc.org
On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:
[snip]
> So tell us, how would you compute the probability of a beneficial mutation B occurring in a member of the population conditional on that member already having the beneficial mutation A?

How many times do we have to tell you beneficial mutations do not occur
in a vacuum? Whether a mutation is considered detrimental, beneficial,
or merely neutral is entirely up to the environment (and in the case of
artificial selection, people), nothing more, nothing less. There's
nothing inherently intrinsic about so-called "beneficial" mutations
outside of selective pressures making them so.

Alan Kleinman MD PhD

Dec 4, 2019, 12:25:03 PM
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 9:10:03 AM UTC-8, Oxyaena wrote:
> On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:
> [snip]
> > So tell us, how would you compute the probability of a beneficial mutation B occurring in a member of the population conditional on that member already having the beneficial mutation A?
>
> How many times do we have to tell you beneficial mutations do not occur
> in a vacuum? Whether a mutation is considered detrimental, beneficial,
> or merely neutral is entirely up to the environment (and in the case of
> artificial selection, people), nothing more, nothing less. There's
> nothing inherently intrinsic about so-called "beneficial" mutations
> outside of selective pressures making them so.
So, let's hear you explain the physics and mathematics of the Kishony and Lenski experiments. Let's give Bill Jefferys a lesson on who he is trying to teach probability theory to.

jillery

Dec 4, 2019, 12:30:05 PM
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 06:37:04 -0500, Oxyaena <soc...@is.a.spook> wrote:

>On 12/4/2019 5:05 AM, jillery wrote:
>[snip]
>>
>> If you are interested in becoming familiar with the actual claims
>> discussed in T.O., the following is as good an introduction as any:
>>
>> <http://www.talkorigins.org/faqs/abioprob/abioprob.html>
>>
>> <http://www.talkorigins.org/faqs/abioprob/borelfaq.html>
>>
>> <http://www.talkorigins.org/faqs/thermo/probability.html>
>>
>> <http://www.talkreason.org/articles/chanceprob.cfm>
>>
>> I hope other posters will provide additional links.
>
>http://www.talkreason.org/perakm/Sewell.htm
>
>http://www.talkreason.org/Mark's%20sites/Mark's%20perakm%20site/members.cox.net/perakm/probabilities.htm
>
>http://www.talkreason.org/articles/probabilities.cfm
>
>https://peradectes.wordpress.com/2019/03/30/improbable-things-happen/ -
>(shameless self promotion warning)


Might as well, no reason why you can't promote yourself, too.

jillery

Dec 4, 2019, 12:35:03 PM
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 06:08:57 -0600, RonO <roki...@cox.net> wrote:

>Kleinman was using the product rule incorrectly, but not because the
>event had already happened. The event he was trying to calculate the
>probability for was still in progress. The issue is that one mutation
>can occur before the second one, and once the first mutation has
>happened it does not have to happen again in that cell lineage. The
>basic biology has to be factored in: in each subsequent generation
>there would be a different probability that the two mutations would
>occur in that lineage, depending on how many cells in the lineage
>already had the first mutation and how many of them reproduce to make
>the next generation. It is more complicated than that, but the product
>rule did not apply to the issue he was talking about. Kleinman wanted
>to multiply the probability of each mutation occurring, but that is
>obviously not the way to do the calculation when the two mutations do
>not have to happen at the same time in a cell lineage.
>
>It was not just because the first mutation had already happened. It was
>due to how biological evolution actually works (life builds on what came
>before), and the basic biology of reproduction.
>
>Ron Okimoto


While I agree that your statement above is technically correct, I
disagree that you describe a substantial distinction. AIUI Kleinman's
argument is that, in the case of features which require multiple
beneficial mutations, each beneficial mutation must appear and be
fixed in the population sequentially, with no overlap to the
appearance and fixation of prior or following beneficial mutations.
Also, for any particular feature, he accepts only the specific
mutation sequence found in extant populations.

If Kleinman's biological argument were factually correct, his
probability calculations based on dependent probabilities would also
be correct. However, his biological argument does not apply to most
examples of evolutionary change, and so his probability calculations
aren't relevant to them.

IOW Kleinman's failure here is a misunderstanding of biology, not of
probability. His appeals to his own alleged expertise in probability,
and to others' alleged ignorance of it, are non sequiturs.
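The distinction jillery draws rests on the general product rule: for dependent events, P(A and B) = P(A)·P(B|A), which collapses to P(A)·P(B) only when the events are independent. A minimal sketch by exact enumeration, using a hypothetical urn example rather than anything from the thread:

```python
from fractions import Fraction
from itertools import permutations

# Urn with 2 red and 2 blue marbles; draw two without replacement.
marbles = ["red", "red", "blue", "blue"]
draws = list(permutations(marbles, 2))  # all 12 ordered pairs of draws

p_first_red = Fraction(sum(d[0] == "red" for d in draws), len(draws))
p_both_red = Fraction(sum(d == ("red", "red") for d in draws), len(draws))
p_second_red_given_first = p_both_red / p_first_red

# Dependent product rule: P(A)*P(B|A) matches the joint probability...
assert p_both_red == p_first_red * p_second_red_given_first
# ...while the naive independent product P(A)*P(B) does not.
print(p_both_red)                 # 1/6
print(p_first_red * p_first_red)  # 1/4
```

Which form of the product rule applies is a modeling question about the events, which is jillery's point: the mathematics is only as good as the biological assumptions behind it.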

Bob Casanova

Dec 4, 2019, 1:05:03 PM
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 08:07:25 -0800 (PST), the following
appeared in talk.origins, posted by Bill Jefferys
<billje...@gmail.com>:

>On Wednesday, December 4, 2019 at 10:40:03 AM UTC-5, André G. Isaak wrote:
>> On 2019-12-04 8:27 a.m., Bill Jefferys wrote:
>> > On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>> >> On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>> >
>> > [Snip]
>> >
>> > I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>>
>> Actually, T.O is still on usenet. Google Groups simply provides
>> web-based access to it.

>In email Jillery mentioned this, but I have no idea how to access it directly.

If you're using a PC you own on which you can install
applications, all you need is to install a newsreader
(Eternal September is free, and seems to be a favorite of
many) and use it to download a list of groups, from which
you can select the ones you want to visit.

There's a description here...

https://www.newsgroupreviews.com/eternal-september.html

.... including addresses for the ET server.

Enjoy!
--

Bob C.

"The most exciting phrase to hear in science,
the one that heralds new discoveries, is not
'Eureka!' but 'That's funny...'"

- Isaac Asimov

Bob Casanova

Dec 4, 2019, 1:10:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 12:08:08 -0500, the following appeared in
talk.origins, posted by Oxyaena <soc...@is.a.spook>:

>On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:

>[snip]
>> So tell us, how would you compute the probability of a beneficial mutation B occurring on a member of the population conditional to already having the beneficial mutation A?

>How many times do we have to tell you beneficial mutations do not occur
>in a vacuum? Whether a mutation is considered detrimental, beneficial,
>or merely neutral is entirely up to the environment (and in the case of
>artificial selection, people), nothing more, nothing less. There's
>nothing inherently intrinsic about so-called "beneficial" mutations
>outside of selective pressures making them so.

The number you're asking about seems to approach infinity,
since that point has been made, and ignored, too many times
to count. Maybe a 2x4 upside the head would help; at least
it might get his attention.

<Cue irrelevant comment from DocDoc about "2 courses in
statistics", with gratuitous personal epithets included...>

Alan Kleinman MD PhD

Dec 4, 2019, 1:10:03 PM12/4/19
to talk-o...@moderators.isc.org
Why don't you give us the reptifeatharian explanation of the physics and mathematics of the Kishony and Lenski experiments? And then you can explain to us how reptiles evolve feathers and fish evolve into mammals. I think that Bill Jefferys will enjoy your explanation of the stochastic process (DNA evolution).

jillery

Dec 4, 2019, 1:45:03 PM12/4/19
to talk-o...@moderators.isc.org
However you accessed T.O. before, that's how you would access T.O.
"directly".

jillery

Dec 4, 2019, 1:50:03 PM12/4/19
to talk-o...@moderators.isc.org
Being a relative newbie, I'm not surprised that I didn't recognize
your nic. However, since you say you're an alumnus, I am surprised
that you didn't apply your expertise in probability to the logical
flaws of all the presenters. You can be sure Kleinman interprets your
silence as tacit acceptance of his assertions, no matter how unrelated
they are to what you wrote.

jillery

Dec 4, 2019, 1:50:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 07:43:08 -0800 (PST), Alan Kleinman MD PhD
<klei...@sti.net> wrote:

>On Wednesday, December 4, 2019 at 7:30:04 AM UTC-8, Bill Jefferys wrote:
>> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>> > On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>>
>> [Snip]
>>
>> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>jillery's warmth is superficial. As soon as you start explaining correctly the principles of probability theory, she'll start saying your answers to her questions are nonsequiturs and she'll ask you what the probability of your existence is. Try explaining to jillery the difference between a probability and an outcome.


Thank you for proving me right once again.


>> I'll post some reminiscences at the "Lay of the Land on talk.orgins" thread that Phil Nichols recently started (Hi, Phil!). I'll probably do this in the next few days. I'm going to keep my participation minimal, though. I only have so much time.
>>
>> Bill
>

Alan Kleinman MD PhD

Dec 4, 2019, 2:00:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 10:50:03 AM UTC-8, jillery wrote:
> On Wed, 4 Dec 2019 07:43:08 -0800 (PST), Alan Kleinman MD PhD
> <klei...@sti.net> wrote:
>
> >On Wednesday, December 4, 2019 at 7:30:04 AM UTC-8, Bill Jefferys wrote:
> >> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
> >> > On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
> >>
> >> [Snip]
> >>
> >> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
> >jillery's warmth is superficial. As soon as you start explaining correctly the principles of probability theory, she'll start saying your answers to her questions are nonsequiturs and she'll ask you what the probability of your existence is. Try explaining to jillery the difference between a probability and an outcome.
>
>
> Thank you for proving me right once again.
When are you going to ask Bill Jefferys to explain the improbability of his existence?

jillery

Dec 4, 2019, 4:45:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 10:05:56 -0800 (PST), Alan Kleinman MD PhD
<klei...@sti.net> wrote:


>Why don't you give us the reptifeatharian explanation of the physics and mathematics of the Kishony and Lenski experiments? And then you can explain to us how reptiles evolve feathers and fish evolve into mammals.


You first.


> I think that Bill Jefferys will enjoy your explanation of the stochastic process (DNA evolution).


Bill Jefferys went in a different direction.

Alan Kleinman MD PhD

Dec 4, 2019, 5:00:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 1:45:03 PM UTC-8, jillery wrote:
> On Wed, 4 Dec 2019 10:05:56 -0800 (PST), Alan Kleinman MD PhD
> <klei...@sti.net> wrote:
>
>
> >Why don't you give us the reptifeatharian explanation of the physics and mathematics of the Kishony and Lenski experiments? And then you can explain to us how reptiles evolve feathers and fish evolve into mammals.
>
>
> You first.
Already done, peer-reviewed and published. But you won't take a course in introductory probability theory so that you might understand something about stochastic processes (like DNA evolution).
>
>
> > I think that Bill Jefferys will enjoy your explanation of the stochastic process (DNA evolution).
>
>
> Bill Jefferys went in a different direction.
Certainly not your direction. He doesn't want to answer questions like "when an outcome occurs, does its probability become 1?" And silly questions like explaining his improbable existence.

Mark Isaak

Dec 4, 2019, 5:00:03 PM12/4/19
to talk-o...@moderators.isc.org
On 12/4/19 10:03 AM, Bob Casanova wrote:
> On Wed, 4 Dec 2019 08:07:25 -0800 (PST), the following
> appeared in talk.origins, posted by Bill Jefferys
> <billje...@gmail.com>:
>
>> On Wednesday, December 4, 2019 at 10:40:03 AM UTC-5, André G. Isaak wrote:
>>> On 2019-12-04 8:27 a.m., Bill Jefferys wrote:
>>>> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>>>>> On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>>>>
>>>> [Snip]
>>>>
>>>> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>>>
>>> Actually, T.O is still on usenet. Google Groups simply provides
>>> web-based access to it.
>
>> In email Jillery mentioned this, but I have no idea how to access it directly.
>
> If you're using a PC you own on which you can install
> applications, all you need is to install a newsreader
> (Eternal September is free, and seems to be a favorite of
> many) and use it to download a list of groups, from which
> you can select the ones you want to visit.
>
> There's a description here...
>
> https://www.newsgroupreviews.com/eternal-september.html
>
> .... including addresses for the ET server.

Actually, it requires two parts: subscribe to a news provider (Eternal
September is the one I also use, and the only one I have any familiarity
with), and install a news reader on your pc (Thunderbird is what I use;
I simply ignore its mail handling functions. And again, I have no
familiarity with others.)

jillery

Dec 4, 2019, 5:50:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 10:56:07 -0800 (PST), Alan Kleinman MD PhD
<klei...@sti.net> wrote:

>On Wednesday, December 4, 2019 at 10:50:03 AM UTC-8, jillery wrote:
>> On Wed, 4 Dec 2019 07:43:08 -0800 (PST), Alan Kleinman MD PhD
>> <klei...@sti.net> wrote:
>>
>> >On Wednesday, December 4, 2019 at 7:30:04 AM UTC-8, Bill Jefferys wrote:
>> >> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>> >> > On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>> >>
>> >> [Snip]
>> >>
>> >> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>> >jillery's warmth is superficial. As soon as you start explaining correctly the principles of probability theory, she'll start saying your answers to her questions are nonsequiturs and she'll ask you what the probability of your existence is. Try explaining to jillery the difference between a probability and an outcome.
>>
>>
>> Thank you for proving me right once again.
>When are you going to ask Bill Jefferys to explain the improbability of his existence?


Since you asked, right after Bill Jefferys claims birds didn't evolve
from reptiles, or feathers don't fossilize. You're welcome.

jillery

Dec 4, 2019, 6:00:03 PM12/4/19
to talk-o...@moderators.isc.org
Exactly. OTOH Bill Jefferys should know all that already, as he says
he was part of T.O. long ago.

jillery

Dec 4, 2019, 6:05:02 PM12/4/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 13:58:00 -0800 (PST), Alan Kleinman MD PhD
<klei...@sti.net> wrote:

>On Wednesday, December 4, 2019 at 1:45:03 PM UTC-8, jillery wrote:
>> On Wed, 4 Dec 2019 10:05:56 -0800 (PST), Alan Kleinman MD PhD
>> <klei...@sti.net> wrote:
>>
>>
>> >Why don't you give us the reptifeatharian explanation of the physics and mathematics of the Kishony and Lenski experiments? And then you can explain to us how reptiles evolve feathers and fish evolve into mammals.
>>
>>
>> You first.
>Already done, peer-reviewed and published.


Cite it. I dare you.


> But you won't take a course in introductory probability theory so that you might understand something about stochastic processes (like DNA evolution).


You must enjoy posting your nonsense non-sequiturs and asinine
ad-hominems, since that's almost all you ever do.


>> > I think that Bill Jefferys will enjoy your explanation of the stochastic process (DNA evolution).
>>
>>
>> Bill Jefferys went in a different direction.
>Certainly not your direction.


Certainly not your direction.

Alan Kleinman MD PhD

Dec 4, 2019, 6:15:03 PM12/4/19
to talk-o...@moderators.isc.org
Does Bill Jefferys think that birds evolved from reptiles? If he does, he certainly hasn't correctly applied his knowledge of probability theory to evolution. Before Bill Jefferys makes a claim like that, he should try to understand why combination therapy works for the treatment of HIV and why each evolutionary step in the Kishony experiment takes a billion replications (random trials). It's that pesky multiplication rule. You know that rule, it's the one that reptifeatharians think doesn't apply to biological evolution.
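
For what it's worth, the multiplication-rule claim invoked here can at
least be stated concretely. A minimal sketch with assumed, illustrative
resistance rates (these are not actual HIV figures): under combination
therapy both resistance mutations must arise in the same replication, so
the per-replication probabilities multiply, whereas under one drug at a
time each step needs only its own ~1/mu replications.

```python
# The multiplication rule behind combination therapy, with toy numbers.
# The rates below are illustrative assumptions, not measured data.
mu_a = 1e-5    # assumed per-replication rate of resistance to drug A
mu_b = 1e-5    # assumed per-replication rate of resistance to drug B

# Combination therapy: both mutations must occur in the same replication,
# so (assuming they arise independently) the probabilities multiply.
p_both_at_once = mu_a * mu_b       # 1e-10

# Expected replications to see each event at least once is roughly 1/p.
reps_one_drug = 1 / mu_a           # one resistance step at a time: 1e5
reps_combo = 1 / p_both_at_once    # both steps at once: 1e10

print(p_both_at_once, reps_one_drug, reps_combo)
```

This is only the arithmetic; whether the independence and
one-step-at-a-time assumptions describe real populations is exactly what
the rest of this thread disputes.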

Bill Jefferys

Dec 4, 2019, 6:25:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 6:00:03 PM UTC-5, jillery wrote:
> On Wed, 4 Dec 2019 13:54:55 -0800, Mark Isaak
> >
> >Actually, it requires two parts: subscribe to a news provider (Eternal
> >September is the one I also use, and the only one I have any familiarity
> >with), and install a news reader on your pc (Thunderbird is what I use;
> >I simply ignore its mail handling functions. And again, I have no
> >familiarity with others.)
>
>
> Exactly. OTOH Bill Jefferys should know all that already, as he says
> he was part of T.O. long ago.

Actually not. The computer I used was a department-owned mainframe running UNIX that I logged onto, and it had UseNet and hence talk.origins directly available as a program under that operating system. That computer is long gone, and I now access things via the web from a Mac. Very different.

It appears that I could use Mark's method, but as long as Google Groups does everything I need I'll stick to that. When we discussed this by email earlier, I was trying to put some links into my message, but it appears that I can now do this by just typing them into Google Groups, so unless something comes up I'll stick with that. Mark can tell me if I'm missing something by doing it that way, and if so I'll reassess the situation.

Like I say, I really don't want to get into the weeds here. I've got a few things to say, but don't expect a lot of posts from me.

Bill Jefferys



Öö Tiib

Dec 4, 2019, 6:40:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wednesday, 4 December 2019 08:50:03 UTC+2, Mark Isaak wrote:
> On 12/3/19 6:28 PM, Bill Jefferys wrote:
> > [snip long, excellent discussion of probabilities]
>
> You left one important question unanswered: Where do you get two-headed
> (and/or two-tailed) coins?

It is actually easy to make one if you have time:
grind away the same side of two identical coins, heat the halves,
put a drop of molten lead between them, press them together hard,
and cool your new two-headed coin down.

RonO

Dec 4, 2019, 6:55:03 PM12/4/19
to talk-o...@moderators.isc.org
This was not what he was wrong about. He was just trying to estimate
the probability of two mutations occurring in the same cell lineage when
they didn't have to happen at the same time. Nothing about fixation in
the population.

Ron Okimoto

Alan Kleinman MD PhD

Dec 4, 2019, 7:15:03 PM12/4/19
to talk-o...@moderators.isc.org
Competition slows evolution; that's why the evolutionary process occurs much more rapidly in the Kishony experiment than in the Lenski experiment. Fixation is neither necessary nor sufficient for evolutionary adaptation to occur. And my math is correct; it was peer-reviewed by people (unlike you) who understand probability theory.

jillery

Dec 4, 2019, 7:50:02 PM12/4/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 15:20:19 -0800 (PST), Bill Jefferys
<billje...@gmail.com> wrote:

>On Wednesday, December 4, 2019 at 6:00:03 PM UTC-5, jillery wrote:
>> On Wed, 4 Dec 2019 13:54:55 -0800, Mark Isaak
>> >
>> >Actually, it requires two parts: subscribe to a news provider (Eternal
>> >September is the one I also use, and the only one I have any familiarity
>> >with), and install a news reader on your pc (Thunderbird is what I use;
>> >I simply ignore its mail handling functions. And again, I have no
>> >familiarity with others.)
>>
>>
>> Exactly. OTOH Bill Jefferys should know all that already, as he says
>> he was part of T.O. long ago.
>
>Actually not. The computer I used was a department-owned mainframe running UNIX that I logged onto, and it had UseNet and hence talk.origins directly available as a program under that operating system. That computer is long gone, and I now access things via the web from a Mac. Very different.


Ok, I misunderstood what you meant. My bad.


>It appears that I could use Mark's method, but as long as Google Groups does everything I need I'll stick to that. When we discussed this by email earlier, I was trying to put some links into my message, but it appears that I can now do this by just typing them into Google Groups, so unless something comes up I'll stick with that. Mark can tell me if i'm missing something by doing it that way, and if so I'll reassess the situation.


Almost everybody here can answer that, but they likely won't agree
with each other.


>Like I say, I really don't want to get into the weeds here. I've got a few things to say, but don't expect a lot of posts from me.
>
>Bill Jefferys


I agree real-life takes precedence. You have enriched my
understanding, and I hope you will participate as often as you can.

jillery

Dec 4, 2019, 8:10:03 PM12/4/19
to talk-o...@moderators.isc.org
Are you saying that's not Kleinman's argument? Or are you saying that
is his argument but his argument isn't wrong? I am pretty sure I am
correct on both counts.


>> If Kleinman's biological argument was factually correct, his
>> probability calculations based on a dependent probabilities would also
>> be correct. However, his biological argument does not apply to most
>> examples of evolutionary change, and so his probability calculations
>> aren't relevant to them.
>>
>> IOW Kleinman's failure here is a misunderstanding of biology, not of
>> probability. His dependence on his alleged mathematical expertise,
>> and others' alleged ignorance, of probability are non-sequiturs.
>>

jillery

Dec 4, 2019, 8:10:03 PM12/4/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 15:12:25 -0800 (PST), Alan Kleinman MD PhD
<klei...@sti.net> wrote:

>On Wednesday, December 4, 2019 at 2:50:03 PM UTC-8, jillery wrote:
>> On Wed, 4 Dec 2019 10:56:07 -0800 (PST), Alan Kleinman MD PhD
>> <klei...@sti.net> wrote:
>>
>> >On Wednesday, December 4, 2019 at 10:50:03 AM UTC-8, jillery wrote:
>> >> On Wed, 4 Dec 2019 07:43:08 -0800 (PST), Alan Kleinman MD PhD
>> >> <klei...@sti.net> wrote:
>> >>
>> >> >On Wednesday, December 4, 2019 at 7:30:04 AM UTC-8, Bill Jefferys wrote:
>> >> >> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>> >> >> > On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>> >> >>
>> >> >> [Snip]
>> >> >>
>> >> >> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>> >> >jillery's warmth is superficial. As soon as you start explaining correctly the principles of probability theory, she'll start saying your answers to her questions are nonsequiturs and she'll ask you what the probability of your existence is. Try explaining to jillery the difference between a probability and an outcome.
>> >>
>> >>
>> >> Thank you for proving me right once again.
>> >When are you going to ask Bill Jefferys to explain the improbability of his existence?
>>
>>
>> Since you asked, right after Bill Jefferys claims bird didn't evolve
>> from reptiles, or feathers don't fossilize. You're welcome.
>Does Bill Jefferys think that birds evolved from reptiles?


Since you asked, I don't know, and neither do you. He didn't say.
Until he does, you have no basis for assuming anything. You're
welcome.

Oxyaena

Dec 4, 2019, 9:30:03 PM12/4/19
to talk-o...@moderators.isc.org
On 12/4/2019 1:07 PM, Bob Casanova wrote:
> On Wed, 4 Dec 2019 12:08:08 -0500, the following appeared in
> talk.origins, posted by Oxyaena <soc...@is.a.spook>:
>
>> On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:
>
>> [snip]
>>> So tell us, how would you compute the probability of a beneficial mutation B occurring on a member of the population conditional to already having the beneficial mutation A?
>
>> How many times do we have to tell you beneficial mutations do not occur
>> in a vacuum? Whether a mutation is considered detrimental, beneficial,
>> or merely neutral is entirely up to the environment (and in the case of
>> artificial selection, people), nothing more, nothing less. There's
>> nothing inherently intrinsic about so-called "beneficial" mutations
>> outside of selective pressures making them so.
>
> The number you're asking about seems to approach infinity,
> since that point has been made, and ignored, too many times
> to count. Maybe a 2x4 upside the head would help; at least
> it might get his attention.

LMAO.

Let's now calculate the probability that a 2x4 upside the head would
meaningfully affect his behavior....

>
> <Cue irrelevant comment from DocDoc about "2 courses in
> statistics", with gratuitous personal epithets included...>
>

As always.

Oxyaena

Dec 4, 2019, 9:30:03 PM12/4/19
to talk-o...@moderators.isc.org
On 12/4/2019 12:21 PM, Alan Kleinman MD PhD wrote:
> On Wednesday, December 4, 2019 at 9:10:03 AM UTC-8, Oxyaena wrote:
>> On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:
>> [snip]
>>> So tell us, how would you compute the probability of a beneficial mutation B occurring on a member of the population conditional to already having the beneficial mutation A?
>>
>> How many times do we have to tell you beneficial mutations do not occur
>> in a vacuum? Whether a mutation is considered detrimental, beneficial,
>> or merely neutral is entirely up to the environment (and in the case of
>> artificial selection, people), nothing more, nothing less. There's
>> nothing inherently intrinsic about so-called "beneficial" mutations
>> outside of selective pressures making them so.
> So, let's hear you explain the physics and mathematics of the Kishony and Lenski experiments. Let's give Bill Jefferys a lesson on who he is trying to teach probability theory to.

Careful, you're on autopilot again.

RonO

Dec 4, 2019, 9:55:03 PM12/4/19
to talk-o...@moderators.isc.org
I am saying that that isn't what he was wrong about in my case. He
screwed up over 2 years ago and has been lying about it ever since.

Ron Okimoto

RonO

Dec 4, 2019, 10:00:03 PM12/4/19
to talk-o...@moderators.isc.org
My guess is that there is some weird name for what you are. Insanity is
about your only excuse for continuing to lie about what you were wrong
about. You were just wrong and you will never have been correct. You
even understand this to be true or you would have multiplied those two
numbers that you wanted to multiply. How can you not understand that?
The product rule did not apply. If it did you would use it and multiply
those two numbers, but all you ever do is lie about it. You are
supposed to be the math wiz and yet you have never been able to multiply
those two numbers and give the answer.

Ron Okimoto

Mark Isaak

Dec 5, 2019, 2:10:02 AM12/5/19
to talk-o...@moderators.isc.org
On 12/4/19 3:20 PM, Bill Jefferys wrote:
> On Wednesday, December 4, 2019 at 6:00:03 PM UTC-5, jillery wrote:
>> On Wed, 4 Dec 2019 13:54:55 -0800, Mark Isaak
>>>
>>> Actually, it requires two parts: subscribe to a news provider (Eternal
>>> September is the one I also use, and the only one I have any familiarity
>>> with), and install a news reader on your pc (Thunderbird is what I use;
>>> I simply ignore its mail handling functions. And again, I have no
>>> familiarity with others.)
>>
>>
>> Exactly. OTOH Bill Jefferys should know all that already, as he says
>> he was part of T.O. long ago.
>
> Actually not. The computer I used was a department-owned mainframe running UNIX that I logged onto, and it had UseNet and hence talk.origins directly available as a program under that operating system. That computer is long gone, and I now access things via the web from a Mac. Very different.
>
> It appears that I could use Mark's method, but as long as Google Groups does everything I need I'll stick to that. When we discussed this by email earlier, I was trying to put some links into my message, but it appears that I can now do this by just typing them into Google Groups, so unless something comes up I'll stick with that. Mark can tell me if i'm missing something by doing it that way, and if so I'll reassess the situation.

I have used Google Groups very little myself, usually to look for
specific past posts. I have heard others complain about its abilities,
but I don't remember specifics.

Mark Isaak

Dec 5, 2019, 2:25:03 AM12/5/19
to talk-o...@moderators.isc.org
> Competition slows evolution, [...]

Which would imply that evolution is fastest when there is zero
competition. Which is counterfactual, at least for adaptive evolution.

Martin Harran

Dec 5, 2019, 5:05:03 AM12/5/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 15:12:25 -0800 (PST), Alan Kleinman MD PhD
<klei...@sti.net> wrote:

>On Wednesday, December 4, 2019 at 2:50:03 PM UTC-8, jillery wrote:
>> On Wed, 4 Dec 2019 10:56:07 -0800 (PST), Alan Kleinman MD PhD
>> <klei...@sti.net> wrote:
>>
>> >On Wednesday, December 4, 2019 at 10:50:03 AM UTC-8, jillery wrote:
>> >> On Wed, 4 Dec 2019 07:43:08 -0800 (PST), Alan Kleinman MD PhD
>> >> <klei...@sti.net> wrote:
>> >>
>> >> >On Wednesday, December 4, 2019 at 7:30:04 AM UTC-8, Bill Jefferys wrote:
>> >> >> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>> >> >> > On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>> >> >>
>> >> >> [Snip]
>> >> >>
>> >> >> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>> >> >jillery's warmth is superficial. As soon as you start explaining correctly the principles of probability theory, she'll start saying your answers to her questions are nonsequiturs and she'll ask you what the probability of your existence is. Try explaining to jillery the difference between a probability and an outcome.
>> >>
>> >>
>> >> Thank you for proving me right once again.
>> >When are you going to ask Bill Jefferys to explain the improbability of his existence?
>>
>>
>> Since you asked, right after Bill Jefferys claims bird didn't evolve
>> from reptiles, or feathers don't fossilize. You're welcome.
>Does Bill Jefferys think that birds evolved from reptiles? If he does, he certainly hasn't correctly applied his knowledge of probability theory to evolution. Before Bill Jefferys makes a claim like that, he should try to understand why combination therapy works for the treatment of hiv and why each evolutionary step in the Kishony experiment takes a billion replications (random trials). It's that pesky multiplication rule. You know that rule, its the one that reptifeatharians think doesn't apply to biological evolution.

I'm still waiting for you to explain how one of those pesky
reptifeatharians - Robert Peter Gale - came to be a leading expert
in combination therapy as cited by you.

Bill Rogers

Dec 5, 2019, 5:55:03 AM12/5/19
to talk-o...@moderators.isc.org
I've been thinking about that, too. I think that Alan's idea of a situation in which there is no competition is the Kishony experiment. Once a bug has the first drug resistance mutation, it moves into a zone of antibiotic concentration where there's no competition from wild type. Then when its descendants get a second mutation, they move into a zone where there's a higher concentration and no competition from the single mutants, etc. So in his view you are seeing pure, unslowed "adaptation." In his view, in this case, the only math you worry about is the binomial distribution and the mutation rate, which tell you how many replications are required to get the next mutation. Once you've got it, you can ignore the "mathematics of competition" because the mutant finds itself in an environment where the wild type cannot compete with it. Artificial selection would be another example of a case without competition. The breeder simply picks the offspring with the desired trait and kills the others, so there's nothing for the favored line to compete with.

Most of us, I think, would consider the Kishony experiment an example of intense competition, in which successive environments create big fitness differences between lines with or without particular mutations. And I think of an environment without any competition as one in which all genotypes have identical fitness. Obviously, in such an environment, evolutionary adaptation is slowed to a stop.

So Alan truly believes (I guess) that we are all idiots for not seeing that "competition slows adaptation," but that's because he uses idiosyncratic definitions of competition and adaptation and never specifies the situation compared to which adaptation is supposed to be slowed.
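
The "binomial distribution and the mutation rate" arithmetic mentioned
above is easy to make concrete. A minimal sketch, assuming a
per-replication rate of 1e-9 for the specific resistance mutation; that
figure is an illustrative assumption of the order quoted in this thread,
not a measured value.

```python
import math

# P(at least one specific mutant among n replications) = 1 - (1 - mu)^n,
# i.e. one minus the binomial probability of zero successes in n trials.
mu = 1e-9                            # assumed per-replication mutation rate

n_even_odds = math.log(2) / mu       # n giving a 50% chance of a hit: ~6.9e8
p_in_billion = 1 - (1 - mu) ** 1e9   # chance of at least one hit in 1e9 trials

print(n_even_odds, p_in_billion)     # ~6.9e8 replications, probability ~0.63
```

On this arithmetic, "a billion replications per step" is roughly the
scale at which the next mutant becomes likely; the disagreement in the
thread is over what that implies, not over the binomial formula itself.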

jillery

Dec 5, 2019, 8:30:03 AM12/5/19
to talk-o...@moderators.isc.org
drdr polypolymath posts so many wrong statements, it's certainly
possible your case isn't the same as my case. However, my impression
is they are the same case. You say your case makes no mention of
fixation. I mentioned fixation only because he did, as a requirement
that would slow down evolution. However, IIUC fixation doesn't apply
to probability calculations, while his point about the timing of the
appearance of beneficial mutations does. Ok?


>>>> If Kleinman's biological argument was factually correct, his
>>>> probability calculations based on a dependent probabilities would also
>>>> be correct. However, his biological argument does not apply to most
>>>> examples of evolutionary change, and so his probability calculations
>>>> aren't relevant to them.
>>>>
>>>> IOW Kleinman's failure here is a misunderstanding of biology, not of
>>>> probability. His dependence on his alleged mathematical expertise,
>>>> and others' alleged ignorance, of probability are non-sequiturs.
>>>>
>>

jillery

Dec 5, 2019, 9:15:03 AM12/5/19
to talk-o...@moderators.isc.org
As I previously noted to drdr polypolymath, the Kishony experiment
demonstrates strong competition among lineages, even within each zone.
The video of the Kishony experiment shows some lineages multiplying
faster than others and, by doing so, driving other lineages extinct.

I also previously noted to drdr polypolymath, the LTEE imposes
constant and intense competition to reproduce as fast as possible, but
the rate of evolutionary change has decreased dramatically over time.

Bob Casanova

Dec 5, 2019, 3:00:03 PM12/5/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 13:54:55 -0800, the following appeared in
talk.origins, posted by Mark Isaak
<eciton@curiousta/xyz/xonomy.net>:
Ummm... Since I mentioned both Eternal September
(subscription implied) and an installed newsreader, IOW both
parts, I fail to see where "actually" comes from.
--

Bob C.

"The most exciting phrase to hear in science,
the one that heralds new discoveries, is not
'Eureka!' but 'That's funny...'"

- Isaac Asimov

Bob Casanova

Dec 5, 2019, 3:05:03 PM12/5/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 13:54:55 -0800, the following appeared in
talk.origins, posted by Mark Isaak
<eciton@curiousta/xyz/xonomy.net>:
Ummm... Since I mentioned both Eternal September
(subscription implied) and an installed newsreader, IOW both
parts, I fail to see where "actually" comes from.

Followup: OK, I see where it could be interpreted that I was
referring to ET as a newsreader from the way I phrased it;
my bad.

Bob Casanova

unread,
Dec 5, 2019, 3:05:03 PM12/5/19
to talk-o...@moderators.isc.org
On Wed, 4 Dec 2019 21:26:43 -0500, the following appeared in
talk.origins, posted by Oxyaena <soc...@is.a.spook>:

>On 12/4/2019 1:07 PM, Bob Casanova wrote:
>> On Wed, 4 Dec 2019 12:08:08 -0500, the following appeared in
>> talk.origins, posted by Oxyaena <soc...@is.a.spook>:
>>
>>> On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:
>>
>>> [snip]
>>>> So tell us, how would you compute the probability of a beneficial mutation B occurring on a member of the population conditional to already having the beneficial mutation A?
>>
>>> How many times do we have to tell you beneficial mutations do not occur
>>> in a vacuum? Whether a mutation is considered detrimental, beneficial,
>>> or merely neutral is entirely up to the environment (and in the case of
>>> artificial selection, people), nothing more, nothing less. There's
>>> nothing inherently intrinsic about so-called "beneficial" mutations
>>> outside of selective pressures making them so.
>>
>> The number you're asking about seems to approach infinity,
>> since that point has been made, and ignored, too many times
>> to count. Maybe a 2x4 upside the head would help; at least
>> it might get his attention.
>
>LMAO.
>
>Let's now calculate the probability that a 2x4 upside the head would
>meaningfully affect his behavior....

Feel free. Don't forget the error bars! ;-)

>> <Cue irrelevant comment from DocDoc about "2 courses in
>> statistics", with gratuitous personal epithets included...>
>>
>
>As always.
--

Bill Rogers

unread,
Dec 5, 2019, 4:50:03 PM12/5/19
to talk-o...@moderators.isc.org
You were right to note those things.

Oxyaena

unread,
Dec 5, 2019, 5:40:03 PM12/5/19
to talk-o...@moderators.isc.org
On 12/5/2019 3:01 PM, Bob Casanova wrote:
> On Wed, 4 Dec 2019 21:26:43 -0500, the following appeared in
> talk.origins, posted by Oxyaena <soc...@is.a.spook>:
>
>> On 12/4/2019 1:07 PM, Bob Casanova wrote:
>>> On Wed, 4 Dec 2019 12:08:08 -0500, the following appeared in
>>> talk.origins, posted by Oxyaena <soc...@is.a.spook>:
>>>
>>>> On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:
>>>
>>>> [snip]
>>>>> So tell us, how would you compute the probability of a beneficial mutation B occurring on a member of the population conditional to already having the beneficial mutation A?
>>>
>>>> How many times do we have to tell you beneficial mutations do not occur
>>>> in a vacuum? Whether a mutation is considered detrimental, beneficial,
>>>> or merely neutral is entirely up to the environment (and in the case of
>>>> artificial selection, people), nothing more, nothing less. There's
>>>> nothing inherently intrinsic about so-called "beneficial" mutations
>>>> outside of selective pressures making them so.
>>>
>>> The number you're asking about seems to approach infinity,
>>> since that point has been made, and ignored, too many times
>>> to count. Maybe a 2x4 upside the head would help; at least
>>> it might get his attention.
>>
>> LMAO.
>>
>> Let's now calculate the probability that a 2x4 upside the head would
>> meaningfully affect his behavior....
>
> Feel free. Don't forget the error bars! ;-)

Another 2x4?

j.nobel...@gmail.com

unread,
Dec 5, 2019, 8:50:03 PM12/5/19
to talk-o...@moderators.isc.org
On Thursday, December 5, 2019 at 2:25:03 AM UTC-5, Mark Isaak wrote:
The fact that competition slows evolution is a well known finding of
population genetics. And then we get into the definition(s) of evolution.
With the definition of change in allele frequencies over time, the
result is fairly trivial. Only in rare instances like extremely strong
selection starting from very low frequency of a highly beneficial allele
will selection speed up evolution according to the change in allele
frequency definition. And that only temporarily because the beneficial
allele becomes fixed and, presumably, alternatives will be selected
against and pruned away.

The general objection from the armchair is that such is not what many
think of as evolution. They are thinking about starting from some wolf
ancestor and applying selection to create many breeds of dogs. The
apparent morphological variation seems like evolution to them. And that's
done by applying selection.

There are myriad problems with that view. One non-obvious one (I think)
is that if they are comparing the domesticated dogs to pre-domestication
wolves, they aren't actually making a comparison to a case of no selection.
Unless the population of wolves is growing geometrically at nearly the
birth rate (essentially meaning every pup goes on to reproduce, with
equal chance, in equal-sized litters), there is selection at work in
the wolves too; you just aren't accounting for all the factors
involved in selection.

The other problem is that it involves applying a twisted ruler to measure
rates of evolution. It's applying an arbitrary (and post-hoc) definition
of what counts as significant to that which has changed. Size,
posture, fur type, fur length, temperament: that these are "significant"
is just a capricious value judgement. It may hurt our feelings to admit
that what superficially seems significant to us isn't a good metric,
but it isn't, not really.

One might retreat and say "well, at least it's the subset that is
consequential to the expressed phenotype"; surely it is those things
that are expressed in divergent phenotypes that matter.

On this, philosophical differences might be debated, but they lead
into a web of complications. Phenotypes very often need to be weighed
in an environmental context. Resistance to a certain pathogen depends
on the existence
and prevalence of that pathogen. And how does one compare diversity in
one trait to diversity in another? Inevitably, it seems, we fall back
to what does the observer happen to care about at that time in some
arbitrary context.

But if we retreat to the foundation, the differences at the genetic
level, we have a robust metric. And there, we have good mathematical
models: population genetics. And they affirm that selection almost
always slows evolution.

If that still deeply bothers you (generic you, anyone foolish enough
or bored enough to have read this far), here's some comfort. It's
almost entirely academic because there is essentially always some selection.

The real question then, is how much selection, and what type.
That's a topic for another day.

Martin Harran

unread,
Dec 6, 2019, 5:50:03 AM12/6/19
to talk-o...@moderators.isc.org
[snip outstanding contribution for brevity]

Bill Jeffries has given an excellent explanation of the various
aspects of probability - it's a pity that there are not more teachers
like him - but, unfortunately, I do not think it addresses the real
issue here.

Kleinman claims ad nauseam that the problem is to do with what he
calls "the multiplication rule" and his insistence that people here do
not have even a basic grasp of statistics. The real issue, however, is
the unwarranted significance that Kleinman attaches to a specific
outcome - one that happened to lead to reptiles gaining feathers.

There are two aspects to the impact of mutations on evolution; one is
the occurrence of random sequences of mutations and the other is the
influence of natural selection on any particular sequence of mutations
enduring to eventually create a different lifeform. In this post, I am
only going to focus on the first aspect.

Let's consider this by looking at the ubiquitous coin tossing
calculations. Three different people, A, B and C each toss a coin 10
times and get the following results:

Thrower A: H-T-T-H-T-H-H-T-H-T
Thrower B: H-H-H-H-H-T-T-T-T-T
Thrower C: H-H-H-H-H-H-H-H-H-H

Thrower A is unlikely to see anything out of the ordinary in his
results. Thrower B is likely to be bemused by his result - five
consecutive heads followed by five consecutive tails seems unusual but
as there are an equal number of heads and tails, he will not have any
reason to suspect any bias in the coin used and will probably put it
down to just one of those odd things that happen now and then. Thrower
C is unlikely to regard his results as random and will probably be
highly suspicious that there is something peculiar about the coin
used.

The reality, however, is that each of those sets of results had an
identical probability of occurring; there is absolutely no
significance about the results for Thrower B or Thrower C except for
the significance that the throwers themselves attribute to patterns
they see in the outcome.
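The arithmetic behind that claim is easy to verify; here is a quick
Python sketch (the layout is just illustrative):

```python
# Each specific sequence of 10 fair coin tosses has probability (1/2)**10,
# no matter how "patterned" it looks to the thrower.
sequences = {
    "A": "HTTHTHHTHT",
    "B": "HHHHHTTTTT",
    "C": "HHHHHHHHHH",
}

for name, seq in sequences.items():
    p = 0.5 ** len(seq)
    print(f"Thrower {name}: {seq}  P = {p}")  # 0.0009765625 for all three
```

All three sequences come out at exactly the same probability.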

This unwarranted significance being placed on a particular sequence of
mutations can also be illustrated by considering the results of the
Florida lotto.

The odds of 6 particular numbers being drawn in the lotto are 1 in
22,957,480. The odds of a sequence of 312 particular numbers being
drawn work out at just over 1 chance in 10^382 [1], but such a
specific sequence of numbers has in fact been drawn out in weekend
draws over the last twelve months; according to Kleinman's
"multiplication rule", however, that sequence is so improbable that we
can effectively dismiss it as not having happened!
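An ordinary calculator overflows on a number like 22,957,480^52, but
working in logarithms makes the figure easy to check; a Python sketch
(assuming the 312 numbers came from 52 weekend draws of 6 numbers each):

```python
import math

ODDS_ONE_DRAW = 22_957_480   # 1 in ~23 million for one 6-number draw
DRAWS = 312 // 6             # 312 numbers drawn 6 at a time = 52 draws

# log10 of the product of 52 independent draws (multiplication rule):
log10_odds = DRAWS * math.log10(ODDS_ONE_DRAW)
print(f"1 chance in 10^{log10_odds:.1f}")  # ~10^382.8, i.e. just over 10^382
```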

Along the way, that sequence of numbers has produced a number of
winners, some very big winners, a lot of very small winners and an
overwhelming number of losers. Some sequence of numbers had to come
out and if a different one had come out, it would simply have produced
a different set of winners and losers.

The exact same principle applies in regard to the role of mutations in
evolution. Kleinman looks at a pattern of mutations that happened to
lead to reptiles gaining feathers; that getting of feathers looks so
outstanding to him that he cannot accept that that sequence of
mutations was entirely random or unplanned. In reality, however, that
pattern or sequence of mutations was just one of an infinite range of
patterns or sequences that could have occurred, it was just the one
that happened to occur and happened to lead to the gaining of
feathers. If the sequence of mutations had been different, then maybe
feathers would not have been gained but there may well have been some
other outcome; reptiles may well have evolved into apes or fish or
insects. On the other hand, there might have been no evolution at
all. That is exactly what we see in the incredible diversity of
life forms around us, some lifeforms remaining unchanged for very long
periods of time but different sequences of mutations leading to a
massive variety of outcomes - just one of which happened to involve
reptiles gaining feathers.


=====================
[1] I would love to have calculated the odds for the total sequence of
numbers of the last 30+ years but unfortunately it is beyond both my
own mathematical competence and the range that either my calculator or
computer can handle!

Bill Rogers

unread,
Dec 6, 2019, 6:10:04 AM12/6/19
to talk-o...@moderators.isc.org
That's an excellent summary. I think the reason Kleinman falls into this trap is this. He mostly thinks about drug resistance. In drug resistance, you really can set up situations where there are essentially target mutations, specified in advance, which are required for survival. Even in those cases nature can surprise you with unexpected mechanisms of resistance, but you often are not far wrong - for some drugs, streptomycin in TB or atovaquone for malaria, a single, specific point mutation in the relevant gene produces clinical resistance, so you can almost think of that mutation as a pre-specified target. If you spend most of your time thinking about such situations, it may be easier to fall into the trap of thinking that, for example, feathers were a pre-existing target of evolution. Most people, though, would quickly recognize the error when it was pointed out to them.

Martin Harran

unread,
Dec 6, 2019, 6:50:03 AM12/6/19
to talk-o...@moderators.isc.org
On Fri, 06 Dec 2019 10:46:32 +0000, Martin Harran
<martin...@gmail.com> wrote:


[...]

>Bill Jeffries ...

Apologies - should be Bill Jefferys.

Reentrant

unread,
Dec 6, 2019, 8:05:03 AM12/6/19
to talk-o...@moderators.isc.org
This "landscape of possibilities" is exactly Dawkins' argument in
"Climbing Mount Improbable" (1996).

--
Reentrant

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 8:35:04 AM12/6/19
to talk-o...@moderators.isc.org
On Wednesday, December 4, 2019 at 6:30:03 PM UTC-8, Oxyaena wrote:
> On 12/4/2019 12:21 PM, Alan Kleinman MD PhD wrote:
> > On Wednesday, December 4, 2019 at 9:10:03 AM UTC-8, Oxyaena wrote:
> >> On 12/4/2019 8:30 AM, Alan Kleinman MD PhD wrote:
> >> [snip]
> >>> So tell us, how would you compute the probability of a beneficial mutation B occurring on a member of the population conditional to already having the beneficial mutation A?
> >>
> >> How many times do we have to tell you beneficial mutations do not occur
> >> in a vacuum? Whether a mutation is considered detrimental, beneficial,
> >> or merely neutral is entirely up to the environment (and in the case of
> >> artificial selection, people), nothing more, nothing less. There's
> >> nothing inherently intrinsic about so-called "beneficial" mutations
> >> outside of selective pressures making them so.
> > So, let's hear you explain the physics and mathematics of the Kishony and Lenski experiments. Let's give Bill Jefferys a lesson on who he is trying to teach probability theory to.
>
> Careful, you're on autopilot again.
You certainly aren't the one to explain that DNA evolution is a Markov process.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 8:35:04 AM12/6/19
to talk-o...@moderators.isc.org
Why don't you demonstrate to Bill Jefferys your mastery of probability theory by showing him how you plug numbers into StatTrek?

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 8:40:03 AM12/6/19
to talk-o...@moderators.isc.org
Perhaps because he read Edward Tatum's 1958 Nobel Laureate Lecture? You should read it, you might learn something important about how evolution works.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 8:40:03 AM12/6/19
to talk-o...@moderators.isc.org
Still no expertise in population genetics despite taking a graduate-level course in the subject? Then why does the Kishony experiment demonstrate that evolutionary adaptation occurs much more rapidly than the Lenski experiment? You really aren't very good at this subject.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 9:05:03 AM12/6/19
to talk-o...@moderators.isc.org
You are starting to get the point, Rogers. And also note that the drug-sensitive variants are happily growing in their regions, not driven to extinction as would happen if a fixation (competition) process were occurring.
>
> Most of us, I think, would consider the Kishony experiment an example of intense competition, in which successive environments create big fitness differences between lines with or without particular mutations. And I think of an environment without any competition as one in which all genotypes have identical fitness. Obviously, in such an environment, evolutionary adaptation is slowed to a stop.
And your thought would be incorrect. Lenski's ancestral populations are grown in much larger carrying-capacity environments with low selection pressure, which allows many more different variants to arise and survive. That's why he had drug-resistant variants even though these ancestral populations were never exposed to those drugs. Only by putting these different variants into the particular environment where their particular mutations give improved fitness would you know these variants exist. Putting these drug-resistant variants into a starvation environment, as Lenski does, shows that drug resistance imposes a small energy cost on these variants, and so they are selected out. Likewise, in the Kishony experiment, in his billion-member colony in the drug-free region, a mutation occurs that allows growth in the low-concentration region of Ciprofloxacin. But in that same colony, there is also a member with a mutation that allows growth in a low concentration of Trimethoprim. Only in the appropriate environment will these particular variants be demonstrated.
>
> So Alan truly believes (I guess) that we are all idiots for not seeing that "competition slows adaptation," but that's because he uses idiosyncratic definitions of competition and adaptation and never specifies the situation compared to which adaptation is supposed to be slowed.
And Lenski's team knows that competition slows evolution in his experiment as well and the explanation is quite simple. Competition reduces the number of replications for all variants. This slows the process of DNA evolution because the replication is the random trial for the next beneficial mutation. You need to learn that competition and evolutionary adaptation are different physical and mathematical processes. Once you understand this, you might make some progress with your malaria problem.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 9:05:03 AM12/6/19
to talk-o...@moderators.isc.org
Rogers, there are target mutations when you are talking about evolutionary adaptation. If there weren't, sequencing HIV for particular mutations would be a wasted effort. You are in a trap that prevents you from correctly understanding the evolution of drug-resistance.

Bill Jefferys

unread,
Dec 6, 2019, 9:05:04 AM12/6/19
to talk-o...@moderators.isc.org
No problem, Martin. It is a very unusual spelling and it's often gotten wrong. I'm quite used to it after all these years :)

Thanks for an excellent discussion. You are right that what I wrote did not answer (and was not intended to answer) Dr. Kleinman's claims, it was simply meant to make clear the importance of conditional probability and how, when used carelessly, it can lead to significant problems. I appreciate your following up with your discussion of Dr. Kleinman's claims.

Bill

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 9:40:03 AM12/6/19
to talk-o...@moderators.isc.org
Martin Harran should try to do the coin tossing problem with asymmetric outcomes because that is the evolutionary adaptation problem. On each replication (the "coin toss"), the beneficial mutation either occurs or it doesn't. DNA evolution is nothing more than nested asymmetric binomial probability problems (a Markov chain process) where the binomial probability problems are linked to each other by the multiplication rule (as computed using conditional probabilities). And because of the asymmetry between outcomes at each evolutionary step, it takes a lot of replications of the particular variant for there to be a reasonable probability of the next evolutionary step occurring.
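A minimal Python sketch of that asymmetric binomial calculation (the
mutation rate of 1e-9 is assumed here purely for illustration):

```python
# Treat each replication as a Bernoulli trial with success probability mu,
# the assumed beneficial-mutation rate. The chance that a specific
# beneficial mutation appears at least once in n replications:
mu = 1e-9

def p_at_least_one(n, mu=mu):
    """P(specific mutation occurs at least once in n replications)."""
    return 1 - (1 - mu) ** n

for n in (10**6, 10**9, 10**10):
    print(f"n = {n:.0e}: P = {p_at_least_one(n):.5f}")
```

With a million replications the probability is tiny; only around a
billion replications does it become appreciable.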


jillery

unread,
Dec 6, 2019, 9:45:03 AM12/6/19
to talk-o...@moderators.isc.org
If that's an excellent summary, it's only because it summarizes what
you and I and others have already posted.

Bill Rogers

unread,
Dec 6, 2019, 9:45:03 AM12/6/19
to talk-o...@moderators.isc.org
As I said, in the case of drug resistance, for example HIV resistance to anti-virals, you don't go wildly astray by acting as though there are specific target mutations required for survival.
Your mistake is in generalizing that situation to evolution as a whole.

Even in the Lenski experiment, each of the 12 replicates of the experiment used a different pathway and different set of mutations to reach improved fitness. There was clearly no target.

Look at the set of wildly different organisms that live in the ocean and feed on plankton - there's clearly no "target" of evolutionary adaptation there, just a bunch of very different ways of improving fitness in that environment.

Laboratory studies of mutation often create a situation where there is a single or a few specific mutations that will improve fitness to a defined, experimentally controlled selection pressure - drug selection, starvation, heat stress, reversion of auxotrophic mutants. Those situations are all created deliberately to be very different from evolution in nature. They can be good models for limited aspects of mutation and selection. You err by treating them as models for evolution as a whole.

>You are in a trap that prevents you from correctly understanding the evolution of drug-resistance.

The evolution of drug resistance is not hard to understand; a single selection pressure aimed at a single metabolic pathway and a limited number of potential mutations leading to a limited number of potential resistance mechanisms. Your "model" for the evolution of drug resistance ignores a lot of interesting details, but it's not terribly wrong. It's just not a model of evolution in nature.


Martin Harran

unread,
Dec 6, 2019, 9:50:03 AM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 05:39:30 -0800 (PST), Alan Kleinman MD PhD
Very poor attempt at handwaving, Alan. You persistently claim that
reptifeatharians haven't a clue about multiple drug therapy and kill
people, yet you cite a reptifeatharian as a leading authority in the
field. The two things contradict each other, you really need to
withdraw one of them.

Martin Harran

unread,
Dec 6, 2019, 9:55:03 AM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 06:02:32 -0800 (PST), Bill Jefferys
<billje...@gmail.com> wrote:

>On Friday, December 6, 2019 at 6:50:03 AM UTC-5, Martin Harran wrote:
>> On Fri, 06 Dec 2019 10:46:32 +0000, Martin Harran
>> <martin...@gmail.com> wrote:
>>
>>
>> [...]
>>
>> >Bill Jeffries ...
>>
>> Apologies - should be Bill Jefferys.
>
>No problem, Martin. It is a very unusual spelling and it's often gotten wrong. I'm quite used to it after all these years :)

Tell me about it; I have 9 brothers whose surnames end in "on" but
mine ends in "an" due to the vagaries of someone different registering
my birth - whatever you are on the birth register is your legal name
unless you go through the process of legally changing it.

Mind you, when my dad passed on and my sister had to get his birth
certificate, we discovered that he too was registered as "an" so I was
real chuffed to be able to tell my brothers that they really all were
out of step except me!

And don't even start me on getting called by every first name except
my own :)

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 10:25:04 AM12/6/19
to talk-o...@moderators.isc.org
That's the facts, Jack. Even in Lenski's experiment, he is identifying specific mutations which give improved fitness. Why don't you present a real, measurable, and repeatable example which shows otherwise?
> Your mistake is in generalizing that situation to evolution as a whole.
Your mistake is not recognizing that most mutations are detrimental. Only in very specific cases are particular mutations beneficial.
>
> Even in the Lenski experiment, each of the 12 replicates of the experiment used a different pathway and different set of mutations to reach improved fitness. There was clearly no target.
Sure there is a target in the Lenski experiment: it is improving fitness in a starvation environment. And there can be more than one evolutionary trajectory to that target, but each trajectory obeys the mathematics that I've presented. And there are specific mutations which allow his population to reach this target.
>
> Look at the set of wildly different organisms that live in the ocean and feed on plankton - there's clearly no "target" of evolutionary adaptation there, just a bunch of very different ways of improving fitness in that environment.
Why aren't there targets? Plankton are prey; they are subject to temperature stresses, sunlight availability, nutrient availability,... And each of these stressors imposes a different set of mutations that give improved fitness against that stressor, and the probability of the mutations occurring is subject to the same mathematics as the Kishony or Lenski experiments. And the probability of a lineage accumulating the mutations to improve fitness against all these selection pressures is dependent on the number of replications these populations can do. If you want to learn how to do this math, read this paper:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6175190/
>
> Laboratory studies of mutation often create a situation where there is a single or a few specific mutations that will improve fitness to a defined, experimentally controlled selection pressure - drug selection, starvation, heat stress, reversion of auxotrophic mutants. Those situations are all created deliberately to be very different from evolution in nature. They can be good models for limited aspects of mutation and selection. You err by treating them as models for evolution as a whole.
These lab experiments limit the number of selection pressures to 1 because any experiment with more than a single selection pressure acting will require exponentially larger populations. This is why Kishony hasn't run his experiment with two drugs: his petri dish would have to be exponentially larger for that experiment to work. The same applies to Lenski's experiment, and this is the reason combination therapy works for the treatment of HIV.
>
> >You are in a trap that prevents you from correctly understanding the evolution of drug-resistance.
>
> The evolution of drug resistance is not hard to understand; a single selection pressure aimed at a single metabolic pathway and a limited number of potential mutations leading to a limited number of potential resistance mechanisms. Your "model" for the evolution of drug resistance ignores a lot of interesting details, but it's not terribly wrong. It's just not a model of evolution in nature.
The point you continue to miss is that the greater the number of selection pressures, the greater the number of instances of the multiplication rule applied in the evolutionary process. And when you are talking about the highly asymmetric binomial probability problem associated with evolutionary adaptation, that means a vast number of replications is necessary for these kinds of evolutionary processes to have a reasonable probability of occurring.


Alan Kleinman MD PhD

unread,
Dec 6, 2019, 10:35:03 AM12/6/19
to talk-o...@moderators.isc.org
There's a lesson to learn here, Martin, if you are willing. There is a mathematical reason why combination therapy works at suppressing the evolutionary process. Each evolutionary step is subject to the multiplication rule. And if you force a population to evolve to multiple selection pressures simultaneously, you are imposing multiple instances of the multiplication rule simultaneously. Edward Tatum recognized this and spoke about it in his Nobel Laureate Lecture. Try doing your coin tossing problem with two coins, a nickel and a dime. But instead of symmetric outcomes, let the probability of getting a head be the mutation rate (e.g., 1e-9) and the probability of getting a tail be (1 - 1e-9). Now compute your probability of getting both heads on the toss of your two asymmetric-outcome coins.
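A minimal sketch of that two-coin computation in Python (mu = 1e-9 is
the assumed mutation rate; the coin names are just labels):

```python
mu = 1e-9                   # assumed probability of "heads" (a beneficial mutation)

p_head_nickel = mu
p_head_dime = mu

# Multiplication rule for two independent asymmetric coins:
p_both_heads = p_head_nickel * p_head_dime   # 1e-18
p_both_tails = (1 - mu) ** 2                 # overwhelmingly the likely outcome

print(p_both_heads)   # 1e-18
```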

Bill Rogers

unread,
Dec 6, 2019, 11:30:04 AM12/6/19
to talk-o...@moderators.isc.org
There are specific mutations which occurred, but they were not a target. Different combinations of mutations occurred in each of the 12 lines. You cannot look at the specific mutations that occurred in these lines and then calculate the probability of their occurring as if they were the only possibilities from the start.

That's why Jillery and I keep asking you to account for the enormous improbability of your own birth. Any real event that has happened, if you had specified it in sufficient detail far enough in advance, would have been fantastically improbable. And that's the problem you have. You keep drawing bull's eyes around arrows after they've hit the side of the barn.

> >
> > Look at the set of wildly different organisms that live in the ocean and feed on plankton - there's clearly no "target" of evolutionary adaptation there, just a bunch of very different ways of improving fitness in that environment.
> Why aren't there targets? Plankton is a prey, they are subject to temperature stresses, sun-light availability, nutrient availability,... And each of these stressors imposes a different set of mutations that give improved fitness to these stressors and the probability of the mutations occurring is subject to the same mathematics as the Kishony or Lenski experiments. And the probability of a lineage accumulating the mutations to improve fitness against all these selection pressures is dependent on the number of replications these populations can do. If you want to learn how to do this math, read this paper:
> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6175190/
> >
> > Laboratory studies of mutation often create a situation where there is a single or a few specific mutations that will improve fitness to a defined, experimentally controlled selection pressure - drug selection, starvation, heat stress, reversion of auxotrophic mutants. Those situations are all created deliberately to be very different from evolution in nature. They can be good models for limited aspects of mutation and selection. You err by treating them as models for evolution as a whole.
> These lab experiments limit the number of selection pressures to 1 because any experiment with more than a single selection pressure acting will require exponentially larger populations. This is why Kishony hasn't run his experiment with two drugs because his petri dish would have to be exponentially larger for that experiment to work. The same applies to Lenski's experiment and this is the reason combination therapy works for the treatment of hiv.
> >
> > >You are in a trap that prevents you from correctly understanding the evolution of drug-resistance.
> >
> > The evolution of drug resistance is not hard to understand; a single selection pressure aimed at a single metabolic pathway and a limited number of potential mutations leading to a limited number of potential resistance mechanisms. Your "model" for the evolution of drug resistance ignores a lot of interesting details, but it's not terribly wrong. It's just not a model of evolution in nature.
> The point you continue to miss is that the greater the number of selection pressures, the greater the number of instances of the multiplication rule applied in the evolutionary process.

I keep missing that point because it is not true. If you are talking about lethal selection pressures applied suddenly and chosen such that no single mutation will allow the organism to survive, then, sure, the multiplication rule applies. And that's what happens in triple drug therapy for TB.

But in nature, in a stable population in a natural environment, there are multiple selection pressures acting all the time. An organism which is better at dealing with one of those selection pressures will have an advantage over the others without waiting to have acquired a set of mutations that gives it an advantage over every single one of the selection pressures.

I've given you this analogy before, but here goes. I have a certain risk of dying before age 70 from cancer, a certain risk from cardiovascular disease, a certain risk from accidents, a certain risk from infectious disease. Combine all those risks and you get an overall risk of dying before age 70. If I do something that decreases my risk of dying from cardiovascular disease, I reduce my overall risk of dying before age 70, even if all the other risks are unchanged.
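That analogy can be put in numbers; a Python sketch with entirely
made-up risk figures:

```python
# Hypothetical, made-up risks of dying before 70 from independent causes:
risks = {"cancer": 0.15, "cardiovascular": 0.20,
         "accident": 0.05, "infection": 0.02}

def overall_risk(r):
    """Combined risk of dying before 70, assuming independent causes."""
    p_survive_all = 1.0
    for p in r.values():
        p_survive_all *= 1 - p      # survive each cause independently
    return 1 - p_survive_all

before = overall_risk(risks)
risks["cardiovascular"] = 0.10      # improve against just one pressure
after = overall_risk(risks)
print(before, after)                # overall risk falls too
```

Reducing a single risk lowers the combined risk even though every other
risk is unchanged.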

The question is not "how many selection pressures are there?" but "how many mutations are required before you can have an increase in fitness?" A single selection pressure might require multiple mutations before there's any improvement in fitness (e.g. chloroquine or Fansidar treatment of malaria) or, as in most cases in nature, a single mutation might allow an improvement in fitness in the presence of multiple selection pressures (e.g. mild heat stress plus mild osmotic shock plus mild nutrient scarcity).

Most lab experiments are designed to look at strong selection pressures (like Lenski) or lethal ones (like Kishony) because you can get clear results relatively quickly. But, with the exception of our driving species to extinction or asteroids hitting the earth, natural populations don't generally face selection pressures that eliminate 99.99% of the adult population in each generation.

Burkhard

unread,
Dec 6, 2019, 11:55:03 AM12/6/19
to talk-o...@moderators.isc.org
Martin Harran wrote:
> On Fri, 6 Dec 2019 06:02:32 -0800 (PST), Bill Jefferys
> <billje...@gmail.com> wrote:
>
>> On Friday, December 6, 2019 at 6:50:03 AM UTC-5, Martin Harran wrote:
>>> On Fri, 06 Dec 2019 10:46:32 +0000, Martin Harran
>>> <martin...@gmail.com> wrote:
>>>
>>>
>>> [...]
>>>
>>>> Bill Jeffries ...
>>>
>>> Apologies - should be Bill Jefferys.
>>
>> No problem, Martin. It is a very unusual spelling and it's often gotten wrong. I'm quite used to it after all these years :)
>
Tell me about it; I have 9 brothers whose surnames end in "on" but
mine ends in "an" due to the vagaries of someone different registering
my birth - whatever name is on the birth register is your legal name
unless you go through the process of legally changing it.


Reminds me of the Terry Pratchett book where one of the characters ends
up being officially called "Esmerelda Margaret Note Spelling of Lancre"
due to the local custom that whatever the priest says at the naming
ceremony is your name - the result being a king called
My-God-He's-Heavy the First, and a farmer named James What the Hell's
That Cow Doing in Here Poorchick.

Martin Harran

unread,
Dec 6, 2019, 12:10:03 PM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 07:34:13 -0800 (PST), Alan Kleinman MD PhD
I'd be much more interested in hearing you explain how a
reptifeatharian can be an expert in multi-drug therapy, but I have a
pretty good idea that the probability of you doing so is just about
zero.

Martin Harran

unread,
Dec 6, 2019, 12:10:03 PM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 16:54:09 +0000, Burkhard <b.sc...@ed.ac.uk>
wrote:

>Martin Harran wrote:
>> On Fri, 6 Dec 2019 06:02:32 -0800 (PST), Bill Jefferys
>> <billje...@gmail.com> wrote:
>>
>>> On Friday, December 6, 2019 at 6:50:03 AM UTC-5, Martin Harran wrote:
>>>> On Fri, 06 Dec 2019 10:46:32 +0000, Martin Harran
>>>> <martin...@gmail.com> wrote:
>>>>
>>>>
>>>> [...]
>>>>
>>>>> Bill Jeffries ...
>>>>
>>>> Apologies - should be Bill Jefferys.
>>>
>>> No problem, Martin. It is a very unusual spelling and it's often gotten wrong. I'm quite used to it after all these years :)
>>
>> Tell me about it; I have 9 brothers whose surnames end in "on" but
>> mine ends in "an" due to the vagaries of someone different registering
>> my birth - whatever name is on the birth register is your legal name
>> unless you go through the process of legally changing it.
>
>
>Reminds me of the Terry Pratchett book where one of the characters ends
>up being officially called "Esmerelda Margaret Note Spelling of Lancre"
>due to the local custom that whatever the priest says at the naming
>ceremony is your name - the result being a king called
>My-God-He's-Heavy the First, and a farmer named James What the Hell's
>That Cow Doing in Here Poorchick.

Which in turn reminds me of the story about the 3 wise men arriving in
Jerusalem. As they enter the stable, one of them trips over the step
and exclaims "Jesus Christ!"

Joseph turns to his wife and remarks "That sounds like a nice name,
Mary."

Tim Norfolk

unread,
Dec 6, 2019, 12:15:04 PM12/6/19
to talk-o...@moderators.isc.org
A trick we used to use in the early days of UNIX was to just add logs for large/small number multiplication.
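The same trick still earns its keep today: a long product of small probabilities underflows double precision, while the sum of their logarithms does not. A minimal sketch:

```python
import math

# Multiplying many tiny probabilities directly underflows to 0.0;
# summing their logarithms keeps the computation stable.
probs = [1e-9] * 40          # e.g. 40 independent rare events

direct = 1.0
for p in probs:
    direct *= p              # 1e-360 is below the smallest double (~5e-324)

log_product = sum(math.log(p) for p in probs)   # = 40 * log(1e-9)

print(direct)       # underflows to 0.0
print(log_product)  # about -828.9, still exact
```

Summing logs is how most statistical software computes likelihoods for exactly this reason.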

Bob Casanova

unread,
Dec 6, 2019, 12:25:03 PM12/6/19
to talk-o...@moderators.isc.org
On Thu, 5 Dec 2019 17:39:34 -0500, the following appeared in
Wouldn't that be an "error plank"?
--

Bob C.

"The most exciting phrase to hear in science,
the one that heralds new discoveries, is not
'Eureka!' but 'That's funny...'"

- Isaac Asimov

Bob Casanova

unread,
Dec 6, 2019, 12:25:03 PM12/6/19
to talk-o...@moderators.isc.org
On Thu, 05 Dec 2019 12:59:39 -0700, the following appeared
in talk.origins, posted by Bob Casanova <nos...@buzz.off>:

>On Wed, 4 Dec 2019 13:54:55 -0800, the following appeared in
>talk.origins, posted by Mark Isaak
><eciton@curiousta/xyz/xonomy.net>:
>
>>On 12/4/19 10:03 AM, Bob Casanova wrote:
>>> On Wed, 4 Dec 2019 08:07:25 -0800 (PST), the following
>>> appeared in talk.origins, posted by Bill Jefferys
>>> <billje...@gmail.com>:
>>>
>>>> On Wednesday, December 4, 2019 at 10:40:03 AM UTC-5, André G. Isaak wrote:
>>>>> On 2019-12-04 8:27 a.m., Bill Jefferys wrote:
>>>>>> On Wednesday, December 4, 2019 at 5:10:03 AM UTC-5, jillery wrote:
>>>>>>> On Tue, 3 Dec 2019 18:28:21 -0800 (PST), Bill Jefferys wrote:
>>>>>>
>>>>>> [Snip]
>>>>>>
>>>>>> I thank Jillery for her warm welcome. Actually, I am not a "first poster", because I was involved with talk.origins way back in the days of UseNet; but my academic responsibilities (e.g., being department chair, stuff like that) meant that I had to curtail my participation and I just stopped. It was only recently that I learned that the group had migrated to Google Groups.
>>>>>
>>>>> Actually, T.O is still on usenet. Google Groups simply provides
>>>>> web-based access to it.
>>>
>>>> In email Jillery mentioned this, but I have no idea how to access it directly.
>>>
>>> If you're using a PC you own on which you can install
>>> applications, all you need is to install a newsreader
>>> (Eternal September is free, and seems to be a favorite of
>>> many) and use it to download a list of groups, from which
>>> you can select the ones you want to visit.
>>>
>>> There's a description here...
>>>
>>> https://www.newsgroupreviews.com/eternal-september.html
>>>
>>> .... including addresses for the ET server.
>>
>>Actually, it requires two parts: subscribe to a news provider (Eternal
>>September is the one I also use, and the only one I have any familiarity
>>with), and install a news reader on your pc (Thunderbird is what I use;
>>I simply ignore its mail handling functions. And again, I have no
>>familiarity with others.)
>
>Ummm... Since I mentioned both Eternal September
>(subscription implied) and an installed newsreader, IOW both
>parts, I fail to see where "actually" comes from.
>
>Followup: OK, I see where it could be interpreted that I was
>referring to ET as a newsreader from the way I phrased it;
>my bad.

Aargghhh! "ES"! Maybe I should avoid initials...

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 12:30:04 PM12/6/19
to talk-o...@moderators.isc.org
There's a difference between understanding that combination therapy works and understanding why it works. Edward Tatum explained why it works and I did the math.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 12:30:04 PM12/6/19
to talk-o...@moderators.isc.org
Improving fitness is the target; mutations are the arrows. Replication fires those arrows, and they occur at very low frequency (the mutation rate). You'd better start looking at specific mutations if you want to understand how evolutionary adaptation works.
>
> That's why Jillery and I keep asking you to account for the enormous improbability of your own birth. Any real event that has happened, if you had specified it in sufficient detail far enough in advance, would have been fantastically improbable. And that's the problem you have. You keep drawing bull's-eyes around arrows after they've hit the side of the barn.
Why do you take up this silly analogy? That's like saying it is impossible to ever deal a game of cards because each of the possible outcomes is so highly improbable. I just happened to be the lucky winner in that deal. If you want advice on probability theory, sillery is the last person to ask for that kind of advice.
>
> > >
> > > Look at the set of wildly different organisms that live in the ocean and feed on plankton - there's clearly no "target" of evolutionary adaptation there, just a bunch of very different ways of improving fitness in that environment.
> > Why aren't there targets? Plankton are prey; they are subject to temperature stresses, sunlight availability, nutrient availability, ... And each of these stressors selects for a different set of mutations that give improved fitness against that stressor, and the probability of the mutations occurring is subject to the same mathematics as the Kishony or Lenski experiments. And the probability of a lineage accumulating the mutations to improve fitness against all these selection pressures depends on the number of replications these populations can do. If you want to learn how to do this math, read this paper:
> > https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6175190/
> > >
> > > Laboratory studies of mutation often create a situation where there is a single or a few specific mutations that will improve fitness to a defined, experimentally controlled selection pressure - drug selection, starvation, heat stress, reversion of auxotrophic mutants. Those situations are all created deliberately to be very different from evolution in nature. They can be good models for limited aspects of mutation and selection. You err by treating them as models for evolution as a whole.
> > These lab experiments limit the number of selection pressures to 1 because any experiment with more than a single selection pressure acting will require exponentially larger populations. This is why Kishony hasn't run his experiment with two drugs: his petri dish would have to be exponentially larger for that experiment to work. The same applies to Lenski's experiment, and this is the reason combination therapy works for the treatment of HIV.
> > >
> > > >You are in a trap that prevents you from correctly understanding the evolution of drug-resistance.
> > >
> > > The evolution of drug resistance is not hard to understand; a single selection pressure aimed at a single metabolic pathway and a limited number of potential mutations leading to a limited number of potential resistance mechanisms. Your "model" for the evolution of drug resistance ignores a lot of interesting details, but it's not terribly wrong. It's just not a model of evolution in nature.
> > The point you continue to miss is that the greater the number of selection pressures, the greater the number of instances of the multiplication rule applied in the evolutionary process.
>
> I keep missing that point because it is not true. If you are talking about lethal selection pressures applied suddenly and chosen such that no single mutation will allow the organism to survive, then, sure, the multiplication rule applies. And that's what happens in triple drug therapy for TB.
That's not what I'm saying, and you should know it. If a selection pressure is lethal, the population goes extinct. One of the many reasons why you failed to solve your malaria problem is that you failed to understand that the huge populations malaria can achieve mean that drug-resistant variants can appear de novo without ever being exposed to the drug. You should have learned that lesson by now from what Lenski's experiment shows, with his recognition of drug-resistant variants appearing without his populations ever being exposed to the drugs. It's the same math you did in med school, but you have decided to throw out your binomial probability calculator. That's a big mistake if you are trying to understand malaria drug resistance.
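The "de novo" point is just the binomial "at least one" calculation. A minimal sketch with made-up numbers (both the mutation rate mu and the population size N below are illustrative assumptions, not measured values):

```python
import math

# Probability that at least one resistant mutant arises de novo in a
# large population, before any drug exposure. Numbers are illustrative.
mu = 1e-9   # assumed per-replication rate of the resistance mutation
N = 1e12    # assumed parasite population size in a single host

# P(at least one) = 1 - (1 - mu)^N, computed via logs to avoid underflow.
p_none = math.exp(N * math.log1p(-mu))
p_at_least_one = 1.0 - p_none

print(p_at_least_one)   # effectively 1.0: resistance pre-exists the drug
```

With these (assumed) numbers the expected number of resistant mutants is N * mu = 1000, so the appearance of at least one is essentially certain before any selection is applied.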
>
> But in nature, in a stable population in a natural environment, there are multiple selection pressures acting all the time. An organism which is better at dealing with one of those selection pressures will have an advantage over the others without waiting to have acquired a set of mutations that gives it an advantage over every single one of the selection pressures.
The problem with your argument is that you can't demonstrate it in the laboratory. What actually happens to a population under these circumstances is that it is either driven to extinction or drifts, if it is able to survive despite all the environmental stressors. And if you want to see a real, measurable, and repeatable example of what I argue, look at what happens to HIV when subjected to just 3 selection pressures targeting only 2 genes.
>
> I've given you this analogy before, but here goes. I have a certain risk of dying before age 70 from cancer, a certain risk from cardiovascular disease, a certain risk from accidents, a certain risk from infectious disease. Combine all those risks and you get an overall risk of dying before age 70. If I do something that decreases my risk of dying from cardiovascular disease, I reduce my overall risk of dying before age 70, even if all the other risks are unchanged.
If you are trying to argue that reducing selection pressures on populations allows more diversity in a population to survive and replicate, no problem. But if you are trying to argue that Lenski's population, if not only starved but also placed under thermal stress, will evolve to these selection conditions more quickly, you are wrong. Why don't you contact Lenski to run his experiment at a non-optimal temperature and see if fixation and adaptation occur more quickly, with fewer replications necessary for the evolutionary process?
>
> The question is not "how many selection pressures are there?" but "how many mutations are required before you can have an increase in fitness?" A single selection pressure might require multiple mutations before there's any improvement in fitness (e.g. chloroquine or Fansidar treatment of malaria) or, as in most cases in nature, a single mutation might allow an improvement in fitness in the presence of multiple selection pressures (e.g. mild heat stress plus mild osmotic shock plus mild nutrient scarcity).
You know that Kishony's experiment requires that the increase in concentration of the drug be limited so that a single mutation allows for growth in the next region. And you also know that using drugs at concentrations below the MIC aids the evolutionary process. But even under the best of circumstances, the evolutionary process will take (1/mutation rate) replications for each evolutionary step. That's the starting point, and any evolutionary process which requires 2 or more mutations to improve fitness will require exponentially more replications for each evolutionary step. See if you can find an empirical example which contradicts this claim.
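The replication-count claim can be written out as expected waiting sizes. A toy sketch, under the simplifying assumption that each step requires one specific mutation occurring at rate mu (the value of mu is illustrative, not measured):

```python
# Expected number of replications to see one specific beneficial
# mutation is roughly 1/mu; requiring two specific mutations in the
# same replication event is roughly 1/mu**2. Simplified model,
# illustrative numbers only.
mu = 1e-9   # assumed per-replication rate of the needed mutation

one_step = 1 / mu        # on the order of 1e9 replications per step
two_steps = 1 / mu**2    # on the order of 1e18 if both must co-occur

print(one_step, two_steps)
```

This is the arithmetic sense in which requiring additional simultaneous mutations is "exponentially" more expensive; it says nothing by itself about whether a given natural selection pressure actually requires simultaneous mutations, which is the point under dispute in this thread.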
>
> Most lab experiments are designed to look at strong selection pressures (like Lenski) or lethal ones (like Kishony) because you can get clear results relatively quickly. But, with the exception of our driving species to extinction or asteroids hitting the earth, natural populations don't generally face selection pressures that eliminate 99.99% of the adult population in each generation.
What makes you think that Kishony's and Lenski's selection pressures are strong? Both experiments require only a single beneficial mutation for each evolutionary step to improve fitness. You keep making these irrational arguments. Perhaps this is the reason you failed to solve your malaria problem despite the fact you wrote 50 papers on the subject. Most researchers would have taken a step back and reevaluated their approach and understanding of their problem. You just don't have the capability to do that.

Bob Casanova

unread,
Dec 6, 2019, 12:35:03 PM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 06:43:08 -0800 (PST), the following
appeared in talk.origins, posted by Bill Rogers
<broger...@gmail.com>:
He's been told, repeatedly and with explanations, why his
idea that tightly-controlled lab experiments on selected
organisms, with specific targets and goals, model evolution
does not hold in a changing natural environment in which
"does it improve fitness?" is the only, or at least the main,
objective driving evolution. He hasn't listened yet, so I
expect he never will; his mind is fixed on "does this support
my contentions and beliefs, regardless of the objective
evidence?".

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 12:40:03 PM12/6/19
to talk-o...@moderators.isc.org
When will Rogers actually explain how drug resistance occurs and describe a durable treatment for malaria? He certainly won't get the correct advice from someone who didn't get anything out of his two courses in statistics.

Martin Harran

unread,
Dec 6, 2019, 12:55:03 PM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 09:29:13 -0800 (PST), Alan Kleinman MD PhD
An expert in any field needs to know both how and why; I guess that's
why Robert Peter Gale is an expert in this field and you aren't.


Alan Kleinman MD PhD

unread,
Dec 6, 2019, 1:05:03 PM12/6/19
to talk-o...@moderators.isc.org
It appears that Robert Peter Gale forgot to publish his explanation of why combination therapy works. But you stick with your study of coin tossing and you might get past your two-bit understanding of this subject.

Martin Harran

unread,
Dec 6, 2019, 1:30:04 PM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 10:04:31 -0800 (PST), Alan Kleinman MD PhD
Ah, back to the ad hominems, always a sure sign of lack of expertise.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 1:50:03 PM12/6/19
to talk-o...@moderators.isc.org
Post a link to where Robert Peter Gale has published his explanation of why combination therapy works. You won't, because neither Gale nor you understands why combination therapy works. But I'm sure you think that reptiles evolve feathers and fish evolve into mammals based on your understanding of probability theory. That's your level of expertise on this subject, and it's why your and your fellow reptifeatharians' ignorance of this subject harms people with drug-resistant infections and failed cancer treatments.

Martin Harran

unread,
Dec 6, 2019, 2:05:03 PM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 10:47:17 -0800 (PST), Alan Kleinman MD PhD
You cited him as an authority, now you claim he doesn't know what he
is talking about - I do wish you would make up your mind.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 2:35:03 PM12/6/19
to talk-o...@moderators.isc.org
I cited him as someone who uses combination therapy to treat cancer. You are the one who thinks he knows why it works. There are lots of physicians (and farmers) who use combination selection pressures to treat infections and cancers (and crops) without knowing why it works. Reptifeatharians are still muddling around trying to understand how evolution works with a single selection pressure. Reptifeatharians haven't even figured out the difference between competition (and fixation) and evolutionary adaptation. Once you understand the physics, the math is easy. But you don't understand either the physics or the mathematics of evolution and neither does Gale.

Martin Harran

unread,
Dec 6, 2019, 3:10:03 PM12/6/19
to talk-o...@moderators.isc.org
On Fri, 6 Dec 2019 11:31:30 -0800 (PST), Alan Kleinman MD PhD
So you cited as authority a guy who you now reckon doesn't really know
much more than a typical farmer - doesn't exactly say a whole lot for
your research skills.

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 4:15:04 PM12/6/19
to talk-o...@moderators.isc.org
No, I gave a citation to someone who doesn't know how much he owes to farmers. You owe your health far more to farmers than to physicians. And it doesn't take much research skill to know more about evolution than you do when you don't even know where your food comes from.

RonO

unread,
Dec 6, 2019, 6:40:03 PM12/6/19
to talk-o...@moderators.isc.org
On 12/6/2019 7:34 AM, Alan Kleinman MD PhD wrote:
> On Wednesday, December 4, 2019 at 7:00:03 PM UTC-8, Ron O wrote:
>> On 12/4/2019 6:14 PM, Alan Kleinman MD PhD wrote:
>>> On Wednesday, December 4, 2019 at 3:55:03 PM UTC-8, Ron O wrote:
>>>> On 12/4/2019 11:33 AM, jillery wrote:
>>>>> On Wed, 4 Dec 2019 06:08:57 -0600, RonO <roki...@cox.net> wrote:
>>>>>
>>>>>> In terms of why Kleinman was using the product rule incorrectly it was
>>>>>> not because the event had already happened. The event he was trying to
>>>>>> calculate the probability for was still in progress. The issue was that
>>>>>> one mutation could occur before the second one, and after the first
>>>>>> mutation happened it did not have to happen again in that cell lineage,
>>>>>> and basic biology would have to factor in and each subsequent generation
>>>>>> there would be a different probability that the two mutations would
>>>>>> occur in that lineage. Basically the probability would depend on how
>>>>>> many cells were in that cell lineage that already had the first
>>>>>> mutation, and how many would reproduce to make the next generation. It
>>>>>> is more complicated than that, but the product rule did not apply to the
>>>>>> issue that he was talking about. Kleinman wanted to multiply the
>>>>>> probability of each mutation occurring, but that obviously is not the way
>>>>>> to do the calculation when the two mutations do not have to happen at
>>>>>> the same time in a cell lineage.
>>>>>>
>>>>>> It was not just because the first mutation had already happened. It was
>>>>>> due to how biological evolution actually works (life builds on what came
>>>>>> before), and the basic biology of reproduction.
>>>>>>
>>>>>> Ron Okimoto
>>>>>
>>>>>
>>>>> While I agree that your statement above is technically correct, I
>>>>> disagree that you describe a substantial distinction. AIUI Kleinman's
>>>>> argument is that, in the case of features which require multiple
>>>>> beneficial mutations, each beneficial mutation must appear and be
>>>>> fixed in the population sequentially, with no overlap to the
>>>>> appearance and fixation of prior or following beneficial mutations.
>>>>> Also, for any particular feature, he accepts only the specific
>>>>> mutation sequence found in extant populations.
>>>>
>>>> This was not what he was wrong about. He was just trying to estimate
>>>> the probability of two mutations occurring in the same cell lineage when
>>>> they didn't have to happen at the same time. Nothing about fixation in
>>>> the population.
>>> Competition slows evolution; that's why the evolutionary process occurs much more rapidly in the Kishony experiment than in the Lenski experiment. Fixation is neither necessary nor sufficient for evolutionary adaptation to occur. And my math is correct; it was peer-reviewed by people (unlike you) who understand probability theory.
>>
>> My guess is that there is some weird name for what you are. Insanity is
>> about your only excuse for continuing to lie about what you were wrong
>> about. You were just wrong and you will never have been correct. You
>> even understand this to be true or you would have multiplied those two
>> numbers that you wanted to multiply. How can you not understand that?
>> The product rule did not apply. If it did you would use it and multiply
>> those two numbers, but all you ever do is lie about it. You are
>> supposed to be the math wiz and yet you have never been able to multiply
>> those two numbers and give the answer.
> Why don't you demonstrate to Bill Jefferys your mastery of probability theory by showing him how you plug numbers into StatTrek?

Why don't you just stop lying about what you were wrong about over 2
years ago. As far as I know Bill Jefferys had nothing to do with what
you were wrong about, so why bring him up? You are just a pathetic
lying nut job.

Ron Okimoto
>>
>> Ron Okimoto
>>
>>>>
>>>> Ron Okimoto
>>>>
>>>>>
>>>>> If Kleinman's biological argument was factually correct, his
>>>>> probability calculations based on a dependent probabilities would also
>>>>> be correct. However, his biological argument does not apply to most
>>>>> examples of evolutionary change, and so his probability calculations
>>>>> aren't relevant to them.
>>>>>
>>>>> IOW Kleinman's failure here is a misunderstanding of biology, not of
>>>>> probability. His dependence on his alleged mathematical expertise,
>>>>> and others' alleged ignorance, of probability are non-sequiturs.
>>>>>
>>>
>>>
>
>

Alan Kleinman MD PhD

unread,
Dec 6, 2019, 6:55:04 PM12/6/19
to talk-o...@moderators.isc.org
Why doesn't moRON show how to use StatTrek to compute the joint probability of mutation B occurring in some member of the population that already has mutation A, since StatTrek is all that moRON understands about the binomial distribution?