[discussion] Yudkowsky on Popper


Alan Forrester

Nov 10, 2017, 6:14:51 PM
to FI, FIGG
“Rationality From AI to Zombies” by Eliezer Yudkowsky, pp. 820-821:

> Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism—this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules; if P(X|A) ≈ 1—if the theory makes a definite prediction—then observing ¬X very strongly falsifies A. On the other hand, if P(X|A) ≈ 1, and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that P (X|B) ≈ 1, in which case observing X doesn’t favor A over B. For observing X to definitely confirm A, we would have to know, not that P(X|A) ≈ 1, but that P(X|¬A) ≈ 0, which is something that we can’t know because we can’t range over all possible alternative explanations. For example, when Einstein’s theory of General Relativity toppled Newton’s incredibly well-confirmed theory of gravity, it turned out that all of Newton’s predictions were just a special case of Einstein’s predictions.
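The "doesn’t favor A over B" arithmetic in this passage is easy to check with a toy Bayes computation (a sketch; the priors, numbers and function name here are mine, not Yudkowsky’s):

```python
# Toy Bayes check: theories A and B both predict X with probability ~1.
# Observing X then leaves their relative probabilities unchanged.
def update(prior_a, prior_b, likelihood_a, likelihood_b):
    """Posterior P(A|X), P(B|X) when A and B exhaust the hypotheses."""
    evidence = prior_a * likelihood_a + prior_b * likelihood_b
    return prior_a * likelihood_a / evidence, prior_b * likelihood_b / evidence

# Both theories predict X almost certainly, so X can't discriminate.
pa, pb = update(prior_a=0.5, prior_b=0.5, likelihood_a=0.99, likelihood_b=0.99)
print(round(pa, 6), round(pb, 6))  # 0.5 0.5: observing X doesn't favor A over B
```

The interesting dispute is over whether this arithmetic describes how science works at all, not over the arithmetic itself.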

Popper didn’t call his position falsificationism. He explicitly rejected that label, see the introduction to 'Realism and the aim of science'.

Popper called his position critical rationalism. Popper said that all knowledge creation proceeds by guessing and criticism. First, you spot a problem with your current ideas, then you guess solutions to the problem, then you criticise the guesses.

Falsifying an idea by experimental testing is just one of the possible ways of criticising an idea: it's not central to Popper’s ideas.

> You can even formalize Popper’s philosophy mathematically. The likelihood ratio for X, the quantity P(X|A)/P(X|¬A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence. Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, P(X|¬A)—there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not.

You can’t formalise guessing and criticism mathematically. Popper explicitly denied that his theories could be formalised (‘The Logic of Scientific Discovery', Chapter 2, Section 9):

> I am quite ready to admit that there is a need for a purely logical analysis of theories, for an analysis which takes no account of how they change and develop. But this kind of analysis does not elucidate those aspects of the empirical sciences which I, for one, so highly prize. A system such as classical mechanics may be ‘scientific’ to any degree you like; but those who uphold it dogmatically—believing, perhaps, that it is their business to defend such a successful system against criticism as long as it is not conclusively disproved—are adopting the very reverse of that critical attitude which in my view is the proper one for the scientist. In point of fact, no conclusive disproof of a theory can ever be produced; for it is always possible to say that the experimental results are not reliable, or that the discrepancies which are asserted to exist between the experimental results and the theory are only apparent and that they will disappear with the advance of our understanding. (In the struggle against Einstein, both these arguments were often used in support of Newtonian mechanics, and similar arguments abound in the field of the social sciences.) If you insist on strict proof (or strict disproof) in the empirical sciences, you will never benefit from experience, and never learn from it how wrong you are.

Back to Yudkowsky:

> That’s the hidden gotcha that toppled Newton’s theory of gravity. So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for confirmatory evidence.

The idea that Newton’s theory was replaced cuz it was incompatible with evidence is misleading. Newton’s theory of gravity got into trouble because it is incompatible with the special theory of relativity.

One problem is that Newton’s theory implies that changes in the mass distribution instantly change gravitational forces everywhere. Special relativity sez that the maximum speed at which one system can influence another is the speed of light.

Another problem is that special relativity sez that mass and energy are tied together as parts of a single conserved quantity. So what should take the place of mass in a relativistic theory of gravitation?

Einstein thought about how to solve those problems and others and came up with the general theory of relativity. Newton’s theory of gravity was replaced because it was a bad explanation. The experimental evidence that Newton’s theory was wrong was mostly found after general relativity was invented. Until then, nobody knew where to look for experimental evidence against Newton’s theory.

There’s a lot of historical discussion of the origin of general relativity, e.g. “The Genesis of General Relativity”, vols. 1–4, edited by Jürgen Renn. If Yudkowsky is going to use historical examples, he should look them up.

> On the other hand, if you encounter some piece of evidence Y that is definitely not predicted by your theory, this is enormously strong evidence against your theory. If P(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal. For example, if P(Y|A) is 0.0001%, and P(Y|¬A) is 1%, then the likelihood ratio P(Y|A)/P(Y|¬A) will be 1:10,000. That’s −40 decibels of evidence! Or, flipping the likelihood ratio, if P(Y|A) is very small, then P(Y|¬A)/P(Y|A) will be very large, meaning that observing Y greatly favors ¬A over A. Falsification is much stronger than confirmation. This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X. This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.
>
> Similarly, Popper’s dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ¬X would have disconfirmed the theory to some extent. If you try to interpret both X and ¬X as “confirming” the theory, the Bayesian rules say this is impossible! To increase the probability of a theory you must expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory. On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes’s theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.
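For what it’s worth, the decibel arithmetic in that passage does check out numerically (a quick sketch using Yudkowsky’s own figures; the variable names are mine):

```python
import math

# Yudkowsky's figures: P(Y|A) = 0.0001% and P(Y|not-A) = 1%.
p_y_given_a = 0.0001 / 100        # 1e-6
p_y_given_not_a = 1 / 100         # 1e-2

ratio = p_y_given_a / p_y_given_not_a    # 1e-4, i.e. 1:10,000
decibels = 10 * math.log10(ratio)        # 10 * log10(1e-4) = -40

print(f"{ratio:.0e}, {decibels:.0f} dB")  # 1e-04, -40 dB
```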

Popper made many arguments against the idea of inductive probability; see 'Realism and the aim of science', part II, chapter II. For example, in Section 13 of that chapter Popper points out that we must have an explanation of what counts as repeating the same experiment to do induction, but induction provides us with no means to get that explanation. Yudkowsky shows no sign of being aware of any of these arguments.

The only Popper book Yudkowsky cites is LScD and he doesn’t seem to have understood it.

Alan

Elliot Temple

Nov 10, 2017, 6:29:22 PM
to FIGG, FI
On Nov 10, 2017, at 3:14 PM, 'Alan Forrester' via Fallible Ideas <fallibl...@googlegroups.com> wrote:

> “Rationality From AI to Zombies” by Eliezer Yudkowsky, pp. 820-821:
>
>> Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism—this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules; if P(X|A) ≈ 1—if the theory makes a definite prediction—then observing ¬X very strongly falsifies A. On the other hand, if P(X|A) ≈ 1, and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that P (X|B) ≈ 1, in which case observing X doesn’t favor A over B. For observing X to definitely confirm A, we would have to know, not that P(X|A) ≈ 1, but that P(X|¬A) ≈ 0, which is something that we can’t know because we can’t range over all possible alternative explanations. For example, when Einstein’s theory of General Relativity toppled Newton’s incredibly well-confirmed theory of gravity, it turned out that all of Newton’s predictions were just a special case of Einstein’s predictions.
>
> Popper didn’t call his position falsificationism. He explicitly rejected that label, see the introduction to 'Realism and the aim of science'.
>
> Popper called his position critical rationalism. Popper said that all knowledge creation proceeds by guessing and criticism. First, you spot a problem with your current ideas, then you guess solutions to the problem, then you criticise the guesses.
>
> Falsifying an idea by experimental testing is just one of the possible ways of criticising an idea: it's not central to Popper’s ideas.
>
>> You can even formalize Popper’s philosophy mathematically. The likelihood ratio for X, the quantity P(X|A)/P(X|¬A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence. Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, P(X|¬A)—there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not.
>
> You can’t formalise guessing and criticism mathematically. Popper explicitly denied that his theories could be formalised

I think, in theory, you can mathematically formalize it. In other words, I think it's computable (since our brains are computers...) and I think AGI is possible to create.

But we don't yet know how to formalize it. And so-called AGI researchers like Yudkowsky aren't working on formalizing it, and therefore aren't really working on AGI progress.

> (‘The Logic of Scientific Discovery', Chapter 2, Section 9):
>
>> I am quite ready to admit that there is a need for a purely logical analysis of theories, for an analysis which takes no account of how they change and develop. But this kind of analysis does not elucidate those aspects of the empirical sciences which I, for one, so highly prize. A system such as classical mechanics may be ‘scientific’ to any degree you like; but those who uphold it dogmatically—believing, perhaps, that it is their business to defend such a successful system against criticism as long as it is not conclusively disproved—are adopting the very reverse of that critical attitude which in my view is the proper one for the scientist. In point of fact, no conclusive disproof of a theory can ever be produced; for it is always possible to say that the experimental results are not reliable, or that the discrepancies which are asserted to exist between the experimental results and the theory are only apparent and that they will disappear with the advance of our understanding. (In the struggle against Einstein, both these arguments were often used in support of Newtonian mechanics, and similar arguments abound in the field of the social sciences.) If you insist on strict proof (or strict disproof) in the empirical sciences, you will never benefit from experience, and never learn from it how wrong you are.

This isn't talking about formalizing stuff (like coding it and math), it's talking about "conclusive disproof" (also called "strict") and the problem of not having "conclusive" (*infallible*) arguments.


> Back to Yudkowsky:
>
>> That’s the hidden gotcha that toppled Newton’s theory of gravity. So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for confirmatory evidence.
>
> The idea that Newton’s theory was replaced cuz it was incompatible with evidence is misleading. Newton’s theory of gravity got into trouble because it is incompatible with the special theory of relativity.
>
> One problem is that Newton’s theory implies that changes in the mass distribution instantly change gravitational forces everywhere. Special relativity sez that the maximum speed at which one system can influence another is the speed of light.
>
> Another problem is that special relativity sez that mass and energy are tied together as parts of a single conserved quantity. So what should take the place of mass in a relativistic theory of gravitation?
>
> Einstein thought about how to solve those problems and others and came up with the general theory of relativity. Newton’s theory of gravity was replaced because it was a bad explanation. The experimental evidence that Newton’s theory was wrong was mostly found after general relativity was invented. Until then, nobody knew where to look for experimental evidence against Newton’s theory.

Makes sense.


> There’s a lot of historical discussion of the origin of general relativity, e.g. “The Genesis of General Relativity”, vols. 1–4, edited by Jürgen Renn. If Yudkowsky is going to use historical examples, he should look them up.

You're so demanding! Like me!


>> On the other hand, if you encounter some piece of evidence Y that is definitely not predicted by your theory, this is enormously strong evidence against your theory. If P(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal. For example, if P(Y|A) is 0.0001%, and P(Y|¬A) is 1%, then the likelihood ratio P(Y|A)/P(Y|¬A) will be 1:10,000. That’s −40 decibels of evidence! Or, flipping the likelihood ratio, if P(Y|A) is very small, then P(Y|¬A)/P(Y|A) will be very large, meaning that observing Y greatly favors ¬A over A. Falsification is much stronger than confirmation. This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X. This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.
>>
>> Similarly, Popper’s dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ¬X would have disconfirmed the theory to some extent. If you try to interpret both X and ¬X as “confirming” the theory, the Bayesian rules say this is impossible! To increase the probability of a theory you must expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory. On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes’s theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.
>
> Popper made many arguments against the idea of inductive probability; see 'Realism and the aim of science', part II, chapter II. For example, in Section 13 of that chapter Popper points out that we must have an explanation of what counts as repeating the same experiment to do induction, but induction provides us with no means to get that explanation. Yudkowsky shows no sign of being aware of any of these arguments.
>
> The only Popper book Yudkowsky cites is LScD and he doesn’t seem to have understood it.

Yeah.

But you can't tell this to these people b/c they don't do Paths Forward, nor have they specified any halfway reasonable alternative that they follow. Instead they just do whatever arbitrary biased shit they feel like, with some fractured bits of rationality thrown in when they want to or find it easy/convenient.


Elliot Temple
www.elliottemple.com

Alan Forrester

Nov 10, 2017, 7:57:28 PM
to FI, FIGG
On 10 Nov 2017, at 23:29, Elliot Temple cu...@curi.us [fallible-ideas] <fallibl...@yahoogroups.com> wrote:

> On Nov 10, 2017, at 3:14 PM, 'Alan Forrester' via Fallible Ideas <fallibl...@googlegroups.com> wrote:
>
>> “Rationality From AI to Zombies” by Eliezer Yudkowsky, pp. 820-821:
>>
>>> Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism—this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules; if P(X|A) ≈ 1—if the theory makes a definite prediction—then observing ¬X very strongly falsifies A. On the other hand, if P(X|A) ≈ 1, and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that P (X|B) ≈ 1, in which case observing X doesn’t favor A over B. For observing X to definitely confirm A, we would have to know, not that P(X|A) ≈ 1, but that P(X|¬A) ≈ 0, which is something that we can’t know because we can’t range over all possible alternative explanations. For example, when Einstein’s theory of General Relativity toppled Newton’s incredibly well-confirmed theory of gravity, it turned out that all of Newton’s predictions were just a special case of Einstein’s predictions.
>>
>> Popper didn’t call his position falsificationism. He explicitly rejected that label, see the introduction to 'Realism and the aim of science'.
>>
>> Popper called his position critical rationalism. Popper said that all knowledge creation proceeds by guessing and criticism. First, you spot a problem with your current ideas, then you guess solutions to the problem, then you criticise the guesses.
>>
>> Falsifying an idea by experimental testing is just one of the possible ways of criticising an idea: it's not central to Popper’s ideas.
>>
>>> You can even formalize Popper’s philosophy mathematically. The likelihood ratio for X, the quantity P(X|A)/P(X|¬A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence. Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, P(X|¬A)—there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not.
>>
>> You can’t formalise guessing and criticism mathematically. Popper explicitly denied that his theories could be formalised
>
> I think, in theory, you can mathematically formalize it. In other words, I think it's computable (since our brains are computers...) and I think AGI is possible to create.
>
> But we don't yet know how to formalize it. And so-called AGI researchers like Yudkowsky aren't working on formalizing it, and therefore aren't really working on AGI progress.

gp. Do you think trying to formalise footnotes, trees of ideas and variants from yes-no philosophy could result in progress?

>> (‘The Logic of Scientific Discovery', Chapter 2, Section 9):
>>
>>> I am quite ready to admit that there is a need for a purely logical analysis of theories, for an analysis which takes no account of how they change and develop. But this kind of analysis does not elucidate those aspects of the empirical sciences which I, for one, so highly prize. A system such as classical mechanics may be ‘scientific’ to any degree you like; but those who uphold it dogmatically—believing, perhaps, that it is their business to defend such a successful system against criticism as long as it is not conclusively disproved—are adopting the very reverse of that critical attitude which in my view is the proper one for the scientist. In point of fact, no conclusive disproof of a theory can ever be produced; for it is always possible to say that the experimental results are not reliable, or that the discrepancies which are asserted to exist between the experimental results and the theory are only apparent and that they will disappear with the advance of our understanding. (In the struggle against Einstein, both these arguments were often used in support of Newtonian mechanics, and similar arguments abound in the field of the social sciences.) If you insist on strict proof (or strict disproof) in the empirical sciences, you will never benefit from experience, and never learn from it how wrong you are.
>
> This isn't talking about formalizing stuff (like coding it and math), it's talking about "conclusive disproof" (also called "strict") and the problem of not having "conclusive" (*infallible*) arguments.

You’re right. I misinterpreted the end of the first sentence to mean that the process of change and development couldn’t be formalised.

>> Back to Yudkowsky:
>>
>>> That’s the hidden gotcha that toppled Newton’s theory of gravity. So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for confirmatory evidence.
>>
>> The idea that Newton’s theory was replaced cuz it was incompatible with evidence is misleading. Newton’s theory of gravity got into trouble because it is incompatible with the special theory of relativity.
>>
>> One problem is that Newton’s theory implies that changes in the mass distribution instantly change gravitational forces everywhere. Special relativity sez that the maximum speed at which one system can influence another is the speed of light.
>>
>> Another problem is that special relativity sez that mass and energy are tied together as parts of a single conserved quantity. So what should take the place of mass in a relativistic theory of gravitation?
>>
>> Einstein thought about how to solve those problems and others and came up with the general theory of relativity. Newton’s theory of gravity was replaced because it was a bad explanation. The experimental evidence that Newton’s theory was wrong was mostly found after general relativity was invented. Until then, nobody knew where to look for experimental evidence against Newton’s theory.
>
> Makes sense.
>
>> There’s a lot of historical discussion of the origin of general relativity, e.g. “The Genesis of General Relativity”, vols. 1–4, edited by Jürgen Renn. If Yudkowsky is going to use historical examples, he should look them up.
>
> You're so demanding! Like me!

We’re demanding relative to other people. Objectively, saying you should look up a historical example before you say something about it isn’t extraordinary.

>>> On the other hand, if you encounter some piece of evidence Y that is definitely not predicted by your theory, this is enormously strong evidence against your theory. If P(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal. For example, if P(Y|A) is 0.0001%, and P(Y|¬A) is 1%, then the likelihood ratio P(Y|A)/P(Y|¬A) will be 1:10,000. That’s −40 decibels of evidence! Or, flipping the likelihood ratio, if P(Y|A) is very small, then P(Y|¬A)/P(Y|A) will be very large, meaning that observing Y greatly favors ¬A over A. Falsification is much stronger than confirmation. This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X. This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.
>>>
>>> Similarly, Popper’s dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ¬X would have disconfirmed the theory to some extent. If you try to interpret both X and ¬X as “confirming” the theory, the Bayesian rules say this is impossible! To increase the probability of a theory you must expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory. On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes’s theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.
>>
>> Popper made many arguments against the idea of inductive probability; see 'Realism and the aim of science', part II, chapter II. For example, in Section 13 of that chapter Popper points out that we must have an explanation of what counts as repeating the same experiment to do induction, but induction provides us with no means to get that explanation. Yudkowsky shows no sign of being aware of any of these arguments.
>>
>> The only Popper book Yudkowsky cites is LScD and he doesn’t seem to have understood it.
>
> Yeah.
>
> But you can't tell this to these people b/c they don't do Paths Forward, nor have they specified any halfway reasonable alternative that they follow. Instead they just do whatever arbitrary biased shit they feel like, with some fractured bits of rationality thrown in when they want to or find it easy/convenient.

I think they’re talking about AGI this way cuz it looks kinda easy and plausible.

Alan

Elliot Temple

Nov 10, 2017, 8:17:30 PM
to FI, FIGG
On Nov 10, 2017, at 4:57 PM, 'Alan Forrester' via Fallible Ideas <fallibl...@googlegroups.com> wrote:

> On 10 Nov 2017, at 23:29, Elliot Temple cu...@curi.us [fallible-ideas] <fallibl...@yahoogroups.com> wrote:
>
>> On Nov 10, 2017, at 3:14 PM, 'Alan Forrester' via Fallible Ideas <fallibl...@googlegroups.com> wrote:
>>
>>> “Rationality From AI to Zombies” by Eliezer Yudkowsky, pp. 820-821:
>>>
>>>> Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism—this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules; if P(X|A) ≈ 1—if the theory makes a definite prediction—then observing ¬X very strongly falsifies A. On the other hand, if P(X|A) ≈ 1, and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that P (X|B) ≈ 1, in which case observing X doesn’t favor A over B. For observing X to definitely confirm A, we would have to know, not that P(X|A) ≈ 1, but that P(X|¬A) ≈ 0, which is something that we can’t know because we can’t range over all possible alternative explanations. For example, when Einstein’s theory of General Relativity toppled Newton’s incredibly well-confirmed theory of gravity, it turned out that all of Newton’s predictions were just a special case of Einstein’s predictions.
>>>
>>> Popper didn’t call his position falsificationism. He explicitly rejected that label, see the introduction to 'Realism and the aim of science'.
>>>
>>> Popper called his position critical rationalism. Popper said that all knowledge creation proceeds by guessing and criticism. First, you spot a problem with your current ideas, then you guess solutions to the problem, then you criticise the guesses.
>>>
>>> Falsifying an idea by experimental testing is just one of the possible ways of criticising an idea: it's not central to Popper’s ideas.
>>>
>>>> You can even formalize Popper’s philosophy mathematically. The likelihood ratio for X, the quantity P(X|A)/P(X|¬A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence. Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, P(X|¬A)—there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not.
>>>
>>> You can’t formalise guessing and criticism mathematically. Popper explicitly denied that his theories could be formalised
>>
>> I think, in theory, you can mathematically formalize it. In other words, I think it's computable (since our brains are computers...) and I think AGI is possible to create.
>>
>> But we don't yet know how to formalize it. And so-called AGI researchers like Yudkowsky aren't working on formalizing it, and therefore aren't really working on AGI progress.
>
> gp. Do you think trying to formalise footnotes, trees of ideas and variants from yes-no philosophy could result in progress?

Yes. Also decision charts and Paths Forward. AGI's will need Paths Forward so they don't get just as stuck as everyone else...

I think binary judgements are a big deal cuz I think that's really not what AGI ppl are currently trying to do. Also evolution! (They have some things they call "evolutionary" algorithms, but it's different.)

It's hard though, and I think it's better to do a lot more philosophy work before focusing on AGI much. Like: DD added some stuff to Popper, and then I added PF and YESNO, but: there are totally still more things to add! And that will help!

Some other particularly hard parts of AGI, imo, are how to design a data structure to represent *any idea* – including explanations, criticisms, etc, not just observations and predictions – and how to evaluate criticisms/disagreements in general. The data structure for ideas also needs to either be able to represent emotions and hunches, or be a lower level thing they emerge from.
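For what it’s worth, here is one way such a structure might start, purely as a sketch (all the names are my own invention, and it only covers explicit ideas and criticisms, not emotions or hunches):

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    """A node in a tree of ideas: free-form content plus criticisms and variants."""
    content: str
    criticisms: list["Idea"] = field(default_factory=list)
    variants: list["Idea"] = field(default_factory=list)

    def refuted(self) -> bool:
        # Binary, yes-no style evaluation: an idea stands unless it has
        # at least one criticism that is itself unrefuted.
        return any(not c.refuted() for c in self.criticisms)

newton = Idea("Newtonian gravity")
newton.criticisms.append(Idea("implies instantaneous action at a distance,"
                              " contradicting special relativity"))
print(newton.refuted())  # True: one unanswered criticism
```

Evaluating criticisms and disagreements in general is of course the hard, unformalized part; this only mechanizes the bookkeeping.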


>>> Back to Yudkowsky:
>>>
>>>> That’s the hidden gotcha that toppled Newton’s theory of gravity. So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for confirmatory evidence.
>>>
>>> The idea that Newton’s theory was replaced cuz it was incompatible with evidence is misleading. Newton’s theory of gravity got into trouble because it is incompatible with the special theory of relativity.
>>>
>>> One problem is that Newton’s theory implies that changes in the mass distribution instantly change gravitational forces everywhere. Special relativity sez that the maximum speed at which one system can influence another is the speed of light.
>>>
>>> Another problem is that special relativity sez that mass and energy are tied together as parts of a single conserved quantity. So what should take the place of mass in a relativistic theory of gravitation?
>>>
>>> Einstein thought about how to solve those problems and others and came up with the general theory of relativity. Newton’s theory of gravity was replaced because it was a bad explanation. The experimental evidence that Newton’s theory was wrong was mostly found after general relativity was invented. Until then, nobody knew where to look for experimental evidence against Newton’s theory.
>>
>> Makes sense.
>>
>>> There’s a lot of historical discussion of the origin of general relativity, e.g. “The Genesis of General Relativity”, vols. 1–4, edited by Jürgen Renn. If Yudkowsky is going to use historical examples, he should look them up.
>>
>> You're so demanding! Like me!
>
> We’re demanding relative to other people. Objectively, saying you should look up a historical example before you say something about it isn’t extraordinary.

I know, right!

You're asking for basic competence and mild effort, and that's too damn demanding for people...

Read about the history of one part of science before writing about it in your book? Blasphemy! (Blasphemy just like Newton was hung for by the inquisition, wasn't it? Or should I check a book before I say that?)


>>>> On the other hand, if you encounter some piece of evidence Y that is definitely not predicted by your theory, this is enormously strong evidence against your theory. If P(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal. For example, if P(Y|A) is 0.0001%, and P(Y|¬A) is 1%, then the likelihood ratio P(Y|A)/P(Y|¬A) will be 1:10,000. That’s −40 decibels of evidence! Or, flipping the likelihood ratio, if P(Y|A) is very small, then P(Y|¬A)/P(Y|A) will be very large, meaning that observing Y greatly favors ¬A over A. Falsification is much stronger than confirmation. This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X. This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.
>>>>
>>>> Similarly, Popper’s dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ¬X would have disconfirmed the theory to some extent. If you try to interpret both X and ¬X as “confirming” the theory, the Bayesian rules say this is impossible! To increase the probability of a theory you must expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory. On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes’s theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.
>>>
>>> Popper made many arguments against the idea of inductive probability; see 'Realism and the aim of science', part II, chapter II. For example, in Section 13 of that chapter Popper points out that we must have an explanation of what counts as repeating the same experiment to do induction, but induction provides us with no means to get that explanation. Yudkowsky shows no sign of being aware of any of these arguments.
>>>
>>> The only Popper book Yudkowsky cites is LScD and he doesn’t seem to have understood it.
>>
>> Yeah.
>>
>> But you can't tell this to these people b/c they don't do Paths Forward, nor have they specified any halfway reasonable alternative that they follow. Instead they just do whatever arbitrary biased shit they feel like, with some fractured bits of rationality thrown in when they want to or find it easy/convenient.
>
> I think they’re talking about AGI this way cuz it looks kinda easy and plausible.

Yeah. I bet that helps with funding, schmoozing, and their own self-esteem while not actually thinking much.


Elliot Temple
www.curi.us
