On Apr 1, 2019, at 3:52 PM, 'Dennis C. Hackethal' via Fallible Ideas
<fallibl...@googlegroups.com> wrote:
> On Sat, Mar 30, 2019 at 4:06 AM 'Alan Forrester' via Fallible Ideas
> <fallibl...@googlegroups.com> wrote:
>>
>> On 29 Mar 2019, at 19:49, 'Dennis C. Hackethal' via Fallible Ideas
>> <fallibl...@googlegroups.com> wrote:
>>
>>> I was struggling the other day to explain to someone why the growth
>>> of knowledge is inherently unpredictable. I *think* I can explain it
>>> in terms of “it’s a genetic algorithm, and a genetic algorithm
>>> has unpredictable output”, but unless the other party is already
>>> familiar with the concept of knowledge being the result of a genetic
>>> algorithm, that doesn’t go very far. It also made me think that a
>>> genetic algorithm is only unpredictable *to a degree*. If someone
>>> runs a genetic algorithm for, e.g., the traveling salesman problem,
>>> they know it’s going to return a solution to the problem in terms
>>> of distances etc., and not something completely unexpected (see the
>>> sketch after this exchange). So there’s at least some way to
>>> constrain the space of possible answers. I don’t think it’s
>>> possible to constrain human answers in this way, but I don’t think
>>> I understand why. I also don’t know whether probabilistic =
>>> unpredictable (my guess is “no”).
>>
>> If you knew that tomorrow you would know X, then you would already
>> know X today. So you should start working on some other problem that
>> X doesn’t solve.
>
> That is a beautifully concise reductio ad absurdum. Thanks, Alan.
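
To make the constrained-output point concrete, here is a minimal sketch
of a genetic algorithm for the traveling salesman problem (the sketch
Dennis's message refers to). The city coordinates, population size,
generation count, and mutation rate are made-up values for illustration;
nothing about them comes from the thread. What it shows is that the
*form* of the answer is constrained -- the algorithm always returns a
valid tour and its length -- while the *content*, which particular tour,
varies from run to run.

import random

# Hypothetical city coordinates, chosen only for illustration.
CITIES = [(0, 0), (1, 5), (4, 3), (6, 1), (5, 6), (2, 8), (7, 4), (8, 7)]

def tour_length(tour):
    """Total round-trip distance of a tour (a permutation of city indices)."""
    return sum(
        ((CITIES[a][0] - CITIES[b][0]) ** 2
         + (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
        for a, b in zip(tour, tour[1:] + tour[:1])
    )

def crossover(p1, p2):
    """Order crossover: copy a slice from one parent, fill in from the other."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [c for c in p2 if c not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(tour, rate=0.1):
    """Occasionally swap two cities -- the result is still a valid tour."""
    tour = tour[:]
    if random.random() < rate:
        a, b = random.sample(range(len(tour)), 2)
        tour[a], tour[b] = tour[b], tour[a]
    return tour

def evolve(generations=200, pop_size=50):
    pop = [random.sample(range(len(CITIES)), len(CITIES))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        survivors = pop[: pop_size // 2]  # selection: keep the shorter tours
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=tour_length)

if __name__ == "__main__":
    best = evolve()
    # The type of the answer is fixed: a permutation of cities and a length.
    # Which permutation comes out differs between runs.
    print(best, round(tour_length(best), 2))

Run it twice and you will usually get two different tours of similar
length: the space of possible answers is constrained, but the specific
answer is not predictable in advance.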