Any reason to believe that the real environment is "adversarial"?

Eray Ozkural

Mar 6, 2016, 10:19:26 PM3/6/16
to ai-phi...@yahoogroups.com, magic...@googlegroups.com
Dear all,

My colleague Alexey Potapov raised a weak objection from AGI folklore against my dismissal of Dr. Legg's fantastical claim that general learners must necessarily be very complex (a kind of no-free-lunch theorem): namely, that there is an "arms race" in evolution, and thus the environment may be considered adversarial. This is an important question to me, as it has technological significance.

This seems to be the relevant paper, but correct me if I am wrong.

Well, I believe that is quite a weak justification, because other agents are only a small part of the environment to begin with, and only a small part of those agents are predators. The objection merely makes some unrealistic exaggerations about the environment -- a common methodological error of mixing unreliable informal arguments with extreme formal results. In other words, the real environment is not adversarial as Legg assumes, so his theorem, which seems adapted from the Gödel-Lucas-Penrose argument against mechanical minds, simply does not apply.

Conceivability does not entail possibility, let alone actuality. (This is why almost every paper by David Chalmers and Nick Bostrom is wrong -- it is a methodological error no serious philosopher should commit.)

If you make extraordinary claims, you need extraordinary evidence. When you make extraordinary claims about the environment, you need direct, reliable, quantitative evidence about the environment; toy scenarios are irrelevant -- due to incompleteness theorems. Thus, the extraordinary evidence (about the physical world) cannot be just a mathematical lemma. One cannot define such evidence into existence; abracadabra does not work. It must be based on physical sensor readings, generally speaking, qua empiricism. Physical justification is entirely lacking here, I believe.

I wonder whether there are any stronger objections to my dismissal of Dr. Legg's interpretation regarding the supposed infeasibility of general learners.

To simplify, I *of course* assume scientific realism -- I will not discuss any other kind of framework. 

This brings me to state that there is one actual environment, and imaginary environments are irrelevant.

I then observe the actual environment, and trivially, it is not adversarial.

I am now going to examine an *extreme* case of why it is not adversarial, though *non-extreme* (ordinary, daily, boring) cases constitute a better refutation. I am applying Legg's own methodology, taking a formal claim to an extreme imaginary playground, which should, by his methodology, tell us something about the real world.

Extreme claim 1: If the environment were indeed adversarial, no scientist would ever be able to extract any significant amount of algorithmic information from the environment using computational methods. No scientist could infer any interesting axiomatic theory, in particular. Evidence shows that they do, and that they do so rather easily and routinely, contradicting Legg's extreme claim. (Why is the adversarial environment not preventing them?)

This would in turn vindicate (creationist) Leonid Levin's claim that computers cannot form any significant mathematical information: 
http://arxiv.org/abs/cs/0203029

This places Legg firmly in the Platonist/creationist mathematicians camp, together with Levin -- if we accepted Legg's own methodology, in proposing extreme claim 1.

In other words, Legg's theorem (and methodology), if true, would imply either dualism or creationism -- a conclusion which I would regard as poor philosophy, rather than poor mathematics, which it certainly is not.

The theorem is mathematically true and rigorous; however, it is unfortunately scientifically implausible. In a manner of speaking, Legg's interpretation of his theorem is false, not the theorem itself -- the interpretation is merely an exaggeration with no useful numbers. This is similar to the case of Gödel, whose interpretation of his incompleteness theorems was false, not his theorems -- you can find his interpretations in Gödel's unpublished manuscripts, available in any academic library. [1]

In other words, Legg's claim seems to be a case of making an improbable scenario seem probable, or even actual. However, I disagree with the informal argumentation / interpretation part, because the environment is neither adversarial nor "friendly". It is neither heaven nor hell. The environment is not the adversary, i.e., the devil. It is wrong to attribute such a moral or mythical character to the environment; most philosophers regard the environment as neutral and uncaring about life, but not adversarial -- not even in the mathematical sense, as in adversary arguments or proofs by contradiction. Nature seldom poses very difficult problems (NP-complete problems, for instance), and adversarial behavior is certainly not mother nature's preference or the default case. Many further (strong) refutations are immediately possible based on my physicalist re-interpretation of AIT, but I thought I would share this one first.

I am hinting that there seems to be a common methodological error behind the interpretation, although the mathematics is certainly true. I object to the interpretation on physicalist/empiricist grounds. There seems to be a vast gulf between the theorem and the actual environment, invalidating the interpretation. Briefly, to believe the interpretation we would need some -- hard to obtain -- numbers; otherwise, it is not of much use to AI researchers. Just how adversarial is the actual environment on Earth, for instance, and how would you quantify it? How much of the environment is adversarial, and how good can those agents be at adversarial/deceptive behavior? What are the limits? Why is human society not full of disinformation if that were true, and why is disinformation easy to detect? Without asking such relevant, scientifically formed questions, and answering them satisfactorily, I find it hard to believe the supposed interpretation of the theorem.

Comments most welcome, as always.

Kind Regards,

potapov

Mar 7, 2016, 9:05:35 AM3/7/16
to MAGIC, ai-phi...@yahoogroups.com
Hello.

Eray Özkural:
My colleague Alexey Potapov raised a weak objection from AGI folklore against my dismissal of Dr. Legg's fantastical claim that general learners must necessarily be very complex (a kind of no-free-lunch theorem): namely, that there is an "arms race" in evolution, and thus the environment may be considered adversarial. This is an important question to me, as it has technological significance.

Let me clarify my position a little bit. Any predictor, independent of its complexity, can be tricked by a specially constructed data source with slightly higher complexity and resources, so this is not a reason to claim that general learners should necessarily be very complex. That conclusion is indeed strange. And of course, this doesn't imply that general learners cannot exist in principle, although it does impose some obvious restrictions on what can be predicted by learners possessing limited complexity and computational power. It simply means that these theorems want too much from 'general learners'. If we first fix a data source, then some (simple) general predictor will be able to crack it given more time. Loosely speaking, it's senseless to demand that even a genius solve some simple problem in, say, a nanosecond, or solve some extremely difficult problem within a lifetime. So what? Shouldn't we call humans general predictors/learners/solvers after this?
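
To make the diagonalization concrete, here is a minimal Python sketch (the predictor interface and the "majority" predictor are assumed purely for illustration, not taken from Legg's paper): whatever computable predictor you fix, a data source that simulates it and emits the opposite bit makes it wrong at every step, while being only slightly more complex than the predictor itself.

    def adversarial_environment(predictor, horizon=20):
        # Generate a binary sequence on which `predictor` errs at every step.
        # `predictor` maps the history (list of past bits) to a predicted next bit;
        # the environment simulates it and outputs the complement of its guess.
        history = []
        for _ in range(horizon):
            guess = predictor(history)   # simulate the fixed predictor
            history.append(1 - guess)    # emit the opposite bit
        return history

    def majority_predictor(history):
        # A simple illustrative predictor: predict the majority bit seen so far.
        return 1 if 2 * sum(history) >= len(history) else 0

    sequence = adversarial_environment(majority_predictor)
    errors = sum(majority_predictor(sequence[:i]) != bit
                 for i, bit in enumerate(sequence))
    print(errors, "errors out of", len(sequence))  # prints: 20 errors out of 20
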
Indeed, I pointed out that Legg's theorems are not that inapplicable to the real world, since the evolution of intelligence was, at least to some extent, driven by an 'arms race': you don't need 'complex intelligence' to survive in a simple environment, but weaker apes need to outsmart stronger apes to take a dominant position and leave offspring. But my point wasn't exactly that our environment is highly adversarial. More to the point, our environment is undoubtedly more complex than ourselves. In general, "nature is mischievous but not bent on trickery". So, the problem is that 'general learners' should efficiently deal with this complexity.
This doesn't follow from Legg's theorems for realistic environments, but I do believe that simple learners will be either not efficient or not general. There is some trade-off between algorithmic and computational complexity. An agent can (learn to) play chess using brute force, or using a great deal of knowledge. Science is also a very large system that includes not only factual knowledge ('inductive bias'), but also the scientific method and specific inference strategies for particular domains. 'General learners' are possible, since they invented all this stuff. But it took a lot of time and was an incremental process (and the general learners were already not simple at the beginning of this process, and creating those learners took even more computational resources).
I guess this is the nature of complex inversion problems. The game of chess is quite simple, but its 'inversion' is either computationally or algorithmically complex. Of course, the same is even more true for the inversion of a universal Turing machine, when we are talking about general learners...

Kind regards,
Alexey.

Eray Ozkural

Mar 7, 2016, 7:36:04 PM3/7/16
to magic...@googlegroups.com, ai-phi...@yahoogroups.com
Hello there!

Thanks a lot for the explanation, Alexey. I was quite sick and only recently recovered, and this was the first thing that came to my mind: our small but spirited discussion about Legg's theorem.

It still boils down to whether brains contain a lot of a priori information for dealing with deceptions and creating deceptions; I find that unlikely. It looks like most of it is actually about perception, reasoning, and problem solving, as expected.

Here is an amusing business angle to this discussion. If Legg's theorem is true, paradoxically, no investor should ever invest in AGI startups like NNAISSENCE (correct spelling?), Sentient Technologies, Vicarious, Brain Corp., and the like. It would be a scam. Or, is it a matter of degree? 

However, Legg is a co-founder of Deep Mind, which does proclaim "general-purpose machine learning" on its website. That is exactly the description I used in my 2010 papers/project proposals. I think that either Legg is right about this, or Deep Mind's website is right. Or is he saying: we are using a lot of algorithms that could be considered narrow AI, but those are absolutely essential for "true" AGI? Do they have special "clever" algorithms that can counter all these adversaries? Is that a trade secret? I am trying to understand what he thinks of this dilemma. Is it a matter of gradation, perhaps? How many of these "clever tricks" are supposed to be a priori information, and how are we going to add them to the system? Does this mean that the intelligent agents Deep Mind produces cannot be trusted, and would try to deceive you? Are they Machiavellian agents? Or is he saying: AGI is impossible -- interpreting his theorem as a kind of NFL -- and therefore we are working on other kinds of agents? Inquiring minds wish to know.

If Legg's theorem is generally applicable, then no general-purpose machine learning system of foreseeable/feasible complexity should work well enough, and we should not be able to capture the "magic" that evolution has produced using our "naive theories of epistemology", which apparently do not depend on untangling the evolutionary arms race.

Well, then, this does seem to boil down to the question of whether, as I intuited, the scientific method works well or not. I already gave my argument for why it should work: because inductive inference theory is *physically* complete. I can also use it to refute disinformation. You can deceive me for some time, but unless you've managed to fry my circuitry, I will eventually find it out.
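
A minimal sketch of what I mean, with an assumed toy hypothesis class (the names and numbers are made up for illustration only): a Bayesian mixture predictor is misled for a while by a burst of deceptive data, but once honest data resumes, the posterior mass flows back to the true source and the prediction recovers.

    hypotheses = {           # each hypothesis: probability that the next bit is 1
        "always_0": 0.05,
        "mostly_1": 0.9,     # the "true" source in this toy example
        "fair_coin": 0.5,
    }
    posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}

    def predict():
        # Mixture probability that the next bit is 1.
        return sum(posterior[name] * p for name, p in hypotheses.items())

    def update(bit):
        # Bayes update of the posterior after observing `bit`.
        for name, p in hypotheses.items():
            posterior[name] *= p if bit == 1 else 1.0 - p
        total = sum(posterior.values())
        for name in posterior:
            posterior[name] /= total

    # 20 honest bits, then a deceptive burst of 10 bits, then honest bits again.
    for bit in [1] * 20 + [0] * 10 + [1] * 20:
        update(bit)
    print(round(predict(), 3))   # close to 0.9: the deception did not stick
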

On the other hand, cases like adversarial environments, or environments dominated by adversaries, seem quite rare. The reason I give for this is my energy prior: the energy budget of the cosmos is locally limited, which makes such an environment highly unlikely. In other words, intelligence *necessarily* yields collaborative strategies / symbiosis rather than vicious, deceptive predators preying upon each other; nature does not seem like a fight club. And the simple, basic, universal reason is energy economy. Deception is too costly; its dominance would decrease the fitness of all, not improve it as Legg seems to assume, because deception does not come for free.
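
A toy illustration of the energy argument (all numbers are made up; only the direction of the effect matters): honest agents split a cooperative energy surplus, while deceivers grab a little extra but pay a fixed energy cost for producing the deception. Mean fitness then falls as deception spreads, because the cooperative surplus shrinks and the deception cost is a dead loss.

    def mean_fitness(deceiver_fraction, surplus=10.0, grab=2.0, cost=3.0):
        honest = 1.0 - deceiver_fraction
        base = surplus * honest               # cooperative surplus shrinks as honesty declines
        honest_payoff = base
        deceiver_payoff = base + grab - cost  # small grab, minus the energy cost of deceiving
        return honest * honest_payoff + deceiver_fraction * deceiver_payoff

    for f in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f, round(mean_fitness(f), 2))   # 10.0, 7.25, 4.5, 1.75, -1.0
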

That is rather like the fact that deceiving people into believing that richer people should pay less tax makes everyone poorer on average. Quite analogous, actually, though not the same, of course.

In science, these matters are usually investigated in ecology, which takes a complex-systems approach to understanding ecosystems. An ecosystem study usually contains several tens of thousands of variables just to predict the simplest dynamics, such as predator/prey population dynamics. However, such prediction is possible. If the environment were adversarial, such projections would not work.
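
For instance, even the textbook two-variable Lotka-Volterra model (a hypothetical sketch with illustrative parameter values, nothing like the tens of thousands of variables in a real study) already yields the familiar stable predator/prey oscillations rather than a collapse:

    def lotka_volterra(prey=40.0, predators=9.0, alpha=0.1, beta=0.02,
                       delta=0.01, gamma=0.1, dt=0.01, steps=20000):
        # Simple Euler integration of dx/dt = ax - bxy, dy/dt = dxy - gy.
        trajectory = []
        for _ in range(steps):
            d_prey = alpha * prey - beta * prey * predators
            d_pred = delta * prey * predators - gamma * predators
            prey += d_prey * dt
            predators += d_pred * dt
            trajectory.append((prey, predators))
        return trajectory

    traj = lotka_volterra()
    print(min(p for p, _ in traj) > 0)  # True: prey oscillates but never dies out
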

I guess society is not full of spies, and the ecology is not composed of vicious predators after all. There are other mechanisms that are also quite common in nature, and I agree with your apt description:

Nature is mischievous, but not bent on trickery.

That should be a slogan to remember for human-level AI researchers! And now, we can explain and even prove why!

It is the naturalist interpretation of the famous Einstein quote; thanks to (mostly not evil) Google for the reminder.


The Lord God is subtle, but malicious he is not. -- Albert Einstein

 


Best Wishes,

Eray
 


Marcus Abundis

Apr 15, 2016, 10:32:11 AM4/15/16
to MAGIC, ai-phi...@yahoogroups.com
Initially, I tend to view this question in a more reductive sense... doesn't the heart of this issue lie in how one views entropy (natural dissipative tendencies) rather than agency, ethics, or adversarial roles?
I am new to this group, so I am "trying out the ground" here, but the notions raised here seemed rather "higher order" compared to the computational fundamentals I think might be needed to address the matter -- am I missing something, or am I just off key? I confess I did not read the linked paper and stopped reading at the mention of "ethics." Any corrective guidance or thoughts are appreciated.
thanks!