My colleague Alexey Potapov raised a weak but common objection from AGI folklore to my dismissal of Dr. Legg's fantastical claim that general learners must necessarily be too complex (a kind of no-free-lunch theorem): namely, that there is an "arms race" in evolution, and that the environment may therefore be considered adversarial. This is an important question to me, as it has technological significance.
This seems to be the relevant paper, but correct me if I am wrong.
Well, I believe that is quite a weak justification, because other agents are only a small part of the environment to begin with, and only a small fraction of those agents are predators. The objection thus rests on unrealistic exaggerations about the environment -- a common methodological error of mixing unreliable informal arguments with extreme formal claims. In other words, the real environment is not adversarial as Legg assumes, so his theorem, which seems to have been adapted from the Gödel-Lucas-Penrose argument against mechanical minds, simply does not apply.
Conceivability does not entail possibility, let alone actuality. (This is why almost every paper by David Chalmers and Nick Bostrom is wrong -- it is a methodological error no serious philosopher should commit.)
If you make extraordinary claims, you need extraordinary evidence. When you make extraordinary claims about the environment, you need direct, reliable, quantitative evidence about the environment; toy scenarios are irrelevant, due to the incompleteness theorems. Thus the extraordinary evidence (about the physical world) cannot be a mere mathematical lemma. One cannot define such evidence into existence; abracadabra does not work. It must, in general, be based on physical sensor readings, qua empiricism. Physical justification is entirely lacking here, I believe.
I wonder whether there are any stronger objections to my dismissal of Dr. Legg's interpretation concerning the supposed infeasibility of general learners?
To simplify, I *of course* assume scientific realism -- I will not discuss any other kind of framework.
This brings me to state that there is one actual environment, and imaginary environments are irrelevant.
I then observe the actual environment, and trivially, it is not adversarial.
I am now going to examine an *extreme* case of why it is not adversarial, though *non-extreme* (ordinary, daily, boring) cases constitute a better refutation. I am applying Legg's own methodology, taking a formal claim to an extreme imaginary playground, which should, by that methodology, tell us something about the real world.
Extreme claim 1: If the environment were indeed adversarial, no scientist would ever be able to extract any significant amount of algorithmic information from the environment using computational methods. In particular, no scientist could infer any interesting axiomatic theory. Evidence shows that they do, rather easily and routinely, contradicting Legg's extreme claim. (Why does the adversarial environment not prevent them?)
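As a toy illustration of what extreme claim 1 would forbid (a sketch of my own, not physical evidence -- by my own standards above it proves nothing about the actual environment), compare how much regularity a generic compressor extracts from lawful observations versus incompressible, adversary-grade noise:

```python
import random
import zlib

random.seed(0)

# "Lawful" observations: samples of a simple law (uniform acceleration),
# analogous to data a scientist compresses into a short theory.
structured = ",".join(str(t * t // 2) for t in range(1000)).encode()

# Adversarially incompressible observations: uniformly random bytes.
adversarial = bytes(random.getrandbits(8) for _ in range(len(structured)))

# Compressed size as a fraction of original size: low means much
# algorithmic regularity was extracted, near 1.0 means essentially none.
ratio = lambda data: len(zlib.compress(data, 9)) / len(data)

print(f"structured  data compresses to {ratio(structured):.2%} of original size")
print(f"adversarial data compresses to {ratio(adversarial):.2%} of original size")
```

If the environment behaved like the second source, science-by-compression would be impossible; the everyday observation is that scientists find themselves on the first side of this comparison.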
Extreme claim 1, if true, would in turn vindicate (creationist) Leonid Levin's claim that computers cannot form any significant mathematical information: http://arxiv.org/abs/cs/0203029
This places Legg firmly in the camp of Platonist/creationist mathematicians, together with Levin -- if we accept Legg's own methodology in proposing extreme claim 1.
In other words, Legg's theorem (and methodology), if its interpretation were true, would imply either dualism or creationism -- a conclusion I would regard as poor philosophy, though certainly not poor mathematics.
The theorem is mathematically true and rigorous; it is, however, scientifically implausible. In a manner of speaking, Legg's interpretation of his theorem is false, not his theorem -- the interpretation is merely an exaggeration with no useful numbers. This is similar to how Gödel's interpretation of his incompleteness theorems was false, not the theorems themselves -- you can find his interpretations in Gödel's unpublished manuscripts, available in any academic library.
In other words, Legg's claim seems to be a case of making an improbable scenario seem probable, or even actual. I disagree with the informal argumentation -- the interpretation -- because the environment is neither adversarial nor "friendly". It is neither heaven nor hell. The environment is not the adversary, i.e., the devil. It is wrong to attribute such a moral or mythical character to the environment; most philosophers regard the environment as neutral and uncaring about life, but not adversarial. Not even in the mathematical sense, as in adversary arguments and proofs by contradiction. Nature does occasionally pose very difficult problems (NP-complete problems, for instance), but adversarial behavior is certainly not mother nature's preference or the default case. Many further (strong) refutations are immediately possible based on my physicalist re-interpretation of AIT, but I thought I would share this one first.
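A familiar algorithmic analogue of the "not the default case" point (my illustration, not Legg's): worst-case inputs exist, but they typically have to be deliberately constructed, whereas a neutrally shuffled input never behaves that badly. Naive quicksort with a first-element pivot is the textbook example:

```python
import random

def quicksort_comparisons(arr):
    """Count comparisons made by naive quicksort (first element as pivot)."""
    count = 0
    def sort(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        count += len(rest)  # pivot is compared against every remaining element
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)
    sort(arr)
    return count

n = 500
random.seed(1)
typical = random.sample(range(n), n)  # a "neutral" environment: random order
adversarial = list(range(n))          # crafted worst case: already sorted

print("typical input:    ", quicksort_comparisons(typical), "comparisons")
print("adversarial input:", quicksort_comparisons(adversarial), "comparisons")
```

The crafted input forces the full quadratic n(n-1)/2 = 124750 comparisons, while the shuffled one stays near n log n. A neutral environment supplies inputs like the first kind; only an adversary supplies the second.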
I am hinting that there seems to be a common methodological error behind the interpretation, although the mathematics is certainly true. I object to the interpretation on physicalist/empiricist grounds: there is a vast gulf between the theorem and the actual environment, which invalidates the interpretation. Briefly, to believe the interpretation, we would need some -- hard to obtain -- numbers; otherwise it is not quite useful for AI researchers. Just how adversarial is the actual environment on Earth, for instance, and how would you quantify it? How much of the environment is adversarial, and how good can those parts be at adversarial/deceptive behavior? What are the limits? Why is human society not full of disinformation, if that were true, and why is it easy to detect disinformation? Without asking such relevant, scientifically formed questions, and answering them satisfactorily, I find it hard to believe the supposed interpretation of the theorem.
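Here is one hedged sketch of how such quantification might even begin (entirely my own toy construction; `ContextPredictor`, `neutral`, and `adversarial` are hypothetical names, not anything from Legg's paper): measure a fixed learner's prediction accuracy against different sources. Only a source actively coupled against the learner can force worst-case behavior; a merely neutral, noisy source cannot.

```python
import random

class ContextPredictor:
    """Frequency-based next-bit predictor over fixed-length contexts."""
    def __init__(self, order=3):
        self.order = order
        self.counts = {}    # context -> (count of ones, total count)
        self.history = []

    def predict(self):
        ctx = tuple(self.history[-self.order:])
        ones, total = self.counts.get(ctx, (0, 0))
        return 1 if ones * 2 >= total else 0  # majority vote in this context

    def observe(self, bit):
        ctx = tuple(self.history[-self.order:])
        ones, total = self.counts.get(ctx, (0, 0))
        self.counts[ctx] = (ones + bit, total + 1)
        self.history.append(bit)

def accuracy(source, n=5000):
    """Feed n bits from source(guess) to a fresh predictor; report accuracy."""
    p = ContextPredictor()
    correct = 0
    for _ in range(n):
        guess = p.predict()
        bit = source(guess)
        correct += (guess == bit)
        p.observe(bit)
    return correct / n

random.seed(3)
i = 0
def neutral(_guess):
    """Structured but noisy source, indifferent to the predictor's guesses."""
    global i
    i += 1
    bit = 1 if i % 4 < 2 else 0            # periodic signal...
    return bit ^ (random.random() < 0.05)  # ...with 5% noise

def adversarial(guess):
    """Truly adversarial source: always contradicts the predictor."""
    return 1 - guess

print(f"neutral source:     accuracy {accuracy(neutral):.2f}")
print(f"adversarial source: accuracy {accuracy(adversarial):.2f}")
```

The adversarial source drives accuracy to exactly zero, but notice what it requires: read access to the learner's internal prediction at every step. That is the coupling an "adversarial environment" claim quietly assumes, and it is precisely what would need direct, quantitative, physical evidence.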
Comments most welcome, as always.