Overlap between BPL papers and my previous HAM papers

Eray Ozkural

Nov 30, 2016, 10:37:12 AM
to magic...@googlegroups.com, Ben Goertzel, ai-phi...@yahoogroups.com, jue...@idsia.ch, ray solomonoff, David Dowe
Dear colleagues,

Someone pointed out that the algorithm in https://arxiv.org/abs/1110.5667 is too similar to my algorithm in https://arxiv.org/abs/1103.1003, published approximately six months earlier.

The paper titles, authors, and dates on arxiv are as follows:

Inducing Probabilistic Programs by Bayesian Program Merging

(Submitted on 25 Oct 2011)

Teraflop-scale Incremental Machine Learning

(Submitted on 5 Mar 2011)

This is quite unsettling for me. The transfer learning algorithms are virtually identical. As far as I can tell, only the terminology has changed: they used terminology from the PL literature instead of my (informative, IMHO) nomenclature. Their term "Bayesian Program Learning" is inaccurate anyhow, since all learning is Bayesian; there is no such thing as non-Bayesian learning. The method is Algorithmic Memory: it solves the incremental machine learning problem, also known as the transfer learning problem, and is a model of long-term memory. It is not program learning as usual, because we usually use that term for the inference task, not the transfer learning task. I used the additional term "Heuristic" because inductive systems invent heuristic solutions.

The term "Bayesian Program Merging" is also inaccurate: what we are doing is updating the guiding probability distribution. This is called the "update problem", not program merging. They also failed to cite Ray Solomonoff's seminal 1989 and 2002 papers, which introduced the update problem, and Schmidhuber's 2002-2005 OOPS papers, of which my work is a continuation. This does not look like a mere failure to cite; it looks like neglecting the whole AGI and AIT research community. You should therefore know that this situation goes beyond my personal contributions. For one thing, I do not intend to let anyone rip off Solomonoff's original, ground-breaking research. I am fed up with this nonsense. They cannot "deep learning" their way out of this, appropriating research they have not done themselves. It is nigh impossible that they did not know Ray Solomonoff's aforementioned papers if they did anything on program induction, even granting that they may dislike me because they do not know my name, because of my "ethnicity", or because they despise the AGI community for not being the AAAI community and for not being completely US-centered.

The transfer learning algorithm seems about the same; it is merely applied to pattern recognition instead of universal induction, so it is of course more limited in scope. They did not cite my work, and it looks like my algorithms found their way into at least one PhD thesis and several nice publications at coveted venues, while my original contribution was neglected, because, according to one professor, AGI is a "minor" conference and that is normal. But how can these allegedly "minor" conferences become "major" if everyone neglects the work published in them? That is a logical contradiction, and the argument is invalid.

For the record, HAM is more general than the BPL method, it precedes it by 1.5 years, and it should have been cited by their papers. They cannot say they did not see the paper, because it was on arXiv six months before they posted theirs. That is exactly why I put it on arXiv in March 2011: I knew it would appear somewhere else uncannily. I also published and presented essentially the same work at the AGI 2010 conference, and again at AGI 2011.

I am not accusing anyone of a particular charge; I do not know what happened, but that does *not* matter. I think they simply will not cite papers from the AGI conference, which looks to me like a serious ethical error. That attitude does not sit well with my understanding of academic integrity and the peer review process. Even if they forgot, the reviewers should have detected the neglect, so this situation cannot be excused. They had five years to do their literature survey. Why did they not notice the paper in the intervening years, when they could easily have found it on Google Scholar or an arXiv search using common technical terms like "stochastic context-free grammar"? What is the probability of that, all else being equal? That is exactly how the "informant" found the paper: he was apparently searching for my paper on arXiv and found theirs as well. If I did not tell anyone, people would likely keep pretending this did not happen.

I am bringing this matter to your attention because some "clever" people might attempt to erase my existence from the net, even if with a small probability (<5%). I am not going to pretend that I am not worried about this. The papers downloaded from arXiv are attached, in case anyone attempts to delete my arXiv account or destroy my contributions to the AGI-10 and AGI-11 proceedings by some means I cannot yet imagine. You are my witness.


Eray Ozkural, PhD. Computer Scientist
Founder, Gok Us Sibernetik Ar&Ge Ltd.