Has anyone else encountered below-chance generalization performance on a properly balanced binary classification problem? Any thoughts or insights would be much appreciated.
On Mon, 15 Sep 2008, Francisco Pereira wrote:
> On Mon, Sep 15, 2008 at 7:02 PM, Jesse Rissman <ris...@gmail.com>
> wrote:
--
Yaroslav Halchenko
Research Assistant, Psychology Department, Rutgers-Newark
Student Ph.D. @ CS Dept. NJIT
Office: (973) 353-5440x263 | FWD: 82823 | Fax: (973) 353-1171
101 Warren Str, Smith Hall, Rm 4-105, Newark NJ 07102
WWW: http://www.linkedin.com/in/yarik
IMHO a good strategy in such cases is to check what the actual
empirical chance performance is on the given dataset (data + labels) ;-)
Permute the labels randomly and run exactly the same
learning/feature_selection/testing pipeline... do that quite a few
times... and then see how well training on the randomized labels does.
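A minimal sketch of that permutation approach (toy random data and a simple nearest-centroid classifier standing in for the actual net and pipeline; all names here are illustrative, not from any particular toolbox):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 40 samples, 10 features, balanced binary labels.
X = rng.standard_normal((40, 10))
y = np.tile([0, 1], 20)

def train_test_accuracy(X, y, n_train=30):
    """Fit a nearest-centroid classifier on the first n_train samples,
    return accuracy on the held-out remainder. In practice this function
    would wrap the full learning/feature_selection/testing pipeline."""
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

# Observed performance with the true labels.
observed = train_test_accuracy(X, y)

# Empirical chance distribution: rerun the identical pipeline many times
# with randomly permuted labels.
null = np.array([train_test_accuracy(X, rng.permutation(y))
                 for _ in range(1000)])

# One-tailed p-value: how often does randomized-label performance fall
# at or below the observed (suspiciously low) accuracy?
p_value = (np.sum(null <= observed) + 1) / (len(null) + 1)

print(f"observed={observed:.3f}  chance mean={null.mean():.3f}  p={p_value:.3f}")
```

If the observed accuracy sits well in the lower tail of the permutation distribution, the below-chance result is unlikely to be an artifact of the scoring itself and points at something systematic (e.g. dependence between training and test samples, or label structure interacting with the cross-validation scheme).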
On Mon, 15 Sep 2008, Jesse Rissman wrote:
> I have recently encountered a few situations in which a 2-layer
> backpropagation neural net classifier consistently yields below-chance
> levels of performance. The classification is binary, and the number of