Article: Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms

From: jai...@ptolemy.arc.nasa.gov
Date: Sep 9, 2005, 9:10:32 PM
JAIR is pleased to announce the publication of the following article:

Khardon, R., Roth, D. and Servedio, R.A. (2005)
"Efficiency versus Convergence of Boolean Kernels for On-Line Learning Algorithms",
Volume 24, pages 341-356.

For quick access via your WWW browser, use this URL:
http://www.jair.org/abstracts/khardon05a.html

Abstract:
The paper studies machine learning problems where each example is
described using a set of Boolean features and where hypotheses are
represented by linear threshold elements. One method of increasing
the expressiveness of learned hypotheses in this context is to expand
the feature set to include conjunctions of basic features. This can be
done explicitly or, where possible, by using a kernel function.
Focusing on the well-known Perceptron and Winnow algorithms, the paper
demonstrates a tradeoff between the computational efficiency with
which the algorithm can be run over the expanded feature space and the
generalization ability of the corresponding learning algorithm.
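
To make the explicit approach concrete, here is a minimal Python sketch
(illustrative, not from the paper) that expands each Boolean example with
all degree-2 monotone conjunctions before a linear threshold function is
learned over the enlarged vector:

from itertools import combinations

def expand_pairwise(x):
    """Append x_i AND x_j for every pair i < j to the basic features."""
    return list(x) + [x[i] & x[j] for i, j in combinations(range(len(x)), 2)]

# Example: 3 basic features become 3 + C(3,2) = 6 expanded features.
print(expand_pairwise([1, 0, 1]))  # -> [1, 0, 1, 0, 1, 0]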

We first describe several kernel functions which capture either
limited forms of conjunctions or all conjunctions. We show that these
kernels can be used to efficiently run the Perceptron algorithm over a
feature space of exponentially many conjunctions; however, we also show
that using such kernels, the Perceptron algorithm can provably make an
exponential number of mistakes even when learning simple functions.
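
For concreteness: over Boolean inputs, the inner product in the feature
space of all 2^n monotone conjunctions collapses to 2 raised to the number
of coordinates where both inputs are 1, so it is computable in linear time.
A minimal Python sketch of a kernel Perceptron built on this identity
(variable names and the training loop are illustrative, not the paper's):

def conj_kernel(x, y):
    """Inner product over all monotone conjunctions of Boolean features."""
    return 2 ** sum(a & b for a, b in zip(x, y))

def kernel_perceptron(examples, epochs=5):
    """examples: list of (x, y) with x a 0/1 tuple and y in {-1, +1}."""
    support = []  # (x_t, y_t) pairs on which a mistake was made
    for _ in range(epochs):
        for x, y in examples:
            score = sum(y_t * conj_kernel(x_t, x) for x_t, y_t in support)
            if y * score <= 0:  # mistake: fold this example into the sum
                support.append((x, y))
    return support

def predict(support, x):
    """Sign of the kernel expansion maintained by the Perceptron."""
    s = sum(y_t * conj_kernel(x_t, x) for x_t, y_t in support)
    return 1 if s > 0 else -1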

We then consider the question of whether kernel functions can
analogously be used to run the multiplicative-update Winnow algorithm
over an expanded feature space of exponentially many conjunctions.
Known upper bounds imply that the Winnow algorithm can learn
Disjunctive Normal Form (DNF) formulae with a polynomial mistake bound
in this setting. However, we prove that it is computationally hard to
simulate Winnow's behavior for learning DNF over such a feature set.
This implies that the kernel functions which correspond to running
Winnow for this problem are not efficiently computable, and that there
is no general construction that can run Winnow with kernels.
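
For contrast, the explicit multiplicative-update Winnow is easy to state;
what the hardness result above rules out is simulating these updates
implicitly, via a kernel, over the exponential space of conjunctions. A
minimal Python sketch of classic Winnow on a small explicit feature space
(parameter choices are illustrative):

def winnow(examples, n, alpha=2.0, epochs=5):
    """examples: list of (x, y), x a 0/1 tuple of length n, y in {0, 1}."""
    w = [1.0] * n
    theta = float(n)  # the usual threshold choice for Winnow
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if pred != y:
                # Promote active features on a false negative,
                # demote them on a false positive.
                factor = alpha if y == 1 else 1.0 / alpha
                w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w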

The article is available via:

-- comp.ai.jair.papers (also see comp.ai.jair.announce)

-- World Wide Web: The URL for our World Wide Web server is
http://www.jair.org/
For direct access to this article and related files try:
http://www.jair.org/abstracts/khardon05a.html

-- Anonymous FTP from Carnegie-Mellon University (USA):
ftp://ftp.cs.cmu.edu/project/jair/volume24/khardon05a.ps
The compressed PostScript file is named khardon05a.ps.Z

For more information about JAIR, visit our WWW or FTP sites, or
contact jai...@isi.edu

--
Steven Minton
JAIR Managing Editor
