
Can neural networks extrapolate?


Henry Choy

23 Jul 1993 23:09:37
Do neural networks guarantee that they have learned a particular
function? I think they can interpolate well, but extrapolation may
be a problem. For instance, a neural network may be taught the
function y = cos x for values of x between -pi and pi, but how
can we be sure that the output AFTER TEACHING is cos x for x
outside this range?

--

Henry Choy
ch...@cs.usask.ca

Anything worth doing is worth overdoing. - R. Heinlein
is worth doing well. - Philip Dormer Stanhope, Earl of Chesterfield

Bill Armstrong

25 Jul 1993 12:12:45
choy@dvinci (Henry Choy) writes:

>Do neural networks guarantee that they have learned a particular
>function? I think they can interpolate well, but extrapolation may
>be a problem. For instance, a neural network may be taught the
>function y = cos x for values of x between -pi and pi, but how
>can we be sure that the output AFTER TEACHING is cos x for x
>outside this range?

You can't. In fact, you can be pretty sure that it won't be close to
cos x. Your best hope would be that the function would be close to a
constant -1 outside the interval. Then it would be extrapolating via
a nearest neighbor approach, since cos x is -1 at the endpoints of the
interval.

Your hopes for good interpolation are just that -- hopes. Even for
interpolation you can't be sure in general, because all the learning
algorithm takes into account is the error on the training points, and
it is oblivious to the harm it could do by not interpolating as
smoothly as possible. Keeping the weights small, or using a test set
during training, might help to prevent some wild deviations in networks
beyond toy-problem size, but in high-dimensional spaces you can't get
enough data points to be sure that the function interpolates or
extrapolates well *everywhere*.
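A quick way to see this numerically (a minimal sketch in present-day
Python, assuming NumPy and scikit-learn, which are of course not part
of the original discussion) is to train a small sigmoidal net on cos x
over [-pi, pi] and then query it outside the interval:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a small net with saturating (tanh) units on y = cos x,
# with training inputs restricted to [-pi, pi].
rng = np.random.default_rng(0)
x_train = rng.uniform(-np.pi, np.pi, 200).reshape(-1, 1)
y_train = np.cos(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   max_iter=10000, random_state=0)
net.fit(x_train, y_train)

# Inside the interval the fit tracks cos x; outside, the tanh units
# saturate and the output flattens toward a constant instead of
# continuing to oscillate.
for x in [0.0, np.pi, 2 * np.pi, 3 * np.pi]:
    print(f"x = {x:6.3f}  net = {net.predict([[x]])[0]:+.3f}"
          f"  cos x = {np.cos(x):+.3f}")

Whether the output settles near -1, as in the nearest-neighbor hope
above, or at some other constant depends on the weights the training
run happens to land on.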

If you want to be sure about NN performance after training, you have
to use an appropriate NN design technique that doesn't depend on the
luck of the draw.

Bill

--
***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
ar...@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071

Henry Choy

26 Jul 1993 18:52:48
Bill Armstrong (ar...@cs.UAlberta.CA) wrote:
: choy@dvinci (Henry Choy) writes:

: >Do neural networks guarantee that they have learned a particular
: >function? I think they can interpolate well, but extrapolation may
: >be a problem. For instance, a neural network may be taught the
: >function y = cos x for values of x between -pi and pi, but how
: >can we be sure that the output AFTER TEACHING is cos x for x
: >outside this range?

: You can't. In fact, you can be pretty sure that it won't be close to
: cos x. Your best hope would be that the function would be close to a
: constant -1 outside the interval. Then it would be extrapolating via
: a nearest neighbor approach, since cos x is -1 at the endpoints of the
: interval.

: Your hopes for good interpolation are just that -- hopes. Even for
: interpolation you can't be sure in general, because all the learning
: algorithm takes into account is the error on the training points, and
: it is oblivious to the harm it could do by not interpolating as
: smoothly as possible.

I think about how people learn the meanings of words. Just because
we grasp a meaning doesn't mean we can express ideas discretely
(paint everything in black and white).

Nevertheless, we can conceive of the ideas "extrapolate" and "pattern".
Therefore a neural network also has a chance.

5251g...@vms.csd.mu.edu

1 Aug 1993 17:54:47
Extrapolation via ANN is indeed a problem, but if it can be done
successfully it will be very useful. Generalization vs. memorization
plays a role in how well an ANN will interpolate or extrapolate. On the
other hand, network size and the training pattern space contribute to
whether the network is memorizing or learning by generalizing. Finding
the optimum network size and training pattern vectors (or perhaps a
suitable algorithm) for an ANN to learn purely by generalizing is a
challenge, and it could lead to better extrapolation by ANNs. I
encountered the very same problem you had, and I would be happy to
exchange information about it.
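One rough way to probe the size/memorization question today (a minimal
sketch, assuming modern NumPy and scikit-learn rather than anything
available in 1993) is to train nets of several sizes on a few patterns
and compare the training error with the error on a dense grid inside
the same interval:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-np.pi, np.pi, 15).reshape(-1, 1)  # few patterns
y_train = np.cos(x_train).ravel()
x_grid = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # dense grid
y_grid = np.cos(x_grid).ravel()

for hidden in [2, 10, 100]:
    # lbfgs is more reliable than the default solver on tiny data sets.
    net = MLPRegressor(hidden_layer_sizes=(hidden,), activation="tanh",
                       solver="lbfgs", max_iter=20000, random_state=0)
    net.fit(x_train, y_train)
    e_train = np.mean((net.predict(x_train) - y_train) ** 2)
    e_grid = np.mean((net.predict(x_grid) - y_grid) ** 2)
    # A large gap between the two errors is the usual signature of
    # memorization rather than generalization.
    print(f"{hidden:3d} hidden units: train MSE {e_train:.5f}, "
          f"grid MSE {e_grid:.5f}")

How large the gap actually gets varies from run to run; the point is
only that training error alone says nothing about it.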

Louis Denger

6 Aug 1993 06:30:19
>>Bill Armstrong (ar...@cs.UAlberta.CA) wrote:
>>: choy@dvinci (Henry Choy) writes:
>>
>>: >Do neural networks guarantee that they have learned a particular
>>: >function? I think they can interpolate well, but extrapolation may
>>: >be a problem. For instance, a neural network may be taught the
>>: >function y = cos x for values of x between -pi and pi, but how
>>: >can we be sure that the output AFTER TEACHING is cos x for x
>>: >outside this range?
>>
>>: You can't. In fact, you can be pretty sure that it won't be close to
>>: cos x.

This is rather rash, for it is well known that cos(x + 2*pi) = cos(x).
Thus any secondary-school student will find that if the NN is properly
instructed, it can compute outside -pi <= x <= pi.

>>
>>: Your hopes for good interpolation are just that -- hopes. Even for
>>: interpolation you can't be sure in general, because all the learning
>>: algorithm takes into account is the error on the training points, and
>>: it is oblivious to the harm it could do by not interpolating as
>>: smoothly as possible.
>>


I never experimented with interpolation, but I think that the accuracy
of the result should depend on the number of:
- input units
- hidden units
- output units
- training patterns
as well as on the training error.

I would associate extrapolation with forecasting. I suggest the paper
by F.S. Wong, Time series forecasting using etc..., published in
Neurocomputing 2 (1990/91) 147-149, which could be of much help.

Regards

Louis

Warren Sarle

6 Aug 1993 13:41:42

In article <1993Aug6.1...@trl.oz.au>, lo...@medici.trl.OZ.AU (Louis Denger) writes:
|> >>Bill Armstrong (ar...@cs.UAlberta.CA) wrote:
|> >>: choy@dvinci (Henry Choy) writes:
|> >>
|> >>: >Do neural networks guarantee that they have learned a particular
|> >>: >function? I think they can interpolate well, but extrapolation may
|> >>: >be a problem. For instance, a neural network may be taught the
|> >>: >function y = cos x for values of x between -pi and pi, but how
|> >>: >can we be sure that the output AFTER TEACHING is cos x for x
|> >>: >outside this range?
|> >>
|> >>: You can't. In fact, you can be pretty sure that it won't be close to
|> >>: cos x.
|>
|> This is rather rash, for it is well known that cos(x + 2*pi) = cos(x).
|> Thus any secondary-school student will find that if the NN is properly
|> instructed, it can compute outside -pi <= x <= pi.

No, it's not rash, it's realistic. The only way I know of to get a NN
to extrapolate a periodic function is to use a periodic activation
function, which is really cheating if you're concerned about
extrapolation in any general sense.
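To make the "cheating" concrete, here is a minimal sketch (modern
NumPy assumed, purely illustrative): fix the hidden units to be
sinusoids of the right frequency, so the output is linear in the
trainable weights, and the fit extrapolates essentially exactly, but
only because the periodicity was built in by hand:

import numpy as np

# A "network" with fixed sinusoidal hidden units:
# yhat = a*sin(x) + b*cos(x) + c, linear in (a, b, c),
# so ordinary least squares finds the output weights.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 200)   # training inputs only in [-pi, pi]
y = np.cos(x)

features = np.column_stack([np.sin(x), np.cos(x), np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(features, y, rcond=None)

# Extrapolation far outside the training interval is essentially exact,
# because the periodic basis was chosen to match the target.
for t in [2 * np.pi, 5 * np.pi, 10 * np.pi]:
    print(f"x = {t:7.3f}  model = {a*np.sin(t) + b*np.cos(t) + c:+.6f}"
          f"  cos x = {np.cos(t):+.6f}")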

--

Warren S. Sarle      SAS Institute Inc.   The opinions expressed here
sas...@unx.sas.com   SAS Campus Drive     are mine and not necessarily
(919) 677-8000       Cary, NC 27513       those of SAS Institute.

Charles Benjamin Roosen

6 Aug 1993 03:51:29
In article <1993Aug6.1...@trl.oz.au> lo...@medici.trl.OZ.AU (Louis Denger) writes:

> >>Bill Armstrong (ar...@cs.UAlberta.CA) wrote:
> >>: choy@dvinci (Henry Choy) writes:
> >>
> >>: >Do neural networks guarantee that they have learned a particular
> >>: >function? I think they can interpolate well, but extrapolation may
> >>: >be a problem. For instance, a neural network may be taught the
> >>: >function y = cos x for values of x between -pi and pi, but how
> >>: >can we be sure that the output AFTER TEACHING is cos x for x
> >>: >outside this range?
> >>
> >>: You can't. In fact, you can be pretty sure that it won't be close to
> >>: cos x.

> This is rather rash, for it is well known that cos(x + 2*pi) = cos(x).
> Thus any secondary-school student will find that if the NN is properly
> instructed, it can compute outside -pi <= x <= pi.

First off, you must not know very many secondary-school students.
Second, it is clear that if your model is of the form
\hat{y} = cos(\alpha x + \beta), then you can fit it. However, if you
are using a net with polynomial, Gaussian, or sigmoidal transfer
functions, it is highly unlikely to capture the periodic nature of the
function outside the range of the training data.
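Fitting that parametric form is a small nonlinear least-squares
problem, sketched below with SciPy's curve_fit (an assumption of this
illustration, not something from the original thread); note that the
starting point matters, since the periodic model has many local minima:

import numpy as np
from scipy.optimize import curve_fit

def model(x, alpha, beta):
    # The family \hat{y} = cos(alpha*x + beta) from the post above.
    return np.cos(alpha * x + beta)

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 100)
y = np.cos(x)

# Start near the truth; from a distant start the optimizer can lock
# onto the wrong frequency.
(alpha, beta), _ = curve_fit(model, x, y, p0=[1.2, 0.1])
print(alpha, beta)  # close to 1.0 and 0.0

# Because the model itself is periodic, extrapolation follows for free.
print(model(np.array([4 * np.pi]), alpha, beta), np.cos(4 * np.pi))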

> >>: Your hopes for good interpolation are just that -- hopes. Even for
> >>: interpolation you can't be sure in general, because all the learning
> >>: algorithm takes into account is the error on the training points, and
> >>: it is oblivious to the harm it could do by not interpolating as
> >>: smoothly as possible.

> I never experimented with interpolation.

I'm not sure what you use a net for if you haven't used it for
interpolation. If you aren't interpolating, you are just getting the
fit at the training values, so why build a net? (I suppose you could
argue that pattern recognition is more of a clustering problem than an
interpolation one.)

> But I think that the accuracy of the result should depend of
> the number of:
> - input units
> - hidden unit
> - output units
> - patterns
> - the error.

The accuracy will depend on how close the true function is to the
class of implementable functions given your particular net
architecture, and to how close to the true function your training algorithm
is able to get you. (Which includes the factors you list.)

Regards,

Charles Roosen
