In the interpX family there is an additional parameter to
specify the kind of evaluation outside the domain.
hth
Bruno
I faced a similar problem not so long ago. To solve it I wrote my
own interpolation routine without using any existing Scilab
function, since those had the recurring problem of a limited range
of action when I had few data points with great distances between
them.
So what I did was basically this (though there are probably more
efficient ways of doing it):
I had the data points I knew, coupled with their x-y positions, in
a matrix like this:
datapoints=[value1, value2, value3; posx1, posx2, posx3; posy1, posy2, posy3];
Then I defined a function to get the distance between those points
and any arbitrary point whose value I want to extrapolate.
Something like this (xp and yp are the coordinates of the
extrapolated point):
p=4; // the higher 'p', the less influence from distant points
function out=dist(xp,yp,n)
out=sqrt((xp-datapoints(2,n))^2+(yp-datapoints(3,n))^2)^p;
endfunction
So now what? I have a matrix of whatever size I want (the bigger,
the slower, though) that I want to fill in, starting from the
points whose real value I do know. So I run a loop that, point by
point, measures the distance to every known data point and then
takes a weighted average of their 'intensities'. Each known data
point thus contributes a percentage of the extrapolated value
depending on how close it is to the point we are extrapolating to.
Something like this:
// Scans the square matrix of dimensions msize*msize
// (it doesn't need to be a square matrix though)
data=zeros(msize,msize);
for n=1:nsize
// places the known data points into a blank matrix
// where we want to extrapolate the other points
data(datapoints(2,n),datapoints(3,n))=datapoints(1,n);
end;
dataf=data; // the final matrix containing the extrapolated data
contrib=0;
ncontrib=0;
for xp=1:msize
for yp=1:msize
// resets the contribution counters
contrib=0;
ncontrib=0;
// scans the known data points (you need to supply
// their number, nsize)
for n=1:nsize
// checks that the scanned point is not itself
// one of the known data points
if dist(xp,yp,n)>0 then
// sums up the contribution of each data point, weighted
// inversely proportional to distance, then normalized
calc=1/dist(xp,yp,n)^2;
contrib=contrib+calc*datapoints(1,n);
ncontrib=ncontrib+calc;
end;
end;
if data(xp,yp)==0 then
dataf(xp,yp)=contrib/ncontrib; // the extrapolated value
end;
end;
end;
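For comparison, here is a minimal Python sketch of the same inverse-distance-weighting idea (Shepard's method). The function name `idw_grid` and the choice of power p=2 are illustrative, not part of the Scilab code above:

```python
def idw_grid(points, msize, p=2):
    """Fill an msize x msize grid by inverse-distance weighting.

    points: list of (value, x, y) tuples with 1-based integer grid
    coordinates, mirroring the 'datapoints' matrix above.
    """
    grid = [[0.0] * msize for _ in range(msize)]
    known = {(x, y): v for v, x, y in points}
    for xp in range(1, msize + 1):
        for yp in range(1, msize + 1):
            if (xp, yp) in known:
                # keep the known value as-is
                grid[xp - 1][yp - 1] = known[(xp, yp)]
                continue
            contrib = 0.0
            ncontrib = 0.0
            for v, x, y in points:
                d2 = (xp - x) ** 2 + (yp - y) ** 2
                w = 1.0 / d2 ** (p / 2)  # weight ~ 1/distance^p
                contrib += w * v
                ncontrib += w
            grid[xp - 1][yp - 1] = contrib / ncontrib
    return grid
```

A point equidistant from two known values gets their plain average; raising p pulls each extrapolated value toward its nearest known neighbour.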
And that's basically it. Hope I was clear enough; otherwise, ask again.
Hope it helps you out.
Pedro
// interpolation point
x = linspace(-0.5,0.5,5);
y = x.^2;
// spline coef
d = splin(x,y);
// evaluation in and out the domain
xx = linspace(-1,1,80)';
yy = interp(xx, x,y,d); // default outmode = "C0"
yy2 = interp(xx, x,y,d,"natural");
yy3 = interp(xx, x,y,d,"linear");
clf();
plot(x,y,"ko",xx,yy,"b",xx,yy2,"r",xx,yy3,"g");
legend("interp point","C0","natural","linear")
hth
Bruno
Maybe the pair cshep2d and eval_cshep2d could have
been useful for you; have you tried them? The help page
shows an example.
hth
Bruno