sage: D[0](f)(x, y)/x + D[0, 1](f)(x, y)
If you think you want derivatives right now, you're probably better off
using aptly-named symbolic variables, and performing the differentiation
later on once you're done manipulating the "derivatives."
sage: f = function('f', x)
sage: f_prime = f.diff(x)
sage: f_prime.simplify()
D[0](f)(x)
sage: f_prime(0).simplify()
...
NotImplementedError: arguments must be distinct variables
Most symbolic stuff blows up the same way:
sage: solve([f_prime(0) == 0], f_prime(0))
...
NotImplementedError: arguments must be distinct variables
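For comparison (not Sage): SymPy handles this exact pattern by wrapping the derivative-at-a-point in an unevaluated `Subs` object instead of raising, so you can keep manipulating it and plug in a concrete function later. A minimal sketch:

```python
# SymPy analogue: a derivative evaluated at a point becomes an
# unevaluated Subs object rather than an error.
from sympy import Function, symbols, sin

x = symbols('x')
f = Function('f')

f_prime = f(x).diff(x)        # Derivative(f(x), x)
at_zero = f_prime.subs(x, 0)  # Subs(Derivative(f(x), x), x, 0)

# Later, swap in a concrete function and evaluate:
print(at_zero.subs(f, sin).doit())  # cos(0) = 1
```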
Substitution doesn't even work pre-evaluation:
sage: g = function('g', x)
sage: f_prime.substitute_function(f,g)
D[0](f)(x)
A plain symbolic var('D0fx'), on the other hand, works just fine.
I sort of buy that, but if,
sage: f = function('f', x)
doesn't make `f` a function, that's a user-interface WTF =)
It wouldn't matter if I could substitute them as expressions, but that
doesn't work either:
sage: f = function('f', x)
sage: g = function('g', x)
sage: f.diff(x).substitute_expression(f==g)
D[0](f)(x)
sage: f.diff(x).subs(f=g)
D[0](f)(x)
sage: f.diff(x)(f=g)
D[0](f)(x)
I reported a lot of these at,
http://trac.sagemath.org/sage_trac/ticket/11842
and later duped it to,
http://trac.sagemath.org/sage_trac/ticket/6480
which seems to be the first report of the problem.
> For that, you would have to use a different substitution function and
> it wouldn't have any effect anyway because the expression f(x) does
> not occur in f_prime.
>
> If you fix this, the substitution does work:
>
> sage: f = f.operator()
> sage: g = g.operator()
> sage: f,g
> (f, g)
> sage: f_prime.substitute_function(f,g)
> D[0](g)(x)
>
> There is a different issue, which is fixed by http://trac.sagemath.org/sage_trac/ticket/12801
>
After #12801, I'm having trouble reconciling these two examples:
sage: f = function('f', x)
sage: g = function('g')
sage: f.diff(x).substitute_function(f,g)
D[0](f)(x)
versus,
sage: f = function('f')
sage: g = function('g')
sage: f(x).diff(x).substitute_function(f,g)
D[0](g)(x)
It would be slightly better, I think, if the first example worked, but I
can live with using the second (although I never would have discovered
it on my own).
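For what it's worth, the same "substitute the bare function head, not the evaluated expression" distinction shows up in SymPy, where the operator form works directly. A sketch (SymPy, not Sage):

```python
# SymPy analogue: substitute the function object itself (the
# "operator") into a derivative, rather than the expression f(x).
from sympy import Function, symbols

x = symbols('x')
f = Function('f')
g = Function('g')

f_prime = f(x).diff(x)
print(f_prime.subs(f, g))  # Derivative(g(x), x)
```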
What I would /really/ like to be able to do is,
midpoint = (1/2)*( f(a) + f(b) )
and then approximate multiple functions by swapping out the symbolic `f`
for a real function like sine.
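To make the goal concrete, here's the midpoint rule written that way in SymPy (an analogue, not the Sage session from this thread): build the formula against a purely symbolic `f`, then swap in `sin` afterwards.

```python
# SymPy sketch: build a quadrature-style rule with a symbolic f,
# then substitute a concrete function like sin later.
from sympy import Function, symbols, sin, Rational

a, b = symbols('a b')
f = Function('f')

midpoint = Rational(1, 2) * (f(a) + f(b))
print(midpoint.subs(f, sin))  # sin(a)/2 + sin(b)/2
```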
That was a too-simple example. You can't create e.g. a cubic spline
because of the evaluated derivatives. In general the form over [-1,1]
would look like,
s(f;x) = a(x)*f(-1) + b(x)*f'(-1) + c(x)*f(1) + d(x)*f'(1)
Swapping out `f` after evaluating it at the endpoints is what causes the
biggest problems.
If you make `s` a function that takes `f` as an argument, it works, but
I need to be able to swap out `f` later for two reasons:
1) With higher-order approximations, optimal splines can take a long
time to compute.
2) To calculate error bounds, I need to know the coefficients of f,
f.diff(), ... What's the coefficient of sin(pi)?
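Again as an illustration outside Sage, SymPy can express the s(f;x) form above with the endpoint derivatives kept unevaluated, so swapping `f` out after "evaluating" at the endpoints still works and the coefficients remain visible. A sketch, using symbolic coefficient functions a, b, c, d:

```python
# SymPy sketch of s(f;x) = a(x)*f(-1) + b(x)*f'(-1) + c(x)*f(1) + d(x)*f'(1).
# The endpoint derivatives stay as unevaluated Subs objects, so f can be
# replaced by sin afterwards without losing the coefficient structure.
from sympy import Function, symbols, sin, cos

x = symbols('x')
f = Function('f')
a, b, c, d = Function('a'), Function('b'), Function('c'), Function('d')

fp = f(x).diff(x)
s = (a(x) * f(-1) + b(x) * fp.subs(x, -1)
     + c(x) * f(1) + d(x) * fp.subs(x, 1))

# Swap in sin after the fact; the Subs terms evaluate to cos(-1), cos(1)
# instead of raising an error.
print(s.subs(f, sin).doit())
```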
>
> That said, I think your confusion shows that it's probably better if
> function('f',x) were to be deprecated in favour of function('f')(x).
> The relevant bit of information to glean from the notation is that f
> takes one argument. You can already indicate the valid number of
> arguments:
>
This does work better; I'll use it from now on, thanks.
Yes, I didn't realize that #12796 fixed this the first time I looked at
it. I've reviewed it, thanks!