Having just retired, and having some free time on my hands, and
remembering how much I enjoyed discovering LISP so many, many years ago,
[ it was release 1.5, then...], I decided to work my way through SICM.
So, pardon this low-level beginner's question:
Based on the Scheme functions available to us in sections 1.1-1.4, do
we have any recommended techniques that will clue us in to potentially
impossible situations!?
For Exercise 1.6, I dutifully modified find-path, make-path, and
parametric-path-action so that I could use an additional pre-initial
point t00 q00 [ where t00 < t0 ] and post-final point t11 q11 [ with
t11 > t1 ], so that I could fix the initial and final velocities at
times t0 and t1 and produce impossible situations for a free-particle
path. Alas, find-path continues to produce lovely, smooth polynomial
graphs.
This, of course, is entirely expected, given that we use
linear-interpolants and then Lagrange interpolation polynomials of
fixed finite degree.
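For concreteness, the modification looked roughly like this (a sketch
reconstructed from memory and untested, built on the section 1.4
definitions; the *-pinned-* names are my own):

(define (make-pinned-path t00 q00 t0 q0 t1 q1 t11 q11 qs)
  ;; Two extra knots, one before t0 and one after t1.  Choosing q00 and
  ;; q11 so that the secants (t00,q00)-(t0,q0) and (t1,q1)-(t11,q11)
  ;; match the desired velocities approximately pins the slopes at the
  ;; ends of the interpolating polynomial.
  (let ((n (length qs)))
    (let ((ts (linear-interpolants t0 t1 n)))
      (Lagrange-interpolation-function
       (append (list q00 q0) qs (list q1 q11))
       (append (list t00 t0) ts (list t1 t11))))))

(define ((parametric-pinned-action Lagrangian t00 q00 t0 q0 t1 q1 t11 q11) qs)
  ;; The action is still taken over [t0, t1] only.
  (Lagrangian-action Lagrangian
                     (make-pinned-path t00 q00 t0 q0 t1 q1 t11 q11 qs)
                     t0 t1))

(define (find-pinned-path Lagrangian t00 q00 t0 q0 t1 q1 t11 q11 n)
  (let ((initial-qs (linear-interpolants q0 q1 n)))
    (make-pinned-path t00 q00 t0 q0 t1 q1 t11 q11
                      (multidimensional-minimize
                       (parametric-pinned-action
                        Lagrangian t00 q00 t0 q0 t1 q1 t11 q11)
                       initial-qs))))

The extra knots only pin the slopes through secants, of course, so the
velocity constraint is only approximate.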
What did others do? !?
-sincerely,
Bill Margolis.
> For Exercise 1.6, [ ... snipped ... ] What did others do? !?
I was stumped by that one. I reckoned I would need a function to
produce a polynomial interpolation of the path through given points,
but holding the derivatives constant at the endpoints. I don't think
there is such a tool in the scmutils toolbox. I thought that a cubic
spline interpolation would do the trick, but implementing it is
non-trivial. There is a section on cubic spline interpolation in Press
et al., "Numerical Recipes", or
http://en.wikipedia.org/wiki/Spline_interpolation (but Wikipedia is
slow these days, or perhaps it's just that it knows I haven't made a
donation ...)
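Even a single cubic segment shows the idea of pinning the endpoint
slopes; something like this untested sketch (plain Scheme, not a
scmutils facility):

(define (hermite-cubic t0 q0 v0 t1 q1 v1)
  ;; One cubic through (t0,q0) and (t1,q1) with prescribed slopes v0
  ;; and v1 at the two ends (the cubic Hermite form).
  (lambda (t)
    (let* ((h (- t1 t0))
           (s (/ (- t t0) h))
           (s2 (* s s))
           (s3 (* s s s)))
      (+ (* (+ (* 2 s3) (* -3 s2) 1) q0)
         (* (+ s3 (* -2 s2) s) h v0)
         (* (+ (* -2 s3) (* 3 s2)) q1)
         (* (- s3 s2) h v1)))))

A clamped cubic spline would chain segments like this through the
interior points, choosing the interior slopes so that the second
derivatives also match.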
I have typed up my solutions for all the exercises up to 1.12. Drope me
a line if you want to see them.
Please let me know when you get to Exercise 1.14 - I have been stuck on
that for about 4 years now ...
Cheers
Ian
Thanks for the reference, Ian! Yup, the Press book is outstanding.
But maybe getting a better algorithm won't make much of a difference?
After all, one possibility is that the minimization problem across
continuous functions with fixed endpoints and a fixed endpoint velocity
may not always be well-posed: the minimum path may not be
differentiable there[!]
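To make that worry concrete, here is a toy calculation of my own (so
treat it as an illustration only): take the free particle on [0,1]
with q(0) = 0, q(1) = 1, plus the extra condition Dq(0) = 0, and
consider the family

\[
q_\varepsilon(t) =
\begin{cases}
a_\varepsilon\, t^2/(2\varepsilon), & 0 \le t \le \varepsilon,\\
a_\varepsilon\, t - a_\varepsilon \varepsilon/2, & \varepsilon \le t \le 1,
\end{cases}
\qquad
a_\varepsilon = \frac{1}{1 - \varepsilon/2},
\]

which meets all three conditions. Its action is

\[
S[q_\varepsilon]
= \frac{m}{2}\int_0^1 (Dq_\varepsilon)^2\,dt
= \frac{m}{2}\, a_\varepsilon^2\left(\frac{\varepsilon}{3} + 1 - \varepsilon\right)
\;\to\; \frac{m}{2}
\quad\text{as } \varepsilon \to 0,
\]

which is the unconstrained minimum, yet the limiting path has a corner
at t = 0. So the infimum is not attained by any smooth path, which is
more or less the non-well-posedness I had in mind.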
>
> I have typed up my solutions for all the exercises up to 1.12. Drop me
> a line if you want to see them.
Thanks, but so far I have been enjoying working them, albeit slowly.
>
> Please let me know when you get to Exercise 1.14 - I have been stuck on
> that for about 4 years now ...
[!] When I get there, I will let you know[!]
>
> Cheers
> Ian
One aside: although I am enjoying working through the notions about
differentiating complex structures and using the fixed positional
notation in place of the traditional dx's and dy's, it seems to me that
Sussman and Wisdom have missed an opportunity to easily clarify the
traditional notations by making use of simple notions from programming
language theory. Operationally, one could retain the x's, y's, and z's,
or the q's and qdot's, as the names of variables within the parameter
lists of function definitions, where they are unambiguously defined as
local variables. Then differentiation with respect to those variables
would be a special form that identifies the differentiation variable
with the local variable in the parameter list of the defun? (See the
rough sketch below.) (Of course, perhaps this only makes sense within
Common Lisp, where the variable namespace is kept disjoint from the
function namespace...) Still, after all, there is a lot to be said for
the advantages of being able to use labels for positional parameters,
which we have been doing successfully for the past 300 years or so...
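The sort of thing I am imagining, as a purely hypothetical sketch
(nothing like this exists in scmutils; the names are made up):

(define (make-named-function names procedure)
  ;; Keep the parameter names alongside the procedure so that a request
  ;; to differentiate "with respect to x" can be resolved by position.
  (list 'named-function names procedure))

(define (named-function-names nf) (cadr nf))
(define (named-function-procedure nf) (caddr nf))

(define (position-of name names)
  (let loop ((names names) (i 0))
    (cond ((null? names) (error "No such parameter:" name))
          ((eq? (car names) name) i)
          (else (loop (cdr names) (+ i 1))))))

(define (D-wrt name nf)
  ;; Dispatch the named request to the positional operator (partial i).
  ((partial (position-of name (named-function-names nf)))
   (named-function-procedure nf)))

;; e.g. (D-wrt 'v (make-named-function '(t q v)
;;                                     (lambda (t q v) (* 1/2 'm v v))))
;; behaves like ((partial 2) (lambda (t q v) (* 1/2 'm v v))).

A real version would presumably be a special form (or a macro), so that
the names come straight from the lambda list instead of being repeated,
but the dispatch would be the same.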
-bill.
> > I have typed up my solutions for all the exercises up to 1.12. Drop
> > me a line if you want to see them.
> Thanks, but so far I have been enjoying working them, albeit slowly.
I wasn't proposing that you stop doing them yourself; I was proposing
that you compare my solutions to yours. I would be interested in the
result of the comparison.
> > Please let me know when you get to Exercise 1.14 - I have been stuck
> > on that for about 4 years now ...
> [!] When I get there, I will let you know[!]
I look forward to it.
I need to think more about your other comments.
Regards
Ian
Exercise 1.7 wasn't so bad -- just lifting similar proofs from calculus
results.
In Exercise 1.8, I presume everyone uses (1.22) as the implementation.
I was not able to get Exercise 1.8(b) to my satisfaction, however,
since I did want to be able to make comparisons at a higher, functional
level, rather than having to translate everything into expressions with
't'. (Maybe you had better luck?)
Exercise 1.9 was pretty straightforward, getting me used to
manipulating partials.
Exercise 1.10 needs us to add the assumption that ((D eta) t_1) =
((D eta) t_2) = 0, which obviously needs to be generalized for 1.12(c)
in order to derive the alternating-sum expression with n-th term:
(* (expt -1 n) ((expt D n) (compose ((partial (+ n 1)) L) (Gamma q))))
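Written out in full, the generalized equation I end up with for a
Lagrangian depending on derivatives of the path up through order N
(with Gamma extended to carry those higher derivatives) is, at least
by my indexing,

\[
\sum_{n=0}^{N} (-1)^n\, D^n\bigl(\partial_{n+1} L \circ \Gamma[q]\bigr) = 0,
\]

which for N = 1 is just the usual Lagrange equation.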
So, I did get to 1.14:
> > Please let me know when you get to Exercise 1.14 - I have been stuck
> > on that for about 4 years now ...
It was hard! I was getting nowhere!
After a bit of frustration, I tried out a counter-example: let L be the
simplest possible Lagrangian, say the free-particle example
L(t, x, v) = constant * (v dot v), with the resulting q(t) = a t + b.
Then choose the simplest C(t', x', v'), say one with x = h(t') (i.e.,
no dependence on x'). The result turned out to allow an arbitrary
assignment for q'(t) [!].
[ i.e., L'(t', x', v') = constant * ((D h) t') * ((D h) t'), which has
no dependence on x' or v', so the partials on the left-hand side and
right-hand side of the Lagrange equations are both zero, no matter what
function h we choose. ]
So that clued me in to the obvious but unmentioned requirement: for
q = F(t', q') to be invertible, so that we can solve for q' in terms of
q, we need the implicit function theorem, which requires:
    ((partial 1) F) not 0 in the domain of F containing the
    investigated path, yet to be determined[!]
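In scmutils terms, the degenerate case is easy to state (an untested
sketch; the *-degenerate names are my own):

(define ((L-free-particle mass) local)
  (* 1/2 mass (square (velocity local))))

(define ((C-degenerate h) local)
  ;; Plays the role of C(t', x', v'), but x = h(t') ignores x' (and v')
  ;; entirely, so F is not invertible in x'.
  (up (time local)
      (h (time local))
      ((D h) (time local))))

(define (L-degenerate h)
  (compose (L-free-particle 'm) (C-degenerate h)))

;; The residual should print as plain zero for an arbitrary literal h
;; and an arbitrary literal q', which is the arbitrariness I mean:
(print-expression
 (((Lagrange-equations (L-degenerate (literal-function 'h)))
   (literal-function 'q-prime))
  't))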
Anyway, assuming that we add that extra condition, I do seem to be able
to prove the 1.14 result satisfactorily. The overall strategy that
worked for me, after a good deal of frustration? I essentially used the
concept of lazy evaluation as my guidepost -- don't expand differential
evaluations until absolutely necessary[!]
First I proved 1.14 in a simplified version assuming ((partial 0) F)
was zero, so that I could use x = f(x') in place of x = F(t', x').
That way there is a simple relation between q'(t) and q(t):
q(t) = f(q'(t)).
Next, I found it easiest to think of the Lagrange equations as the
equality (D (compose ((partial 2) L) (Gamma q))) =
(compose ((partial 1) L) (Gamma q)). This lets me talk about the lhs
(left-hand side) and the rhs (right-hand side) as I expand out,
simplify expressions, and try to conclude that lhs = rhs.
Here is what happens: on the lhs, as you begin to evaluate the
(partial 2) L' expression, you end up with a product of two terms, the
first being a ((partial 2) L) term and the second being the factor that
comes from the chain rule. Then, when you apply D to that product, you
end up with a sum (-)D(-) + D(-)(-). Now expand the rhs the same
amount, into a ((partial 1) L)(-) + ((partial 2) L)(-) combination. At
that point you will notice that if you start reading the expressions in
terms of the original Lagrange equation for L, using q(t) = f(q'(t)),
you can substitute from the original lhs and rhs for L, and terms start
to fall out.
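Said another way, the claim for this simplified case is that the primed
residual is just the unprimed residual multiplied by the chain-rule
factor (D f) -- something scmutils ought to be able to confirm
symbolically. An untested sketch (L-prime is my own name):

(define L (literal-function 'L (-> (UP Real Real Real) Real)))
(define f (literal-function 'f))
(define q-prime (literal-function 'q-prime))

(define ((L-prime L f) local)
  ;; L'(t, x', v') = L(t, f(x'), Df(x') v')
  (L (up (time local)
         (f (coordinate local))
         (* ((D f) (coordinate local)) (velocity local)))))

(define residual-prime
  ((Lagrange-equations (L-prime L f)) q-prime))

(define residual
  ((Lagrange-equations L) (compose f q-prime)))

;; Expect this to simplify to zero (if it doesn't, I have mis-stated
;; something, but on paper the cancellation is exactly the one above):
(print-expression
 ((- residual-prime (* (compose (D f) q-prime) residual)) 't))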
After I was happy that, indeed, I could prove the result when F(t', x')
is strictly a function of x' only, I did the proof over again, this
time properly allowing ((partial 0) F) to be nonzero, and thus having
some extra summands to worry about, and [ using a wider sheet of paper
and spending a bit more time on the extra terms ] was able to deduce
the equality.
If you like, and if these hints aren't sufficiently clear, I will be
happy to try to neaten up my handwriting, scan in my yellow pad sheets,
and email them to you...
-regards,
Bill.