[racket] Function composition in Racket


Gregory Woodhouse

Oct 14, 2012, 7:00:19 PM
to Racket Mailing List
I wrote a small recursive function to convert a list (a0 a1 ... an) of coefficients into a polynomial function:

;; Given a list (a0 a1 ... an), return a function that computes
;; p(x) = a0 + a1*x + ... + an*x^n
(define (polynomial coeffs)
  (lambda (x)
    (cond
      [(= (length coeffs) 0) 0]
      [(= (length coeffs) 1) (first coeffs)]
      [else (+ (first coeffs) (* x ((polynomial (rest coeffs)) x)))])))

and it seems to work fairly well. The problem is that it only works for numeric functions. If I define an operator (i.e., a function, usually linear,
that maps functions to functions) such as the difference operator

(delta f)(x) = f(x) - f(x - 1)

or

(define (delta f)
  (lambda (x)
    (- (f x) (f (- x 1)))))

and p is a polynomial, I ought to be able to compute a new operator

p(delta) = a0 + a1*delta + a2*delta^2 + ... + an*delta^n

(where delta^k is just delta composed with itself or, if you prefer, applied k times).

Now, my question is: is there a notation in Racket for representing composition that I should use, or am I better off writing a function that maps an operator
A to p(A) where p is a polynomial?
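For what it's worth, here is one hedged sketch of the second option, mapping an operator A to p(A) by explicit iteration (`iterate` and `operator-polynomial` are made-up names for illustration, not existing Racket bindings):

```racket
#lang racket

;; The difference operator from above.
(define (delta f)
  (lambda (x)
    (- (f x) (f (- x 1)))))

;; Sketch: apply an operator A k times; A^0 is the identity (values).
(define (iterate A k)
  (if (zero? k) values (compose A (iterate A (sub1 k)))))

;; Sketch: given coefficients (a0 a1 ... an) and an operator A,
;; return the operator p(A) = a0*I + a1*A + ... + an*A^n.
(define ((operator-polynomial coeffs A) f)
  (lambda (x)
    (for/sum ([a (in-list coeffs)]
              [k (in-naturals)])
      (* a (((iterate A k) f) x)))))

;; p(delta) for p(t) = t, applied to x^2: (delta sq)(3) = 9 - 4 = 5.
(define (sq x) (* x x))
(((operator-polynomial '(0 1) delta) sq) 3)  ; => 5
```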
____________________
Racket Users list:
http://lists.racket-lang.org/users

Gregory Woodhouse

Oct 14, 2012, 7:49:58 PM
to Racket Mailing List
Uh... never mind. I should have looked for the obvious:

> (define (f x) (+ x 1))
> (define (g x) (* x x))
> ((compose f g) 1)
2
> ((compose f g) 2)
5

Justin R. Slepak

Oct 14, 2012, 7:57:56 PM
to Racket Mailing List
Instead of trying to peek inside a lambda, you could implement polynomials as (transparent) structs and use prop:procedure to allow them to be applied to numbers:

(struct polynomial (coeffs)
  #:transparent
  #:property prop:procedure
  (lambda (poly num)
    (define-values (result x*)
      (for/fold ([sum 0]
                 [x 1])
                ([c (polynomial-coeffs poly)])
        (values (+ sum (* c x))
                (* x num))))
    result))

This would let you implement functions that crunch on polynomials. As for using existing operators' names, maybe generics can get you that?
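For example, applying such a struct is just function application; a small self-contained check (restating the struct from above):

```racket
#lang racket

(struct polynomial (coeffs)
  #:transparent
  #:property prop:procedure
  (lambda (poly num)
    (define-values (result x*)
      (for/fold ([sum 0]
                 [x 1])
                ([c (polynomial-coeffs poly)])
        (values (+ sum (* c x))
                (* x num))))
    result))

;; 1 + 2x + 3x^2 at x = 2:
(define p (polynomial '(1 2 3)))
(p 2)  ; => 17
```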

---
Justin Slepak
PhD student, Computer Science dept.

Justin R. Slepak

Oct 14, 2012, 10:28:33 PM
to us...@racket-lang.org
To use prop:procedure, just give a function which will handle the application of the structure to some arguments. The define-values is only there because the for/fold has two accumulators (sum and x) and will therefore return two values (the values of those accumulators). This means its context has to expect two values, even though we don't really care about the second value. The x* is just a name.

---
Justin Slepak
PhD student, Computer Science dept.

----- Original Message -----
From: Gregory Woodhouse <gregwo...@me.com>
To: Justin R. Slepak <jrsl...@ccs.neu.edu>
Sent: Sun, 14 Oct 2012 22:07:59 -0400 (EDT)
Subject: Re: [racket] Function composition in Racket

Thanks! This does what I want. To tell you the truth, I've shied away from prop:procedure (probably more due to my own confusion than anything else!). The define-values here seems a bit mysterious, but I assume the point is to support the recursive polynomial evaluation function? Is x* a special syntax here, or just a name? I see sum in both for/fold and values, but the x* in define-values is mysterious.



Matthias Felleisen

Oct 15, 2012, 10:02:17 AM
to Justin R. Slepak, us...@racket-lang.org

Do you want to try for/sum here?

Justin R. Slepak

Oct 15, 2012, 10:44:16 AM
to us...@racket-lang.org
Ah, I forgot about for/sum. This version is probably clearer:

(struct polynomial (coeffs)
  #:transparent
  #:property prop:procedure
  (lambda (poly num)
    (for/sum ([x (length (polynomial-coeffs poly))]
              [c (polynomial-coeffs poly)])
      (* c (expt num x)))))

Stephen Bloch

Oct 15, 2012, 10:57:54 AM
to Justin R. Slepak, us...@racket-lang.org

On Oct 15, 2012, at 10:44 AM, Justin R. Slepak wrote:

> Ah, I forgot about for/sum. This version is probably clearer:
>
> (struct polynomial (coeffs)
>   #:transparent
>   #:property prop:procedure
>   (lambda (poly num)
>     (for/sum ([x (length (polynomial-coeffs poly))]
>               [c (polynomial-coeffs poly)])
>       (* c (expt num x)))))

But probably slower, at least for exact numbers. If "expt" were implemented naively as "for i = 1 to num", the total number of multiplications would be quadratic in degree; if it were implemented by repeated squaring, the total number of multiplications would be O(n log(n)); with Horner's algorithm or your "values" approach, it's linear.

Horner's algorithm gives us

(lambda (poly x)
  ;; Fold from the highest-degree coefficient down: the coeffs list
  ;; is ordered (a0 a1 ... an), so it must be reversed first.
  (for/fold ([sum 0])
            ([c (in-list (reverse (polynomial-coeffs poly)))])
    (+ c (* sum x))))
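As a quick sanity check, the same fold works on a bare coefficient list (a standalone sketch, not the struct-based code above):

```racket
#lang racket

;; Horner evaluation of (a0 a1 ... an): fold from the highest-degree
;; coefficient down, so the ascending list is reversed first.
(define (horner-eval coeffs x)
  (for/fold ([sum 0])
            ([c (in-list (reverse coeffs))])
    (+ c (* sum x))))

;; 1 + 2x + 3x^2 at x = 2:
(horner-eval '(1 2 3) 2)  ; => 17
```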



Stephen Bloch
sbl...@adelphi.edu

Robby Findler

Oct 15, 2012, 11:35:23 AM
to Stephen Bloch, us...@racket-lang.org
What degree of polynomial, I wonder, would it take to find a
noticeable difference between these?

Stephen Bloch

Oct 15, 2012, 11:45:27 AM
to Robby Findler, us...@racket-lang.org, Stephen Bloch

On Oct 15, 2012, at 11:35 AM, Robby Findler wrote:

> What degree of polynomial, I wonder, would it take to find a
> noticeable difference between these?

To distinguish between linear and quadratic, probably thousands to millions (depending on whether "noticeable" means "to a human being" or "to the `time' form"). To distinguish between linear and n*log(n), I would think millions or tens of millions.

But the Horner's algorithm solution is also shorter code than either of the others :-)

Jens Axel Søgaard

Oct 15, 2012, 11:49:52 AM
to Stephen Bloch, us...@racket-lang.org
2012/10/15 Stephen Bloch <bl...@adelphi.edu>:

> But probably slower, at least for exact numbers. If "expt" were implemented naively as "for i = 1 to num", the total number of multiplications would be quadratic in degree; if it were implemented by repeated squaring, the total number of multiplications would be O(n log(n)); with Horner's algorithm or your "values" approach, it's linear.

If I recall correctly, Horner's algorithm also gives more precise results,
when used with pseudo real numbers.

--
Jens Axel Søgaard

Neil Toronto

Oct 15, 2012, 1:15:50 PM
to us...@racket-lang.org
On 10/15/2012 11:49 AM, Jens Axel Søgaard wrote:
> If I recall correctly, Horner's algorithm also gives more precise results,
> when used with pseudo real numbers.

Especially true when coefficients alternate signs, producing massive
cancellation. That's the main reason to use it, since floating-point
exponentiation runs in constant time. (It's also a tad faster, requiring
at least 1-2 fewer flops per coefficient... but probably many more,
because hardware `pow' is usually broken for non-rational inputs and
implementations have to work around that.)

What I'd really like, by the time the math library has polynomials, is a
Horner-like algorithm for sparse polynomials of the form

c0 + ... + cn * x^(i_n) * y^(j_n) + ...

The sequence of (i_n,j_n) can only be partially ordered, which makes it
tricky.

Neil ⊥

Gregory Woodhouse

Oct 15, 2012, 10:46:23 PM
to Neil Toronto, us...@racket-lang.org
I suppose I'm just stating the obvious here, but R[x, y] is naturally isomorphic to R[x][y]. That is, polynomials in x and y over the ring R have a natural interpretation as polynomials in y over the ring R[x] of polynomials over R. So, if you had a good library for working with polynomials (of one variable) you could just evaluate p(a, b) by evaluating Horner's rule twice.
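Sketched in code (assuming a list-of-rows representation where row k holds the x-coefficients of y^k; `horner-eval2` is a made-up name for illustration):

```racket
#lang racket

;; Horner evaluation of an ascending coefficient list (a0 a1 ... an).
(define (horner-eval coeffs x)
  (for/fold ([sum 0])
            ([c (in-list (reverse coeffs))])
    (+ c (* sum x))))

;; Evaluate p in R[x][y]: each row is a polynomial in x, and the
;; row values become the coefficients of a polynomial in y.
(define (horner-eval2 rows x y)
  (horner-eval (for/list ([row (in-list rows)])
                 (horner-eval row x))
               y))

;; p(x,y) = (1 + 2x) + (3 + 4x)y at (2, 3): (1+4) + (3+8)*3 = 38.
(horner-eval2 '((1 2) (3 4)) 2 3)  ; => 38
```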

Sent from my iPhone

Neil Toronto

Oct 16, 2012, 12:08:56 PM
to Gregory Woodhouse, us...@racket-lang.org
I hadn't thought of making two passes. Thanks!

I'd have to have the terms indexed by two different orderings
(nondecreasing in x's degree and nondecreasing in y's), or be willing to
sort. That seems tricky or slowish, but much better than what I've had
in mind. It should also work with other orthogonal bases, like the
Chebyshev polynomials, using Clenshaw's algorithm (of which Horner's is
a special case).
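For reference, Clenshaw's recurrence b_k = c_k + 2x*b_{k+1} - b_{k+2} might be sketched like this for a plain Chebyshev series (a hypothetical helper, not an existing library function):

```racket
#lang racket

;; Clenshaw evaluation of c0*T0(x) + c1*T1(x) + ... + cn*Tn(x),
;; running b_k = c_k + 2x*b_{k+1} - b_{k+2} from k = n down to 1,
;; then returning c0 + x*b1 - b2.
(define (clenshaw-eval cs x)
  (define-values (b1 b2)
    (for/fold ([b1 0] [b2 0])
              ([c (in-list (reverse (rest cs)))])
      (values (+ c (* 2 x b1) (- b2)) b1)))
  (+ (first cs) (* x b1) (- b2)))

;; T2(x) = 2x^2 - 1, so coefficients '(0 0 1) at x = 2 give 7:
(clenshaw-eval '(0 0 1) 2)  ; => 7
```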

To let you know where I'm going with this, I'm considering designs for
`math/polynomial'. I want a data type that can represent sparse
polynomials over any ring, with any orthogonal basis in that ring. Basic
polynomial ops would be +, *, and conversion to another basis or another
ring type (e.g. from Exact-Rational to Flonum). A quotient polynomial
type would lift polynomial rings to fields.

I'd also like another basic op to be generating the syntax of a function
that evaluates the polynomial. Polynomials could then be built at
expansion time - say, by building a Taylor, Chebyshev, or minimax
approximation - and then evaluated quickly at runtime.
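A minimal sketch of that last idea, for literal coefficients only (a hypothetical macro; a real design would compute the coefficients at expansion time rather than take them as literals):

```racket
#lang racket

;; Expand an ascending literal coefficient list into a nested Horner
;; expression at expansion time:
;; (poly-expr x a0 a1 a2) => (+ a0 (* x (+ a1 (* x (+ a2 (* x 0)))))).
(define-syntax poly-expr
  (syntax-rules ()
    [(_ x) 0]
    [(_ x c0 c ...) (+ c0 (* x (poly-expr x c ...)))]))

;; 1 + 2x + 3x^2 at x = 2, with the Horner chain built at expansion time:
(poly-expr 2 1 2 3)  ; => 17
```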

It looks like you know what you're talking about (you probably
understood all the preceding text and can even point out errors :D), so
I'd love to hear any ideas you have along these lines.

Neil ⊥

Gregory Woodhouse

Oct 16, 2012, 1:49:16 PM
to Neil Toronto, us...@racket-lang.org
I'm intrigued. I suppose pattern-based macros could be used to implement operations like + and *, and passing to the field of quotients should formally be no different from rational arithmetic. Are you interested in Chebyshev polynomials for a particular reason (e.g., applications to differential equations) or as part of a more general mathematics library?

I may as well throw in a big item on my "wish list". It would really be nice to have a tool for visualizing surfaces defined either explicitly

z = f (x, y)

or implicitly

g(x, y, z) = c

using wire mesh diagrams or a comparable technique.

But back to polynomials, I think I'll try writing something simple. What would be interesting is to have a nice way of representing algebraic structures in Racket so that standard constructs like algebraic extensions such as Q(i) don't have to be strongly coupled with the definition of the base ring/field. Maybe I'm asking for functors (in the mathematical sense) and categories.

Sent from my iPhone


Michael Wilber

Oct 16, 2012, 2:02:54 PM
to Gregory Woodhouse, Neil Toronto, us...@racket-lang.org
Do surface3d and isosurface3d from racket/plot do what you want?

file:///usr/share/racket/doc/plot/renderer3d.html?q=isosurface#(def._((lib._plot/main..rkt)._isosurface3d))

Neil Toronto

Oct 16, 2012, 4:35:41 PM
to Michael Wilber, us...@racket-lang.org
On 10/16/2012 12:02 PM, Michael Wilber wrote:
> Does surface3d and isosurface3d from racket/plot do what you want?
>
> file:///usr/share/racket/doc/plot/renderer3d.html?q=isosurface#(def._((lib._plot/main..rkt)._isosurface3d))

In particular:

#lang racket

(require plot)

(define (f x y)
  (+ 2 (* 2 x) (* 5 y) (* 1/2 x y y) (* 6 x x)))

(plot3d (surface3d f -1 1 -1 1))
(plot3d (contour-intervals3d f -1 1 -1 1 #:label "f(x,y)"))

(define (g x y z)
  (+ (* x y) (* y z) (* z x)))

(define c 0.25)

(plot3d (isosurface3d g c -1 1 -1 1 -1 1))
(plot3d (isosurfaces3d g -1 1 -1 1 -1 1 #:label "g(x,y,z)"))


Isosurfaces are visualized using marching cubes, whose patent finally
ran out a few years ago.

> Gregory Woodhouse <gregwo...@me.com> writes:
>> I'm intrigued. I suppose pattern-based macros could be used to implement
>> operations like + and * [...]

I was thinking more generics. Generic `+' and `*' could evaluate
polynomials at runtime, and generic `syntax+' and `syntax*' could emit
code that does the same.

>> and passing to the field of quotients should formally be no different
>> from rational arithmetic.

Exactly. Reducing polynomial fractions in a generic way might be tricky,
but it looks fun. :)

>> Are you interested in Chebyshev polynomials for a particular reason
>> (e.g, applications to differential equations) or as part of a more
>> general mathematics library?

Both. Chebyshev polynomials have some nice properties when used to
approximate non-polynomials; in particular, they're really close to
minimax-error approximations. `math/special-functions' already uses a
private Chebyshev polynomial implementation extensively. For many R->R
functions, I only have to chop up their domains and plonk a Chebyshev
approximation down for each piece.

Chebyshev and other orthogonal polynomial bases come up repeatedly in
solutions to differential equations. Multidimensional polynomials can be
represented by sparse polynomials over other polynomial rings. When I
considered making the private polynomial API public, I studied up on
these things a bit, and realized it would be cleaner, more likely
correct, and more generally useful if I parameterized polynomials over
arbitrary rings and bases.

>> What would be interesting is to have a nice way of representing
>> algebraic structures in Racket so that standard constructs like
>> algebraic extensions such as Q(i) don't have to be strongly coupled
>> with the definition of the base ring/field. Maybe I'm asking for
>> functors (in the mathematical sense) and categories.

That's exactly what I need. I had been thinking I'd wait until generics
found their way into Typed Racket. But I might try defining them untyped
(where generics work right now) and importing them using
`require/typed'. I have no idea whether it will work. :D

Gregory Woodhouse

unread,
Oct 16, 2012, 8:03:13 PM10/16/12
to Neil Toronto, us...@racket-lang.org
Indeed it does! These plots are very nice.