data Element = Element {
    elementOrigin :: V,
    elementSubs :: [Element]
}
and this important bit of code that operates on it:
transform :: T -> Element -> Element
transform t e = Element {
    elementOrigin = tmulv t (elementOrigin e),
    elementSubs = map (transform t) (elementSubs e)
}
Where V is a vertex and T is a matrix transformation. The following is true:
transform a (transform b e) == transform (a * b) e
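For concreteness, here is a minimal sketch of what V and T might look
like; the original post doesn't show them, so the 2-D affine
representation and the tmulv/tmul/rotation/translation definitions below
are assumptions, chosen only so the code in this thread type-checks.
(The property above writes composition as (a * b); this sketch spells it
tmul a b.)

-- Assumed supporting types: a 2-D vertex and a 2x3 affine transform.
data V = V !Double !Double
    deriving Show

data T = T !Double !Double !Double   -- row 1: a b tx
           !Double !Double !Double   -- row 2: c d ty
    deriving Show

-- Apply a transform to a vertex.
tmulv :: T -> V -> V
tmulv (T a b tx c d ty) (V x y) = V (a*x + b*y + tx) (c*x + d*y + ty)

-- Compose two transforms: tmul s t applies t first, then s.
tmul :: T -> T -> T
tmul (T a b tx c d ty) (T a' b' tx' c' d' ty') =
    T (a*a' + b*c') (a*b' + b*d') (a*tx' + b*ty' + tx)
      (c*a' + d*c') (c*b' + d*d') (c*tx' + d*ty' + ty)

rotation :: Double -> T
rotation r = T (cos r) (negate (sin r)) 0
               (sin r) (cos r)          0

translation :: V -> T
translation (V x y) = T 1 0 x
                        0 1 y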
I have read about rewrite rules, which could efficiently rewrite the
obvious compositions. However, given that the program will be applying
*many* transformations to very deep and wide Element trees, and applying
them at many levels, I don't see how the optimizer could handle every
case; for instance, a thunk might be left over from evaluation that
happened before some IO action.
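For reference, the rewrite rule being alluded to would look something
like this (a sketch only, spelling the composition with the tmul assumed
above; whether GHC actually gets to fire it on deep trees is exactly the
concern raised):

-- Sketch of the rule referred to above; it only helps where GHC can see
-- both applications at compile time.
{-# RULES
"transform/transform" forall a b e.
    transform a (transform b e) = transform (tmul a b) e
  #-}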
FWIW here's an example "tree-builder" I'm using to test with:
mkElems :: Int -> Int -> Element
mkElems 0 _ = Element {
    elementOrigin = V 0 0,
    elementSubs = []
}
mkElems depth width = Element {
    elementOrigin = V 0 0,
    elementSubs = replicate width $
        transform (rotation (pi / fromIntegral width)) $
        transform (translation $ V 1 1) $
        mkElems (depth - 1) width
}
with depth = width = 5
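For what it's worth, a hypothetical driver for that test (sumOrigins and
main are mine, not from the original program, and V here is the two-field
vertex sketched earlier) walks the tree and touches every origin, so each
pending transform actually gets evaluated:

-- Hypothetical test driver: force every origin in the tree.
sumOrigins :: Element -> Double
sumOrigins e = case elementOrigin e of
    V x y -> x + y + sum (map sumOrigins (elementSubs e))

main :: IO ()
main = print (sumOrigins (mkElems 5 5))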
If I could somehow arrange for the transform function to detect when it
is being applied to an unevaluated thunk, and then modify that thunk in
place, that would basically be the behavior I need. Any suggestions?
--
http://petertodd.org 'peter'[:-1]@petertodd.org
> If I could somehow arrange for the transform function to detect when it
> is being applied to an unevaluated thunk, and then modify that thunk in
> place, that would basically be the behavior I need. Any suggestions?
That is exactly what is already happening. Perhaps you want what you think
is happening: if a value *is* evaluated, you want to apply the
transformation right then instead of making a thunk.
By the way, this is very operational thinking for Haskell. Haskell tries
quite hard to abstract away operational thinking, so thinking on this level
will be difficult and more work than you should be doing to write a program.
May I ask why you want this behavior? Have you just tested it and observed
that it is running too slowly and you are trying to speed it up?
Luke
Not quite. If I have a thunk, at the low level it must somewhere refer to
the transform function, the transform matrix, and the element that is to
be transformed. If I apply another transform to that unevaluated thunk,
my understanding is that Haskell will represent it as:
thunk transform Ta (thunk transform Tb e)
When I want the following:
thunk transform (Ta * Tb) e
This alternate definition of the Element structure might make more
sense:
data Element = Element {
        elementOrigin :: V,
        elementSubs :: [Element]
    }
    | ElementThunk T Element

transform :: T -> Element -> Element
transform t (ElementThunk t2 e) = ElementThunk (tmul t t2) e
transform t e = ElementThunk t e
transform t e = Element {
    elementOrigin = tmulv t (elementOrigin e),
    elementSubs = map (transform t) (elementSubs e)
}
This gives behavior close to what I want: applying a transform to an
element results in a "thunk", and subsequent transforms simply modify
that thunk. The problem is that I don't see a way to implicitly coerce an
ElementThunk into an Element on demand for all the other code in the
program that expects Elements (such as the elementOrigin function).
> By the way, this is very operational thinking for Haskell. Haskell tries
> quite hard to abstract away operational thinking, so thinking on this level
> will be difficult and more work than you should be doing to write a program.
Yes, but this part of the program is library code that will be used by a
whole pile of other stuff, so if I can get the right behavior
"magically", the rest of the program will be a lot easier to write.
FWIW this is EDA software, those "elements" refer to elements of a
printed circuit board layout, and the transform operations correspond to
stuff like moving a chip on the board. The representation of a simple IC
would consist of something like an element, with a bunch of sub-elements
for each pin, all of which have geometry data.
> May I ask why you want this behavior? Have you just tested it and observed
> that it is running too slowly and you are trying to speed it up?
I've written a previous version of this program in Python actually,
where I explicitly modeled the lazy evaluation behavior that I wanted.
It all worked great, but the overall speed was still quite slow. I was
hoping that Haskell, built around lazy evaluation already, would be a
better fit.
That, and in any case, other aspects of the program that I've rewritten
in Haskell saw about a 5-to-1 reduction in code length... :)
>
> data Element = Element {
>         elementOrigin :: V,
>         elementSubs :: [Element]
>     }
>     | ElementThunk T Element
>
> transform :: T -> Element -> Element
> transform t (ElementThunk t2 e) = ElementThunk (tmul t t2) e
> transform t e = ElementThunk t e
> transform t e = Element {
>     elementOrigin = tmulv t (elementOrigin e),
>     elementSubs = map (transform t) (elementSubs e)
> }
>
> This gives behavior close to what I want: applying a transform to an
> element results in a "thunk", and subsequent transforms simply modify
> that thunk. The problem is that I don't see a way to implicitly coerce an
> ElementThunk into an Element on demand for all the other code in the
> program that expects Elements (such as the elementOrigin function).
Oh! Okay, studying your code a bit, I think I finally understand what
you're trying to do. Tell me if I'm wrong. You have a list of elements es
and a composition of transforms t1, t2, t3, and instead of applying t1,
t2, and t3 to each element, you want to collapse them together into a
single matrix and apply that matrix to each element.
This is definitely a modeling-level thing to do; you don't need to go
anywhere near rewrite rules or thinking about internal representations or
anything like that. A bit of old-fashioned abstraction should do the trick.
Your Thunk representation is not that bad. I can't really weigh the
trade-offs for you. Keep the data type abstract and only allow access
through functions (like elementOrigin). Then I don't see the problem with
redefining elementOrigin as:
elementOrigin (Element v subs) = v
elementOrigin (ElementThunk t e) = tmulv t (elementOrigin e)
Keep the number of operations which need to know the representation
(constructors) of Element as small as you can.
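As a concrete (and purely illustrative) reading of that suggestion: if
Element stops being a record, so elementOrigin and elementSubs are
ordinary functions that know about the pending transform, the rest of the
program never sees the thunk constructor. The constructor and helper
names here (Plain, mkElement) are made up; V, T, tmulv, tmul are the
types sketched earlier; in a real module only Element (abstractly),
mkElement, elementOrigin, elementSubs and transform would be exported.

-- Illustrative sketch of the abstract-data-type version.  Only the
-- functions below would be exported; the constructors stay hidden.
data Element
    = Plain V [Element]          -- fully applied element
    | ElementThunk T Element     -- element with one pending transform

mkElement :: V -> [Element] -> Element
mkElement = Plain

-- The selectors resolve the pending transform only as far as asked.
elementOrigin :: Element -> V
elementOrigin (Plain v _)        = v
elementOrigin (ElementThunk t e) = tmulv t (elementOrigin e)

elementSubs :: Element -> [Element]
elementSubs (Plain _ subs)       = subs
elementSubs (ElementThunk t e)   = map (transform t) (elementSubs e)

-- Composing onto an existing pending transform collapses the matrices.
transform :: T -> Element -> Element
transform t (ElementThunk t2 e) = ElementThunk (tmul t t2) e
transform t e                   = ElementThunk t e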
Another possibility is this:
data Element = Element
    { elementOrigin :: V
    , elementSubs :: [(T, Element)]
    }
Where, of course, the transform for each sub-element is relative.
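To illustrate how that relative encoding might be consumed
(absoluteOrigins is a hypothetical helper of mine; the caller starts it
with the identity transform, e.g. rotation 0 from the sketch earlier):

-- Walk the tree, accumulating the composed transform along each path and
-- applying it to each origin.  "Transforming" a sub-element is then just
-- a matter of changing the T stored next to it, which is O(1).
absoluteOrigins :: T -> Element -> [V]
absoluteOrigins acc e =
    tmulv acc (elementOrigin e)
      : concat [ absoluteOrigins (tmul acc t) sub | (t, sub) <- elementSubs e ]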
Overall I think your "thunk" solution is a very nice trade-off. (Minor
rhetorical note: I would probably name your ElementThunk constructor
Transform or ElementTransform instead)
Hmm, you have an invariant that the pattern ElementThunk t1 (ElementThunk
t2 e) never occurs. It would be good style to encode this:

data PrimElement = PrimElement
    { elementOrigin :: V
    , elementSubs :: [Element]
    }
data Element = Element (Maybe T) PrimElement
That Maybe bugs me. You could factor that out into the T type with a
special optimization for the identity transform. Hmm, then the
[(T,Element)] encoding and this one are identical. Fun. :-)
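Spelling that encoding out (again just a sketch: the inner selectors are
renamed origin/subs so the public elementOrigin/elementSubs can fold the
pending transform in, and V/T/tmul/tmulv are the assumed types from
earlier):

data PrimElement = PrimElement { origin :: V, subs :: [Element] }
data Element     = Element (Maybe T) PrimElement

-- At most one pending transform can ever be attached, so the
-- thunk-inside-a-thunk case is unrepresentable by construction.
transform :: T -> Element -> Element
transform t (Element Nothing   p) = Element (Just t) p
transform t (Element (Just t2) p) = Element (Just (tmul t t2)) p

elementOrigin :: Element -> V
elementOrigin (Element Nothing  p) = origin p
elementOrigin (Element (Just t) p) = tmulv t (origin p)

elementSubs :: Element -> [Element]
elementSubs (Element Nothing  p) = subs p
elementSubs (Element (Just t) p) = map (transform t) (subs p)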
Short answer: *abstract data type!*
Luke
Is this an example of a Thunktor?