Nested functions in algebra


Waldek Hebisch

Mar 18, 2009, 5:10:43 PM3/18/09
to fricas...@googlegroups.com
For a long time we were mishandling nested functions. The patch
below modifies the compiler to signal an error when nested
functions are encountered. With the patch the compiler
discovered a few cases where we used nested functions.
The generated Lisp code looks bogus (AXIOMsys should crash when
executing it). The strange thing is that ATM I am not able to
produce a testcase: the code seems to work as expected.
I really do not know _how_ the current code works: attempts
to trace the current code either gave no effect or infinite
recursion. OTOH the code below gives the expected trace.
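To make the problematic pattern concrete, here is a rough Python analogue (illustration only, not SPAD): a nested function closes over a variable of the enclosing function, and a compiler that lifts it to top level without also passing the captured environment generates bogus code.

```python
# Python analogue of the pattern at issue: 'mem' is a nested function
# that closes over 'n' from the enclosing 'member'.  A compiler that
# lifts 'mem' to top level without passing the captured 'n' along
# would produce wrong code.
def member(n, tree):
    value, children = tree
    def mem(child):              # captures 'n' from the enclosing scope
        return member(n, child)
    return n == value or any(mem(c) for c in children)

t = (1, [(2, []), (3, [])])      # tree(1, [tree(2, []), tree(3, [])])
print(member(1, t), member(3, t), member(4, t))  # True True False
```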

For testing I used the following example:

Tr := Tree(Integer)
mytree := tree(1, [tree(2, [])$Tr, tree(3, [])$Tr])$Tr
member?(1, mytree)
member?(3, mytree)
member?(4, mytree)

The patch (including removing nested functions from algebra)
is below:

Index: src/algebra/aggcat.spad.pamphlet
===================================================================
--- src/algebra/aggcat.spad.pamphlet (revision 548)
+++ src/algebra/aggcat.spad.pamphlet (working copy)
@@ -526,13 +526,13 @@
-- x

s = t ==
- eq?(s,t) => true
- #s ~= #t => false
--- I'd rather write
--- every?(member?(#1, t), parts s)
--- but the compiler complains
- every?(mem?, parts s) where
- mem?(x: S): Boolean == member?(x, t)
+ eq?(s,t) => true
+ #s ~= #t => false
+ --- I'd rather write
+ --- every?(member?(#1, t), parts s)
+ --- but the compiler complains
+ mem? : S->Boolean := member?(#1, t)
+ every?(mem?, parts s)

remove_!(f:S->Boolean, t:%) ==
for m in parts t repeat if f m then remove_!(m, t)
@@ -693,13 +693,7 @@
cardinality s == #s
construct l == (s := set(); for x in l repeat insert_!(x,s); s)
count(x:S, s:%) == (member?(x, s) => 1; 0)
--- I'd rather write
--- subset?(s, t) == #s < #t and every?(x +-> member?(x, t), parts s)
--- but the compiler complains
- subset?(s, t) ==
- #s >= #t => false
- every?(mem?, parts s) where
- mem?(x: S): Boolean == member?(x, t)
+ subset?(s, t) == #s <= #t and s = intersect(s,t)


coerce(s:%):OutputForm ==
Index: src/algebra/tree.spad.pamphlet
===================================================================
--- src/algebra/tree.spad.pamphlet (revision 548)
+++ src/algebra/tree.spad.pamphlet (working copy)
@@ -117,8 +117,9 @@
node?(t1, t2) ==
t1 = t2 => true
t case empty => false
- any?(inode?, children t2) where
- inode?(t: %): Boolean == node?(t1, t)
+ inode?:% -> Boolean := node?(t1, #1)
+ any?(inode?, children t2)
+
leaf? t ==
t case empty => false
empty? children t
@@ -146,8 +147,9 @@
true
member?(n, t) ==
t case empty => false
- n = value t or any?(mem?, children t) where
- mem?(c: %): Boolean == member?(n, c)
+ mem?: % -> Boolean := member?(n, #1)
+ n = value t or any?(mem?, children t)
+
members t == parts t
parts t == --buggy?
t case empty => empty()
Index: src/interp/define.boot
===================================================================
--- src/interp/define.boot (revision 548)
+++ src/interp/define.boot (working copy)
@@ -47,6 +47,8 @@
(sig:= getSignatureFromMode(lhs,e)) =>
-- here signature of lhs is determined by a previous declaration
compDefine1(['DEF,lhs,[first sig,:rest signature],specialCases,rhs],m,e)
+ $insideCapsuleFunctionIfTrue =>
+ stackAndThrow ['"Nested functions unsupported:", form]
if signature.target=$Category then $insideCategoryIfTrue:= true
--?? following 3 lines seem bogus, BMT 6/23/93
--? if signature.target is ['Mapping,:map] then
--
Waldek Hebisch
heb...@math.uni.wroc.pl

Ralf Hemmecke

Mar 18, 2009, 5:42:14 PM3/18/09
to fricas...@googlegroups.com
> --- I'd rather write
> --- subset?(s, t) == #s < #t and every?(x +-> member?(x, t), parts s)
> --- but the compiler complains
> - subset?(s, t) ==
> - #s >= #t => false
> - every?(mem?, parts s) where
> - mem?(x: S): Boolean == member?(x, t)
> + subset?(s, t) == #s <= #t and s = intersect(s,t)

Were you saying that anonymous functions do not work inside a function body?

> member?(n, t) ==
> t case empty => false
> - n = value t or any?(mem?, children t) where
> - mem?(c: %): Boolean == member?(n, c)
> + mem?: % -> Boolean := member?(n, #1)
> + n = value t or any?(mem?, children t)

I don't know why, but I really hate the #1 notation. If you just take
the right hand side

member?(n, #1)

what type must #1 have? One can only figure that out (as in your case)
by looking at the left hand side.

The #1 notation looks like a nice shorthand for untyped lambda calculus,
but don't we have typed lambda calculus in SPAD?
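A rough Python analogue of the point (illustration only): the body of a placeholder-style lambda carries no parameter type of its own, so the type can only be recovered from the declaration on the left-hand side.

```python
from typing import Callable

t = {2, 3, 5}

# The lambda body alone does not say what type 'x' must have; only
# the declared type of 'mem' on the left-hand side pins it down,
# the same situation as 'mem? : S -> Boolean := member?(#1, t)'.
mem: Callable[[int], bool] = lambda x: x in t

print(mem(3), mem(4))  # True False
```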

Ralf

Bill Page

Mar 18, 2009, 6:35:23 PM3/18/09
to fricas...@googlegroups.com
On Wed, Mar 18, 2009 at 5:10 PM, Waldek Hebisch wrote:
>
> For a long time we were mishandling nested functions.  The patch
> below modifies the compiler to signal an error when nested
> functions are encountered.  With the patch the compiler
> discovered a few cases where we used nested functions.
> The generated Lisp code looks bogus (AXIOMsys should crash when
> executing it).  The strange thing is that ATM I am not able to
> produce a testcase: the code seems to work as expected.
> I really do not know _how_ the current code works: attempts
> to trace the current code either gave no effect or infinite
> recursion.  OTOH the code below gives the expected trace.
>
> For testing I used the following example:
>
> Tr := Tree(Integer)
> mytree := tree(1, [tree(2, [])$Tr, tree(3, [])$Tr])$Tr
> member?(1, mytree)
> member?(3, mytree)
> member?(4, mytree)
>

Waldek,

One possible testcase might be the following:

(1) -> Tr := Tree(Integer)

(1) Tree(Integer)
Type: Domain
(2) -> n3:=tree(3, [])$Tr

(2) 3
Type: Tree(Integer)
(3) -> n2:=tree(2, [])$Tr

(3) 2
Type: Tree(Integer)
(4) -> mytree := tree(1, [n2, n3])$Tr

(4) 1(2,3)
Type: Tree(Integer)
(5) -> node?(mytree,n2)
Internal Error
The function node? with signature hashcode is missing from domain
Tree(Integer)

---

The code for 'node?' is contained in 'src/algebra/tree.spad.pamphlet'

node?(t1, t2) ==
  t1 = t2 => true
  t case empty => false
  any?(inode?, children t2) where
    inode?(t: %): Boolean == node?(t1, t)

however in addition to the nested function it also contains an
undefined variable 't'. Presumably this should be 't2'. After applying
your patch and the change

t2 case empty => false

The compiler silently *appears* to correctly compile 'node?' (no
errors or warnings), but the function is still missing from the
resulting domain!!

Here is a version of 'node?' that avoids the use of a local
function. It compiles and works as expected:

node?(t1:%, t2:%):Boolean ==
  t1 = t2 => true
  t2 case empty => false
  for t in children t2 repeat
    node?(t1,t) => return true
  false

---

So I suppose that your patch

> @@ -117,8 +117,9 @@
> node?(t1, t2) ==
> t1 = t2 => true
> t case empty => false
> - any?(inode?, children t2) where
> - inode?(t: %): Boolean == node?(t1, t)
> + inode?:% -> Boolean := node?(t1, #1)
> + any?(inode?, children t2)
> +

still is somehow compiled incorrectly. Is 'inode?' still considered
a "nested function" even though it does not appear in a 'where'
clause?

Regards,
Bill Page.

---

(1) -> )co tree2.spad
Compiling FriCAS source code from file /home/wspage/tree2.spad using
old system compiler.
TREE2 abbreviates domain Tree2
------------------------------------------------------------------------
initializing NRLIB TREE2 for Tree2
compiling into NRLIB TREE2
processing macro definition cycleTreeMax ==> 5
compiling exported empty? : $ -> Boolean
TREE2;empty?;$B;1 is replaced by QEQCARt1

;;; *** |TREE2;empty?;$B;1| REDEFINED
Time: 0 SEC.

compiling exported empty : () -> $
TREE2;empty;$;2 is replaced by CONS1empty

;;; *** |TREE2;empty;$;2| REDEFINED
Time: 0 SEC.

compiling exported children : $ -> List $

;;; *** |TREE2;children;$L;3| REDEFINED
Time: 0.01 SEC.

compiling exported setchildren! : ($,List $) -> $

;;; *** |TREE2;setchildren!;$L$;4| REDEFINED
Time: 0 SEC.

compiling exported setvalue! : ($,S) -> S

;;; *** |TREE2;setvalue!;$2S;5| REDEFINED
Time: 0 SEC.

compiling exported count : (S,$) -> NonNegativeInteger

;;; *** |TREE2;count;S$Nni;6| REDEFINED
Time: 0 SEC.

compiling exported count : (S -> Boolean,$) -> NonNegativeInteger

;;; *** |TREE2;count;M$Nni;7| REDEFINED
Time: 0.01 SEC.

compiling exported map : (S -> S,$) -> $

;;; *** |TREE2;map;M2$;8| REDEFINED
Time: 0 SEC.

compiling exported map! : (S -> S,$) -> $

;;; *** |TREE2;map!;M2$;9| REDEFINED
Time: 0 SEC.

compiling exported tree : (S,List $) -> $

;;; *** |TREE2;tree;SL$;10| REDEFINED
Time: 0 SEC.

compiling exported tree : S -> $

;;; *** |TREE2;tree;S$;11| REDEFINED
Time: 0 SEC.

compiling exported tree : List S -> $

;;; *** |TREE2;tree;L$;12| REDEFINED
Time: 0 SEC.

compiling exported value : $ -> S

;;; *** |TREE2;value;$S;13| REDEFINED
Time: 0.01 SEC.

compiling exported child? : ($,$) -> Boolean

;;; *** |TREE2;child?;2$B;14| REDEFINED
Time: 0 SEC.

compiling local distance1 : ($,$) -> Integer

;;; *** |TREE2;distance1| REDEFINED
Time: 0 SEC.

compiling exported distance : ($,$) -> Integer

;;; *** |TREE2;distance;2$I;16| REDEFINED
Time: 0 SEC.

compiling exported node? : ($,$) -> Boolean
Time: 0 SEC.

compiling exported leaf? : $ -> Boolean
Time: 0.01 SEC.

compiling exported leaves : $ -> List S
Time: 0 SEC.

compiling exported less? : ($,NonNegativeInteger) -> Boolean
Time: 0 SEC.

compiling exported more? : ($,NonNegativeInteger) -> Boolean
Time: 0 SEC.

compiling exported nodes : $ -> List $
Time: 0.01 SEC.

compiling exported size? : ($,NonNegativeInteger) -> Boolean
Time: 0 SEC.

compiling exported any? : (S -> Boolean,$) -> Boolean
Time: 0 SEC.

compiling exported every? : (S -> Boolean,$) -> Boolean
Time: 0 SEC.

compiling exported member? : (S,$) -> Boolean
compiling local mem? : $ -> Boolean

;;; *** |TREE2,member?;mem?| REDEFINED
Time: 0 SEC.

Time: 0 SEC.

compiling exported members : $ -> List S
Time: 0 SEC.

compiling exported parts : $ -> List S
Time: 0 SEC.

compiling exported = : ($,$) -> Boolean
Time: 0.01 SEC.

compiling local equal? : ($,$,$,$,Integer) -> Boolean

;;; *** |TREE2;equal?| REDEFINED
Time: 0 SEC.

compiling exported # : $ -> NonNegativeInteger
Time: 0 SEC.

compiling local treeCount : ($,$,NonNegativeInteger) -> NonNegativeInteger

;;; *** |TREE2;treeCount| REDEFINED
Time: 0.01 SEC.

compiling exported copy : $ -> $
Time: 0 SEC.

compiling local copy1 : ($,$,Integer) -> $

;;; *** |TREE2;copy1| REDEFINED
Time: 0 SEC.

compiling exported coerce : $ -> OutputForm
Time: 0.01 SEC.

compiling local coerce1 : ($,List $,List $) -> OutputForm

;;; *** |TREE2;coerce1| REDEFINED
Time: 0.01 SEC.

compiling local multipleOverbar : (OutputForm,Integer,List $) -> OutputForm

;;; *** |TREE2;multipleOverbar| REDEFINED
Time: 0 SEC.

compiling exported cyclic? : $ -> Boolean
Time: 0.01 SEC.

compiling local cyclic2? : ($,List $) -> Boolean

;;; *** |TREE2;cyclic2?| REDEFINED
Time: 0 SEC.

compiling exported cyclicCopy : $ -> $
Time: 0 SEC.

compiling local cyclicCopy2 : ($,List $) -> $

;;; *** |TREE2;cyclicCopy2| REDEFINED
Time: 0 SEC.

compiling local copyCycle2 : ($,List $) -> $

;;; *** |TREE2;copyCycle2| REDEFINED
Time: 0 SEC.

compiling local copyCycle4 : ($,$,$,List $) -> $

;;; *** |TREE2;copyCycle4| REDEFINED
Time: 0.01 SEC.

compiling exported cyclicEntries : $ -> List $
Time: 0 SEC.

compiling local cyclicEntries3 : ($,List $,List $) -> List $

;;; *** |TREE2;cyclicEntries3| REDEFINED
Time: 0 SEC.

compiling exported cyclicEqual? : ($,$) -> Boolean
Time: 0 SEC.

compiling local cyclicEqual4? : ($,$,List $,List $) -> Boolean

;;; *** |TREE2;cyclicEqual4?| REDEFINED
Time: 0.01 SEC.

compiling exported cyclicParents : $ -> List $
Time: 0 SEC.

compiling local cyclicParents3 : ($,List $,List $) -> List $

;;; *** |TREE2;cyclicParents3| REDEFINED
Time: 0 SEC.

compiling local insert : ($,List $) -> List $

;;; *** |TREE2;insert| REDEFINED
Time: 0 SEC.

compiling local lastNode : List $ -> List $

;;; *** |TREE2;lastNode| REDEFINED
Time: 0.01 SEC.

compiling local eqMember? : ($,List $) -> Boolean

;;; *** |TREE2;eqMember?| REDEFINED
Time: 0 SEC.

compiling local eqMemberIndex : ($,List $,Integer) -> Integer

;;; *** |TREE2;eqMemberIndex| REDEFINED
Time: 0 SEC.

compiling local eqUnion : (List $,List $) -> List $

;;; *** |TREE2;eqUnion| REDEFINED
Time: 0 SEC.

****** Domain: S already in scope
augmenting S: (Evalable S)
(time taken in buildFunctor: 0)

;;; *** |Tree2| REDEFINED

;;; *** |Tree2| REDEFINED
Time: 0 SEC.


Warnings:
[1] children: node has no value
[2] setchildren!: node has no value
[3] setchildren!: pretend$ -- should replace by @
[4] setvalue!: node has no value
[5] count: signature of lhs not unique: (NonNegativeInteger)S$ chosen
[6] value: node has no value
[7] multipleOverbar: The conditional modes (String) and S conflict


Cumulative Statistics for Constructor Tree2
Time: 0.13 seconds

finalizing NRLIB TREE2
Processing Tree2 for Browser database:
--------(tree (% S (List %)))---------
--------(tree (% (List S)))---------
--------(tree (% S))---------
--------(cyclic? ((Boolean) %))---------
--------(cyclicCopy (% %))---------
--------(cyclicEntries ((List %) %))---------
--------(cyclicEqual? ((Boolean) % %))---------
--------(cyclicParents ((List %) %))---------
--------constructor---------
; (DEFUN |TREE2,member?;mem?| ...) is being compiled.
;; The variable |n| is undefined.
;; The compiler will assume this variable is a global.
; (DEFUN |Tree2| ...) is being compiled.
;; The variable |$ConstructorCache| is undefined.
;; The compiler will assume this variable is a global.
------------------------------------------------------------------------
Tree2 is already explicitly exposed in frame frame0
Tree2 will be automatically loaded when needed from
/home/wspage/TREE2.NRLIB/TREE2

(1) -> n3:=tree(3, [])$Tr

The function tree is not implemented in NIL .
(1) -> n2:=tree(2, [])$Tr

The function tree is not implemented in NIL .
(1) -> mytree := tree(1, [n2, n3])$Tr

The function tree is not implemented in NIL .
(1) -> Tr := Tree2(Integer)

(1) Tree2(Integer)
Type: Domain
(2) -> n3:=tree(3, [])$Tr
Loading /home/wspage/TREE2.NRLIB/TREE2 for domain Tree2

(2) 3
Type: Tree2(Integer)
(3) -> n2:=tree(2, [])$Tr

(3) 2
Type: Tree2(Integer)
(4) -> mytree := tree(1, [n2, n3])$Tr

(4) 1(2,3)
Type: Tree2(Integer)
(5) -> node?(mytree,n2)$Tr

(5) false
Type: Boolean
(6) -> node?(n2,mytree)$Tr

(6) true
Type: Boolean
(7) ->

Waldek Hebisch

Mar 18, 2009, 7:07:44 PM3/18/09
to fricas...@googlegroups.com
Ralf Hemmecke wrote:
>
> > --- I'd rather write
> > --- subset?(s, t) == #s < #t and every?(x +-> member?(x, t), parts s)
> > --- but the compiler complains
> > - subset?(s, t) ==
> > - #s >= #t => false
> > - every?(mem?, parts s) where
> > - mem?(x: S): Boolean == member?(x, t)
> > + subset?(s, t) == #s <= #t and s = intersect(s,t)
>
> Were you saying that anonymous functions do not work inside a function body?
>

Well, above you have a _named_ nested function, and such functions
do not work (sometimes one gets a weird error message, sometimes one
silently gets wrong code). Anonymous functions using #1, etc. more
or less work (there are bugs, but simple cases should work OK).
AFAICS anonymous functions introduced by '+->' are unsupported in
Spad.
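What supporting a named nested function requires of the compiler can be sketched by doing the closure conversion by hand in Python (illustrative only, not the actual Spad compiler mechanism): the inner function is lifted to top level and its captured variable is passed explicitly.

```python
# Closure conversion by hand: the named nested 'mem?' of the original
# code becomes a top-level helper that takes the captured variable 'n'
# as an extra argument.  This is what the compiler must arrange;
# getting it wrong yields the bogus generated code described above.
def _mem(n, child):                  # lifted helper, 'n' made explicit
    return member(n, child)

def member(n, tree):
    value, children = tree
    return n == value or any(_mem(n, c) for c in children)

print(member(3, (1, [(2, []), (3, [])])))  # True
```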

> > member?(n, t) ==
> > t case empty => false
> > - n = value t or any?(mem?, children t) where
> > - mem?(c: %): Boolean == member?(n, c)
> > + mem?: % -> Boolean := member?(n, #1)
> > + n = value t or any?(mem?, children t)
>
> I don't know why, but I really hate the #1 notation. If you just take
> the right hand side
>
> member?(n, #1)
>
> what type must #1 have? One can only figure that out (as in your case)
> by looking at the left hand side.
>
> The #1 notation looks like a nice shorthand for untyped lambda calculus,
> but don't we have typed lambda calculus in SPAD?
>

If you look at the compiler internals you will notice that currently
the _only_ way to handle nested functions is to have a context
which requires a function type. So without a "left hand side"
there are no nested functions.

But actually, this is a minor drawback. A more serious one is
that #1 gets confusing if there are multiple nesting levels;
then it is easy to lose track of which level a given #1 belongs to.
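A Python illustration of that confusion: with distinct parameter names, nested anonymous functions are unambiguous, but if every level reuses the same placeholder name (as #1 does at each level), the inner binding shadows the outer one.

```python
# With distinct parameter names the nesting is unambiguous:
outer = lambda xs: (lambda y: [x + y for x in xs])
print(outer([1, 2])(10))             # [11, 12]

# If every level reuses the same name, the inner binding shadows
# the outer one:
f = lambda x: (lambda x: x + 1)      # inner 'x' hides the outer 'x'
print(f(5)(10))                      # 11; the outer argument 5 is lost
```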

More remarks:

- definitions of the form:

f(x1 : T1, x2 : T2) : S == ...

and

f : (T1, T2) -> S := ...

should be more or less equivalent

- in some places the compiler is supposed to infer types. I think
nobody wants to write ((x + y)@Integer * z)@Integer; rather, the
compiler is supposed to propagate types. Inference for #1
is comparable to the example above.

To clarify: I would like to have better support for nested
functions. I have a patch which actually tries to do something
sensible with nested definitions. Unfortunately, while it
allows nicer notation than #1, the other drawbacks remain
(and currently my patch adds its own problems). So ATM
I prefer to only disable wrong behaviour...

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Ralf Hemmecke

Mar 18, 2009, 7:05:39 PM3/18/09
to fricas...@googlegroups.com
> The code for 'node?' is contained in 'src/algebra/tree.spad.pamphlet'
>
> node?(t1, t2) ==
>   t1 = t2 => true
>   t case empty => false
>   any?(inode?, children t2) where
>     inode?(t: %): Boolean == node?(t1, t)
>
> however in addition to the nested function it also contains an
> undefined variable 't'. Presumably this should be 't2'. After applying
> your patch and the change
>
> t2 case empty => false
>
> The compiler silently *appears* to correctly compile 'node?' (no
> errors or warnings), but the function is still missing from the
> resulting domain!!
>
> Here is a version of 'node?' that avoids the use of a local
> function. It compiles and works as expected:
>
> node?(t1:%, t2:%):Boolean ==
>   t1 = t2 => true
>   t2 case empty => false
>   for t in children t2 repeat
>     node?(t1,t) => return true
>   false
>

... and is basically the same code as appears in
BinaryRecursiveAggregate in aggcat.spad.pamphlet.

if % has SetAggregate(S) and S has SetCategory then
  node?(u,v) ==
    empty? v => false
    u = v => true
    for y in children v repeat node?(u,y) => return true
    false

I think, it is even easier to understand than the "any?(..)" version.

Ralf

Ralf Hemmecke

Mar 18, 2009, 7:26:53 PM3/18/09
to fricas...@googlegroups.com
Dear Waldek,

On 03/19/2009 12:07 AM, Waldek Hebisch wrote:
> Ralf Hemmecke wrote:
>>> --- I'd rather write
>>> --- subset?(s, t) == #s < #t and every?(x +-> member?(x, t), parts s)
>>> --- but the compiler complains
>>> - subset?(s, t) ==
>>> - #s >= #t => false
>>> - every?(mem?, parts s) where
>>> - mem?(x: S): Boolean == member?(x, t)
>>> + subset?(s, t) == #s <= #t and s = intersect(s,t)
>> Were you saying that anonymous functions do not work inside a function body?

> Well, above you have _named_ nested function, and such functions
> do not work (sometimes one gets weird error message, sometimes one
> silently gets wrong code). Anonymous functions using #1, etc more
> or less work (there are bugs, but simple cases should work OK).
> AFAICS anonymous functions introduced by '+->' are unsupported in
> Spad.

Sorry, I was just looking at the commented line that contained the +->.

But you made me wonder...

Indeed

)abbrev domain AAA Aaa
Aaa(): with
    foo: (Integer -> Integer, Integer) -> Integer
    bar: Integer -> Integer
  == add
    foo(f: Integer->Integer, i: Integer): Integer == f i
    bar(i: Integer): Integer == foo((x: Integer): Integer +-> x+1, i)

does currently not compile. Whereas entering the last two lines into the
interpreter and calling bar(5) seems to work fine.

> So ATM I prefer to only disable wrong behaviour...

I'm on your side. But that '+->' does not work in the compiler is
somewhat surprising to me. Well, I don't use it that often anyway.

Ralf

Waldek Hebisch

Mar 18, 2009, 8:10:21 PM3/18/09
to fricas...@googlegroups.com
Bill Page wrote:
>
> On Wed, Mar 18, 2009 at 5:10 PM, Waldek Hebisch wrote:
> >
> > For a long time we were mishandling nested functions.  The patch
> > below modifies the compiler to signal an error when nested
> > functions are encountered.  With the patch the compiler
> > discovered a few cases where we used nested functions.
> > The generated Lisp code looks bogus (AXIOMsys should crash when
> > executing it).  The strange thing is that ATM I am not able to
> > produce a testcase: the code seems to work as expected.
> > I really do not know _how_ the current code works: attempts
> > to trace the current code either gave no effect or infinite
> > recursion.  OTOH the code below gives the expected trace.

Yes. Unfortunately, t has a global declaration, so the compiler did
not complain.

> After applying
> your patch and the change
>
> t2 case empty => false
>
> The compiler silently *appears* to correctly compile 'node?' (no
> errors or warnings), but the function is still missing from the
> resulting domain!!
>

Thanks, Bill. I think that this explains why member? worked:
we were not using the version from Tree, but the version from
HomogeneousAggregate. So, we probably should just remove
member? from Tree...

> Here is a version of 'node?' that avoids the the use of a local
> function. It compiles and works as expected:
>
> node?(t1:%, t2:%):Boolean ==
>   t1 = t2 => true
>   t2 case empty => false
>   for t in children t2 repeat
>     node?(t1,t) => return true
>   false
>
> ---
>
> So I suppose that your patch
>
> > @@ -117,8 +117,9 @@
> > node?(t1, t2) ==
> > t1 = t2 => true
> > t case empty => false
> > - any?(inode?, children t2) where
> > - inode?(t: %): Boolean == node?(t1, t)
> > + inode?:% -> Boolean := node?(t1, #1)
> > + any?(inode?, children t2)
> > +
>
> still is somehow compiled incorrectly. Is 'inode?' still considered
> as "nested function" even though it does not appear in a 'where'
> clause?
>

inode? is a nested function. The difference is that in my version it
is an anonymous function, which has some chance of working. I have
checked the generated Lisp code and the code for this function looks
OK. So we probably have a compiler bug which omits node? from the
tables generated by the compiler and used for runtime search.


--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

Mar 18, 2009, 10:53:39 PM3/18/09
to fricas...@googlegroups.com

Actually, one can hack at least some support for '+->' rather
quickly. With the patch below, your program, and a version where
the anonymous function uses arguments of bar, seem to work.


Index: src/interp/property.lisp
===================================================================
--- src/interp/property.lisp (revision 548)
+++ src/interp/property.lisp (working copy)
@@ -411,6 +411,7 @@
(\@ |compAtSign|)
(|:| |compColon|)
(\:\: |compCoerce|)
+ (|+->| |compLambda|)
(QUOTE |compQuote|)

(|add| |compAdd|)
Index: src/interp/compiler.boot
===================================================================
--- src/interp/compiler.boot (revision 548)
+++ src/interp/compiler.boot (working copy)
@@ -143,7 +143,38 @@
ScanOrPairVec('hasone?,x) where
hasone? x == MEMQ(x,$formalMapVariables)

+argsToSig(args) ==
+ args is [":", v, t] => [[v], [t]]
+ sig1 := []
+ arg1 := []
+ bad := false
+ for arg in args repeat
+ arg is [":", v, t] =>
+ sig1 := [t, :sig1]
+ arg1 := [v, :arg1]
+ bad := true
+ bad => [nil, nil]
+ [REVERSE(arg1), REVERSE(sig1)]
+
+compLambda(x is ["+->", vl, body], m, e) ==
+ -- SAY(["compLambda", x, m])
+ vl is [":", args, target] =>
+ args :=
+ args is ["Tuple", :a1] => a1
+ args
+ LISTP(args) =>
+ [arg1, sig1] := argsToSig(args)
+ sig1 =>
+ ress := compAtSign(["@", ["+->", arg1, body],
+ ["Mapping", target, :sig1]], m, e)
+ -- SAY(["compLambda returning:", ress.0, ress.1])
+ ress
+ stackAndThrow ["compLambda", x]
+ stackAndThrow ["compLambda", x]
+ stackAndThrow ["compLambda", x]
+
compWithMappingMode(x,m is ["Mapping",m',:sl],oldE) ==
+ -- SAY(["compWithMappingMode", x, m])
$killOptimizeIfTrue: local:= true
e:= oldE
isFunctor x =>
@@ -151,7 +182,21 @@
(and/[extendsCategoryForm("$",s,mode) for mode in argModeList for s in sl]
) and extendsCategoryForm("$",target,m') then return [x,m,e]
if STRINGP x then x:= INTERN x
- for m in sl for v in (vl:= take(#sl,$FormalMapVariableList)) repeat
+ ress := nil
+ if x is ["+->", vl, nx] then
+ vl is [":", :.] =>
+ ress := compLambda(x,m,oldE)
+ -- In case Boot gets fixed
+ ress
+ vl :=
+ SYMBOLP(vl) => [vl]
+ LISTP(vl) and (and/[SYMBOLP(v) for v in vl])=> vl
+ stackAndThrow ["bad +-> arguments:", vl]
+ x := nx
+ else
+ vl:= take(#sl,$FormalMapVariableList)
+ ress => ress
+ for m in sl for v in vl repeat
[.,.,e]:= compMakeDeclaration([":",v,m],$EmptyMode,e)
not null vl and not hasFormalMapVariable(x, vl) => return
[u,.,.] := comp([x,:vl],m',e) or return nil
@@ -237,6 +282,7 @@
uu:=
frees => ['CONS,fname,vec]
['LIST,fname]
+ -- SAY(["compWithMappingMode returning", uu, m])
[uu,m,oldE]

extractCodeAndConstructTriple(u, m, oldE) ==

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

Mar 18, 2009, 11:21:24 PM3/18/09
to fricas...@googlegroups.com
> Ralf wrote:
>> )abbrev domain AAA Aaa
>> Aaa(): with
>>    foo: (Integer -> Integer, Integer) -> Integer
>>    bar: Integer -> Integer
>>   == add
>>    foo(f: Integer->Integer, i: Integer): Integer == f i
>>    bar(i: Integer): Integer == foo((x: Integer): Integer +-> x+1, i)
>>
>> does currently not compile. Wheras entering the last two lines into the
>> interpreter and calling bar(5) seems to work fine.
> ...

On Wed, Mar 18, 2009 at 10:53 PM, Waldek Hebisch wrote:
> Actually, one can hack at least some support for '+->' rather
> quickly.  With the patch below your program, and a version
> where the anonymous function uses arguments of bar seem
> to work.
>

Great! I would really strongly prefer to write anonymous functions
this way in SPAD. Does this also work with anonymous functions having
multiple arguments?

Regards,
Bill Page.

Waldek Hebisch

Mar 19, 2009, 7:01:29 AM3/19/09
to fricas...@googlegroups.com
Bill Page:

>
> > Ralf wrote:
> >> )abbrev domain AAA Aaa
> >> Aaa(): with
> >>    foo: (Integer -> Integer, Integer) -> Integer
> >>    bar: Integer -> Integer
> >>   == add
> >>    foo(f: Integer->Integer, i: Integer): Integer == f i
> >>    bar(i: Integer): Integer == foo((x: Integer): Integer +-> x+1, i)
> >>
> >> does currently not compile. Whereas entering the last two lines into the
> >> interpreter and calling bar(5) seems to work fine.
> > ...
>
> On Wed, Mar 18, 2009 at 10:53 PM, Waldek Hebisch wrote:
> > Actually, one can hack at least some support for '+->' rather
> > quickly.  With the patch below your program, and a version
> > where the anonymous function uses arguments of bar seem
> > to work.
> >
>
> Great! I would really strongly prefer to write anonymous functions
> this way in SPAD. Does this also work with anonymous functions having
> multiple arguments?
>

Yes:

((x: Integer, y : Float): Integer) +-> x+i

or

(x,z) +-> x+1

ATM the only problem I know is a syntax nitpick: when you
specify types you have to put the argument group in parentheses.
We should probably adjust the priority of '+->' so that it binds
less tightly than ':'.

The patch is little tested, but if it works fine I would
commit it (or a better version).

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Ralf Hemmecke

Mar 19, 2009, 7:13:08 AM3/19/09
to fricas...@googlegroups.com
>> )abbrev domain AAA Aaa
>> Aaa(): with
>> foo: (Integer -> Integer, Integer) -> Integer
>> bar: Integer -> Integer
>> == add
>> foo(f: Integer->Integer, i: Integer): Integer == f i
>> bar(i: Integer): Integer == foo((x: Integer): Integer +-> x+1, i)
>>
>> does currently not compile. Wheras entering the last two lines into the
>> interpreter and calling bar(5) seems to work fine.

> Actually, one can hack at least some support for '+->' rather
> quickly.  With the patch below your program, and a version
> where the anonymous function uses arguments of bar seem to work.

That makes me think of a test suite for the compiler. Do we have that at
the moment? Of course, src/algebra is one thing, but it's not really a
test suite. There should also be some programs which are supposed not to
compile. Additionally, I would like to see some tests that incorporate
features that SPAD does not yet support but should in the future, like
Generator, for example.

Ralf

Waldek Hebisch

Mar 19, 2009, 9:04:16 AM3/19/09
to fricas...@googlegroups.com
Ralf Hemmecke wrote:
>
> That makes me think of a test suite for the compiler. Do we have that at
> the moment? Of course, src/algebra is one thing, but it's not really a
> test suite. There should also be some programs which are supposed not to
> compile. Additionally, I would like to see some tests that incorporate
> features that SPAD does not yet support but should in the future, like
> Generator, for example.
>

Most changes to the compiler were intended to be compatible, that
is, produce the same executable code. To make sure that this is
the case I normally compare the Lisp output from the Spad compiler.

However, once we start adding new constructs to Spad we will
need a real testsuite...
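The comparison Waldek describes amounts to a golden-file regression check, which can be sketched as follows (illustrative Python; the file names are hypothetical, not actual FriCAS paths):

```python
import difflib

def lisp_output_diff(new: str, reference: str) -> list:
    """Unified diff between freshly generated Lisp output and a stored
    reference; an empty result means the compiler change kept the
    generated code identical."""
    return list(difflib.unified_diff(
        reference.splitlines(), new.splitlines(),
        fromfile="reference.lisp", tofile="new.lisp", lineterm=""))

# Identical output: the change is compatible.
assert lisp_output_diff("(DEFUN f (x) x)", "(DEFUN f (x) x)") == []
# A code-generation change shows up as a non-empty diff.
assert lisp_output_diff("(DEFUN f (x) 1)", "(DEFUN f (x) x)") != []
```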


--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

Mar 19, 2009, 9:23:41 AM3/19/09
to fricas...@googlegroups.com
On Thu, Mar 19, 2009 at 7:01 AM, Waldek Hebisch wrote:
>
> Bill Page:

>>
>> Great! I would really strongly prefer to write anonymous functions
>> this way in SPAD. Does this also work with anonymous functions
>> having multiple arguments?
>>
>
> Yes:
>
> ((x: Integer, y : Float): Integer) +-> x+i
>
> or
>
> (x,z) +-> x+1
>
> ATM the only problem I know is a syntax nitpick: when you
> specify types you have to put the argument group in parentheses.
> We should probably adjust the priority of '+->' so that it binds
> less tightly than ':'.
>

Yes.

> The patch is little tested, but if it works fine I would
> commit it (or a better version).
>

Yes, please! :-) I think this is an excellent addition to SPAD.

Here is a little test example:

(1) -> )sys cat anonymous.spad


)abbrev domain AAA Aaa
Aaa(): with
    foo: ((String,Integer) -> String, Integer) -> String
    bar: Integer -> String
  == add
    foo(f: (String,Integer)->String, i: Integer):String == f("xxx", i)
    bar(i: Integer): String == foo(((s:String,x: Integer): String) +->
        concat(s,string(x+1)), i)

(1) -> )co anonymous.spad
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/apply
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/c-doc
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/c-util
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/profile
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/category
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/compiler
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/define
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/functor
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/info
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/iterator
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/modemap
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/nruncomp
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/package
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/htcheck
Compiling FriCAS source code from file /home/wspage/anonymous.spad
using old system compiler.
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/parsing
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/bootlex
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/def
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/fnewmeta
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/parse
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/postpar
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/preparse
AAA abbreviates domain Aaa
------------------------------------------------------------------------
initializing NRLIB AAA for Aaa
compiling into NRLIB AAA
compiling exported foo : ((String,Integer) -> String,Integer) -> String
AAA;foo;MIS;1 is replaced by SPADCALLxxxif
Time: 0 SEC.

compiling exported bar : Integer -> String
Time: 0.01 SEC.

(time taken in buildFunctor: 0)

;;; *** |Aaa| REDEFINED

;;; *** |Aaa| REDEFINED
Time: 0 SEC.


Warnings:
[1] bar: s has no value
[2] bar: x has no value


Cumulative Statistics for Constructor Aaa
Time: 0.01 seconds

Loading
/usr/local/lib/fricas/target/i686-pc-linux/autoload/bc-matrix
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/bc-misc
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/bc-solve
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/bc-util
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/ht-util
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/htsetvar
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/ht-root
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/br-con
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/br-data
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/showimp
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/br-op1
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/br-op2
Loading
/usr/local/lib/fricas/target/i686-pc-linux/autoload/br-search
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/br-util
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/topics
Loading /usr/local/lib/fricas/target/i686-pc-linux/autoload/br-prof
Loading
/usr/local/lib/fricas/target/i686-pc-linux/autoload/br-saturn
finalizing NRLIB AAA
Processing Aaa for Browser database:
--->-->Aaa((foo ((String) (Mapping (String) (String) (Integer))
(Integer)))): Not documented!!!!
--->-->Aaa((bar ((String) (Integer)))): Not documented!!!!
--->-->Aaa(constructor): Not documented!!!!
--->-->Aaa(): Missing Description
; (DEFUN |Aaa| ...) is being compiled.


;; The variable |$ConstructorCache| is undefined.
;; The compiler will assume this variable is a global.
------------------------------------------------------------------------

Aaa is now explicitly exposed in frame frame0
Aaa will be automatically loaded when needed from
/home/wspage/AAA.NRLIB/AAA

(1) -> bar(5)
Loading /home/wspage/AAA.NRLIB/AAA for package Aaa
Loading /usr/local/lib/fricas/target/i686-pc-linux/algebra/SEX.o for
domain SExpression
Loading /usr/local/lib/fricas/target/i686-pc-linux/algebra/A1AGG-.o
for domain OneDimensionalArrayAggregate&

Loading /usr/local/lib/fricas/target/i686-pc-linux/algebra/SRAGG-.o
for domain StringAggregate&
Loading /usr/local/lib/fricas/target/i686-pc-linux/algebra/FLAGG-.o
for domain FiniteLinearAggregate&
Loading /usr/local/lib/fricas/target/i686-pc-linux/algebra/LNAGG-.o
for domain LinearAggregate&

(1) "xxx6"
Type: String
(2) ->


Regards,
Bill Page

Martin Rubey

unread,
Mar 19, 2009, 12:21:51 PM3/19/09
to fricas...@googlegroups.com
Ralf Hemmecke <ra...@hemmecke.de> writes:

> I don't know why, but I really hate the #1 notation.

Me too. I *really* hate it. Mostly, because:

> But actually, this is a minor drawback. A more serious one is that #1
> gets confusing if there are multiple nesting levels; then it is
> easy to lose track of which level a given #1 belongs to.

and that's not minor for me!

Martin

Waldek Hebisch

unread,
Mar 19, 2009, 11:22:19 PM3/19/09
to fricas...@googlegroups.com

Actually, I discovered another related drawback. As I wrote, I did a
hack to support named nested functions. I thought that there was a
problem with my hack. Actually, it seems that the real problem is
in the implementation of #1 support. Namely, currently when an
expression is expected to have a functional type, the Spad compiler
treats it as an anonymous function using #1 notation. This
means that it is hard to write a function returning a function
in any way other than as a single #1 expression.

Also, using functional values becomes harder: normally they are
used as is, but if the type is known (for example due to a proper @)
then the #1 craziness kicks in and the value may be mishandled. To
tell the truth, if there is no #1 (and #2, ..., depending on
the number of arguments), then the Spad compiler tries to treat
the expression just as a functional value, but in a rather strange
way, which may fail.

So, I think that it would be reasonable to completely switch
to the +-> notation in the future.
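To illustrate the nesting problem (my sketch, not code from the
thread): with #1 the parameters of nested anonymous functions are all
spelled the same way, while +-> names each parameter explicitly:

```spad
-- Hypothetical example: keep the rows of ll whose entries are all
-- positive.  With #1, both nesting levels use the same token:
--   select(every?(#1 > 0, #1), ll)   -- which #1 belongs to which level?
-- With +->, each level gets its own named parameter:
f : List List Integer -> List List Integer
f ll == select(xs +-> every?(x +-> x > 0, xs), ll)
```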

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 20, 2009, 12:02:27 AM3/20/09
to fricas...@googlegroups.com
Attached is my newest patch to support nested functions and a test
file. Compared to previous version I fixed a few bugs and added
support for named functions.


--
Waldek Hebisch
heb...@math.uni.wroc.pl

nested.spad
clo5a.diff

Martin Rubey

unread,
Mar 20, 2009, 3:42:26 AM3/20/09
to fricas...@googlegroups.com
Waldek Hebisch <heb...@math.uni.wroc.pl> writes:

> Attached is my newest patch to support nested functions and a test
> file. Compared to previous version I fixed a few bugs and added
> support for named functions.

I won't have time to test it until next week, but I think that this
development is very exciting.

> So, I think that it would be reasonable to completely switch to +->
> notation in the future.

Absolutely! The earlier, the better. I also think that this should
be a release.

Martin

Bill Page

unread,
Mar 20, 2009, 8:31:26 AM3/20/09
to fricas...@googlegroups.com
On Fri, Mar 20, 2009 at 3:42 AM, Martin Rubey wrote:

+1

This patch builds and the test runs fine on my system. I would be glad
to help with converting existing code that uses the #1 notation to
+->. How would you like to divide the work?

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 20, 2009, 9:12:18 AM3/20/09
to fricas...@googlegroups.com
Bill Page wrote:
>
> On Fri, Mar 20, 2009 at 3:42 AM, Martin Rubey wrote:
> >
> > Waldek Hebisch writes:
> >
> >> Attached is my newest patch to support nested functions and a test
> >> file. Compared to previous version I fixed a few bugs and added

> >> support for named functions.
> >
> > I won't have time to test it until next week, but I think that this
> > development is very exciting.
> >
> >> So, I think that it would be reasonable to completely switch to +->
> >> notation in the future.
> >
> > Absolutely! The earlier, the better. I also think that this should

> > be a release.
> >
>
> +1
>
> This patch builds and the test runs fine on my system. I would be glad
> to help with converting existing code that uses the #1 notation to
> +->. How would you like to divide the work?
>

I have now committed the patch. Almost surely it will need more fixes,
but since it mostly adds new functionality there is little risk of
breaking old things.

Concerning the conversion from #1: there are about 1200 places in the
algebra code which use this convention. It seems that we need to do the
change by hand, so we probably should split the work on a file-by-file
basis. We should probably start from the most used files like expr,
int*, kl, poly, mpoly, since in these files testing is likely to
quickly discover problems.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 20, 2009, 10:34:25 AM3/20/09
to fricas...@googlegroups.com
On Fri, Mar 20, 2009 at 9:12 AM, Waldek Hebisch wrote:
> ...

> I have now committed the patch.  Almost surely it will need more fixes,
> but since it mostly adds new functionality there is little risk of
> breaking old things.
>
> Concerning the conversion from #1: there are about 1200 places in the
> algebra code which use this convention.  It seems that we need to do
> the change by hand, so we probably should split the work on a
> file-by-file basis.  We should probably start from the most used files
> like expr, int*, kl, poly, mpoly, since in these files testing is
> likely to quickly discover problems.
>

Ok, I can work on the following 19 files over the next 1-2 days:

wspage@debian:~/fricas-src/src/algebra$ ls expr* int* kl* poly* multpoly*

expr2ups.spad.pamphlet intef.spad.pamphlet intrf.spad.pamphlet
exprode.spad.pamphlet integer.spad.pamphlet kl.spad.pamphlet
expr.spad.pamphlet integrat.spad.pamphlet multpoly.spad.pamphlet
intaf.spad.pamphlet interval.as.pamphlet polycat.spad.pamphlet
intalg.spad.pamphlet interval.spad.pamphlet poly.spad.pamphlet
intaux.spad.pamphlet intfact.spad.pamphlet
intclos.spad.pamphlet intpm.spad.pamphlet

I see about 834 uses of #1 in 139

wspage@debian:~/fricas-src/src/algebra$ grep '#1' expr* int* kl* poly*
multpoly* | wc
139 834 10946

----

Does anyone else want to commit to another group?

---

Waldek, what procedure would you recommend for regression testing? Can
you please be specific so that we all follow the same method?

Regards,
Bill Page.

Ralf Hemmecke

unread,
Mar 20, 2009, 10:49:37 AM3/20/09
to fricas...@googlegroups.com
> I see about 834 uses of #1 in 139
> Does anyone else want to commit to another group?

Yes, I would like to contribute, but only if we open up a branch for
that. It seems that it is going to change quite a lot of files.

There should be some regression tests added at the same time, maybe even
before branching is done. Anyway, it would be good to first commit the
tests and then do the conversion.

Ralf

Bill Page

unread,
Mar 20, 2009, 12:32:03 PM3/20/09
to fricas...@googlegroups.com
On Fri, Mar 20, 2009 at 9:12 AM, Waldek Hebisch wrote:
>
> Concerning the conversion from #1: there are about 1200 places in the
> algebra code which use this convention.  It seems that we need to do
> the change by hand, so we probably should split the work on a
> file-by-file basis.  We should probably start from the most used files
> like expr, int*, kl, poly, mpoly, since in these files testing is
> likely to quickly discover problems.
>

As my first attempt, in 'exprode.spad.pamphlet' I changed:

wspage@debian:~/fricas-src/src/algebra$ svn diff
Index: exprode.spad.pamphlet
===================================================================
--- exprode.spad.pamphlet (revision 549)
+++ exprode.spad.pamphlet (working copy)
@@ -112,7 +112,7 @@
kernel(op, [div2exquo f for f in argument k]$List(F))

smp2exquo p ==
- map(k2exquo,#1::F,p)$PolynomialCategoryLifting(IndexedExponents K,
+ map(k2exquo,(x:R):F+->(x::F),p)$PolynomialCategoryLifting(IndexedExponents K,
K, R, P, F)

div2exquo f ==

---

Question 1:

When I notangle and compile the file 'exprode.spad' I get the warning:

....
[4] smp2exquo: x has no value

Is such warning to be expected? Perhaps 'x' should be treated as only
a formal parameter and no warning generated?


Question 2:

How explicit must I be with the types? For example should

+ map(k2exquo, x+->x::F, p)$PolynomialCategoryLifting(IndexedExponents K,

be sufficient?

Regards,
Bill Page.

Ralf Hemmecke

unread,
Mar 20, 2009, 12:56:38 PM3/20/09
to fricas...@googlegroups.com
> Question 2:
>
> How explicit must I be with the types? For example should
>
> + map(k2exquo, x+->x::F, p)$PolynomialCategoryLifting(IndexedExponents K,
>
> be sufficient?

I would be for a "no".

I think more redundancy helps. Perhaps during that process some bugs
may be revealed. Additionally, I don't think that "x+->x::F" is proper
Aldor. It might be that the SPAD compiler is smart enough to figure out
from the type of "map" what the type of x and the result of this lambda
expression must be, but even though it is much longer, I am much in
favor of adding explicit type information.

Ralf

Bill Page

unread,
Mar 20, 2009, 1:18:05 PM3/20/09
to fricas...@googlegroups.com
On Fri, Mar 20, 2009 at 12:56 PM, Ralf Hemmecke wrote:
>
>> Question 2:
>>
>> How explicit must I be with the types? For example should
>>
>> +      map(k2exquo, x+->x::F, p)$PolynomialCategoryLifting(IndexedExponents K,
>>
>> be sufficient?
>
> I would be for a "no".
>
> I think more redundancy helps. Perhaps during that process some bugs
> may be revealed.

But conversely, redundancy sometimes makes things a little harder to maintain.

> Additionally, I don't think that "x+->x::F" is proper Aldor.

That is not a very strong argument for FriCAS, but could you please check?

> It might be that the SPAD compiler is smart enough to figure out
> from the type of "map" what the type of x and the result of this
> lambda expression must be, but even though it is much longer...
> I am much for adding explicit type information.
>

In general I agree, however in this particular case just the function
name 'coerce' seems to be sufficient for the compiler.

wspage@debian:~/fricas-src/src/algebra$ svn diff
Index: exprode.spad.pamphlet
===================================================================
--- exprode.spad.pamphlet (revision 549)
+++ exprode.spad.pamphlet (working copy)
@@ -112,7 +112,7 @@
kernel(op, [div2exquo f for f in argument k]$List(F))

smp2exquo p ==
- map(k2exquo,#1::F,p)$PolynomialCategoryLifting(IndexedExponents K,

+ map(k2exquo,coerce,p)$PolynomialCategoryLifting(IndexedExponents K,
K, R, P, F)

div2exquo f ==
---

This compiles fine with no errors or warnings.

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 20, 2009, 4:02:01 PM3/20/09
to fricas...@googlegroups.com
Bill Page wrote:
>
> On Fri, Mar 20, 2009 at 9:12 AM, Waldek Hebisch wrote:
> >
> > Concerning the conversion from #1: there are about 1200 places in
> > the algebra code which use this convention. It seems that we need to
> > do the change by hand, so we probably should split the work on a
> > file-by-file basis. We should probably start from the most used
> > files like expr, int*, kl, poly, mpoly, since in these files testing
> > is likely to quickly discover problems.
> >
>
> As my first attempt, in 'exprode.spad.pamphlet' I changed:
>
> wspage@debian:~/fricas-src/src/algebra$ svn diff
> Index: exprode.spad.pamphlet
> ===================================================================
> --- exprode.spad.pamphlet (revision 549)
> +++ exprode.spad.pamphlet (working copy)
> @@ -112,7 +112,7 @@
> kernel(op, [div2exquo f for f in argument k]$List(F))
>
> smp2exquo p ==
> - map(k2exquo,#1::F,p)$PolynomialCategoryLifting(IndexedExponents K,
> + map(k2exquo,(x:R):F+->(x::F),p)$PolynomialCategoryLifting(IndexedExponents
> K,
> K, R, P, F)
>
> div2exquo f ==
>
> ---
>
> Question 1:
>
> When I notangle and compile the file 'exprode.spad' I get the warning:
>
> ....
> [4] smp2exquo: x has no value
>
> Is such warning to be expected? Perhaps 'x' should be treated as only
> a formal parameter and no warning generated?
>

Thanks for the information. The warning is harmless. There is code
in the compiler which suppresses such warnings for #1 parameters;
I need to add code to handle parameters of '+->'.

>
> Question 2:
>
> How explicit must I be with the types? For example should
>
> + map(k2exquo, x+->x::F, p)$PolynomialCategoryLifting(IndexedExponents K,
>
> be sufficient?
>

Technically 'x+->x::F' should almost always be enough -- if the
compiler can infer types for #1, then it should also be able to infer
types for '+->'. Theoretically, there is a possibility that the type
is discovered only after backtracking, and the new code may raise an
error in the wrong branch, but this should happen rarely.

In short: if the file worked using #1 and after conversion to '+->'
without types it compiles, then the code should work.

In general, you should either add no types or give both the argument
types and the result type. If the compiler knows the type it
will check that the type you gave is consistent. However, the
compiler will not attempt to check a partially specified type
(say, only types for arguments) and (currently) cannot use partial
information for inference.

There is a question of whether we should add type annotations even if
they are not required by the compiler. In a case like the above I would
say no: PolynomialCategoryLifting uniquely specifies the types for map.

smp2exquo : P -> F

smp2exquo p ==
map(k2exquo,#1::F,p)$PolynomialCategoryLifting(IndexedExponents K,
                                               K, R, P, F)

Ralf Hemmecke wrote:

> I think more redundancy helps. Perhaps during that process some bugs
> may be revealed.

In general I agree that some redundancy helps. However, in most
cases anonymous functions are quite small and the type is clear from
the context. Also

(x : R): F +-> x::F

does not add much information. The compiler can check the type of x
(note that :: determines the result type). A human reader still has to
look up what R and F are. OTOH in this case it is hard to say more:
(except in comments) we have no way to say inside smp2exquo that
R and F are parameters of ExpressionSpaceODESolver. And even if we
could, we probably would not want to: this tiny and almost trivial
function would get quite bulky.

So I would say: add types only in puzzling cases; otherwise leave
them without types.

One more remark: since the #1 construct works only if the type is
known to the compiler, annotations mostly check that the human's idea
of the type agrees with the compiler's knowledge.
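As a sketch of the all-or-nothing annotation rule described above (my
example, not from the thread; R and F stand for whatever domains are
in scope):

```spad
-- Hypothetical illustration: either give no types at all, or give
-- every argument type and the result type.
g1 : R -> F := x +-> x::F        -- untyped body; type on the declaration
g2 := (x : R) : F +-> x::F       -- fully annotated; compiler checks it
-- (x : R) +-> x::F              -- partial annotation: neither checked
--                               -- nor used for inference
```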

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 20, 2009, 4:50:19 PM3/20/09
to fricas...@googlegroups.com
Bill Page wrote:
>
> Ok, I can work on the following 19 files over the next 1-2 days:
>
> wspage@debian:~/fricas-src/src/algebra$ ls expr* int* kl* poly* multpoly*
>
> expr2ups.spad.pamphlet intef.spad.pamphlet intrf.spad.pamphlet
> exprode.spad.pamphlet integer.spad.pamphlet kl.spad.pamphlet
> expr.spad.pamphlet integrat.spad.pamphlet multpoly.spad.pamphlet
> intaf.spad.pamphlet interval.as.pamphlet polycat.spad.pamphlet
> intalg.spad.pamphlet interval.spad.pamphlet poly.spad.pamphlet
> intaux.spad.pamphlet intfact.spad.pamphlet
> intclos.spad.pamphlet intpm.spad.pamphlet
>
> I see about 834 uses of #1 in 139
>
> wspage@debian:~/fricas-src/src/algebra$ grep '#1' expr* int* kl* poly*
> multpoly* | wc
> 139 834 10946
>

I think that there are 172 uses:

$ grep -h '#[1-9]' expr* int* kl* poly* multpoly* | tr -cd '#' | wc
0 1 172

> ----
>
> Does anyone else want to commit to another group?
>

I will probably spend most of my time improving the patch, but for
testing I will do elemntry.spad.pamphlet and fraction.spad.pamphlet
(that is, 38 places).

> ---
>
> Waldek, what procedure would you recommend for regression testing? Can
> you please be specific so that we all follow the same method?
>

If we just do the conversion, the changes to Lisp files should be
trivial. For example, the change to exprode.spad produced:

--- ax-build16/src/algebra/EXPRODE.NRLIB/EXPRODE.lsp 2009-03-20 11:07:26.000000000 -0400
+++ EXPRODE.NRLIB/EXPRODE.lsp 2009-03-20 17:41:17.000000000 -0400
@@ -38,7 +38,7 @@
(SPADCALL (CONS (|function| |EXPRODE;k2exquo|) $)
(CONS #'|EXPRODE;smp2exquo!0| $) |p| (QREFELT $ 33)))

-(DEFUN |EXPRODE;smp2exquo!0| (|#1| $) (SPADCALL |#1| (QREFELT $ 28)))
+(DEFUN |EXPRODE;smp2exquo!0| (|x| $) (SPADCALL |x| (QREFELT $ 28)))

(DEFUN |EXPRODE;div2exquo| (|f| $)
(PROG (|d|)

That is, only the name of the parameter changed. If all changes are of
this sort (of course, since the new names have different lengths there
may be formatting changes) I will be able to manually inspect them in
2-3 hours. If there are more complicated changes we will have to
look at them more carefully. Only after seeing the changes will I be
able to say what extra tests are required.

Of course this tactic will only work if we avoid mixing
unrelated changes (so if in the process you see something
that should be clarified or fixed, the fix should go in a separate
patch).

Ralf Hemmecke wrote:

> Yes, I would like to contribute, but only if we open up a branch for
> that. It seems that it is going to change quite a lot of files.
>
> There should be some regression tests added at the same time, maybe even
> before branching is done. Anyway, it would be good to first commit the
> tests and then do the conversion.

Yes, it makes sense to create a short-lived branch for this and
merge all changes in one step. Concerning tests: adding them
is a long process. The practice of other projects shows that
the most effective tests come from bug reports. Writing tests by
developers takes a lot of effort and usually gives not-so-good
tests -- typically the coverage is too small. Random tests
may improve coverage, but may require too much time to run
systematically. For example, Vladimir Bondarenko wrote about
hundreds of hours for test runs -- fine if you are on a bug hunt,
but completely impractical for everyday (or rather every-commit)
use.

Since I plan to compare the generated Lisp code, I do not think
we should wait for extra tests. If we see some breakage in the
generated code, then we could add appropriate source tests.


I will be out of town for the next two days. Please either
wait with committing patches or create a branch.


--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 22, 2009, 3:17:51 PM3/22/09
to fricas...@googlegroups.com
>
> Bill Page wrote:
> ...

>> Question 2:
>>
>> How explicit must I be with the types? For example should
>>
>> +      map(k2exquo, x+->x::F, p)$PolynomialCategoryLifting(IndexedExponents K,
>>
>> be sufficient?
>>
>

On Fri, Mar 20, 2009 at 4:02 PM, Waldek Hebisch wrote:
> Technically 'x+->x::F' almost always should be enough -- if compiler
> can infer types for #1, then also should be able to infer types
> for '+->'.  Theoretically, there is possibility that the type
> is discovered only after backtracking, and the new code may raise
> error in the wrong branch, but this should happen rarely.
>
> In short: if using #1 file worked and after conversion to '+->'
> without types it compiles, then the code should work.
>

I have a problem converting the following function definition:

- kereval(k, lk, lv) == match(lk, lv, k, map(eval(#1, lk, lv), #1))

found at line 451 in 'src/algebra/expr.spad.pamphlet' to the new +->
lambda notation. A straightforward conversion to

+          kereval(k, lk, lv) == match(lk, lv, k, y+->map(x+->eval(x, lk, lv), y))

produces the following error message:

compiling local kereval : (Kernel $,List Kernel $,List $) -> $
****** comp fails at level 1 with expression: ******
error in function kereval

((|match| |lk| |lv| |k|
(+-> |y| (|map| (+-> |x| (|eval| |x| |lk| |lv|)) |y|))))
****** level 1 ******
$x:= (match lk lv k (+-> y (map (+-> x (eval x lk lv)) y)))
$m:= $
$f:=
((((|last| #) (|rest| #) (|first| #) (|value| #) ...)))

>> Apparent user error:
compLambda
(+-> y (map (+-> x (eval x lk lv)) y))

---

The only alternate interpretation that I can think of is:

+ kereval(k, lk, lv) == match(lk, lv, k, x+->map(eval(x, lk, lv), x))

But this gives the same error message and doesn't make much sense
anyway (the first argument to map should be a function and the second
argument is normally a list).

The more I look at this expression and the definitions of 'match',
'map', and 'eval', the less I think it makes sense. Can anyone else
see what this definition is supposed to mean and provide a translation
using +-> ?

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 22, 2009, 5:45:46 PM3/22/09
to fricas...@googlegroups.com
Bill Page wrote:
>
> I have a problem converting the following function definition:
>
> - kereval(k, lk, lv) == match(lk, lv, k, map(eval(#1, lk, lv), #1))
>
> found at line 451 in 'src/algebra/expr.spad.pamphlet' to the new +->
> lambda notation. A straightforward conversion to
>
> + kereval(k, lk, lv) == match(lk, lv, k, y+->map(x+->eval(x,
> lk, lv), y))
>
> produces the following error message:
>
> compiling local kereval : (Kernel $,List Kernel $,List $) -> $
> ****** comp fails at level 1 with expression: ******
> error in function kereval
>
> ((|match| |lk| |lv| |k|
> (+-> |y| (|map| (+-> |x| (|eval| |x| |lk| |lv|)) |y|))))
> ****** level 1 ******
> $x:= (match lk lv k (+-> y (map (+-> x (eval x lk lv)) y)))
> $m:= $
> $f:=
> ((((|last| #) (|rest| #) (|first| #) (|value| #) ...)))
>
> >> Apparent user error:
> compLambda
> (+-> y (map (+-> x (eval x lk lv)) y))
>

This message means that when compiling outer '+->' type
of the map is not known. The old code handling '#1' simply
notices that compilation was unsuccessful, backtracks and
tries other variants. ATM new code for '+->' raises
error in such situation. AFAICS there are two possible
candidates for match:

match : (List(A), List(B), A, B)
match : (List(A), List(B), A, A -> B)

Apparently the compiler tries the first to compile arguments
to see what types one gets, and notices that the map part does
not compile. Old version than backtracked and tried the second
possiblility for match , which works. The new code
signals error on first attampt...

ATM the new error message is rather enigmatic, but it actually
indicates a very specific error. To be more precise, the
'compLambda' part is the name of the routine which signals the
error. The 'compLambda' routine is called when the type of the
mapping is unknown or when the argument types and return type are
specified. It signals an error if the argument types or the
return type are not specified (or if the argument list is
malformed).

One remark: in principle we could try to emulate the old
behaviour. However, it is rather hard to say what exactly
happens, and it is hard to give a sensible error message.
The error message from the new code is bad, but at least we
have the possibility to improve it. With the backtracking used
by the old code we know that all alternatives that the
compiler tried failed, but have little chance to say
anything more specific.

> ---
>
> The only alternate interpretation that I can think of is:
>
> + kereval(k, lk, lv) == match(lk, lv, k, x+->map(eval(x, lk, lv), x))
>
> But this gives the same error message and doesn't make much sense
> anyway (The first argument to map should be a function and the 2nd
> argument normally is a list.)
>
> The more I look at this expression and the definitions of 'match',
> 'map', and 'eval', the less I think it makes sense. Can anyone else
> see what this definition is supposed to mean and provide a translation
> using +-> ?
>

This is one of the tricky cases where we have to specify the argument
types (and return type) of the '+->' expression:

--- ax-build2/src/algebra/EXPR.spad 2009-03-22 20:38:32.000000000 +0100
+++ EXPR.spad 2009-03-22 21:57:58.000000000 +0100
@@ -435,7 +435,9 @@
coerce(x:R):% == (zero? x => 0; constantKernel(x)::%)
retract(x:%):R == (zero? x => 0; retNotUnit x)
coerce(x:%):OutputForm == coerce(x)$Rep


- kereval(k, lk, lv) == match(lk, lv, k, map(eval(#1, lk, lv), #1))

+ kereval(k, lk, lv) ==

+ match(lk, lv, k, (x : K) : % +->
+ map(y +-> eval(y, lk, lv), x))

subeval(k, lk, lv) ==
match(lk, lv, k,


--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 22, 2009, 6:02:05 PM3/22/09
to fricas...@googlegroups.com
On Sun, Mar 22, 2009 at 5:45 PM, Waldek Hebisch wrote:
> ...

> This is one of tricky cases where we have to specify argument types
> (and return type) of '+->' expression:
>
> --- ax-build2/src/algebra/EXPR.spad     2009-03-22 20:38:32.000000000 +0100
> +++ EXPR.spad   2009-03-22 21:57:58.000000000 +0100
> @@ -435,7 +435,9 @@
>           coerce(x:R):%  == (zero? x => 0; constantKernel(x)::%)
>           retract(x:%):R == (zero? x => 0; retNotUnit x)
>           coerce(x:%):OutputForm == coerce(x)$Rep
> -          kereval(k, lk, lv) == match(lk, lv, k, map(eval(#1, lk, lv), #1))
> +          kereval(k, lk, lv) ==
> +             match(lk, lv, k, (x : K) : %  +->
> +                                 map(y +-> eval(y, lk, lv), x))
>
>           subeval(k, lk, lv) ==
>             match(lk, lv, k,
>
>
> --

Thanks! After a lot of trial and error and at almost the same time as
your email arrived I finally tried this:

- kereval(k, lk, lv) == match(lk, lv, k, map(eval(#1, lk, lv), #1))

+ kereval(k, lk, lv) == match(lk, lv, k, (x2:K):% +-> map(x1+->eval(x1,lk,lv), x2))

I agree that a more specific error message is very desirable here! :-)

I do not mind if the compiler does not try so hard to infer the proper
types by backtracking. I think it is sufficient to improve the error
message so that the solution to this problem is more obvious.

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 22, 2009, 9:04:04 PM3/22/09
to fricas...@googlegroups.com
I wrote:
> Ralf Hemmecke wrote:
>
> > Yes, I would like to contribute, but only if we open up a branch for
> > that. It seems that it is going to change quite a lot of files.
> >
> > There should be some regression tests added at the same time, maybe even
> > before branching is done. Anyway, it would be good to first commit the
> > tests and then do the conversion.
>
> Yes, it makes sense to create a short-lived branch for this and
> merge all changes in one step.

> Since I plan to compare the generated Lisp code, I do not think
> we should wait for extra tests. If we see some breakage in the
> generated code, then we could add appropriate source tests.
>
>
> I will be out of town for the next two days. Please either
> wait with committing patches or create a branch.

I have now created a new branch. To get it use:

svn checkout https://fricas.svn.sourceforge.net/svnroot/fricas/branches/new-lambda

I have also committed the converted elemntry and fraction. In elemntry
I had to add explicit types to make it compile. As I hoped, the changes
to generated Lisp files are trivial: only the argument names in
anonymous functions changed.

Please commit converted files to the branch. The criterion for
commit is that the files should compile OK and there should be
no other changes which may affect the Lisp output (that is, formatting
changes or spelling corrections in comments are OK).

As I wrote, I will compare the generated Lisp before merging to the
trunk, but if you wish you can also do this. FYI I use a command
like this:

for A in $(cd ax-build1/src/algebra; ls -d *.NRLIB | sed 's,.NRLIB,,'); do
  B=src/algebra/$A.NRLIB/$A.lsp
  diff -u ../axp1/ax-build1/$B ax-build1/$B
done | less

where ax-build1 is the build directory of the new version and the
old version is in ../axp1/ax-build1.


--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 22, 2009, 10:07:39 PM3/22/09
to fricas...@googlegroups.com
I wrote:
> I have also committed the converted elemntry and fraction. In elemntry
> I had to add explicit types to make it compile. As I hoped, the changes
> to generated Lisp files are trivial: only the argument names in
> anonymous functions changed.

I am now converting all files with names beginning with 'a'.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 23, 2009, 12:22:07 AM3/23/09
to fricas...@googlegroups.com
> ...

> Ralf wrote:
>> It might be that the SPAD compiler is smart enough to figure out
>> from the type of "map" what the type of x and the result of this
>> lambda expression must be, but even though it is much longer...
>> I am much for adding explicit type information.
>>
>

On Fri, Mar 20, 2009 at 1:18 PM, I wrote:
> In general I agree, however in this particular case just the function
> name 'coerce' seems to be sufficient for the compiler.
>
> wspage@debian:~/fricas-src/src/algebra$ svn diff
> Index: exprode.spad.pamphlet
> ===================================================================
> --- exprode.spad.pamphlet       (revision 549)
> +++ exprode.spad.pamphlet       (working copy)
> @@ -112,7 +112,7 @@
>       kernel(op, [div2exquo f for f in argument k]$List(F))
>
>     smp2exquo p ==
> -      map(k2exquo,#1::F,p)$PolynomialCategoryLifting(IndexedExponents K,
> +      map(k2exquo,coerce,p)$PolynomialCategoryLifting(IndexedExponents K,
>                                                              K, R, P, F)
>
>     div2exquo f ==
> ---
>
> This compiles fine with no errors or warnings.
>

Here is a more specific example from expr.spad.pamphlet:

)abbrev package PAN2EXPR PolynomialAN2Expression
++ Author: Barry Trager
++ Date Created: 8 Oct 1991
++ Description: This package provides a coerce from polynomials over
++ algebraic numbers to \spadtype{Expression AlgebraicNumber}.
PolynomialAN2Expression():Target == Implementation where
  EXPR ==> Expression(Integer)
  AN ==> AlgebraicNumber
  PAN ==> Polynomial AN
  SY ==> Symbol
  Target ==> with
    coerce: Polynomial AlgebraicNumber -> Expression(Integer)
      ++ coerce(p) converts the polynomial \spad{p} with algebraic number
      ++ coefficients to \spadtype{Expression Integer}.
    coerce: Fraction Polynomial AlgebraicNumber -> Expression(Integer)
      ++ coerce(rf) converts \spad{rf}, a fraction of polynomial \spad{p} with
      ++ algebraic number coefficients to \spadtype{Expression Integer}.
  Implementation ==> add
    coerce(p:PAN):EXPR ==
-      map(#1::EXPR, #1::EXPR, p)$PolynomialCategoryLifting(
+      map(coerce, coerce, p)$PolynomialCategoryLifting(
        IndexedExponents SY, SY, AN, PAN, EXPR)
    coerce(rf:Fraction PAN):EXPR ==
      numer(rf)::EXPR / denom(rf)::EXPR

---

The lisp generated for this is shown below:

+++ /home/wspage/new-lambda-build/src/algebra/PAN2EXPR.NRLIB/PAN2EXPR.lsp 2009-03-22 23:29:39.000000000 -0400
@@ -2,14 +2,7 @@
(/VERSIONCHECK 2)

(DEFUN |PAN2EXPR;coerce;PE;1| (|p| $)
- (SPADCALL (CONS #'|PAN2EXPR;coerce;PE;1!0| $)
- (CONS #'|PAN2EXPR;coerce;PE;1!1| $) |p| (QREFELT $ 15)))
-
-(DEFUN |PAN2EXPR;coerce;PE;1!1| (|#1| $)
- (SPADCALL |#1| (QREFELT $ 10)))
-
-(DEFUN |PAN2EXPR;coerce;PE;1!0| (|#1| $)
- (SPADCALL |#1| (QREFELT $ 8)))
+ (SPADCALL (ELT $ 8) (ELT $ 10) |p| (QREFELT $ 15)))

(DEFUN |PAN2EXPR;coerce;FE;2| (|rf| $)
(SPADCALL (SPADCALL (SPADCALL |rf| (QREFELT $ 18)) (QREFELT $ 16))

----

Using 'coerce' directly instead of the trivial lambda expression seems
to generate significantly better code. It seems unlikely to me that
any Lisp compiler would be able to optimize this difference away.
Should we also consider making this change?

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 23, 2009, 8:11:26 AM3/23/09
to fricas...@googlegroups.com

Version using coerce here gives both smaller and faster code.
However, when making optimizations one question is whether the
change matters.  Without hard data from profiling, a reasonable
guess is that this change does not matter.  So (unless we get
extra information) our criterion here should be code readability.
I must say that I find the first version (using lambda expressions)
more readable than the second one.  First, the first version
contains the target type.  Given that coerce is highly overloaded,
it makes a difference.  Second, we typically use :: for coercions,
so it is not obvious why coerce is used this time.

So I would be (weakly) against such change.

Concerning compilers: yes, the Lisp compiler has almost no chance
to optimize the resulting code.  Such optimization is a job
for the Spad compiler.  Our current compiler is organized in a
way which makes implementing this optimization hard.  But
techniques for doing such optimization are standard, and
sooner or later we will implement them.


--
Waldek Hebisch
heb...@math.uni.wroc.pl

Ralf Hemmecke

unread,
Mar 23, 2009, 8:16:27 AM3/23/09
to fricas...@googlegroups.com
Hello Bill,

That is a good catch. BUT...

I think in new-lambda we should have one common denominator and that is:

The generated LISP code should not change.

Keep the collection of such 'coerce' or similar things in your local
repository and apply them later if we all agree.

Currently, I think it is much wiser to just convert #1 and friends to
lambda expressions generating the same LISP code, so that we can be
sure that there is no regression from that huge change.

I also read Waldek's remark. I somehow must agree that 'coerce' is
almost always bad and hard to understand, but since it generates better
code, I'd be much more for coerce than Waldek. Maybe introduction of a
local macro

liftToPer ==> coerce

or a function

lift2Per(k: K): % == k::%

and then using lift2Per in 'map' instead of coerce would be much more
readable and this one function indirection can certainly be compiled
away. But all of that should be done after everything is in lambda-notation.

Additionally, I must say that in some places I don't even find the
lambda notation very readable.

Ralf

Bill Page

unread,
Mar 23, 2009, 5:37:05 PM3/23/09
to fricas...@googlegroups.com
On Mon, Mar 23, 2009 at 8:16 AM, Ralf Hemmecke wrote:
>
> That is a good catch. BUT...
>
> I think in new-lambda we should have one common denominator and that is:
>
>   The generated LISP code should not change.
>
> Keep the collection of such 'coerce' or similar things in your local
> repository and apply them later if we all agree.
>

Ok, I agree that we should not mix two different kinds of changes in
the new-lambda branch - it is just that when starting the changes for
the new-lambda branch the first thing I saw was something like
'#1::EXPR', and it occurred to me that there was really no need for a
nested function at all in this case.

> Currently, I think it is much wiser just eliminating #1 and friends to
> lambda expressions generating the same LISP code, so that we can
> be sure that there is no regression by that huge change.
>

Sure.

> I also read Waldek's remark. I somehow must agree that 'coerce' is
> almost always bad and hard to understand, but since it generates better
> code, I'd be much more for coerce than Waldek.

I do not see how using 'coerce' can be bad - after all, when we define
the coercion somewhere else in the code, 'coerce' is the name that we
are required to use. To me it is more direct than requiring the user
to parse the expression 'x+->x::EXPR' and realize that internally this
just generates a call to 'coerce'.

Perhaps the difficulty is that although we are writing in SPAD and
functions are available as first-class objects, we are really not so
used to passing functions as values ... so we are looking for some
kind of syntactic reminder that this argument really is a function.
Maybe this could be helped by adhering to certain naming conventions
that give hints as to the type. 'coerce' is a special case but we do
have some naming conventions for functions - for example names that
end in ! and ?.

> Maybe introduction of a local macro
>
> liftToPer ==> coerce
>
> or a function
>
> lift2Per(k: K): % == k::%
>
> and then using lift2Per in 'map' instead of coerce would be much more
> readable and this one function indirection can certainly be compiled
> away. But all of that should be done after everything is in lambda-notation.
>

No, I do not like introducing macros just for creating synonyms. We
can just as easily provide this information in a short comment. And as
far as I can see, introducing a named function will have the same
effect as the anonymous version. It won't be "compiled away".

> Additionally, I must say that in some places I even don't find the
> lambda notation very readable.
>

Could you explain? Do you mean that you would rather not use
functions as values if this is not absolutely necessary to the code?

Regards,
Bill Page.

Bill Page

unread,
Mar 23, 2009, 6:06:38 PM3/23/09
to fricas...@googlegroups.com
> Bill Page wrote:
>> Using 'coerce' directly instead of the trivial lambda expression seems
>> to generate significantly better code. It seems unlikely to me that
>> any Lisp compiler would be able to optimize this difference away.
>> Should we also consider making this change?
>

On Mon, Mar 23, 2009 at 8:11 AM, Waldek Hebisch wrote:
> Version using coerce here gives both smaller and faster code.
> However, when making optimizations one question is if the
> change matters.  Without hard data from profiling reasonable
> guess is that this change does not matter.

You are probably right. Unless these coercions are done frequently,
the difference in the overall performance of the system is likely to
be very small.

> So (unless we get extra information) our criterion here should be
> code readability. I must say that I find the first version (using lambda
> expressions) more readable than the second one.

I would like to understand your reasons for saying that

x+->x::EXPR

is "more readable" than just 'coerce'. To me it is just extra syntax.
The only essential part of the expressions is :: and this is normally
read as "coerce" (or perhaps "convert" in the interpreter). After all,
when actually defining the coercion we are forced to use the name
'coerce'. Why should we use something else when calling it?

> First, the first version contains target type.  Given that coerce is
> highly overloaded it makes a difference.

Can you think of a case in SPAD where it would make sense to pass a
function-value of a different type than the type of the argument? If
not, then this extra target type information is simply redundant.

In OpenAxiom Gaby has introduced the syntax

f@(A->B)

to disambiguate function names. This makes it possible to use fully
overloaded names in domains and packages. E.g. if we export both

f:A->C

and

f:B->C

from the same package P, without the context of a function
application, to what does f$P refer?

If we really want such redundant information, then it makes sense to
me that it be more complete, e.g.

map(coerce@(Symbol->EXPR), coerce@(AN->EXPR), p)$PolynomialCategoryLifting

> Second, we typically use :: for coercions, so it is not obvious why
> coerce is used this time.

:-) I just wrote essentially the opposite statement above. To me it is
not obvious why we should prefer :: when calling a 'coerce' function
in SPAD. Perhaps I would feel differently if the symbol :: was also
used when defining a coercion.
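
For concreteness, the two spellings compare as follows (a minimal
sketch; x and EXPR are placeholders, not taken from any particular
domain):

```spad
-- Both forms are intended to trigger the same coercion:
e1 : EXPR := x::EXPR          -- '::' sugar; the compiler selects the coercion
e2 : EXPR := coerce(x)@EXPR   -- explicit call; '@' picks the target type
```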

>
> So I would be (weakly) against such change.
>

Not a problem. I am just raising this as a coding issue.

> Concerning compilers: yes, Lisp compiler has almost no chance
> to optimize the resulting code.  Such optimization is job
> for Spad compiler.  Our current compiler is organized in a
> way which makes implementing this optimization hard.  But
> techniques for doing such optimization are standard, and
> sooner or later we will implement them.
>

After the compiler internally converts :: to 'coerce', perhaps it
would not be so hard to recognize at least the very simple case of
transforming

x+->f(x)

to just

f

??
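
That is, the compiler would perform an eta-reduction (a hypothetical
sketch; 'sin' and 'l' stand for any function and argument of matching
types):

```spad
map(x +-> sin(x), l)   -- before: a trivial closure wrapping 'sin'
map(sin, l)            -- after: 'sin' passed directly, no closure built
```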

Regards,
Bill Page.

Bill Page

unread,
Mar 23, 2009, 6:19:46 PM3/23/09
to fricas...@googlegroups.com
Waldek, Ralf,

What do you suppose we should write in this case in 'intalg.spad.pamphlet'?

UP2SUP p ==
- map(#1, p)$UnivariatePolynomialCategoryFunctions2(F, UP, F, SUP)
+ map(coerce, p)$UnivariatePolynomialCategoryFunctions2(F, UP, F, SUP)

SUP2UP p ==
- map(#1, p)$UnivariatePolynomialCategoryFunctions2(F, SUP, F, UP)
+ map(coerce, p)$UnivariatePolynomialCategoryFunctions2(F, SUP, F, UP)

My original proposal is shown as +. I suppose that we could write:

map(x+->x, p)$UnivariatePolynomialCategoryFunctions2(F, UP, F, SUP)

Is there any point to this construction?

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 23, 2009, 6:40:15 PM3/23/09
to fricas...@googlegroups.com

AFAICS the original version passes the identity function to map,
so I do not see why you want to pass coerce.  Even if
coerce works OK, it would be obscuring the code -- the version
with identity makes clear that we leave coefficients unchanged,
while coerce typically transforms them.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 23, 2009, 7:16:33 PM3/23/09
to fricas...@googlegroups.com

>
> > Bill Page wrote:
> >> Using 'coerce' directly instead of the trivial lambda expression seems
> >> to generate significantly better code. It seems unlikely to me that
> >> any Lisp compiler would be able to optimize this difference away.
> >> Should we also consider making this change?
> >
>
> On Mon, Mar 23, 2009 at 8:11 AM, Waldek Hebisch wrote:
> > So (unless we get extra information) our criterion here should be
> > code readability. I must say that I find the first version (using lambda
> > expressions) more readable than the second one.
>
> I would like to understand your reasons for saying that
>
> x+->x::EXPR
>
> is "more readable" than just 'coerce'. To me it is just extra syntax.
> The only essential part of the expressions is :: and this is normally
> read as "coerce" (or perhaps "convert" in the interpreter). After all,
> when actually defining the coercion we are forced to use the name
> 'coerce'. Why should we use something else when calling it?
>

In another message Bill wrote:

> I do not see how using 'coerce' can be bad - after all when we define
> the coercion somewhere else in the code, 'coerce' is the name that we
> are required to use. To me it is more direct than requiring the user
> parse the expression 'x+->x::EXPR' and to realize that internally this
> just generates a call to 'coerce'.

I would say that '::' is how we use coercions.  One reason to
prefer it over coerce is that '::' is much shorter.  Another
thing is that I consider '::' a primitive: what the compiler does
to implement '::' is just an implementation detail.  Currently
of course the compiler mainly calls 'coerce', but in principle it
can do something smarter.

> > First, the first version contains target type.  Given that coerce is
> > highly overloaded it makes a difference.
>
> Can you think of a case in SPAD where it would make sense to pass a
> function-value of a different type than the type of the argument? If
> not, then this extra target type information is simply redundant.
>

> If we really want such redundant information, then it makes sense to me
> that it be more complete, e.g.
>
> map(coerce@(Symbol->EXPR), coerce@(AN->EXPR), p)$PolynomialCategoryLifting
>

Yes, in tricky cases version above would be better.

> > Concerning compilers: yes, Lisp compiler has almost no chance
> > to optimize the resulting code.  Such optimization is job
> > for Spad compiler.  Our current compiler is organized in a
> > way which makes implementing this optimization hard.  But
> > techniques for doing such optimization are standard, and
> > sooner or later we will implement them.
> >
>
> After the compiler internally converts :: to 'coerce', perhaps it
> would not be so hard to recognize at least the very simple case of
> transforming
>
> x+->f(x)
>
> to just
>
> f
>
> ??
>

We could probably do this.  But it is not straightforward,
because we may get a Lisp expression containing macros or special
forms.  Simply, I am not sure if the effect of such optimization
is worth the effort.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 23, 2009, 7:29:54 PM3/23/09
to fricas...@googlegroups.com
Ok, thanks. Now I see the reason for the identity function. We are
actually literally converting SUP to UP and vice-versa. I wonder if
this would not be better accomplished by actual simple coercions?

In fact, right now all IntegralDomains export

coerce: % -> %

Perhaps this is an error. In 'catdef.spad.pamphlet' I see:

IntegralDomain(): Category ==
-- Join(CommutativeRing, Algebra(%:CommutativeRing), EntireRing) with
Join(CommutativeRing, Algebra(%), EntireRing) with
...

I do not understand why we have 'Algebra(%)' here. That is where
coerce:%->% comes from. What do you suppose was the intention of the
version that is commented out?

Regards,
Bill Page.

Bill Page

unread,
Mar 23, 2009, 7:39:44 PM3/23/09
to fricas...@googlegroups.com
On Mon, Mar 23, 2009 at 7:16 PM, Waldek Hebisch wrote:
> I would say that '::' is how do we use coercions.  One reason to
> prefer it over coerce is that '::' is much shorter.  Another
> thing is that I consider '::' as a primitive: what compiler does

> to implement '::' is just an implementation detail.  Currently
> of course compiler mainly calls 'coerce' but in principle it
> can do something smarter.
>

This is exactly what worries me. If "what compiler does to implement
'::' is just an implementation detail", then I worry what exactly is
the meaning of

x+->x::EXPR

In that sense simply referring to 'coerce' is more specific and as a
library programmer I am happy to know that it is necessarily calling
some function named 'coerce' of the required type, rather than doing
something "smarter" - which might not be what I really want.

Regards,
Bill Page.

Ralf Hemmecke

unread,
Mar 23, 2009, 8:03:56 PM3/23/09
to fricas...@googlegroups.com
> IntegralDomain(): Category ==
> -- Join(CommutativeRing, Algebra(%:CommutativeRing), EntireRing) with
> Join(CommutativeRing, Algebra(%), EntireRing) with
> ...
>
> I do not understand why we have 'Algebra(%)' here. That is where
> coerce:%->% comes from. What do you suppose was the intention of the
> version that is commented out?

Algebra(%) sounds like what Martin wanted in a recent thread.

As for the "%: CommutativeRing" I can only guess, but it looks like
giving the compiler some help. Maybe at some point it didn't find out
by itself that % fulfils the requirements for being an argument of
Algebra.

This "--" line is not proper Aldor, because ":" has to do with
declaration and I don't see what kind of declaration that should be.
Would this make any sense in SPAD?

Ralf

Ralf Hemmecke

unread,
Mar 23, 2009, 8:04:16 PM3/23/09
to fricas...@googlegroups.com
> map(coerce@(Symbol->EXPR), coerce@(AN->EXPR), p)$PolynomialCategoryLifting

I like that style very much.

Ralf

Ralf Hemmecke

unread,
Mar 23, 2009, 8:25:04 PM3/23/09
to fricas...@googlegroups.com
>> I also read Waldek's remark. I somehow must agree that 'coerce' is
>> almost always bad and hard to understand, but since it generates better
>> code, I'd be much more for coerce than Waldek.
>
> I do not see how using 'coerce' can be bad - after all when we define
> the coercion somewhere else in the code, 'coerce' is the name that we
> are required to use. To me it is more direct than requiring the user
> parse the expression 'x+->x::EXPR' and to realize that internally this
> just generates a call to 'coerce'.

If I (as a human) see "coerce" in the code, then I have a hard time
figuring out the source and target types. If I had an editor that
would show me the types when I hover over "coerce", it would be easier
to understand the sources. Just imagine you have the source code in
printed form. How much time does it take you to figure out what some
piece of source code really means if it contains some "coerce", but
the actual types are hidden somewhere else in the source? Writing code
is sometimes easier than understanding what other people had in mind.

Your "coerce@(A->B)" style could cure that very well. I like it.

> Perhaps the difficulty is that although we are writing in SPAD and
> functions are available as first-order objects, really we are not so
> used to passing functions as values ... so we are looking for some
> kind of syntactical reminder that this argument really is a function.

I certainly have no problem passing functions as arguments. I have
a problem understanding an identifier (here "coerce") that is soooo
heavily overloaded that its meaning can only be understood if the
source and target types can easily be figured out.

>> Maybe introduction of a local macro
>>
>> liftToPer ==> coerce
>>
>> or a function
>>
>> lift2Per(k: K): % == k::%
>>
>> and then using lift2Per in 'map' instead of coerce would be much more
>> readable and this one function indirection can certainly be compiled
>> away. But all of that should be done after everything is in lambda-notation.

> No, I do not like introducing macros just for creating synonyms. We
> can just as easily provide this information in a short comment.

Of course! How could I forget that one can add a comment? Stupid me.

>> Additionally, I must say that in some places I even don't find the
>> lambda notation very readable.

> Could you explain? Do you mean that you would rather not used
> functions as values if this is not absolutely necessary to the code?

It's more to do with the meaning of the parts and the length of the
construction. I cannot say that I find

kereval(k, lk, lv) ==
  match(lk, lv, k, (x : K) : % +-> map(y +-> eval(y, lk, lv), x))

any more readable than the initial

kereval(k, lk, lv) == match(lk, lv, k, map(eval(#1, lk, lv), #1))

Without some comments or without investing a lot of time in figuring out
the involved functions, I have no chance of knowing what is going on.

Ralf

Ralf Hemmecke

unread,
Mar 23, 2009, 8:33:19 PM3/23/09
to fricas...@googlegroups.com

Well "a::X" can mean everything, but if (in library code) it means
anything else than "coerce(a)@X" then I am troubled. And I hope that
SPAD is not introducing some weird "smart" things for :: that I cannot
control. I'd consider :: as syntactic sugar, nothing more. For the
interpreter I'd be more relaxed, but I am currently mainly concerned
with library code.

Ralf

Waldek Hebisch

unread,
Mar 23, 2009, 9:10:39 PM3/23/09
to fricas...@googlegroups.com

The '--' is invalid in FriCAS.  In old Spad ':' here meant
almost the same as 'pretend'.  Given that, I can guess that in
the past the compiler had trouble with the current line and the
commented-out one worked around the problem.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Martin Rubey

unread,
Mar 24, 2009, 1:30:50 AM3/24/09
to fricas...@googlegroups.com
Bill Page <bill...@newsynthesis.org> writes:

> map(coerce@(Symbol->EXPR), coerce@(AN->EXPR), p)$PolynomialCategoryLifting

although not complete, wouldn't

map(coerce@EXPR, coerce@EXPR, p)$PolynomialCategoryLifting

work already? (I believe it does, but I didn't double check.)

Martin

Martin Rubey

unread,
Mar 24, 2009, 2:17:19 AM3/24/09
to fricas...@googlegroups.com
Bill Page <bill...@newsynthesis.org> writes:

> IntegralDomain(): Category ==
> -- Join(CommutativeRing, Algebra(%:CommutativeRing), EntireRing) with
> Join(CommutativeRing, Algebra(%), EntireRing) with
> ...
>
> I do not understand why we have 'Algebra(%)' here. That is where
> coerce:%->% comes from.

Well, every INTDOM is an Algebra over itself, that's obvious. Of
course, one doesn't need this to make the exports of Algebra
available, but rather to get

R has Algebra R

to return true. For example, in library code, we frequently ask

R has Algebra Fraction Integer

If the line above wasn't there, this test wouldn't pass for Fraction
Integer itself.

What I wanted to add to INTDOM is

Algebra(Factored(%))

and, initially also Algebra Integer to the exports of Ring.

This seems quite possible.  In the end, it allows us to build things
like

LocalAlgebra(Integer, Factored Integer)

i.e., fractions with factored denominators, which can be useful in
practice. The idea is, of course, that if we export C(%), we should
also export C(D(%)) whenever % and D(%) are "isomorphic", i.e., D
"only" changes the representation of %. I guess that PartialFraction
could be another candidate to add to this list, but I didn't check.

Originally, I also wanted to have Ring export Algebra Integer.
However this is not possible, since it introduces a circular
dependency on the category level: Algebra exports Ring. However, as
(if I recall correctly) Ralf pointed out, we could/should simply
replace every occurrence where a domain or category exports "Ring" by
"Algebra Integer", and this would cure the problem without tricks.
Of course, it would make Ring almost synonymous with Algebra Integer,
but on the other hand, it is. Quoting Ralf:

> In fact, z*r is just syntactic sugar for r+r+...+r [...]
> Of course, one can turn every ring into a Z-module/algebra, by making
> that syntactic sugar explicit. But that is a *construction* and as such,
> I think, I find it natural to make that construction explicit in
> code.
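
The quoted construction can be sketched like this (zmult is a
hypothetical name, not an existing library function):

```spad
-- Making the Z-action "syntactic sugar for r+r+...+r" explicit
-- for an arbitrary Ring R:
zmult(n : Integer, r : R) : R ==
    n = 0 => 0
    n > 0 => r + zmult(n - 1, r)
    - zmult(- n, r)
```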

I believe however, that we really want to turn every ring into an
Algebra INT... There are only very few places where we export Ring.
I found (using grep " Ring" *):

catdef.spad.pamphlet:244:CharacteristicNonZero():Category == Ring with
catdef.spad.pamphlet:264:CharacteristicZero():Category == Ring
catdef.spad.pamphlet:308:DifferentialRing(): Category == Ring with
catdef.spad.pamphlet:346:DifferentialExtension(R:Ring): Category == Ring with
catdef.spad.pamphlet:1045:LinearlyExplicitRingOver(R:Ring): Category == Ring with
catdef.spad.pamphlet:1424:PartialDifferentialRing(S:SetCategory): Category == Ring with
equation2.spad.pamphlet:101: Ring
fspace.spad.pamphlet:454: Ring
modring.spad.pamphlet:37: C == Ring with
taylor.spad.pamphlet:39: Exports ==> Ring with
wtpol.spad.pamphlet:35: Ring with
wtpol.spad.pamphlet:138: Ring with

However, I guess before modifying these, it would be necessary to
write testcases, like

testTrue("Fraction Integer has Algebra Factored Integer")

I argued in the past that such testcases really belong where the
category is defined, but Waldek prefers them in a separate file in
the input directory for other good reasons.

Thinking about this, maybe this could give a Bachelor thesis,
couldn't it?

Martin

Bill Page

unread,
Mar 24, 2009, 2:19:51 AM3/24/09
to fricas...@googlegroups.com
On Tue, Mar 24, 2009 at 1:30 AM, Martin Rubey wrote:

>
> Bill Page writes:
>
>>    map(coerce@(Symbol->EXPR), coerce@(AN->EXPR), p)$PolynomialCategoryLifting
>
> although not complete, wouldn't
>
>    map(coerce@EXPR, coerce@EXPR, p)$PolynomialCategoryLifting
>
> work already?  (I believe it does, but I didn't double check.)
>

No, coerce@EXPR expects a value of type EXPR, not just a function that
returns a value of this type; coerce@(Symbol->EXPR) refers to the
function itself.

Regards,
Bill Page

Bill Page

unread,
Mar 24, 2009, 3:34:07 AM3/24/09
to fricas...@googlegroups.com
On Tue, Mar 24, 2009 at 2:17 AM, Martin Rubey wrote:

>
> Bill Page writes:
>
>> IntegralDomain(): Category ==
>> --  Join(CommutativeRing, Algebra(%:CommutativeRing), EntireRing) with
>>   Join(CommutativeRing, Algebra(%), EntireRing) with
>> ...
>>
>> I do not understand why we have 'Algebra(%)' here. That is where
>> coerce:%->% comes from.
>
> Well, every INTDOM is an Algebra over itself, that's obvious.

After looking at this in more detail I guess I would have expected

CommutativeRing(): Category == Join(Algebra(%), ...

Since in fact any commutative ring is an Algebra over itself, not just INTDOM.

> Of course, one doesn't need this for to make exports of Algebra
> available, but rather to get
>
>    R has Algebra R
>
> to return true.

One of the exports of Algebra(R) is coerce:R->%; as a result we have

coerce: %->%

which looks strange at first, but since by most definitions a coercion
is a 1-1 map, any such coercion must in fact be the identity (or at
least an isomorphism). As far as I can see, this is actually built
into the SPAD compiler.

> For example, in library code, we frequently ask
>
>    R has Algebra Fraction Integer
>
> If the line above wasn't there, this test wouldn't pass for Fraction
> Integer itself.
>
> What I wanted to add to INTDOM is
>
>    Algebra(Factored(%))
>

So in this case we will have

coerce: Factored(%) -> %

where is this coercion implemented? In every domain that has INTDOM,
or is a generic implementation possible?

> and, initially also Algebra Integer to the exports of Ring.
>

I am a little concerned that Ring is isomorphic to Z-algebra in a
slightly different way than commutative rings are algebras over
themselves. There is a different definition of multiplication. No?

> This seems quite possible.  In the end, it allows us to build things


> like
>
>    LocalAlgebra(Integer, Factored Integer)
>
> i.e., fractions with factored denominators, which can be useful in
> practice.  The idea is, of course, that if we export C(%), we should
> also export C(D(%)) whenever % and D(%) are "isomorphic", i.e., D
> "only" changes the representation of %.  I guess that PartialFraction
> could be another candidate to add to this list, but I didn't check.
>

Yes, I agree that this is a very important idea.

> Originally, I also wanted to have Ring export Algebra Integer.
> However this is not possible, since it introduces a circular
> dependency on the category level: Algebra exports Ring.  However,
> as (if I recall correctly) Ralf pointed out, we could/should simply
> replace every occurrence where a domain or category exports
> "Ring" by "Algebra Integer", and this would cure the problem
> without tricks.
>
> Of course, it would make Ring almost synonymous with Algebra
> Integer, but on the other hand, it is.  Quoting Ralf:
>
>> In fact, z*r is just syntactic sugar for r+r+...+r [...]
>> Of course, one can turn every ring into a Z-module/algebra, by
>> making that syntactic sugar explicit. But that is a *construction*
>> and as such, I think, I find it natural to make that construction
>> explicit in code.
>
> I believe however, that we really want to turn every ring into an
> Algebra INT...  There are only very few places where we export
> Ring. I found (using grep " Ring" *):
>

I am afraid that I agree with Ralf here. The abstract concept of Ring
without first constructing a Z-algebra is important in many parts of
mathematics.

>
> However, I guess before modifying these, it would be necessary to
> write testcases, like
>
>    testTrue("Fraction Integer has Algebra Factored Integer")
>
> I argued in the past that such testcases really belong where the
> category is defined, but Waldek prefers them in a separate file in
> the input directory for other good reasons.
>

I am not convinced that test cases add much to fundamental design
decisions. In this case I think one should practice a "correct by
design" strategy. But testcases make sense to me when this strategy
fails, e.g. in numerical computation and very complex symbolic
manipulations.

> Thinking about this, maybe this could give a Bachelor thesis,
> wouldn't it?
>

Perhaps. Do you have a student in mind? It would be great if someone
would actually begin serious work on this sort of thing.

Regards,
Bill Page.

Ralf Hemmecke

unread,
Mar 24, 2009, 4:24:40 AM3/24/09
to fricas...@googlegroups.com
> After looking at this in more detail I guess I would have expected
>
> CommutativeRing(): Category == Join(Algebra(%), ...
>
> Since in fact any commutative ring is an Algebra over itself, not just INTDOM.
>
>> Of course, one doesn't need this for to make exports of Algebra
>> available, but rather to get
>>
>> R has Algebra R
>>
>> to return true.
>
> One of the exports of Algebra(R) is coerce:R->% as a result we have
>
> coerce: %->%
>
> which looks strange at first ...

I just have a remark... As we probably all know, Aldor's libalgebra
has some bugs in it that seem to be really hard to track down. I
faintly remember one such bug that I could almost attribute to the
coexistence of
*: (Integer, %) -> % and *: (R, %) -> %

where R is instantiated with Integer. The problem here is a hidden
overloading of implementations. Suppose you implement something like

MyWeirdPoly(R: Ring): Ring with {
    *: (Integer, %) -> %;
    *: (R, %) -> %;
    ...
} == add {
    (i: Integer) * (x: %): % == ((i+i)::%)*x;
    (r: R) * (x: %): % == (r::%) * x
    ...
}

That looks artificial, but one can certainly come up with a
real-world example where these multiplications are not as compatible
as one would like in this case.

Without a good convention, one cannot predict what the result of 3*p for
p in MyWeirdPoly(Integer) actually is.

If I am not completely wrong, such things are left unspecified by the
Aldor User Guide. But they are used in libalgebra, and exactly this
happens if one instantiates polynomials over Integer. There the result
of 3*p is clear from the source code (both versions return the same
thing), but there is still the problem that the compiler has to select
one of the implementations and the programmer has no idea which one it
is.

Ralf

Waldek Hebisch

unread,
Mar 25, 2009, 10:18:34 AM3/25/09
to fricas...@googlegroups.com
I have now converted all files with names beginning with a.  There
was one case which caused renumbering of slots in the domain vector.
Namely, the domain vector of AlgebraicFunction, when dvalg was using
#1, contained an unused slot for the derivative from
DifferentialExtension.  The version using '+->' without types did not
compile; the typed version does not contain the unused derivative
slot.

I will now be doing vector.spad.pamphlet, weier.spad.pamphlet,
xpoly.spad.pamphlet, zerodim.spad.pamphlet.  That is [v-z]*.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 25, 2009, 11:04:46 AM3/25/09
to fricas...@googlegroups.com
Ralf Hemmecke wrote:
>
> Well "a::X" can mean everything, but if (in library code) it means
> anything else than "coerce(a)@X" then I am troubled.

Well:

- The Spad compiler may recognize that the expression already has
  the requested type and do nothing.  Similarly for coercion
  between % and Rep.
- For subdomains the compiler simply inserts checks that the element
  belongs to the subdomain.  Similarly for domains: when you
  coerce from category A to B and A extends B then the compiler
  generates no code.
- For Unions the compiler generates code to convert between the union
  and its branches.

There are more cases and we probably will extend the list in the
future.

> And I hope that
> SPAD is not introducing some weird "smart" things for :: that I cannot
> control. I'd consider :: as syntactic sugar, nothing more. For the
> interpreter I'd be more relaxed, but I am currently mainly concerned
> with library code.
>

I think you did not get the point: '::' is part of the Spad language.
It is integrated with the typechecker and the code generator. While
you can do a lot at the library level, some things have to be built
into the compiler.

In other words, '::' is a higher-level construct which should be used
when you want to perform coercions; 'coerce' is a low-level
mechanism to define them.

Also, normally you should not care what exactly '::' is doing
-- with a sane 'coerce', 'x::T' and 'coerce(x)@T' will give the
same result (provided 'coerce' is defined...), but the
compiler has the right to implement them in different ways.
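A rough analogy of this split, sketched in Python (illustrative only; the names `Frac` and `to` are invented): a high-level conversion operator that may short-circuit, built on top of a low-level user-defined conversion, much as '::' is built on 'coerce'.

```python
class Frac:
    """Toy domain with a low-level conversion from int, analogous to 'coerce'."""
    def __init__(self, n, d=1):
        self.n, self.d = n, d

def to(value, target):
    """High-level conversion, analogous to '::'.  The 'compiler' may notice
    the value already has the requested type and do nothing at all;
    otherwise it falls back to the domain's own conversion."""
    if isinstance(value, target):   # already the requested type: no-op
        return value
    return target(value)            # low-level, user-defined conversion

f = to(3, Frac)
assert (f.n, f.d) == (3, 1)
g = to(f, Frac)
assert g is f   # no conversion code ran; same object returned
```

The caller only ever writes `to(...)`; whether any conversion code actually runs is the converter's business, just as Waldek describes for '::'.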

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

Mar 25, 2009, 11:27:53 AM
to fricas...@googlegroups.com
Waldek,

Just a quick update. I have just completed changes to the following 14 files:

wspage@debian:~/fricas-src/src/algebra$ svn status ~/new-lambda | sort
M /home/wspage/new-lambda/src/algebra/exprode.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/expr.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/intaf.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/intalg.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/intaux.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/intef.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/integer.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/integrat.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/intpm.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/intrf.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/kl.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/multpoly.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/polycat.spad.pamphlet
M /home/wspage/new-lambda/src/algebra/poly.spad.pamphlet

derived from the original list of expr* int* kl* poly* multpoly*. I am
currently doing a full re-compile of the system and a quick look at
the generated lisp before I commit the changes to the repository.

Regards,
Bill Page.

Bill Page

Mar 25, 2009, 12:50:54 PM
to fricas...@googlegroups.com
Waldek,

I am having a problem compiling the new lambda form in the context of SubDomain:

wspage@debian:~/new-lambda/src/algebra$ svn diff integer.spad.pamphlet
Index: integer.spad.pamphlet
===================================================================
--- integer.spad.pamphlet (revision 555)
+++ integer.spad.pamphlet (working copy)
@@ -217,7 +217,7 @@
leadingCoefficient pp = leadingCoefficient p =>
factor(p)$GaloisGroupFactorizer(ZP)
mergeFactors(factor(pp)$GaloisGroupFactorizer(ZP),
- map(#1::ZP,
+ map(x1+->x1::ZP,
factor((leadingCoefficient p exquo
leadingCoefficient pp)
::%))$FactoredFunctions2(%,ZP)
@@ -266,8 +266,7 @@
++ shift(a,i) shift \spad{a} by i bits.
random : % -> %
++ random(n) returns a random integer from 0 to \spad{n-1}.
-
- == SubDomain(Integer,#1 >= 0) add
+ == SubDomain(Integer, (x1:Integer):Boolean+->(x1 >= 0) ) add
x,y:%
sup(x,y) == MAX(x,y)$Lisp
shift(x:%, n:Integer):% == ASH(x,n)$Lisp
@@ -293,7 +292,7 @@
gcd: (%,%) -> %
++ gcd(a,b) computes the greatest common divisor of two
++ positive integers \spad{a} and b.
- == SubDomain(NonNegativeInteger,#1 > 0) add
+ == SubDomain(NonNegativeInteger,x1+->x1 > 0) add
x:%
y:%

wspage@debian:~/new-lambda/src/algebra$

Compiling produces the following error message:

NNI abbreviates domain NonNegativeInteger
------------------------------------------------------------------------
initializing NRLIB NNI for NonNegativeInteger
compiling into NRLIB NNI
****** comp fails at level 1 with expression: ******
((+-> (|:| (|:| |x1| (|Integer|)) (|Boolean|))
(IF (< |x1| 0) |false| |true|)))
****** level 1 ******
$x:= (+-> (: (: x1 (Integer)) (Boolean)) (IF (< x1 (Zero)) false true))
$m:= (Boolean)
$f:=
((((|#1| #) (* # # #) (+ # # #) (- # # #) ...)))

>> Apparent user error:
Cannot coerce (CONS (CLOSEDFN (LAMBDA (x1 $) (COND ((SPADCALL x1
(spadConstant $ 7) (QREFELT $ 9)) (spadConstant $ 10)) ((QUOTE T)
(spadConstant $ 11))))) $)
of mode (Mapping (Boolean) (Integer))
to mode (Boolean)

---

As far as I can see the construction:

(x1:Integer):Boolean+->(x1 >= 0)

is correct, but the compiler message suggests that it is just looking
for a Boolean.

Is this a bug in the compiler?

Regards,
Bill Page.

Waldek Hebisch

Mar 25, 2009, 2:10:11 PM
to fricas...@googlegroups.com
Bill Page wrote:
>
> Waldek,
>
> I am having a problem compiling the new lambda form in the context of SubDomain:
> @@ -266,8 +266,7 @@
> ++ shift(a,i) shift \spad{a} by i bits.
> random : % -> %
> ++ random(n) returns a random integer from 0 to \spad{n-1}.
> -
> - == SubDomain(Integer,#1 >= 0) add
> + == SubDomain(Integer, (x1:Integer):Boolean+->(x1 >= 0) ) add
> x,y:%
> sup(x,y) == MAX(x,y)$Lisp
> shift(x:%, n:Integer):% == ASH(x,n)$Lisp
...

Rather an unimplemented feature: the '#1 >= 0' in the subdomain definition
is not treated as a function, but as an expression which is expected to
contain '#1' as a parameter.

Given that SubDomain is mostly a fake, I would keep the #1 form
in the short term.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

Mar 25, 2009, 2:13:18 PM
to fricas...@googlegroups.com
On Wed, Mar 25, 2009 at 2:10 PM, Waldek Hebisch wrote:
>
> Bill Page wrote:
>>
>> Waldek,
>>
>> I am having a problem compiling the new lambda form in the context of SubDomain:
>> -  == SubDomain(Integer,#1 >= 0) add
>> +  == SubDomain(Integer, (x1:Integer):Boolean+->(x1 >= 0) ) add
>> ...

>> Compiling produces the following error message:
>>
>> ---
>>
>> As far as I can see the construction:
>>
>>   (x1:Integer):Boolean+->(x1 >= 0)
>>
>> is correct, but the compiler message suggests that it is just looking
>> for a Boolean.
>>
>> Is this a bug in the compiler?
>>
>
> Rather unimplemented feature: the '#1 >= 0' in subdomain definition
> is not treated as function, but as expression which is expected to
> contain '#1' as a parameter.
>

I am having a hard time trying to understand what is meant by
"expression that is expected to contain a parameter". Isn't this just
the definition of a function? Or does it mean something like passing
the (uncompiled?) body of a function as a value?

> Given that SubDomain is mostly a fake I would in short term keep
> the #1 form.
>

OK, I'll do that, but I think we should consider this to be a fairly
serious defect in FriCAS. :-( As you probably already know from
previous discussions, I still think that SubDomain is an important (and
unfortunately mostly missing) feature of FriCAS.

Regards,
Bill Page.

Waldek Hebisch

Mar 25, 2009, 3:31:55 PM
to fricas...@googlegroups.com
Bill Page wrote:
>
> On Wed, Mar 25, 2009 at 2:10 PM, Waldek Hebisch wrote:
> >
> > Rather unimplemented feature: the '#1 >= 0' in subdomain definition
> > is not treated as function, but as expression which is expected to
> > contain '#1' as a parameter.
> >
>
> I am having a hard time trying to understand what is meant by
> "expression that is expected to contain a parameter". Isn't this just
> the definition of a function? Or does it mean something like passing
> the (uncompiled?) body of a function as a value?
>

Logically it is a function, but the compiler uses an ad-hoc
trick instead of the code intended to compile functions.
In particular, since the compiler treats '#1 >= 0' as an
expression, it expects the result to be of type Boolean
(and not of type Integer -> Boolean).
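The distinction can be mimicked in Python (illustrative only, with `x1` playing the role of '#1'): the ad-hoc trick handles the predicate as a Boolean-valued *expression* whose free name is bound by the surrounding machinery, rather than compiling a *function value* of type Integer -> Boolean.

```python
# The subdomain predicate as source text, as the Spad compiler sees '#1 >= 0'.
body = "x1 >= 0"

def belongs(n):
    """Ad-hoc trick: evaluate the body as an expression with its free
    parameter bound externally.  The expression itself has type Boolean."""
    return eval(body, {"x1": n})

# Proper treatment would instead produce a function value Integer -> Boolean:
pred = lambda x1: x1 >= 0

assert belongs(5) is True and belongs(-1) is False
assert pred(5) is True and pred(-1) is False
```

Both give the same answers, but only the second is a first-class function, which is why handing SubDomain an explicit lambda confuses the expression-based machinery.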

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

Mar 25, 2009, 9:31:36 PM
to fricas...@googlegroups.com

What do you expect from the compiler? When looking at overloading
for a while I considered making code like the above illegal and
rejecting it at compile time. However, the example clearly
shows that such overloadings appear quite naturally and it would
be hard to avoid them. One possibility is to request that
one of the versions have a guard like:

if not(R is Integer) then
    (r: R) * (x: %): % == (r::%) * x

But checking at compile time that guards exclude overlaps is
quite tricky. It would be easier to add a runtime test. Still,
in natural examples both versions should do the same thing, so
it is not clear if we really want to forbid the current behaviour.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

Mar 25, 2009, 10:05:51 PM
to fricas...@googlegroups.com
I wrote:
>
> I will be now doing vector.spad.pamphlet, weier.spad.pamphlet,
> xpoly.spad.pamphlet, zerodim.spad.pamphlet. That is [v-z]*.
>

Done. zerodim contained #1 only in messages so actually it
was less work than I thought. I will be doing c*.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

Mar 25, 2009, 10:07:15 PM
to fricas...@googlegroups.com
> Ralf Hemmecke wrote:
>>
>> I just have some remark... As we probably all know, Aldor's libalgebra
>> has some bugs in it that seem to be really hard to track down. I faintly
>> remember one such bug that I could nearly assign to the coexistence
>> of
>>
>> *: (Integer, %) -> % and *: (R, %) -> %
>>
>> where R is instantiated with Integer.
>

Note: I have put the example from Ralf's email on the Axiom-Wiki at

http://axiom-wiki.newsynthesis.org/SandBoxHiddenOverloading

On Wed, Mar 25, 2009 at 9:31 PM, Waldek Hebisch wrote:

> What do you expect from the compiler?  When looking at
> overloading for a while I considered making code like above
> illegal and rejecting it at compile time.  However, the example
> clearly shows that such overloading appear quite naturally and
> it would be hard to avoid them.  One possibility is to request
> that one of versions have a guard like:
>
>  if not(R is Integer) then
>    (r: R) * (x: %): % == (r::%) * x
>
> But checking at compile time that guards exclude overlaps is
> quite tricky.  It would be easier to add runtime test.  Still,
> in natural examples both versions should do the same thing,
> so it is not clear if we really want to forbid current behaviour.
>

I think Waldek is correct: "in natural examples both versions should
do the same thing". In principle, if this fails then it would be nice
for the compiler to reject the program. But it seems to me that
"natural" constructions like these are essential to mathematics and
cannot be forbidden by the compiler in general.

"What is the most we can expect from the compiler?"

My answer is that I would expect the most complete SPAD compiler to
somehow check that the code generated for each version is in fact
equivalent. But of course this might be quite hard (undecidable?) in
the general case. Perhaps it would at least be convenient to
receive a warning when the generated code for some "hidden" cases is
not syntactically identical. When writing generic programs, it might
be possible to arrange that in most cases the "hidden" definitions
yield the same code?

Regards,
Bill Page.

Bill Page

Mar 25, 2009, 10:17:38 PM
to fricas...@googlegroups.com
On Wed, Mar 25, 2009 at 10:05 PM, Waldek Hebisch
<heb...@math.uni.wroc.pl> wrote:
>
> I wrote:
>>
>> I will be now doing vector.spad.pamphlet, weier.spad.pamphlet,
>> xpoly.spad.pamphlet, zerodim.spad.pamphlet.  That is [v-z]*.
>>
>
> Done.  zerodim contained #1 only in messages so actually it
> was less work than I thought.  I will be doing c*.
>

Waldek,

:~)

This is turning out to be a lot harder than I expected!

I am having a lot of trouble with the code in 'intef.spad.pamphlet'

wspage@debian:~/new-lambda/src/algebra$ svn diff intef.spad.pamphlet
Index: intef.spad.pamphlet
===================================================================
--- intef.spad.pamphlet (revision 555)
+++ intef.spad.pamphlet (working copy)
@@ -105,11 +105,14 @@

tanint(f, x, k) ==
eta' := differentiate(eta := first argument k, x)
- r1 := tanintegrate(univariate(f, k), differentiate(#1,
- differentiate(#1, x), monomial(eta', 2) + eta'::UP),
- rischDEsys(#1, 2 * eta, #2, #3, x, lflimitedint(#1, x, #2),
- lfextendedint(#1, x, #2)))
- map(multivariate(#1, k), r1.answer) + lfintegrate(r1.a0, x)
+ r1 := tanintegrate(univariate(f, k),
+ (x1:UP):UP+->differentiate(x1,
+ (x2:UP):UP+->differentiate(x2, x),
+ monomial(eta', 2) + eta'::UP),
+ (x1,x2,x3)+->rischDEsys(x1, 2 * eta, x2, x3, x,
+ (x1,x2)+->lflimitedint(x1, x, x2),
+ (x1,x2)+->lfextendedint(x1, x, x2)))
+ map((x1:UP):F+->multivariate(x1, k), r1.answer) + lfintegrate(r1.a0, x)

-- tries various tricks since the integrand contains something not elementary
unknownint(f, x) ==
@@ -155,7 +158,7 @@
has?(operator kx, ALGOP) =>
rec := primitiveElement(kx::F, k::F)
y := rootOf(rec.prim)
- map(eval(#1, retract(y)@K, rec.primelt),
+ map(x1+->eval(x1, retract(y)@K, rec.primelt),
lfintegrate(eval(f, [kx,k], [(rec.pol1) y, (rec.pol2) y]), x))
unknownint(f, x)

@@ -180,7 +183,7 @@
y := rootOf(rec.prim)
lrhs := [(rec.pol1) y, (rec.pol2) y]$List(F)
(u := lflimitedint(eval(f, [kx, k], lrhs), x,
- map(eval(#1, [kx, k], lrhs), lu))) case "failed" => "failed"
+ map(x1+->eval(x1, [kx, k], lrhs), lu))) case "failed" => "failed"
ky := retract(y)@K
r := u::Record(mainpart:F, limitedlogs:LLG)
[eval(r.mainpart, ky, rec.primelt),
@@ -201,7 +204,7 @@
xf := x::F
empty?(l := varselect(kernels f, x)) => (xf * f)::IR
symbolIfCan(k := kmax l) case SE =>
- map(multivariate(#1, k), integrate univariate(f, k))
+ map(x1+->multivariate(x1, k), integrate univariate(f, k))
is?(k, "tan"::SE) => addx(tanint(f, x, k), xf)
is?(k, "exp"::SE) => addx(expint(f, x, k), xf)
prim?(k, x) => addx(primint(f, x, k), xf)
@@ -217,17 +220,17 @@
z := new()$Symbol
g := subst(f / differentiate(t::F, x), [t], [z::F])
freeOf?(g, x) => -- can we do change of variables?
- map(eval(#1, kernel z, t::F), lfintegrate(g, z))
+ map(x1+->eval(x1, kernel z, t::F), lfintegrate(g, z))
"failed"

algexpint(f, t, y, x) ==
(u := tryChangeVar(f, t, x)) case IR => u::IR
- algint(f, t, y, differentiate(#1, differentiate(#1, x),
+ algint(f, t, y, x1+->differentiate(x1, x2+->differentiate(x2, x),
monomial(differentiate(first argument t, x), 1)))

algprimint(f, t, y, x) ==
(u := tryChangeVar(f, t, x)) case IR => u::IR
- algint(f, t, y, differentiate(#1, differentiate(#1, x),
+ algint(f, t, y, x1+->differentiate(x1, x2+->differentiate(x2, x),
differentiate(t::F, x)::UP))

lfextendedint(f, x, g) ==
@@ -238,7 +241,7 @@
empty?(l1 := varselect(kernels g, x)) => 0::F
kmax(l1) = k => g
0::F
- map(multivariate(#1, k), extendedint(univariate(f, k),
+ map(x1+->multivariate(x1, k), extendedint(univariate(f, k),
univariate(g1, k)))
is?(k, "exp"::SE) => expextint(f, x, k, g)
prim?(k, x) => primextint(f, x, k, g)
@@ -248,7 +251,7 @@
lflimitedint(f, x, lu) ==
empty?(l := varselect(kernels f, x)) => [x::F * f, empty()]
symbolIfCan(k := kmax(l)) case SE =>
- map(multivariate(#1, k), limitedint(univariate(f, k),
+ map(x1+->multivariate(x1, k), limitedint(univariate(f, k),
[univariate(u, k) for u in lu]))
is?(k, "exp"::SE) => explimint(f, x, k, lu)
prim?(k, x) => primlimint(f, x, k, lu)
@@ -262,22 +265,26 @@
primextint(f, x, k, g) ==
lk := varselect([a for a in tower f
| k ~= a and is?(a, "log"::SE)], x)
- (u1 := primextendedint(univariate(f, k), differentiate(#1,
- differentiate(#1, x), differentiate(k::F, x)::UP),
- lfextlimint(#1, x, k, lk), univariate(g, k))) case "failed"
- => "failed"
+ (u1 := primextendedint(univariate(f, k),
+ x1+->differentiate(x1,
+ x2+->differentiate(x2, x), differentiate(k::F, x)::UP),
+ x3+->lfextlimint(x3, x, k, lk), univariate(g, k))) case "failed"
+ => "failed"
u1 case FF =>
[multivariate(u1.ratpart, k), multivariate(u1.coeff, k)]
(u2 := lfextendedint(u1.a0, x, g)) case "failed" => "failed"
[multivariate(u1.answer, k) + u2.ratpart, u2.coeff]

expextint(f, x, k, g) ==
- (u1 := expextendedint(univariate(f, k), differentiate(#1,
- differentiate(#1, x),
+ (u1 := expextendedint(univariate(f, k),
+ x1+->differentiate(x1,
+ x2+->differentiate(x2, x),
monomial(differentiate(first argument k, x), 1)),
- rischDE(#1, first argument k, #2, x, lflimitedint(#1, x, #2),
- lfextendedint(#1, x, #2)), univariate(g, k)))
- case "failed" => "failed"
+ (x3,x4)+->rischDE(x3, first argument k, x4, x,
+ (x5,x6)+->lflimitedint(x5, x, x6),
+ (x7,x8)+->lfextendedint(x7, x, x8)),
+ univariate(g, k)))
+ case "failed" => "failed"
u1 case FF =>
[multivariate(u1.ratpart, k), multivariate(u1.coeff, k)]
(u2 := lfextendedint(u1.a0, x, g)) case "failed" => "failed"
@@ -286,10 +293,11 @@
primint(f, x, k) ==
lk := varselect([a for a in tower f
| k ~= a and is?(a, "log"::SE)], x)
- r1 := primintegrate(univariate(f, k), differentiate(#1,
- differentiate(#1, x), differentiate(k::F, x)::UP),
- lfextlimint(#1, x, k, lk))
- map(multivariate(#1, k), r1.answer) + lfintegrate(r1.a0, x)
+ r1 := primintegrate(univariate(f, k),
+ x1+->differentiate(x1,
+ x2+->differentiate(x2, x), differentiate(k::F, x)::UP),
+ x3+->lfextlimint(x3, x, k, lk))
+ map(x1+->multivariate(x1, k), r1.answer) + lfintegrate(r1.a0, x)

lfextlimint(f, x, k, lk) ==
not((u1 := lfextendedint(f, x, differentiate(k::F, x)))
@@ -312,18 +320,22 @@

expint(f, x, k) ==
eta := first argument k
- r1 := expintegrate(univariate(f, k), differentiate(#1,
- differentiate(#1, x), monomial(differentiate(eta, x), 1)),
- rischDE(#1, eta, #2, x, lflimitedint(#1, x, #2),
- lfextendedint(#1, x, #2)))
- map(multivariate(#1, k), r1.answer) + lfintegrate(r1.a0, x)
+ r1 := expintegrate(univariate(f, k),
+ x1+->differentiate(x1,
+ x2+->differentiate(x2, x),
+ monomial(differentiate(eta, x), 1)),
+ (x3,x4)+->rischDE(x3, eta, x4, x,
+ (x5,x6)+-> lflimitedint(x5, x, x6),
+ (x7,x8)+->lfextendedint(x7, x, x8)))
+ map(x1+->multivariate(x1, k), r1.answer) + lfintegrate(r1.a0, x)

primlimint(f, x, k, lu) ==
lk := varselect([a for a in tower f
| k ~= a and is?(a, "log"::SE)], x)
- (u1 := primlimitedint(univariate(f, k), differentiate(#1,
- differentiate(#1, x), differentiate(k::F, x)::UP),
- lfextlimint(#1, x, k, lk), [univariate(u, k) for u in lu]))
+ (u1 := primlimitedint(univariate(f, k),
+ x1+->differentiate(x1,
+ x2+->differentiate(x2, x), differentiate(k::F, x)::UP),
+ x3+->lfextlimint(x3, x, k, lk), [univariate(u, k) for u in lu]))
case "failed" => "failed"
l := [[multivariate(lg.coeff, k),multivariate(lg.logand, k)]
for lg in u1.answer.limitedlogs]$LLG
@@ -333,11 +345,13 @@

explimint(f, x, k, lu) ==
eta := first argument k
- (u1 := explimitedint(univariate(f, k), differentiate(#1,
- differentiate(#1, x), monomial(differentiate(eta, x), 1)),
- rischDE(#1, eta, #2, x,
- lflimitedint(#1, x, #2), lfextendedint(#1, x, #2)),
- [univariate(u, k) for u in lu])) case "failed" => "failed"
+ (u1 := explimitedint(univariate(f, k),
+ x1+->differentiate(x1, x2+->differentiate(x2, x),
+ monomial(differentiate(eta, x), 1)),
+ (x1,x2)+->rischDE(x1, eta, x2, x,
+ (x3,x4)+->lflimitedint(x3, x, x4),
+ (x5,x6)+->lfextendedint(x5, x, x6)),
+ [univariate(u, k) for u in lu])) case "failed" => "failed"
l := [[multivariate(lg.coeff, k),multivariate(lg.logand, k)]
for lg in u1.answer.limitedlogs]$LLG
(u2 := lflimitedint(u1.a0, x, lu)) case "failed" => "failed"
wspage@debian:~/new-lambda/src/algebra$

If you have some time to spare :-) could you please take a look at the
code for 'tanint' - the first chunk in the patch above. Can you find
a consistent set of types for the anonymous functions?

I am finding that in a lot of cases the code without explicit types
will compile without errors in a fully built FriCAS system, but when I
try to build everything from scratch I get hard-to-locate errors some
time in the later stages of the bootstrap. In most cases so far I have
been able to cure this by including more explicit type information, but
this is getting harder to do in certain cases.

As a result I am gaining a greater appreciation for the effectiveness
of the kind of backtracking that is done for the old #1 lambda
notation...
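The backtracking in question can be sketched abstractly (Python, with invented machinery; this is not how the Boot compiler is actually written): for an untyped lambda argument, try each candidate signature of the target operation and commit to the first one that type-checks, instead of aborting on the first mismatch.

```python
def compile_call(candidate_sigs, check):
    """Try each candidate signature for an overloaded call; backtrack on
    a type mismatch instead of aborting at the first failure."""
    for sig in candidate_sigs:
        try:
            return check(sig)          # succeeds: commit to this signature
        except TypeError:
            continue                   # fails: backtrack, try the next one
    raise TypeError("no candidate signature matches")

# Hypothetical: two overloads exist, and an untyped lambda argument only
# type-checks against the second candidate.
def check(sig):
    if sig != ("UP -> F",):
        raise TypeError("lambda body does not type-check at " + sig[0])
    return sig

assert compile_call([("F -> F",), ("UP -> F",)], check) == ("UP -> F",)
```

With early abort, the first `TypeError` would be fatal; with backtracking, the second candidate is still tried, which is why the #1 form was more forgiving.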

Regards,
Bill Page.

Waldek Hebisch

Mar 26, 2009, 12:18:03 AM
to fricas...@googlegroups.com
Bill Page wrote:
>
> On Wed, Mar 25, 2009 at 10:05 PM, Waldek Hebisch
> <heb...@math.uni.wroc.pl> wrote:
> >
> > I wrote:
> >>
> >> I will be now doing vector.spad.pamphlet, weier.spad.pamphlet,
> >> xpoly.spad.pamphlet, zerodim.spad.pamphlet. That is [v-z]*.
> >>
> >
> > Done. zerodim contained #1 only in messages so actually it
> > was less work than I thought. I will be doing c*.
...

> wspage@debian:~/new-lambda/src/algebra$
>
> If you have some time to spare :-) could you please take a look at the
> code for 'tanint' - the first chunk in the patch above. Can you find
> a consistent set of types for the anonymous functions?
>
> I am finding that in a lot of cases the code without explicit types
> will compile without errors in a fully built FriCAS system, but when I
> try to build everything from scratch I get hard to locate errors some
> time in the later stages of the bootstrap. In most cases so far I have
> been able to cure this by including more explict type information but
> this is getting harder to do in certain cases.
>
> As a result I am gaining a greater appreciation for the effectiveness
> of the kind of backtracking that is done for the old #1 lambda
> notation...
>


--- ax-build4/src/algebra/INTEF.spad 2009-03-25 23:56:20.000000000 -0400
+++ INTEF.spad 2009-03-26 02:45:15.000000000 -0400
@@ -92,11 +92,16 @@



tanint(f, x, k) ==
eta' := differentiate(eta := first argument k, x)
- r1 := tanintegrate(univariate(f, k), differentiate(#1,
- differentiate(#1, x), monomial(eta', 2) + eta'::UP),
- rischDEsys(#1, 2 * eta, #2, #3, x, lflimitedint(#1, x, #2),
- lfextendedint(#1, x, #2)))
- map(multivariate(#1, k), r1.answer) + lfintegrate(r1.a0, x)
+ r1 := tanintegrate(univariate(f, k),
+ (x1 : UP) : UP +-> differentiate(x1,
+ (x2 : F) : F +-> differentiate(x2, x),
+ monomial(eta', 2) + eta'::UP),
+ (x6 : Integer, x2 : F, x3 : F) : Union(List F, "failed") +->
+ rischDEsys(x6, 2 * eta, x2, x3, x,
+ (x4 : F, x5 : List F) : U3 +-> lflimitedint(x4, x, x5),
+ (x4 : F, x5 : F) : U2 +-> lfextendedint(x4, x, x5)))
+ map((x1 : RF) : F +-> multivariate(x1, k), r1.answer) + _
+ lfintegrate(r1.a0, x)

-- tries various tricks since the integrand contains something not elementary
unknownint(f, x) ==


How I got this: I found the type of the inner differentiate by looking
at the generated Lisp code:

(DEFUN |INTEF;tanint!0| (|#1| $$)
(PROG (|x| $)
(LETT |x| (QREFELT $$ 1) NIL)
(LETT $ (QREFELT $$ 0) NIL)
(RETURN (PROGN (SPADCALL |#1| |x| (QREFELT $ 21))))))

from which I see that the inner differentiate is slot 21 in
the domain vector. Then I instantiated the vector and looked at the slot:

)lisp (setf *print-circle* t)
)lisp (setf *print-array* nil)
)lisp (setf dv (|ElementaryIntegration| (|Integer|) (|Expression| (|Integer|))))
)lisp (aref dv 21)
)lisp (|replaceGoGetSlot| (cdr (aref dv 21)))

Value = (#<FUNCTION |FS-;differentiate;SSS;99|>
. #<(SIMPLE-VECTOR 339) {1004B5D36F}>)

so we see that differentiate came from the default package of FunctionSpace.
Comparing the Lisp code and fspace.spad.pamphlet, it was clear that
the correct differentiate was:

differentiate(f:%, x:SY)

and the only FunctionSpace we have in sight is F. For the rest I
used the HyperDoc browser and the sources to find signatures. The lambda
with rischDEsys was tricky, because rischDEsys returns only List F,
but tanintegrate expects a union.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Ralf Hemmecke

Mar 26, 2009, 4:49:00 AM
to fricas...@googlegroups.com
> Note: I have put the example from Ralf's email on the Axiom-Wiki at
>
> http://axiom-wiki.newsynthesis.org/SandBoxHiddenOverloading

Very good. I also like that you produced the SPAD version with a result
that is different from the Aldor version. Thank you.

>> What do you expect from the compiler? When looking at
>> overloading for a while I considered making code like above
>> illegal and rejecting it at compile time.

I guess that is either hard or not wanted. Maybe in other cases
instantiations with domain parameters that make the function
signatures collapse are not wanted and will never occur at runtime of the
program. So rejecting is too much of a restriction.

>> However, the example
>> clearly shows that such overloading appear quite naturally and
>> it would be hard to avoid them. One possibility is to request
>> that one of versions have a guard like:
>>
>> if not(R is Integer) then
>> (r: R) * (x: %): % == (r::%) * x

I am strongly against this. Suppose you do *not* have

*: (Integer, %) -> %

in the beginning, but elsewhere in the algebra library you have something like

*: (R, %) -> %

If you later introduce *: (Integer, %) -> % into the library, you have to
add "if not(R is Integer) then" in maybe *several* places in the library
sources. Additionally, it makes the source code look ugly.

>> But checking at compile time that guards exclude overlaps is
>> quite tricky. It would be easier to add runtime test. Still,
>> in natural examples both versions should do the same thing,
>> so it is not clear if we really want to forbid current behaviour.

Oh, I don't really agree. As MyWeirdPoly demonstrates, one can consider
a polynomial ring P (with integer coefficients) as a Z-module so that
multiplication from the left by an integer acts by doubling, whereas there
is an inclusion homomorphism Z -> P (you may call it "coerce") that maps
1:Z to 1:P.

One can also make Z into a Z-module where the module action is via
doubling. The above inclusion homomorphism is even a Z-module homomorphism.
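Ralf's claim can be checked concretely (Python, illustrative; P is crudely modelled by tagged constants, and the function names are invented): give both Z and P the doubling action n . x = (n+n)*x; the inclusion Z -> P then really is a Z-module homomorphism for that action, even though the action is not ring multiplication.

```python
def act_Z(n, x):
    """Hypothetical Z-module action on Z itself: n . x = (n+n)*x (doubling)."""
    return (n + n) * x

def act_P(n, p):
    """The same doubling action on P; elements of P are modelled here as
    tagged pairs ("P", value) standing in for constant polynomials."""
    return ("P", (n + n) * p[1])

def incl(x):
    """Inclusion homomorphism Z -> P, sending 1:Z to 1:P."""
    return ("P", x)

# incl is a Z-module homomorphism for the doubling action:
#   incl(n . x) == n . incl(x)
for n in range(-3, 4):
    for x in range(-3, 4):
        assert incl(act_Z(n, x)) == act_P(n, incl(x))

# ... even though the module action differs from ring multiplication:
assert act_Z(3, 5) == 30 and 3 * 5 == 15
```

So both structures are mathematically legitimate, and only the operation *symbol* collides; which is exactly why a way to distinguish the two operations is needed.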

As one can already see with the Z-module structure, all we want is a way
to distinguish the two operations (i.e., the external and internal
multiplications).

> I think Waldek is correct: "in natural examples both versions should
> do the same thing". In principle, if this fails then it would be nice
> for the compiler to reject the program. But it seems to me that
> "natural" constructions like these are essential to mathematics and
> can not be forbidden by the compiler in general.
>
> "What is the most we can expect from the compiler?"

Well, maybe I don't expect anything from the compiler, but rather from
the language. The problem looks exactly like the reason why some
object-oriented programming languages forbid multiple inheritance.

As one can see above, we need a language construct that distinguishes
two functions even in cases where (by giving domain parameters) the
function signatures are indistinguishable.

Currently I cannot (not even in Aldor) express Z being both a Ring and a
Z-module (with doubling) where both operations use * as the operation
symbol.

So, one solution is to disallow *:(R,%)->%. Or have the compiler
generate a different name for * if R=%.

Additional note: I think that introducing the convention that both
function implementations must give the same result if the signatures
collapse is still not acceptable. They might have different running
times. It's probably the programmer, and not the compiler, who knows
better which of the two functions to choose.
I don't want to force equal results anyway, see above.

My analysis is certainly not complete, but I think having a
mechanism to change the name of a function would help. (Oh, ... this
reminds me of the Monoid-Problem.)

Ralf

Waldek Hebisch

Mar 26, 2009, 7:07:06 AM
to fricas...@googlegroups.com
Ralf Hemmecke wrote:
> >> What do you expect from the compiler? When looking at
> >> overloading for a while I considered making code like above
> >> illegal and rejecting it at compile time.
>
> I guess, that is either hard or not wanted. Maybe in other cases
> instantiation with domain parameters that will make the function
> signatures collaps are not wanted and will never occur at runtime of the
> program. So rejecting is too much of a restriction.
>
> >> However, the example
> >> clearly shows that such overloading appear quite naturally and
> >> it would be hard to avoid them. One possibility is to request
> >> that one of versions have a guard like:
> >>
> >> if not(R is Integer) then
> >> (r: R) * (x: %): % == (r::%) * x
>
> I am strongly against this. Suppose you do *not* have
>
> *: (Integer, %) -> %
>
> in the beginning, but you have at other parts in the Algebra something like
>
> *: (R, %) -> %
>
> I you later introduce *: (Integer, %) -> % into the library, you have to
> add "if not(R is Integer) then" in maybe *several* places of the library
> sources. Additionally, it makes the source code look ugly.
>

If you have '*' defined in different places, then Spad has rules for
which one will be chosen. It still may not be what you want,
but at least the result is uniquely determined. And if Spad
resolution is incorrect it probably means a wrong inheritance
hierarchy (trying to inherit too much). I think the
natural cases will have both '*' coming from a single domain
or a single category default.

> >> But checking at compile time that guards exclude overlaps is
> >> quite tricky. It would be easier to add runtime test. Still,
> >> in natural examples both versions should do the same thing,
> >> so it is not clear if we really want to forbid current behaviour.
>
> Oh, I don't really agree. As MyWeirdPoly demonstrates, one can consider
> a polynomial ring P (with integer coefficients) as a Z-module so that
> multiplication from the left by an iteger acts by doubling whereas there
> is an inclusion homomorphism Z -> P (you may call it "coerce") that maps
> 1:Z to 1:P.
>
> One can also make Z into a Z-module where the module action is via
> doubling. The above inclusion homomorphism is even a Z-module homomorphism.
>
> As one can already see with the Z-module structure, all we want is a way
> to distingues the two operations (i.e., the external and internal
> multiplications).

If you want two operations you should probably use different names,
otherwise the notation becomes too confusing. Also, if the operations
come from different places you can use the $ notation.

>
> > I think Waldek is correct: "in natural examples both versions should
> > do the same thing". In principle, if this fails then it would be nice
> > for the compiler to reject the program. But it seems to me that
> > "natural" constructions like these are essential to mathematics and
> > can not be forbidden by the compiler in general.
> >
> > "What is the most we can expect from the compiler?"
>
> Well, maybe I don't expect anything from the compiler, but rather from
> the language. The problem looks exactly like the one why some
> object-oriented programming languages forbid multiple inheritance.
>
> As one can see above, we need a language construct that distinguishes
> two functions even in cases where (by giving domain parameters) the
> function signatures are indistinguishable.
>
> Currently (not even in Aldor) I could express Z being a Ring and a
> Z-module (with doubling) where both operations use * as the operation
> symbol.
>
> So, one solution is the disallow *:(R,%)->%. Or have the compiler
> generate a different name for * if R=%.

If the operations came from different places we can use the $ notation.
And frequently the default rules for choosing a version will work
OK. Of course, if there is a problem it means we may easily miss
it during testing...

Still, I think that re-using the symbol in the case above is bad. In
mathematics such overloadings are common practice because
readers have to do complex reasoning anyway and typically
one has a very specific context which decides which one makes
sense. But in programming we have a much less specific context,
so the notation should be more systematic.

OTOH maybe you want a way to eliminate the old meaning -- there
is a standard OO construct called a view to do exactly this.
So maybe you want support for views, that is, the ability to
easily eliminate _part_ of the old meaning, preserving only
selected parts.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

Mar 26, 2009, 10:08:58 AM
to fricas...@googlegroups.com
Bill Page wrote:
>
> As a result I am gaining a greater appreciation for the effectiveness
> of the kind of backtracking that is done for the old #1 lambda
> notation...
>

The following patch should restore backtracking for '+->'. So
the question is what we prefer: backtracking or early abort.

Index: src/interp/compiler.boot
===================================================================
--- src/interp/compiler.boot (revision 554)
+++ src/interp/compiler.boot (working copy)
@@ -169,7 +169,8 @@
ress
stackAndThrow ["compLambda", x]
stackAndThrow ["compLambda", x]
- stackAndThrow ["compLambda", x]
+ nil
+ -- stackAndThrow ["compLambda", x]

compWithMappingMode(x, m, oldE) ==
compWithMappingMode1(x, m, oldE, $formalArgList)

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Ralf Hemmecke

unread,
Mar 26, 2009, 10:29:31 AM3/26/09
to fricas...@googlegroups.com
On 03/26/2009 03:08 PM, Waldek Hebisch wrote:
> Bill Page wrote:
>> As a result I am gaining a greater appreciation for the effectiveness
>> of the kind of backtracking that is done for the old #1 lambda
>> notation...

> The following patch should restore backtracking for '+->'. So
> the question is what we prefer: backtracking or early abort.

You may be all for it, but I find it more important that people who
read the code can quickly find out about the types.

If the compiler is too good at figuring out the types, then a person
may have a hard time understanding the code, because that person
basically has to follow what the compiler does.

I don't mind if this patch is applied, but I somehow like explicit type
information.

Ralf

Martin Rubey

unread,
Mar 26, 2009, 12:57:32 PM3/26/09
to fricas...@googlegroups.com
Waldek Hebisch <heb...@math.uni.wroc.pl> writes:

> Bill Page wrote:
> >
> > As a result I am gaining a greater appreciation for the effectiveness
> > of the kind of backtracking that is done for the old #1 lambda
> > notation...
> >
>
> The following patch should restore backtracking for '+->'. So
> the question is what we prefer: backtracking or early abort.

I think I'm rather for early abort, but I'm not sure yet.

I'm guessing that, if it's difficult to find out which package to
use, there is a problem in the library design, not in the compiler.

One example is the multitude of mapping packages. I think there
should be a few generic map functions, not 80.

Martin

Bill Page

unread,
Mar 26, 2009, 3:06:25 PM3/26/09
to fricas...@googlegroups.com
On Thu, Mar 26, 2009 at 10:08 AM, Waldek Hebisch wrote:
>
> Bill Page wrote:
>>
>> As a result I am gaining a greater appreciation for the effectiveness
>> of the kind of backtracking that is done for the old #1 lambda
>> notation...
>>
>
> The following patch should restore backtracking for '+->'.  So
> the question is what we prefer: backtracking or early abort.
>

After my recent experience with trying to manually assign types I am
tempted just to say "yes" to having backtracking the way it was, but I
also agree with Ralf and others that adding explicit type information
to the code does indeed improve the readability in many cases.

So what I would propose is to include the backtracking patch to do the
type inference as before but in all cases when the lambda type
information is incompletely specified by the user, the compiler should
generate an explicit WARNING message that includes the actual types
chosen by the compiler with the suggestion that these types be
specified explicitly by the programmer.
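For concreteness, the two styles at issue look roughly like this (a
sketch; the member? example and the names s, t, mem? are illustrative,
not taken from any particular algebra file):

    -- old style: the compiler backtracks to infer the type of #1
    every?(member?(#1, t), parts s)

    -- new style: an explicit +-> lambda with declared types
    mem? : S -> Boolean := (x : S) : Boolean +-> member?(x, t)
    every?(mem?, parts s)

The proposed warning would fire on the first form and report the
Mapping type the compiler eventually settled on, making the second
form easy to write.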

Regards,
Bill Page.

Bill Page

unread,
Mar 27, 2009, 12:13:27 AM3/27/09
to fricas...@googlegroups.com

Thank you very much for including an explanation of your approach. I
am indeed surprised by what you had to do to find this information and
I am not sure that I could easily duplicate this procedure without
more help. One way to make all this easier might be to add a sort of
"Deprecated Usage" warning message to SPAD that is triggered when
compiling the old notation. If that warning message also included the
types that were actually chosen by the compiler the information could
be very useful when re-writing the code in the new lambda style.

I should be able to commit the rest of my changes to the new-lambda
branch some time in the next 24 hours.

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 27, 2009, 10:14:56 AM3/27/09
to fricas...@googlegroups.com

Great.

Below is a (lightly tested) patch to emit warnings for old style lambda
(with the type):


Index: src/interp/compiler.boot
===================================================================
--- src/interp/compiler.boot (revision 557)
+++ src/interp/compiler.boot (working copy)
@@ -200,6 +200,7 @@
$formalArgList := [:vl, :$formalArgList]
x := nx
else
+ stackWarning ["lambda using #1, type","%b", m,"%d"]
vl:= take(#sl,$FormalMapVariableList)
ress => ress
for m in sl for v in vl repeat

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 27, 2009, 10:46:09 AM3/27/09
to fricas...@googlegroups.com
Waldek,

I am trying to re-build fricas from the most recent sources in trunk

wspage@debian:~/fricas-src$ svn info
Path: .
URL: https://fricas.svn.sourceforge.net/svnroot/fricas/trunk
Repository Root: https://fricas.svn.sourceforge.net/svnroot/fricas
Repository UUID: b0c55286-4f34-0410-a049-a1e7e93b0762
Revision: 557
Node Kind: directory
Schedule: normal
Last Changed Author: whebisch
Last Changed Rev: 557
Last Changed Date: 2009-03-26 11:52:08 -0400 (Thu, 26 Mar 2009)

plus the patch from your email below

wspage@debian:~/fricas-src$ svn diff
Index: src/interp/compiler.boot
===================================================================
--- src/interp/compiler.boot (revision 557)
+++ src/interp/compiler.boot (working copy)
@@ -200,6 +200,7 @@
$formalArgList := [:vl, :$formalArgList]
x := nx
else
+ stackWarning ["lambda using #1, type","%b", m,"%d"]
vl:= take(#sl,$FormalMapVariableList)
ress => ress
for m in sl for v in vl repeat
wspage@debian:~/fricas-src$

---

but I get the following error early in the build:

...
End of Pass 2.
OPTIMIZE levels: Safety=1 (No runtime error checking), Space=0, Speed=3
Finished compiling /home/wspage/fricas-build/src/interp/slam.o.
Loading slam.o
start address -T 0x88493c0 Finished loading slam.o
Compiling g-boot.clisp.
; (DEFUN |compSLAM| ...) is being compiled.
;;; The function (LIST |g2|) is illegal.
; (DEFUN |compTran1| ...) is being compiled.
;; The variable X is undefined.
;; The compiler will assume this variable is a global.
No FASL generated.

Error: The function |systemError| is undefined.
Fast links are on: do (si::use-fast-links nil) for debugging
Error signalled by RETURN.
Broken at APPLY. Type :H for Help.
BOOT>>make[2]: ***
[/home/wspage/fricas-build/build/i686-pc-linux/bin/depsys] Error 255
make[2]: Leaving directory `/home/wspage/fricas-build/src/interp'
make[1]: *** [all-interpsys] Error 2
make[1]: Leaving directory `/home/wspage/fricas-build/src'
make: *** [all-src] Error 2

---

At first glance this error seems unrelated to the patch. Is there some
other problem with the current sources in trunk?

Any suggestions?

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 27, 2009, 11:23:36 AM3/27/09
to fricas...@googlegroups.com

We have a lot of domains (around 240 domain constructors)
and many need mapping functions. Currently categories (and
domains) can export only a finite number of functions, so
if you have D1(T) and D2(S) and want a mapping for all possible
S and T, such a mapping must be defined in a separate package.

In principle we could limit the number of "different" signatures,
but even here there are limits; for example, the mapping functions
for polynomials have quite different signatures than the
others. And given that several mapping functions must live
in special packages, I am not sure how much we would gain
from re-using signatures from categories.
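For example, List itself cannot export a map between two different
element types, so the two-type version lives in the separate package
ListFunctions2 (the Integer/Float instantiation below is just an
illustration):

    -- map : (A -> B, List A) -> List B is exported by
    -- ListFunctions2(A, B), not by List(A) itself:
    map((n : Integer) : Float +-> n::Float,
        [1, 2, 3])$ListFunctions2(Integer, Float)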

There is an old saying: "There is no such thing as a free lunch".
Mathematics is complex and we want to make fine distinctions
using types, so we need complex types. Consequently, choosing
types becomes complex. I am pretty sure that one could do a
better library design than we have now. But I do not think an
alternative design would be _much_ better. One should also
take into account that the library evolves: when making an
unanticipated addition one usually chooses something which
works and requires minimal changes instead of re-working
the whole design.

In general, we want to move as much work as possible to the
computer. Which means that things will get _less_ explicit.
And it will be harder to find out what is going on. But I
think that proper reaction is not to reject automation, but
rather to add explanation facility.

Also, one tricky thing was that the compiler can not use a function
of signature S -> T when the requested type is S -> Union(T, failed),
although a value of type T can be used when the requested type is
Union(T, failed). This restriction is due to the way the
compiler works, and I think it is good if the compiler can
automatically work out that a lambda expression
should have type S -> Union(T, failed).
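Schematically (S, T, g, and f are placeholders, not concrete library
names):

    u : Union(T, "failed") := x       -- OK: a T value coerces to the Union
    f : S -> Union(T, "failed") := g  -- rejected when g : S -> T
    -- but a lambda written in place can be given the widened type:
    f := (s : S) : Union(T, "failed") +-> g s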

When talking about problems, I think the biggest one is the lack
of exceptions. Without exceptions we need types like
Union(T, failed) and functions like subtractIfCan. Using
exceptions we could make things simpler.
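The partiality then shows up at every call site; a sketch using
subtractIfCan on NonNegativeInteger (the function safeSub is
hypothetical):

    safeSub(m : NonNegativeInteger,
            n : NonNegativeInteger) : NonNegativeInteger ==
        -- subtractIfCan returns Union(NonNegativeInteger, "failed")
        d := subtractIfCan(m, n)
        d case "failed" => error "difference is negative"
        d :: NonNegativeInteger

With exceptions, subtraction could simply signal an error and the
Union plumbing would disappear.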

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 27, 2009, 11:43:12 AM3/27/09
to fricas...@googlegroups.com

This looks like an effect of the recent patch converting a few
functions to Boot. AFAICS compSLAM is buggy and unused. sbcl printed
an error message and produced an object file. It seems that gcl
rejects the file because of a bug in the function. I will remove
compSLAM; this should fix the problem.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 27, 2009, 1:00:41 PM3/27/09
to fricas...@googlegroups.com
Bill Page wrote:
>
> Waldek,
>
> I am trying to re-build fricas from the most recent sources in trunk
>
>
> but I get the following error early in the build:
>
> ...
> End of Pass 2.
> OPTIMIZE levels: Safety=1 (No runtime error checking), Space=0, Speed=3
> Finished compiling /home/wspage/fricas-build/src/interp/slam.o.
> Loading slam.o
> start address -T 0x88493c0 Finished loading slam.o
> Compiling g-boot.clisp.
> ; (DEFUN |compSLAM| ...) is being compiled.
> ;;; The function (LIST |g2|) is illegal.
> ; (DEFUN |compTran1| ...) is being compiled.
> ;; The variable X is undefined.
> ;; The compiler will assume this variable is a global.
> No FASL generated.
>

I have now committed a patch removing compSLAM. After that, trunk
bootstrapped without trouble using gcl.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 27, 2009, 5:27:59 PM3/27/09
to fricas...@googlegroups.com
> Bill Page wrote:
>>
>> At first glance this error seems unrelated to the patch. Is there some
>> other problem with the current sources in trunk?
>>
>> Any suggestions?
>>
>
On Fri, Mar 27, 2009 at 11:43 AM, Waldek Hebisch wrote:
> This looks like effect of recenet patch converting few functions
> to Boot.  AFAICS compSLAM is buggy and unused.  sbcl printed
> error message and produced object file.  It seems that gcl
> rejects the file because of bug I the function.  I will remove
> compSLAM, this should fix the problem.
>
> --

The patch to eliminate compSLAM worked fine and now the FriCAS build
completes normally. Now when compiling 'intef.spad.pamphlet' for
example, I get the following warnings:

[3] tanint: lambda using #1, type (Mapping
(SparseUnivariatePolynomial F) (SparseUnivariatePolynomial F))
[4] tanint: lambda using #1, type (Mapping F F)
[5] tanint: lambda using #1, type (Mapping (Union (List F)
failed) (Integer) F F)
[6] tanint: lambda using #1, type (Mapping (Union (Record (:
mainpart F) (: limitedlogs (List (Record (: coeff F) (: logand F)))))
failed) F (List F))
[7] tanint: lambda using #1, type (Mapping (Union (Record (:
ratpart F) (: coeff F)) failed) F F)
[8] tanint: lambda using #1, type (Mapping F (Fraction
(SparseUnivariatePolynomial F)))

which is exactly what I wanted! :-) That's great.

Thank you very much for implementing this. I strongly recommend that
you include it in the mainline.

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 27, 2009, 6:37:33 PM3/27/09
to fricas...@googlegroups.com
Bill Page wrote:
>
> Thank you very much for implementing this. I strongly recommend that
> you include it in the mainline.
>

Actually, I have noticed that the patch has a problem: when we need
backtracking to find the correct type, apparently each time we
backtrack we get a separate warning. Worse, we get different types,
and only one is correct. In Ralf's example we have:

[2] normPolynomial: lambda using #1, type (Mapping R S)
[3] normPolynomial: lambda using #1, type (Mapping R R)
[4] normPolynomial: lambda using #1, type (Mapping (Integer) (Integer))
[5] normPolynomial: lambda using #1, type (Mapping S R)
[6] normPolynomial: lambda using #1, type (Mapping S S)

AFAICS type number 5 is correct; the others are bogus.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Waldek Hebisch

unread,
Mar 27, 2009, 8:40:34 PM3/27/09
to fricas...@googlegroups.com
I wrote:
>
> Done. zerodim contained #1 only in messages so actually it
> was less work than I thought. I will be doing c*.
>

Done. I will be doing d*. AFAICS now we have less than 1000 places
to convert -- grep shows more, but some are false matches (mostly
in comments).

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Bill Page

unread,
Mar 27, 2009, 10:20:57 PM3/27/09
to fricas...@googlegroups.com

Oh, I am sorry but I thought that was the intended behaviour -
although of course like a lot of the output of the SPAD compiler, it
could certainly be presented in a much more "user friendly" manner. I
think it is potentially very useful to the programmer for a trace of
the backtracking results to be shown. It would be even better (but
probably too computationally intensive) if the backtracking process
continued until it exhausted all possibilies - after all, what the
compiler eventually chooses may not be what the programmer actually
had in mind.

Really, what I would like is for these same warnings to be generated
in the case of the +-> construction, where types are only partially
specified or not specified at all.

Regards,
Bill Page.

Bill Page

unread,
Mar 28, 2009, 1:20:42 PM3/28/09
to fricas...@googlegroups.com
Waldek, Ralf;

I just committed my changes to the algebra files: expr* int* kl* poly*
multpoly* to new-lambda. I am sorry that I took longer than expected.
I found some of the integration domains especially difficult. I doubt
that I could have completed this without the help of the patch by
Waldek that displays the signatures chosen by the compiler by
backtracking on the original #1-type lambda expressions. I suppose
that it is fair to conclude that if the conversion had been easy, then
there might not have been much gained, but I would say that the
results after the conversion to the new lambda form with fully
specified types is definitely more "readable" than the old form.

Regards,
Bill Page.

Ralf Hemmecke

unread,
Mar 28, 2009, 1:43:30 PM3/28/09
to fricas...@googlegroups.com
> I suppose that it is fair to conclude that if the conversion had been
> easy, then there might not have been much gained, but I would say
> that the results after the conversion to the new lambda form with
> fully specified types is definitely more "readable" than the old
> form.

Ooops... looks like you understand my position now.

From what I have seen in your patch, you also added full types in some
(but not all) cases.

Should we perhaps add full types in a second round? I think that will
take quite some time, but actually it makes code more readable.

Ralf

Bill Page

unread,
Mar 28, 2009, 2:59:49 PM3/28/09
to fricas...@googlegroups.com
On Sat, Mar 28, 2009 at 1:43 PM, Ralf Hemmecke wrote:
>
>> I suppose that it is fair to conclude that if the conversion had been
>> easy, then there might not have been much gained, but  I would say
>> that the results after the conversion to the new lambda form with
>> fully specified types is definitely more "readable" than the old
>> form.
>
> Ooops... looks like you understand my position now.
>
Yes, I now agree with you that (at least in most cases) full types
should be provided in lambda expressions.

>  From what I have seen in your patch, you also added full types in some
> (but not all cases).
>

That is true. In the first pass I compiled the modified source without
adding type information, using just ')compile ...' in a fully built
FriCAS. There were only a few cases where I had to add type
information for the compile to succeed. But after completing this
process, I discovered that some code that compiled in a fully-built
FriCAS did not compile when building FriCAS from scratch. Perhaps this
is because of differences in the database during the bootstrap. In the
cases that failed, adding full type information solved the problem. So
far I have not gone back and added this additional type information to
those cases that built without error.

> Should we perhaps add full types in a second round? I think that
> will take quite some time, but actually it makes code more
> readable.
>

By using Waldek's #1 warning patch (modified to work with incompletely
specified +-> notation) it is actually quite easy to add this missing
information. The most time consuming part is reverse transcription of
type expressions back to the common abbreviations (macros) usually
defined elsewhere in the same source code file. I think this is
necessary both for brevity and for the logical consistency of the
source.

Although it isn't strictly necessary, I would be in favor of adding
full type information during this phase of the project.

Regards,
Bill Page.

Waldek Hebisch

unread,
Mar 30, 2009, 5:37:14 AM3/30/09
to fricas...@googlegroups.com
I wrote:
>
> Done. I will be doing d*.

d* is done. I will be doing the remaining part of e* and f*.

--
Waldek Hebisch
heb...@math.uni.wroc.pl

Ralf Hemmecke

unread,
Apr 5, 2009, 5:35:49 PM4/5/09
to fricas...@googlegroups.com
>>>> However, the example
>>>> clearly shows that such overloading appear quite naturally and
>>>> it would be hard to avoid them. One possibility is to request
>>>> that one of versions have a guard like:
>>>>
>>>> if not(R is Integer) then
>>>> (r: R) * (x: %): % == (r::%) * x
>> I am strongly against this. Suppose you do *not* have
>>
>> *: (Integer, %) -> %
>>
>> in the beginning, but you have at other parts in the Algebra something like
>>
>> *: (R, %) -> %
>>
>> I you later introduce *: (Integer, %) -> % into the library, you have to
>> add "if not(R is Integer) then" in maybe *several* places of the library
>> sources. Additionally, it makes the source code look ugly.
>>
>
> If you have '*' defined in different places, then Spad has rules
> about which one will be chosen.
> It still may not be what you want,
> but at least the result is uniquely determined.

SPAD has rules? Or the SPAD compiler? That the SPAD compiler should not
be non-deterministic is somehow clear, but where do I find these rules
for the SPAD language?

> And if Spad
> resolution is incorrect, it probably means a wrong inheritance
> hierarchy (trying to inherit too much).

Oh, you probably don't want me to dig out an example where the
programmer has no control over the way the inheritance is done.
Category defaults can cause trouble if one is not careful enough.
Aldor is quite careful with the inheritance of category defaults: it
simply leaves the inheritance order unspecified in conflicting cases,
which basically says one should not write such programs.

> I think in the
> natural cases both '*' will come from a single domain
> or a single category default.

You may be right, but somehow I would like to know which implementation
of an operation is chosen if setting a domain parameter causes two
signatures to collapse to one.

> If you want two operations you probably should use different names,
> otherwise the notation becomes too confusing.

OK. Now I am slowly starting to want the ability to specify new Unicode operators.

Then I would have defined

\bullet: (Integer, %) -> % -- integer action on a ring
* : (%, %) -> % -- ring multiplication

And it would be nice if \bullet is infix (at least in the user
interface) and shows as a fat dot like in LaTeX.

> Still I think that re-using symbol in case above is bad. In
> mathematics such overloadings are in common practice because
> readers have to do complex reasonings anyway and typically
> one has very specific context which decides which one makes
> sense. But in programming we have much less specific context,
> so notation should be more systematic.

Exactly. We should do as in LaTeX and give the operations good names
that are associated with their actual meaning. How they show in
different contexts (for a user sitting in front of the system) is a
matter of taste.

Dream, dream... Some people like to see \cdot for multiplication, some
like \circ depending on context, so the user interface should be able
to switch contexts and show and handle the same operation with
different symbols.

We want to build a system for doing mathematics, not to change
mathematicians so that they are able to use the system.
