
Jan 3, 2008, 12:52:55 PM

to fricas-devel

I would like to commit the patch proposed in #183. Is this OK for everybody?

Martin

Jan 3, 2008, 1:09:14 PM

to fricas...@googlegroups.com

Same for

#313, i.e., restrict sqrt(%, NNI) to sqrt(%, PI)

#227, i.e., remove random()$INS and random()$QFCAT

(they do not make sense! and when something similar is needed, it is provided by random()$ZMOD(largenumber)).

Martin

Jan 3, 2008, 2:08:59 PM

to fricas...@googlegroups.com

Martin Rubey wrote:

>

> I would like to commit the patch proposed in #183. Is this OK for everybody?


OK.

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 3, 2008, 2:47:15 PM

to fricas...@googlegroups.com

> Same for

>

> #313, i.e., restrict sqrt(%, NNI) to sqrt(%, PI)

That looks reasonable, but may require some extra changes to satisfy the typechecker. Alternatively, we can return "failed".

Also, did you check that nobody (of course prime suspect is RECLOS

itself) depends on this behaviour?

> #227 ie. remove random()$INS, and random()$QFCAT

> (they do not make sense! and when something similar is needed, it is

> provided by random()$ZMOD(largenumber).

>

OK.

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 3, 2008, 3:45:41 PM

to fricas...@googlegroups.com

Martin,

These seem fine to me. Have you checked for regressions that might be

caused by these changes?

Regards,

Bill Page.

Jan 3, 2008, 4:18:12 PM

to fricas...@googlegroups.com

"Bill Page" <bill...@newsynthesis.org> writes:

> Martin,

>

> These seem fine to me. Have you checked for regressions that might be

> caused by these changes?

It should go without saying that I'm running the regression tests, and writing new ones if I change behaviour. However, I guess one can never be sure whether something breaks, even more so since we do not test at least most of the top-level functions.

To make things look less catastrophic, I also check the source for occurrences

of functions I modify.

By the way: looking into the output files for the regression tests supplied by

Waldek recently, I found 360 occurrences of "Daly Bug", and it seems to me that

a lot of them are due to syntactically wrong input. For example, in

grpthry.input.pamphlet, we find

g1 : PERMGRPS INT := [ x , y ]

but that should obviously read

g1 : PERMGRP INT := [ x , y ]

Moreover, there seem to be some overlapping tests: grpthry overlaps with repa6,

for example.

Fixing the typos is mostly easy, but it still takes time... I'll try to do one thing at a time.

Martin

Jan 3, 2008, 8:02:51 PM

to fricas...@googlegroups.com

Martin Rubey wrote:

> By the way: looking into the output files for the regression tests supplied by

> Waldek recently, I found 360 occurrences of "Daly Bug", and it seems to me that

> a lot of them are due to syntactically wrong input. For example, in

> grpthry.input.pamphlet, we find

>

> g1 : PERMGRPS INT := [ x , y ]

>

> but that should obviously read

>

> g1 : PERMGRP INT := [ x , y ]

>

> Moreover, there seem to be some overlapping tests: grpthry overlaps with repa6,

> for example.

>

> Fixing the typos is mostly easy, but it still takes time... I'll try to do one

> thing at the time.


I am afraid that simply correcting typos is not the correct thing to do. Namely, error detection, reporting and recovery is an important part of quality. It is normal to have more than 50% of tests devoted to error handling.

In our testsuite only a few tests intentionally contain errors, but for most tests it is not clear if the errors are intentional, and even if the errors are unintentional the test may still be of some use (given that we have only a few proper error tests).

So IMHO the right thing is to make sure that for each error test we also test "normal" behaviour. And we probably should move all error tests to separate files, so that it is easy to check if the errors are intentional.

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 3, 2008, 8:22:16 PM

to fricas...@googlegroups.com

Waldek Hebisch <heb...@math.uni.wroc.pl> writes:

> I am afraid that simply correcting typos is not the correct thing to do.

> Namely, error detection, reporting and recovery is an important part of quality.

> It is normal to have more than 50% of tests devoted to error handling.

Yes, but I'm talking about tests that are definitively *not* testing error

handling.

Most of the obvious errors are in files where:

* files are missing from the distribution

* NAG parts have been removed

* somebody copied example pages from HyperDoc by hand and introduced typos.

I'll post the corrections I made to this list, I hope you will be convinced

then.

> So IMHO the right thing to make sure that for each error test we also test

> "normal" behaviour. And we probably should move all errror tests to separate

> files, so that it is easy to check if errors are intentional.

Well, moreover I think we should try to make the testsuite more specific. I think that even when testing, duplication is not a good thing, because it then becomes easier to overlook those parts that are not tested properly.

However, I think the current strategy is not so bad:

* we have the bug*** files, where we do regression testing against filed bugs.

(Here, I'd like to have your OK to transform all tests so they simply return

true.)

* we have some system tests, testing individual domains or packages, or maybe a

couple of them. These come mostly from HyperDoc. In fact, I think all system tests should go into HyperDoc example pages, and the test files should be created automatically from the HyperDoc sources.

* what we are currently missing are unit tests, i.e., tests that check every exported function. Again, these tests should simply return true. I think this would also be the place to put error handling tests. (Of course, they won't return true...)

Martin

Jan 3, 2008, 11:23:34 PM

to fricas...@googlegroups.com

> * we have the bug*** files, where we do regression testing against filed bugs.

> (Here, I'd like to have your OK to transform all tests so they simply return

> true.)


That is somewhat tricky. For example, given

zero?(complexIntegrate(log(x)/(x-1),x)+dilog x)

1) we may get a nonzero result due to an integration constant

2) in principle complexIntegrate can return an unevaluated integral, but the simplifier can reduce it to zero (essentially by differentiating both terms). This is not a purely theoretical problem -- we already have an implementation of the Risch structure theorem for elementary functions, and extending it to Liouvillian functions (including unevaluated integrals!) seems worthwhile and not so hard to do.

So for tests we need a comparison function which is smart enough to handle legitimate differences (which are mostly trivial) but which does not perform deeper simplifications. My first approximation for such a function would be to convert both arguments to input forms and compare the resulting expression trees.
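The comparison-by-input-form idea can be sketched outside FriCAS. Here is an illustrative Python/SymPy analogue, where srepr (a printable form of the expression tree) plays the role of InputForm; this is a sketch of the idea only, not FriCAS code:

```python
import sympy as sp

def tree_equal(a, b):
    # Compare the expression trees themselves; no simplification is performed.
    return sp.srepr(a) == sp.srepr(b)

x = sp.Symbol('x')

# Trivial differences such as argument order are absorbed by the tree form:
print(tree_equal(x + 1, 1 + x))                                # True

# ...but no deeper simplification happens, so this stays unequal:
print(tree_equal(sp.sin(x)**2 + sp.cos(x)**2, sp.Integer(1)))  # False
```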

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 4, 2008, 3:16:58 AM

to fricas...@googlegroups.com

Waldek Hebisch <heb...@math.uni.wroc.pl> writes:

> [...] For example, given

>

> zero?(complexIntegrate(log(x)/(x-1),x)+dilog x)

>

> 1) we may get nonzero result due to integration constant

well, but then we just notice that the behaviour has changed. A regression

failure does not necessarily mean that we broke something, does it?

> 2) in principle complexIntegrate can return unevaluated integral, but

> simplifier can reduce it to zero (essentially differentiationg

> both terms). This is not a purely theoretical problem -- we

> already have implementation of Risch structure theorem for

> elementary functions, and extending it to Liouvillean functions

> (including unevaluated integrals!) seems worthwile and not so

> hard to do.

OK. I agree that testing for zero? was not so good.

> So for test we need a comparison function which is smart enough to

> handle legitimate differences (which are mostly trivial) but

> which does not perform deeper simplifications. My first approximation

> for such function would be to convert both arguments to input forms

> and compare resulting expression trees.

OK. Although I very much dislike converting to InputForm.

So, how about having a tiny package as follows (untested):

    Check(R: SetCategory): with
        assertEquals: (R, R) -> Boolean
      == add
        if R has ConvertibleTo InputForm then
          assertEquals(ex1, ex2) == (ex1::InputForm = ex2::InputForm)
        else
          assertEquals(a, b) == (a = b)@Boolean

that can be modified as needed. This gives us more freedom, since if I want to test for zero, in AlgebraicNumber for instance, I can still do it...

For Floats: should we set an error margin in Check or individually?

If you like that approach, I can go about converting bugs2007.input. I'll post

the result before doing any commits, of course.

Martin

Jan 4, 2008, 3:43:39 AM

to fricas...@googlegroups.com

Waldek Hebisch <heb...@math.uni.wroc.pl> writes:

> Martin Rubey wrote:

> >

> > I would like to commit the patch proposed in #183. Is this OK for everybody?

>

> OK.

I just committed. I took the opportunity to fix that behaviour also for member? and =. As far as I could check, there should be no other places where short-circuiting is not used. Finally, I removed the duplicate definitions in CLAGG, which were overriding the definitions from HOAGG with identical definitions. At least as far as I can see:

In CLAGG we had

    #c == # parts c
    count(f:S -> Boolean, c:%) == _+/[1 for x in parts c | f x]
    any?(f, c) == _or/[f x for x in parts c]
    every?(f, c) == _and/[f x for x in parts c]

which are identical to the definitions in HOAGG. Since parts is not defined in either CLAGG or HOAGG, I guess that nothing should go wrong. For safety and good practice, I added some regression tests. These do not test, however, whether short-circuiting is performed: I believe that one should not rely on short-circuiting for semantics, only for performance.
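The point can be illustrated in any language with short-circuiting quantifiers; a small Python sketch (Python's any() short-circuits much like our any?; illustrative only, not FriCAS code):

```python
calls = []

def positive(x):
    # Predicate with a visible side effect: it records every element it inspects.
    calls.append(x)
    return x > 0

# any() stops at the first True, so the predicate is NOT applied to every element.
result = any(positive(x) for x in [1, -2, 3])
print(result)   # True
print(calls)    # [1] -- the elements -2 and 3 were never inspected
```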

I'm well aware that this is a slightly (well, but only slightly) more drastic

change than proposed on IssueTracker, so I did a clean build, too.

Martin

Jan 4, 2008, 3:49:41 AM

to fricas...@googlegroups.com

Martin Rubey <martin...@univie.ac.at> writes:

> Waldek Hebisch <heb...@math.uni.wroc.pl> writes:

>

> > I am afraid that simply correcting typos is not the correct thing to do.

[...]

> Yes, but I'm talking about tests that are definitively *not* testing error

> handling.

>

> Most of the obvious errors are in files where:

>

> * files are missing from the distribution

> * NAG parts have been removed

> * somebody copied example pages from HyperDoc by hand and introduced typos.

As announced, here is a first set of corrections. There are more obvious

typos, but I would need to check where the examples come from. On the other

hand, I think putting much more effort into this is a waste of time.

Is committing these OK?

Martin

Index: calculus2.input.pamphlet

===================================================================

--- calculus2.input.pamphlet (revision 164)

+++ calculus2.input.pamphlet (working copy)

@@ -76,8 +76,8 @@

laplace(2/t * (1 - cos(a*t)), t, s)

laplace(exp(-a*t) * sin(b*t) / b**2, t, s)

laplace((cos(a*t) - cos(b*t))/t, t, s)

-laplace(exp(a*t+b)*ei(c*t), t, s)

-laplace(a*ci(b*t) + c*si(d*t), t, s)

+laplace(exp(a*t+b)*Ei(c*t), t, s)

+laplace(a*Ci(b*t) + c*Si(d*t), t, s)

laplace(sin(a*t) - a*t*cos(a*t) + exp(t**2), t, s)

-- Input for page SeriesCoefficientPage

Index: collect.input.pamphlet

===================================================================

--- collect.input.pamphlet (revision 164)

+++ collect.input.pamphlet (working copy)

@@ -29,7 +29,16 @@

d := [i**3 for i in 0..10 by 2 | even? i]

e := reverse [i**3 for i in 10..0 by -2 | even? i]

[x - y for x in d for y in e]

+[x**3 - y for x in b | even? x for y in e]

+f := [i**3 for i in 0..]

+[i**3 for i in 0..10]

+[i**3 for i in 0.. while i < 11]

+[i**3 for i in 0.. for x in 0..10]

+[[i**j for j in 0..3] for i in 0..]

+[[i**j for j in 0..] for i in 0..3]

+brace [i**3 for i in 10..0 by -2]

+

-- Input generated from ContinuedFractionXmpPage

)clear all

@@ -55,14 +64,6 @@

r := ((x - 1) * (x - 2)) / ((x-3) * (x-4))

continuedFraction r

[i*i for i in convergents(z) :: Stream Float]

-[x**3 - y for x in b | even? x for y in e]

-f := [i**3 for i in 0..]

-[i**3 for i in 0..10]

-[i**3 for i in 0.. while i < 11]

-[i**3 for i in 0.. for x in 0..10]

-[[i**j for j in 0..3] for i in 0..]

-[[i**j for j in 0..] for i in 0..3]

-brace [i**3 for i in 10..0 by -2]

-- Input for page ForCollectionDetailPage

)clear all

Index: easter.input.pamphlet

===================================================================

--- easter.input.pamphlet (revision 164)

+++ easter.input.pamphlet (working copy)

@@ -323,14 +323,10 @@

D(y(t), t) = x(t) - y(t)]

-- Try solving this system one equation at a time

solve(system.1, x, t)

-isTimes(subst(%.basis.1, cos(t) = sqrt(1 - sin(t)**2)))

-reduce(*, cons(subst(

- factors(factor(subst(%.1**2, sin(t) = u) :: Polynomial Integer)).1.factor,

- u = sin(t)),

- rest(%)))

-x(t) = C1 * %

+eq1 := x(t) = C1 * %.basis.1

solve(map(e +-> subst(e, %), system.2), y, t)

-y(t) = simplify(%.particular) + C2 * %.basis.1

+eq2 := y(t) = simplify(%.particular) + C2 * %.basis.1

+map(e +-> rightZero eval(e, [eq1, D(eq1, t), eq2, D(eq2, t)]), system)

)clear properties x y

-- ---------- Operators ----------

-- Linear differential operator

Index: exlap.input.pamphlet

===================================================================

--- exlap.input.pamphlet (revision 164)

+++ exlap.input.pamphlet (working copy)

@@ -24,12 +24,12 @@

-- Input for page ExLapSpecial1

)clear all

-laplace(exp(a*t+b)*ei(c*t), t, s)

+laplace(exp(a*t+b)*Ei(c*t), t, s)

-- Input for page ExLapSpecial2

)clear all

-laplace(a*ci(b*t) + c*si(d*t), t, s)

+laplace(a*Ci(b*t) + c*Si(d*t), t, s)

-- Input for page ExLapTrigTrigh

)clear all

Index: exsum.input.pamphlet

===================================================================

--- exsum.input.pamphlet (revision 164)

+++ exsum.input.pamphlet (working copy)

@@ -36,8 +36,8 @@

-- Input for page ExSumListEntriesI

)clear all

-[1..15]

-reduce(+,[1..15])

+[i for i in 1..15]

+reduce(+,[i for i in 1..15])

-- Input for page ExSumApproximateE

)clear all

Index: grpthry.input.pamphlet

===================================================================

--- grpthry.input.pamphlet (revision 164)

+++ grpthry.input.pamphlet (working copy)

@@ -17,28 +17,28 @@

x : PERM INT := [[1,3,5],[7,11,9]]

y : PERM INT := [[3,5,7,9]]

z : PERM INT := [1,3,11]

-g1 : PERMGRPS INT := [ x , y ]

-g2 : PERMGRPS INT := [ x , z ]

-g3 : PERMGRPS INT := [ y , z ]

+g1 : PERMGRP INT := [ x , y ]

+g2 : PERMGRP INT := [ x , z ]

+g3 : PERMGRP INT := [ y , z ]

order g1

degree g3

movedPoints g2

orbit (g1, 3)

orbits g3

member? ( y , g2 )

-)sh PERMGRPS

+)sh PERMGRP

-- Input for page IrrRepSymPage

)clear all

ptn9 := partitions 9

-map(dimIrrRepSym, ptn9)

+map(dimensionOfIrreducibleRepresentation, ptn9)

yt := listYoungTableaus [4,2]; yt :: (LIST TABLEAU I)

-r1 := irrRepSymNat([4,2],[1,2,4,5,3,6])

-r2 := irrRepSymNat([4,2],[3,2,1,5,6,4])

-r3 := irrRepSymNat([4,2],[4,2,1,3,6,5])

+r1 := irreducibleRepresentation([4,2],[1,2,4,5,3,6])

+r2 := irreducibleRepresentation([4,2],[3,2,1,5,6,4])

+r3 := irreducibleRepresentation([4,2],[4,2,1,3,6,5])

(r3 = r1*r2) :: Boolean

-irrRepSymNat [4,4,1]

+irreducibleRepresentation [4,4,1]

-- Input for page REP1Page

)clear all

@@ -54,7 +54,8 @@

-- Input for page REP2Page

)clear all

-r0 := irrRepSymNat [2,2,2,1,1]; r28 := meatAxe (r0::(LIST MATRIX PF 2))

+r0 := irreducibleRepresentation [2,2,2,1,1];

+r28 := meatAxe (r0::(LIST MATRIX PF 2))

areEquivalent? (r28.1, r28.2)

meatAxe r28.2

isAbsolutelyIrreducible? r28.2

@@ -85,7 +86,7 @@

sp0 := meatAxe (pRA6::(List Matrix PF 2))

sp1 := meatAxe sp0.1

isAbsolutelyIrreducible? sp1.2

-d2211 := irrRepSymNat ([2,2,1,1],genA6)

+d2211 := irreducibleRepresentation([2,2,1,1],genA6)

d2211m2 := (d2211::(List Matrix PF 2)); sp2 := meatAxe d2211m2

isAbsolutelyIrreducible? sp2.1

areEquivalent? (sp2.1, sp1.2)

Index: knot2.input.pamphlet

===================================================================

--- knot2.input.pamphlet (revision 164)

+++ knot2.input.pamphlet (working copy)

@@ -16,7 +16,7 @@

)set messages time on

-- this is the color function, nothing elaborate for now

-f(x:SF):SF == x

+-- f(x:SF):SF == x

--------------------------------------------------------------

-- Creation of the knot!!! --

@@ -29,15 +29,17 @@

PQ := p/q

l := lcm(p, q) quo p

-maxRange := (odd? l => l * %pi::SF; 2 * l * %pi::SF)

+maxRange := (odd? l => l * %pi; 2 * l * %pi)

theRange := 0..maxRange

+draw(curve(sin t * cos(PQ*t),cos t * cos(PQ*t),cos t * sin(PQ*t)), t=theRange,

+_ tubeRadius==0.1)

-- create the knot

-knot:TUBE := tubePlot(sin t * cos(PQ*t),cos t * cos(PQ*t),cos t * sin(PQ*t),

- f, theRange, 0.1::SF, 6, "open" )

+-- knot:TUBE Plot3D := tubePlot(sin t * cos(PQ*t),cos t * cos(PQ*t),cos t * sin(PQ*t),

+-- f, theRange, 0.1::SF, 6, "open" )$ExpressionTubePlot

-- make a viewport out of it

-makeViewport3D(knot, concat ["knot",p::String,q::String])$VIEW3D

+--makeViewport3D(knot, concat ["knot",p::String,q::String])$VIEW3D

@

\eject

\begin{thebibliography}{99}

Index: pat.input.pamphlet

===================================================================

--- pat.input.pamphlet (revision 164)

+++ pat.input.pamphlet (working copy)

@@ -14,7 +14,6 @@

-- Input for page PatternMatching

)clear all

-rule square(x) == x*x

fact(n | n > 0) == n * fact(n - 1)

fact(0) == 1

f('A) == 1

Index: repa6.input.pamphlet

===================================================================

--- repa6.input.pamphlet (revision 164)

+++ repa6.input.pamphlet (working copy)

@@ -10,7 +10,7 @@

\tableofcontents

\eject

<<*>>=

-)cls

+)cl all

-- This file demonstrates Representation Theory in Scratchpad

-- using the packages REP1, REP2, IRSN and SGCF, which are the

@@ -74,11 +74,11 @@

-- lambda : PRTITION := partition [2,2,1,1]

lambda := [2,2,1,1]

-dimIrrRepSym lambda

+dimensionOfIrreducibleRepresentation lambda

-- now create the restriction to A6:

-d2211 := irrRepSymNat(lambda, genA6)

+d2211 := irreducibleRepresentation(lambda, genA6)

-- ... and d2211m2 is the representation over PrimeField 2:

Jan 4, 2008, 6:20:12 AM

to fricas...@googlegroups.com, William Sit

Dear factorisation gurus,

I have nearly finished with

Martin Rubey <martin...@univie.ac.at> writes:

> #227 ie. remove random()$INS, and random()$QFCAT

> (they do not make sense! and when something similar is needed, it is

> provided by random()$ZMOD(largenumber).

In fact there are only three (!) spots where random()$Integer (which really is random(2**26)$Integer) or random(100)$Integer is used, namely in

PFBRU PolynomialFactorizationByRecursionUnivariate

PFBR PolynomialFactorizationByRecursion

GENPGCD GeneralPolynomialGcdPackage

The affected code in PFBRU reads as below, where

randomR == (random()$Integer)::R, if R does not have random()$R. If R has

StepThrough, randomR is also defined, but in fact not used, as far as I can

see. In PFBR the code is very similar, except that random(100)$Integer is used.

In GENPGCD, the code is more involved. Again, random(100)$Integer is used.

My question: does anybody know what the random function buys us? Does it

really make sense to use random(100)$Integer? That looks like a fairly

arbitrary choice to me... Are there recommendations for a good choice of a

random element?

Martin

    chooseFSQViableSubstitutions(pp) ==
      substns: R
      ppR: SupR
      while true repeat
        substns := randomR()
        zero? elt(leadingCoefficient pp, substns) => "next"
        ppR := map(elt(#1, substns), pp)
        degree gcd(ppR, differentiate ppR) > 0 => "next"
        leave
      [substns, ppR]
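The same retry loop can be sketched in Python with SymPy; the evaluation range -50..50 and the example polynomial are hypothetical, chosen only to illustrate the idea (this is not the actual PFBRU code):

```python
import random
import sympy as sp

x, y = sp.symbols('x y')

def choose_viable_substitution(p, max_tries=100):
    # Draw random values for y until the leading coefficient (in x) survives
    # and the substituted polynomial is squarefree -- the two "next" conditions.
    lc = sp.LC(p, x)
    for _ in range(max_tries):
        s = random.randrange(-50, 50)            # hypothetical range
        if lc.subs(y, s) == 0:
            continue                             # "next": leading coefficient vanished
        pe = sp.expand(p.subs(y, s))
        if sp.degree(sp.gcd(pe, sp.diff(pe, x)), x) > 0:
            continue                             # "next": image is not squarefree
        return s, pe
    raise RuntimeError("no viable substitution found")

p = (y - 1)*x**2 + (y**2 - 4)*x + 1
s, pe = choose_viable_substitution(p)
```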

[...]

    if R has StepThrough then
      factorSFBRlcUnit(pp) ==
        val: R := init()
        while true repeat
          tempAns := factorSFBRlcUnitInner(pp, val)
          not (tempAns case "failed") => return tempAns
          val1 := nextItem val
          val1 case "failed" =>
            error "at this point, we know we have a finite field"
          val := val1
    else
      factorSFBRlcUnit(pp) ==
        val: R := randomR()
        while true repeat
          tempAns := factorSFBRlcUnitInner(pp, val)
          not (tempAns case "failed") => return tempAns
          val := randomR()

Jan 4, 2008, 6:53:41 AM

to fricas...@googlegroups.com

Another one: #309:

as of now:

partialFraction((x+a)/(x*(x**3+(b+c)*x**2+b*c*x)),x)

treats x as a FR FRAC POLY INT. If we expose PFRPAC as suggested by Christian

Sievers, the desired partialFraction is used. I suggested to rename the

operation, but now I think exposing should be enough.

Any comments, or can I go ahead?

Martin

Jan 4, 2008, 7:59:36 AM

to Martin Rubey, fricas...@googlegroups.com

Martin Rubey <martin...@univie.ac.at> writes:

> As announced, here is a first set of corrections.

Here is a second correction: the file realclos.input.pamphlet contains only a subset of the tests in reclos.input.pamphlet (and no additional documentation), so I propose to delete it.

Martin

Jan 4, 2008, 11:22:54 AM

to fricas...@googlegroups.com

On 04 Jan 2008 09:43:39 +0100, Martin Rubey wrote:

> I just committed. I took the opportunity to fix that behaviour

> also for member? and =. As far as I could check, there should

> be no other places where short-circuiting is not used.

>...

> I believe that one should not rely on short-circuiting for

> semantics, only for performance.

I don't understand this last comment. Obviously the change you made

affects the semantics of the 'any?' and 'every?' operations. You can

not prevent someone from using these exported operations with

functions that have side-effects, so you *must* document that fact

that 'every?' will not evaluate 'f' for all parts of D:

every?(f:(D3 -> Boolean),x:D) -> Boolean

That is now part of its semantics. If there had been any places in

Axiom where it was required that f be called for each x regardless of

whether 'true' or 'false' was returned then this change would have

broken something, but you said that you had checked that in all cases

where 'any?' and 'every?' were used in the existing code this

assumption was not made. So now we just have to make sure that no one

expects this behaviour in the future.

Regards,

Bill Page.

Jan 4, 2008, 4:07:36 PM

to fricas...@googlegroups.com, William Sit

Martin Rubey <martin...@univie.ac.at> writes:

> Dear factorisation gurus,

>

> I have nearly finished with

>

> Martin Rubey <martin...@univie.ac.at> writes:

>

> > #227 ie. remove random()$INS, and random()$QFCAT

> > (they do not make sense! and when something similar is needed, it is

> > provided by random()$ZMOD(largenumber).

>

> In fact there are only three (!) spots where random()$Integer which really is

> random(2**26)$Integer) or random(100)$Integer is used, namely in

>

> PFBRU PolynomialFactorizationByRecursionUnivariate

> PFBR PolynomialFactorizationByRecursion

> GENPGCD GeneralPolynomialGcdPackage

Well, meanwhile I have finished this. And I found out that there are some more places where something like random(23)$Integer (i.e., with some arbitrary-looking number hardcoded) occurs.

I am not sure how to proceed. I do not think this is ready for a commit,

especially since I could not get rid of random()$Integer anyway.

What I could commit, however, would be those changes that replace unnecessary calls like

random()$Integer rem size()$%

by

random(size()$%)$Integer

I do think that this makes the code quite a bit clearer.

Comments?

Martin

Jan 5, 2008, 11:24:23 AM

to fricas...@googlegroups.com

Dear all,

I just found another place where a strange random() function is used, this time

random()$FRAC INT, namely in BOUNDZRO.

I must say that random()$FRAC INT is especially strange: it is the quotient of two random()$INT s, each uniform on [0, 2^26]. (I forgot what distribution that gives, but it should be easy to determine...)

"Especially strange", because this random number is computed by dividing two random()$INT s, which implies that there is almost certainly a gcd computation invoked.

I wonder what distribution one really wants for all these things. Are the distributions really arbitrary, or does experience show that some distributions are "better"?
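The distribution of such a quotient is easy to check numerically. An illustrative Python sketch (the range 2^26 is the one mentioned above; this is not the FriCAS implementation):

```python
import random
from fractions import Fraction

N = 2**26   # the range of random()$INT mentioned above

def random_fraction():
    # Quotient of two uniform integers; Fraction() reduces by a gcd,
    # just as the thread suspects the FriCAS code does.
    return Fraction(random.randrange(1, N), random.randrange(1, N))

samples = [random_fraction() for _ in range(100_000)]

# The quotient X/Y of two uniforms is far from uniform: roughly half the
# mass lies below 1, with a long heavy tail above 1.
below_one = sum(1 for s in samples if s < 1) / len(samples)
print(round(below_one, 2))
```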

Martin

Jan 6, 2008, 11:22:19 AM

to fricas...@googlegroups.com

You probably know this, but: in many places that you looked at we try to move the problem to a lower dimension by substituting constants in place of variables. There is a notion of "bad reduction", where solving the lower-dimensional problem does not help in solving the original one. What we really want is a low probability of bad reduction -- so the exact distribution does not matter, as long as the set of bad reductions has low probability.

Note that what counts as a "bad reduction" depends on the algorithm. Estimating the (size of the) set of bad reductions is an important part of the analysis of the algorithm -- however such estimates are often too pessimistic. In practice we want to have enough randomness to have a low probability of bad reductions, but we also want to limit coefficient growth (so we prefer small numbers). One possibility is to try small ranges first and enlarge the range if a small range does not work.

Coming back to FriCAS code, it is likely that in many places

where you found fixed ranges they are just bugs: the ranges

look way too small.

I can see a few explanations why such code is present. First, the code may be unused, but required by the type checker (I would prefer to just call 'error' in such places). Second, debugging using small ranges is easier, so when coding it is usual to have very small ranges. Unfortunately, such a debugging version may end up being released. Third, somebody developing an algorithm for a given domain, instead of developing a proper generator of random elements for a different domain, may just use a "fake" which works on his/her test cases (but is wrong in general). Finally, somebody may have tuned the code to run fast on small inputs, ignoring correctness for large inputs.

Note that even if the distribution used is very wrong, the code may still pass several tests, which lowers the motivation to get such code right.

Now, I believe that 'bringDown' in BOUNDZRO is wrong. Namely, once we have algebraic dependencies, the representation of expressions is no longer canonical: so 'bringDown' is _not defined_ at the math level and in particular is not a Z-linear map -- this in principle could be corrected by first bringing the expression to canonical form (then Bronstein's argument would work).

If we ignore the problem with algebraic dependencies, 'bringDown' has another problem: we may get division by zero (which causes an error with no way to recover). So Bronstein probably tried to make the probability of zero division very low -- using a quotient means that instead of 2^26 values we get closer to 2^50 distinct values. Another speculation is that in practice integer substitutions are much more likely to lead to zero division than rational ones.

Concerning the correctness of 'bringDown': I believe that the only bad effect is that we may miss some elementary integral (or solution to a Risch differential equation). This can only happen in rather complicated cases where we are likely to run out of resources anyway, so it is not worse than other bugs... In other words: 'bringDown' probably belongs to those rare cases where it makes sense to document bugs.

About 'randomR' in pfbr.spad: I suspect that the definition would not work for R being 'Fraction(Polynomial PF(2))' -- we seem to be getting only two values...

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 6, 2008, 11:33:05 AM

to fricas...@googlegroups.com

Martin Rubey wrote:

> What I could commit however, would be those changes that replace unnecessary

> calls like

>

> random()$Integer rem size()$%

>

> by

>

> random(size()$%)$Integer

>

> I do think that this makes the code quite a bit clearer.


Not only clearer: at least in principle 'random(size()$%)$Integer'

should be able to produce values larger than 2^26, which may

be crucial for correctness. So I support this change.
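A small Python sketch of the difference (old_style/new_style are hypothetical analogues of the two FriCAS calls, for illustration only):

```python
import random

SMALL_RANGE = 2**26   # the fixed range of the old random()$Integer

def old_style(size):
    # Analogue of 'random()$Integer rem size': draw from a fixed small range, then reduce.
    return random.randrange(SMALL_RANGE) % size

def new_style(size):
    # Analogue of 'random(size)$Integer': draw directly from the requested range.
    return random.randrange(size)

size = 2**30
# The old style can never produce a value >= 2**26, no matter how large size is:
old_max = max(old_style(size) for _ in range(1000))
print(old_max < SMALL_RANGE)   # True

# The new style covers the whole requested range:
new_hits_high = any(new_style(size) >= SMALL_RANGE for _ in range(1000))
print(new_hits_high)
```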

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 6, 2008, 2:23:52 PM

to fricas...@googlegroups.com

Waldek Hebisch <heb...@math.uni.wroc.pl> writes:

Oh, that's very good news: I was already quite frustrated by the mess (especially since I do not understand *anything* there), and thought I had wasted two days trying to find the places where random()$Integer et al. is used. It proved trickier than I would have believed; it seems that I do not master grep well enough :-)

So, as soon as I can, I'll prepare a patch implementing the change above, and

-- if you agree -- commenting all the other places (i.e., random(100),

random()$Integer, random()$Fraction Integer, random()$SingleInteger) with

-- FIXME: strange random distribution used.

However, I think removing random()$INT et al. entirely is a bit premature: I do not understand its usage well enough, so I'd be happier if you'd perform such changes. Moreover, they do not seem urgent.

(I'd really rather have you concentrate on the proper integration of your Gruntz implementation, for example. That looks really urgent to me.)

Again, many thanks for your comments, you made my day!

Martin

Jan 6, 2008, 6:24:35 PM

to fricas...@googlegroups.com

Martin Rubey wrote:

>

> As announced, here is a first set of corrections. There are more obvious

> typos, but I would need to check where the examples come from. On the other

> hand, I think putting much more effort into this is a waste of time.

>

> Is committing these OK?


Yes, OK (with a little nitpick):

> ===================================================================

> --- repa6.input.pamphlet (revision 164)

> +++ repa6.input.pamphlet (working copy)

> @@ -10,7 +10,7 @@

> \tableofcontents

> \eject

> <<*>>=

> -)cls

> +)cl all

>

Please use full form, ')clear all'. Abbreviated commands look

really cryptic.

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 6, 2008, 6:38:04 PM

to fricas...@googlegroups.com


OK. After your first change calculus.input.pamphlet will be a subset

of calculus2.input.pamphlet (the only difference was that

calculus.input.pamphlet had correct spelling of Ei and Si), so

calculus.input.pamphlet can also be removed.

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 6, 2008, 6:57:18 PM1/6/08

to fricas...@googlegroups.com

OK. In general I would prefer to use different names (which IMHO

reduces confusion) but in this case exposing should work

as well.

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 7, 2008, 3:12:39 AM1/7/08

to fricas...@googlegroups.com

"Bill Page" <bill...@newsynthesis.org> writes:

> > I believe that one should not rely on short-circuiting for

> > semantics, only for performance.

>

> I don't understand this last comment. Obviously the change you made

> affects the semantics of the 'any?' and 'every?' operations. You can

> not prevent someone from using these exported operations with

> functions that have side-effects, so you *must* document that fact

> that 'every?' will not evaluate 'f' for all parts of D:

>

> every?(f:(D3 -> Boolean),x:D) -> Boolean

I did, in fact. The comment was intended as general advice: I think it is a

bad idea to rely on whether or not short-circuiting is done. More precisely,

my (personal!) advice is:

* NEVER use the fact that no short-circuiting is done, EVEN if documented.

* if short-circuiting is advertised, do not rely on any "natural" order.

> So now we just have to make sure that no one expects this behaviour in the

> future.

Exactly. To make sure, I added that to the docstrings.

Martin

Jan 7, 2008, 7:34:26 AM1/7/08

to fricas...@googlegroups.com

On 07 Jan 2008 09:12:39 +0100, Martin Rubey wrote:

>

> "Bill Page" <bill...@newsynthesis.org> writes:

>

> > > I believe that one should not rely on short-circuiting for

> > > semantics, only for performance.

> >

> > I don't understand this last comment. Obviously the change you

> > made affects the semantics of the 'any?' and 'every?' operations.

> > You can not prevent someone from using these exported operations

> > with functions that have side-effects, so you *must* document that

> > fact that 'every?' will not evaluate 'f' for all parts of D:

> >

> > every?(f:(D3 -> Boolean),x:D) -> Boolean

>

> I did, in fact. The comment was intended as general advice: I think

> it is a bad idea to rely on whether or not short-circuiting is done.

> More precisely, my (personal!) advice is:

>

> * NEVER use the fact that no short-circuiting is done, EVEN

> if documented.

>

> * if short-circuiting is advertised, do not rely on any "natural"

> order.

>

>


I think it is important to be explicit about the semantics. There

should be no ambiguity. Advice about good and bad practice is a

separate issue.

> > So now we just have to make sure that no one expects this behaviour in the

> > future.

>

> Exactly. To make sure, I added that to the docstrings.

>

Ok.

Regards,

Bill Page.
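[Editorial aside: the short-circuit semantics discussed in this exchange are easy to demonstrate outside SPAD. Here is a small Python sketch (the names are illustrative, not from the patch) showing why a predicate with side effects makes short-circuiting observable:]

```python
evaluated = []

def positive(x):
    # Side effect: record every element the predicate actually sees.
    evaluated.append(x)
    return x > 1

# Python's any() short-circuits at the first True result, so code that
# relies on the predicate being applied to *every* element is wrong --
# exactly the trap the docstring change warns about.
result = any(positive(x) for x in [1, 2, 3])
assert result is True
assert evaluated == [1, 2]   # 3 was never examined
```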

Jan 7, 2008, 11:01:02 AM1/7/08

to fricas...@googlegroups.com, Waldek Hebisch

Dear Waldek, *

as promised, below a diff of the change I propose.

I am not sure whether we really should deprecate random()$INS and $QFCAT. The

point is, I would rather rename them to something like largeRandomNumber,

because I'd like to reserve random() for the uniform distribution (or possibly:

near-uniform distribution) - to reduce confusion. But I'm not sure whether

this is such a good idea.
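[Editorial aside: the distribution issue behind these changes can be illustrated with a small exact computation, sketched here in Python rather than FriCAS. Reducing a uniform value with rem skews the result toward small residues, while a bounded generator can redraw overhanging values and stay exactly uniform:]

```python
from collections import Counter

# Suppose the underlying generator returns uniform values in 0..15
# (16 values) and we want a value in 0..2 via "rem 3", as the old
# random()$Integer rem n idiom does.
biased = Counter(r % 3 for r in range(16))
assert biased == Counter({0: 6, 1: 5, 2: 5})   # residue 0 occurs more often

# A bounded generator like random(n)$Integer can instead reject values
# above the largest multiple of n and redraw, giving a uniform result.
limit = 16 - (16 % 3)                          # largest multiple of 3 <= 16
uniform = Counter(r % 3 for r in range(16) if r < limit)
assert uniform == Counter({0: 5, 1: 5, 2: 5})
```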

If this diff is OK for you, I'll commit it.

All the best,

Martin

Index: ChangeLog.wh

===================================================================

--- ChangeLog.wh (revision 168)

+++ ChangeLog.wh (working copy)

@@ -1,5 +1,25 @@

2008-01-07 Martin Rubey <martin...@univie.ac.at>

+ * src/algebra/aggcat.spad.pamphlet,

+ src/algebra/boolean.spad.pamphlet,

+ src/algebra/divisor.spad.pamphlet, src/algebra/ffcg.spad.pamphlet,

+ src/algebra/ffpoly.spad.pamphlet, src/algebra/files.spad.pamphlet,

+ src/algebra/fraction.spad.pamphlet,

+ src/algebra/gpgcd.spad.pamphlet,

+ src/algebra/groebsol.spad.pamphlet,

+ src/algebra/idecomp.spad.pamphlet,

+ src/algebra/integer.spad.pamphlet,

+ src/algebra/intfact.spad.pamphlet,

+ src/algebra/lingrob.spad.pamphlet,

+ src/algebra/lodof.spad.pamphlet, src/algebra/mring.spad.pamphlet,

+ src/algebra/permgrps.spad.pamphlet,

+ src/algebra/pfbr.spad.pamphlet, src/algebra/pfo.spad.pamphlet,

+ src/algebra/primelt.spad.pamphlet, src/algebra/rep2.spad.pamphlet,

+ src/algebra/si.spad.pamphlet, src/algebra/string.spad.pamphlet:

+ replace random()$INT rem s with random(s)$INT and similar changes,

+ add comments to places where strange distributions are used.

+ Deprecate random()$QFCAT and random()$INS.

+

* src/algebra/reclos.spad.pamphlet: make sqrt take a positive

second argument, since sqrt(x, 0) doesn't make sense.

Index: src/algebra/aggcat.spad.pamphlet

===================================================================

--- src/algebra/aggcat.spad.pamphlet (revision 168)

+++ src/algebra/aggcat.spad.pamphlet (working copy)

@@ -736,7 +736,7 @@

complement s == difference(universe(), s )

size() == 2 ** size()$S

index i == {index(j::PositiveInteger)$S for j in 1..size()$S | bit?(i-1,j-1)}

- random() == index((random()$Integer rem (size()$% + 1))::PositiveInteger)

+ random() == index((random(size()$%)$Integer + 1)::PositiveInteger)

lookup s ==

n:PositiveInteger := 1

Index: src/algebra/boolean.spad.pamphlet

===================================================================

--- src/algebra/boolean.spad.pamphlet (revision 168)

+++ src/algebra/boolean.spad.pamphlet (working copy)

@@ -147,7 +147,7 @@

a pretend Boolean => 1

2

random() ==

- even?(random()$Integer) => false

+ even?(random(2)$Integer) => false

true

convert(x:%):InputForm ==

Index: src/algebra/divisor.spad.pamphlet

===================================================================

--- src/algebra/divisor.spad.pamphlet (revision 168)

+++ src/algebra/divisor.spad.pamphlet (working copy)

@@ -165,7 +165,7 @@

+/[random()$F * qelt(v, j) for j in minIndex v .. maxIndex v]

else

randomLC(m, v) ==

- +/[(random()$Integer rem m::Integer) * qelt(v, j)

+ +/[random(m)$Integer * qelt(v, j)

for j in minIndex v .. maxIndex v]

minimize i ==

@@ -315,7 +315,7 @@

lr := [i for i in minRowIndex x .. maxRowIndex x]$List(Integer)

for i in 1..(n := binomial(nr, nc)) repeat

(d := determinant x(enumerateBinomial(lr, nc, i), lc)) ^= 0 =>

- j := i + 1 + (random()$Z rem (n - i))

+ j := i + 1 + random(n-i)$Z

return gcd(d, determinant x(enumerateBinomial(lr, nc, j), lc))

0

Index: src/algebra/ffcg.spad.pamphlet

===================================================================

--- src/algebra/ffcg.spad.pamphlet (revision 168)

+++ src/algebra/ffcg.spad.pamphlet (working copy)

@@ -214,9 +214,10 @@

definingPolynomial() == defpol

- random() ==

- positiveRemainder(random()$Rep,sizeFF pretend Rep)$Rep -$Rep 1$Rep

+ random() == random(sizeFF pretend Rep)$Rep -$Rep 1$Rep

+

+

represents(v) ==

u:FFP:=represents(v)$FFP

u =$FFP 0$FFP => 0

Index: src/algebra/ffpoly.spad.pamphlet

===================================================================

--- src/algebra/ffpoly.spad.pamphlet (revision 168)

+++ src/algebra/ffpoly.spad.pamphlet (working copy)

@@ -982,7 +982,7 @@

random(m,n) ==

if m > n then (m,n) := (n,m)

d : NNI := (n - m) :: NNI

- if d > 1 then n := ((random()$I rem (d::PI)) + m) :: PI

+ if d > 1 then n := (random(d)$I + m) :: PI

random(n)

@

Index: src/algebra/files.spad.pamphlet

===================================================================

--- src/algebra/files.spad.pamphlet (revision 168)

+++ src/algebra/files.spad.pamphlet (working copy)

@@ -424,7 +424,7 @@

f.fileIOmode ^= "input" => error ["File not in read state",f]

ks: List Symbol := RKEYIDS(f.fileName)$Lisp

null ks => error ["Attempt to read empty file", f]

- ix := random()$Integer rem #ks

+ ix := random(#ks)$Integer

k: String := PNAME(ks.ix)$Lisp

[k, SPADRREAD(k, f.fileState)$Lisp]

write_!(f, pr) ==

Index: src/algebra/fraction.spad.pamphlet

===================================================================

--- src/algebra/fraction.spad.pamphlet (revision 168)

+++ src/algebra/fraction.spad.pamphlet (working copy)

@@ -155,9 +155,10 @@

fractionPart: % -> %

++ fractionPart(x) returns the fractional part of x.

++ x = wholePart(x) + fractionPart(x)

+-- FIXME: strange random distribution used (#227).

if S has IntegerNumberSystem then

random: () -> %

- ++ random() returns a random fraction.

+ ++ random() returns a random fraction. This function is deprecated.

ceiling : % -> S

++ ceiling(x) returns the smallest integral element above x.

floor: % -> S

@@ -250,6 +251,7 @@

"failed"

retractIfCan(u::S)

+-- FIXME: strange random distribution used (#227).

if S has IntegerNumberSystem then

random():% ==

while zero?(d:=random()$S) repeat d

Index: src/algebra/gpgcd.spad.pamphlet

===================================================================

--- src/algebra/gpgcd.spad.pamphlet (revision 168)

+++ src/algebra/gpgcd.spad.pamphlet (working copy)

@@ -156,7 +156,8 @@

randomCount:=init()

randomCount

else

- randomR() == (random$Integer() rem 100)::R

+-- FIXME: strange random distribution used (#227).

+ randomR() == (random(100)$Integer)::R

---- JHD's local functions ---

gcdSameVariables(p1:SUPP,p2:SUPP,lv:List OV) ==

-- two non-trivial primitive (or, at least, we don't care

Index: src/algebra/groebsol.spad.pamphlet

===================================================================

--- src/algebra/groebsol.spad.pamphlet (revision 168)

+++ src/algebra/groebsol.spad.pamphlet (working copy)

@@ -108,7 +108,7 @@

x := first rlvar;rlvar:=rest rlvar

testfail:=true

for count in 1.. while testfail repeat

- ranvals:L I:=[1+(random()$I rem (count*(# lvar))) for vv in rlvar]

+ ranvals:L I:=[1+random(count*(# lvar))$I for vv in rlvar]

val:=+/[rv*(vv::HDPoly)

for vv in rlvar for rv in ranvals]

val:=val+x::HDPoly

Index: src/algebra/idecomp.spad.pamphlet

===================================================================

--- src/algebra/idecomp.spad.pamphlet (revision 168)

+++ src/algebra/idecomp.spad.pamphlet (working copy)

@@ -146,7 +146,8 @@

pv:DPoly:=0

pw:DPoly:=0

while degree(lf,y)^=1 repeat

- val:=random()$Z rem 23

+-- FIXME: strange random distribution used (#227).

+ val:=random(23)$Z

pv:=px+val*py

pw:=px-val*py

Id:=groebner([(univariate(h,x)).pv for h in Id])

@@ -316,7 +317,8 @@

---- put the ideal in general position ----

genPosLastVar(J:FIdeal,truelist:List OV):GenPos ==

x := last truelist ;lv1:List OV :=remove(x,truelist)

- ranvals:List(Z):=[(random()$Z rem 23) for vv in lv1]

+-- FIXME: strange random distribution used (#227).

+ ranvals:List(Z):=[random(23)$Z for vv in lv1]

val:=_+/[rv*(vv::DPoly) for vv in lv1 for rv in ranvals]

val:=val+(x::DPoly)

[ranvals,groebnerIdeal(groebner([(univariate(p,x)).val

Index: src/algebra/integer.spad.pamphlet

===================================================================

--- src/algebra/integer.spad.pamphlet (revision 168)

+++ src/algebra/integer.spad.pamphlet (working copy)

@@ -180,6 +180,7 @@

[m pretend Matrix(Integer), vec pretend Vector(Integer)]

abs(x) == ABS(x)$Lisp

+-- FIXME: strange random distribution used (#227).

random() == random()$Lisp

random(x) == RANDOM(x)$Lisp

x = y == EQL(x,y)$Lisp

Index: src/algebra/intfact.spad.pamphlet

===================================================================

--- src/algebra/intfact.spad.pamphlet (revision 168)

+++ src/algebra/intfact.spad.pamphlet (working copy)

@@ -395,6 +395,7 @@

-- Pfun(y: I,n: I): I == (y**2 + 5) rem n

PollardSmallFactor(n:I):Union(I,"failed") ==

-- Use the Brent variation

+-- FIXME: strange random distribution used (#227).

x0 := random()$I

m := 100::I

y := x0 rem n

Index: src/algebra/lingrob.spad.pamphlet

===================================================================

--- src/algebra/lingrob.spad.pamphlet (revision 168)

+++ src/algebra/lingrob.spad.pamphlet (working copy)

@@ -264,7 +264,8 @@

rval:List Z :=[]

for ii in 1..(#lvar-1) repeat

c:Z:=0

- while c=0 repeat c:=random()$Z rem 11

+-- FIXME: strange random distribution used (#227).

+ while c=0 repeat c:=random(11)$Z

rval:=concat(c,rval)

nval:DPoly := (last.lvar)::DPoly -

(+/[r*(vv)::DPoly for r in rval for vv in lvar])

@@ -310,7 +311,8 @@

xn:=lvar.last

val := xn::DPoly

nvar1:NNI:=(#lvar-1):NNI

- ll: List Z :=[random()$Z rem 11 for i in 1..nvar1]

+-- FIXME: strange random distribution used (#227).

+ ll: List Z :=[random(11)$Z for i in 1..nvar1]

val:=val+ +/[ll.i*(lvar.i)::DPoly for i in 1..nvar1]

LL:=[elt(univariate(f,xn),val) for f in L]

LL:= groebner(LL)

Index: src/algebra/lodof.spad.pamphlet

===================================================================

--- src/algebra/lodof.spad.pamphlet (revision 168)

+++ src/algebra/lodof.spad.pamphlet (working copy)

@@ -58,7 +58,7 @@

s1 = s2 == s1.bits =$Bits s2.bits

coerce(s:%):OutputForm == brace [i::OutputForm for i in elements s]

- random() == index((1 + (random()$Integer rem size()))::PI)

+ random() == index((random(size())$Integer + 1)::PI)

reallyEnumerate() == [[b, i] for b in enum(m, n, n) for i in 1..]

member?(p, s) == s.bits.p

Index: src/algebra/mring.spad.pamphlet

===================================================================

--- src/algebra/mring.spad.pamphlet (revision 168)

+++ src/algebra/mring.spad.pamphlet (working copy)

@@ -154,8 +154,7 @@

res := res + co * p ** ex

res pretend PositiveInteger

- random() == index( (1+(random()$Integer rem size()$%) )_

- pretend PositiveInteger)$%

+ random() == index((random(size()$%)$Integer + 1)::PositiveInteger)$%

0 == empty()

1 == [[1, 1]]

Index: src/algebra/permgrps.spad.pamphlet

===================================================================

--- src/algebra/permgrps.spad.pamphlet (revision 168)

+++ src/algebra/permgrps.spad.pamphlet (working copy)

@@ -277,16 +277,16 @@

ranelt ( group : L V NNI , word : L L NNI , maxLoops : I ) : REC3 ==

-- generate a "random" element

numberOfGenerators := # group

- randomInteger : I := 1 + (random()$Integer rem numberOfGenerators)

+ randomInteger : I := 1 + random(numberOfGenerators)$Integer

randomElement : V NNI := group.randomInteger

words := nil()$(L NNI)

if wordProblem then words := word.(randomInteger::NNI)

if maxLoops > 0 then

- numberOfLoops : I := 1 + (random()$Integer rem maxLoops)

+ numberOfLoops : I := 1 + random(maxLoops)$Integer

else

numberOfLoops : I := maxLoops

while numberOfLoops > 0 repeat

- randomInteger : I := 1 + (random()$Integer rem numberOfGenerators)

+ randomInteger : I := 1 + random(numberOfGenerators)$Integer

randomElement := times ( group.randomInteger , randomElement )

if wordProblem then words := append ( word.(randomInteger::NNI) , words)

numberOfLoops := numberOfLoops - 1

@@ -650,11 +650,11 @@

maximalNumberOfFactors < 1 => 1$(PERM S)

gp : L PERM S := group.gens

numberOfGenerators := # gp

- randomInteger : I := 1 + (random()$Integer rem numberOfGenerators)

+ randomInteger : I := 1 + random(numberOfGenerators)$Integer

randomElement := gp.randomInteger

- numberOfLoops : I := 1 + (random()$Integer rem maximalNumberOfFactors)

+ numberOfLoops : I := 1 + random(maximalNumberOfFactors)$Integer

while numberOfLoops > 0 repeat

- randomInteger : I := 1 + (random()$Integer rem numberOfGenerators)

+ randomInteger : I := 1 + random(numberOfGenerators)$Integer

randomElement := gp.randomInteger * randomElement

numberOfLoops := numberOfLoops - 1

randomElement

Index: src/algebra/pfbr.spad.pamphlet

===================================================================

--- src/algebra/pfbr.spad.pamphlet (revision 168)

+++ src/algebra/pfbr.spad.pamphlet (working copy)

@@ -199,7 +199,9 @@

randomCount

else if R has random: -> R then

randomR() == random()

- else randomR() == (random()$Integer rem 100)::R

+-- FIXME: strange random distribution used (#227).

+-- cf. PFBR, where random()$Integer is used.

+ else randomR() == (random(100)$Integer)::R

factorByRecursion pp ==

and/[zero? degree u for u in coefficients pp] =>

map(raise,factorPolynomial lower pp)

@@ -416,6 +418,7 @@

val := randomR()

if R has random: -> R then

randomR() == random()

+-- FIXME: strange random distribution used (#227).

else randomR() == (random()$Integer)::R

if R has FiniteFieldCategory then

bivariateSLPEBR(lpolys,pp,v) ==

Index: src/algebra/pfo.spad.pamphlet

===================================================================

--- src/algebra/pfo.spad.pamphlet (revision 168)

+++ src/algebra/pfo.spad.pamphlet (working copy)

@@ -274,6 +274,7 @@

K2Z k ==

has?(operator k, ALGOP) => error "Cannot reduce constant field"

+-- FIXME: strange random distribution used (#227).

(u := search(k, redmap)) case "failed" =>

setelt(redmap, k, random()$Z)::F

u::Z::F

Index: src/algebra/primelt.spad.pamphlet

===================================================================

--- src/algebra/primelt.spad.pamphlet (revision 168)

+++ src/algebra/primelt.spad.pamphlet (working copy)

@@ -67,7 +67,7 @@

innerPrimitiveElement: (List P, List SY, SY) -> REC

multi(p, v) == multivariate(map(#1, p), v)

- randomInts(n, m) == [symmetricRemainder(random()$Integer, m) for i in 1..n]

+ randomInts(n, m) == [symmetricRemainder(random(m)$Integer, m) for i in 1..n]

incl?(a, b) == every?(member?(#1, b), a)

primitiveElement(l, v) == primitiveElement(l, v, new()$SY)

@@ -77,7 +77,7 @@

u := (new()$SY)::P

b := a2::P

for i in 10.. repeat

- c := symmetricRemainder(random()$Integer, i)

+ c := symmetricRemainder(random(i)$Integer, i)

w := u - c * b

r := univariate resultant(eval(p1, a1, w), eval(p2, a1, w), a2)

not zero? r and r = squareFreePart r => return [1, c, r]

Index: src/algebra/rep2.spad.pamphlet

===================================================================

--- src/algebra/rep2.spad.pamphlet (revision 168)

+++ src/algebra/rep2.spad.pamphlet (working copy)

@@ -303,10 +303,10 @@

createRandomElement(aG,algElt) ==

numberOfGenerators : NNI := #aG

-- randomIndex := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

algElt := algElt * aG.randomIndex

-- randomIndxElement := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

algElt + aG.randomIndex

@@ -474,7 +474,7 @@

-- fingerprint.

if randomelements then -- random should not be from I

--randomIndex : I := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1+ random(numberOfGenerators)$Integer

x0 : M R := aG0.randomIndex

x1 : M R := aG1.randomIndex

n : NNI := #row(x0,1) -- degree of representation

@@ -489,11 +489,11 @@

if i = 7 then randomelements := true

if randomelements then

--randomIndex := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1+ random(numberOfGenerators)$Integer

x0 := x0 * aG0.randomIndex

x1 := x1 * aG1.randomIndex

--randomIndex := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

x0 := x0 + aG0.randomIndex

x1 := x1 + aG1.randomIndex

else

@@ -570,7 +570,7 @@

numberOfGenerators : NNI := #aG

-- need a start value for creating random matrices:

-- randomIndex : I := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

x : M R := aG.randomIndex

n : NNI := #row(x,1) -- degree of representation

foundResult : B := false

@@ -581,10 +581,10 @@

-- x_i+1 :=x_i * mr1 + mr2, where mr1 and mr2 are randomly

-- chosen elements form "aG".

-- randomIndex := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

x := x * aG.randomIndex

--randomIndex := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

x := x + aG.randomIndex

-- test whether rank of x is n-1

rk : NNI := rank x

@@ -667,7 +667,7 @@

-- fingerprint.

if randomelements then -- random should not be from I

--randomIndex : I := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

x : M R := algebraGenerators.randomIndex

foundResult : B := false

for i in 1..numberOfTries until foundResult repeat

@@ -680,10 +680,10 @@

if i = 7 then randomelements := true

if randomelements then

--randomIndex := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

x := x * algebraGenerators.randomIndex

--randomIndex := randnum numberOfGenerators

- randomIndex := 1+(random()$Integer rem numberOfGenerators)

+ randomIndex := 1 + random(numberOfGenerators)$Integer

x := x + algebraGenerators.randomIndex

else

x := fingerPrint (i, algebraGenerators.1,_

Index: src/algebra/si.spad.pamphlet

===================================================================

--- src/algebra/si.spad.pamphlet (revision 168)

+++ src/algebra/si.spad.pamphlet (working copy)

@@ -55,8 +55,9 @@

++ rational(n) creates a rational number (see \spadtype{Fraction Integer}).

rationalIfCan: % -> Union(Fraction Integer, "failed")

++ rationalIfCan(n) creates a rational number, or returns "failed" if this is not possible.

+-- FIXME: strange random distribution used (#227).

random : () -> %

- ++ random() creates a random element.

+ ++ random() creates a random element. This function is deprecated.

random : % -> %

++ random(n) creates a random element from 0 to \spad{n-1}.

hash : % -> %

@@ -348,6 +349,7 @@

x pretend %

error "integer too large to represent in a machine word"

+-- FIXME: strange random distribution used (#227).

random() ==

seed := REMAINDER(TIMES(MULTIPLIER,seed)$Lisp,MODULUS)$Lisp

REMAINDER(seed,BASE)$Lisp

Index: src/algebra/string.spad.pamphlet

===================================================================

--- src/algebra/string.spad.pamphlet (revision 168)

+++ src/algebra/string.spad.pamphlet (working copy)

@@ -91,7 +91,7 @@

lookup c == (1 + ord c)::PositiveInteger

char(n:Integer) == n::%

ord c == convert(c)$Rep

- random() == char(random()$Integer rem size())

+ random() == char(random(size())$Integer)

space == QENUM(" ", 0$Lisp)$Lisp

quote == QENUM("_" ", 0$Lisp)$Lisp

escape == QENUM("__ ", 0$Lisp)$Lisp

Jan 7, 2008, 1:33:03 PM1/7/08

to fricas...@googlegroups.com

Martin Rubey wrote:

> as promised, below a diff of the change I propose.

>

> I am not sure whether we really should deprecate random()$INS and $QFCAT. The

> point is, I would rather rename them to something like largeRandomNumber,

> because I'd like to reserve random() for the uniform distribution (or possibly:

> near-uniform distribution) - to reduce confusion. But I'm not sure whether

> this is such a good idea.

>

We certainly need a way to generate some number of distinct elements

from a given domain.

> If this diff is OK for you, I'll commit it.

>

Yes, OK.

--

Waldek Hebisch

heb...@math.uni.wroc.pl

Jan 11, 2008, 8:21:54 AM1/11/08

to fricas...@googlegroups.com

Dear all,

here comes a first attempt at implementing a tiny unit testing facility.

Main drawback: it does not deal with fatalities.

Main goody: easy to use, does statistics

Below is an example session.

Note that )clear all does not reset the statistics, while )clear completely

does.

If you approve, I'd like to convert the bugs****.input files to this new format.

Martin

(8) -> )re AxiomUnit.input

)lib CNTTEST UNITTEST

CountTest is already explicitly exposed in frame initial

CountTest will be automatically loaded when needed from

/users/rubey/martin/Axiom/CNTTEST.NRLIB/code

UnitTest is already explicitly exposed in frame initial

UnitTest will be automatically loaded when needed from

/users/rubey/martin/Axiom/UNITTEST.NRLIB/code

)clear all

All user variables and function definitions have been cleared.

testcase("issue #415")

Type: Void

assertEquals(1/x,0)

(2) false

Type: Boolean

statistics()

Summary:

Testcase issue #415

Number of tests: 1

Number of failed tests: 1

failed tests were:

[1]

Type: Void

assertEquals(1,1)

(4) true

Type: Boolean

statistics()

Summary:

Testcase issue #415

Number of tests: 2

Number of failed tests: 1

failed tests were:

[1]

Type: Void

)clear all

All user variables and function definitions have been cleared.

assertEquals(3,3)

(1) true

Type: Boolean

statistics()

Summary:

Testcase issue #415

Number of tests: 3

Number of failed tests: 1

failed tests were:

[1]

Type: Void

testcase("issue #416")

Type: Void

assertEquals(3,3)

(4) true

Type: Boolean

assertEquals(sin x,exp x)

(5) false

Type: Boolean

assertEquals(log x,exp x)

(6) false

Type: Boolean

statistics()

Summary:

Testcase issue #415

Number of tests: 3

Number of failed tests: 1

failed tests were:

[1]

Testcase issue #416

Number of tests: 3

Number of failed tests: 2

failed tests were:

[2,3]

Type: Void

-------------------------------------------------------------------------------

)abb package CNTTEST CountTest

CountTest(): with

testcase: String -> Void

incPass: () -> Void

incFail: () -> Void

-- incFatal: () -> Void

-- decFatal: () -> Void

statistics: () -> Void

== add

-- number of passed tests, indices of failed tests

TESTCASE ==> Record(total: Integer,

-- fatal: Integer,

-- fatalities: List Integer,

fail: Integer,

failed: List Integer)

tests: AssociationList(String, TESTCASE) := empty()

currentTestcase: String := ""

testcase s ==

currentTestcase := s

insert!([currentTestcase, _

[0, 0, []]$TESTCASE]$Record(key: String, entry: TESTCASE), _

tests)

incPass() ==

cur := tests.currentTestcase

cur.total := cur.total + 1

incFail() ==

cur := tests.currentTestcase

cur.total := cur.total + 1

cur.fail := cur.fail + 1

cur.failed := cons(cur.total, cur.failed)

-- incFatal() ==

-- cur := tests.currentTestcase

-- cur.total := cur.total + 1

--

-- cur.fatal := cur.fatal + 1

-- cur.fatalities := cons(cur.total, cur.fatalities)

-- decFatal() ==

-- cur := tests.currentTestcase

-- cur.total := cur.total - 1

--

-- cur.fatal := cur.fatal - 1

-- cur.fatalities := rest(cur.fatalities)

statistics() ==

messagePrint("Summary:")$OutputForm

for t in reverse parts(tests)@List Record(key: String, entry: TESTCASE) repeat

messagePrint("Testcase " t.key)$OutputForm

messagePrint("Number of tests: " string(t.entry.total)$String)$OutputForm

messagePrint("Number of failed tests: " string(t.entry.fail)$String)$OutputForm

-- messagePrint("Number of fatal tests: " string(t.entry.fatal)$String)$OutputForm

if t.entry.fail > 0 then

messagePrint("failed tests were: ")$OutputForm

output((reverse!(t.entry.failed))::OutputForm)$OutputPackage

-- if t.entry.fatal > 0 then

-- messagePrint("fatal tests were: ")$OutputForm

-- output((reverse!(t.entry.fatalities))::OutputForm)$OutputPackage

)abb package UNITTEST UnitTest

UnitTest(R: SetCategory): with

assertEquals: (R, R) -> Boolean

== add

if R has ConvertibleTo InputForm then

assertEquals(ex1, ex2) ==

if (convert(ex1)@InputForm=convert(ex2)@InputForm)@Boolean

then (incPass()$CountTest; true)

else (incFail()$CountTest; false)

else

assertEquals(a, b) ==

if (a=b)@Boolean

then (incPass()$CountTest; true)

else (incFail()$CountTest; false)

Jan 11, 2008, 4:17:42 PM1/11/08

to fricas...@googlegroups.com

An update:

I refined my package a little, see below for the code. I have also adapted the

Makefiles, and have a fricas that uses the input file below.

I believe it's reasonably good to use now and would like to see it included in

the fricas distribution. As an example, I converted bugs2008.input to use it:

-------------------------------------------------------------------------------

)set break resume

-- the following two commands reduce output to a minimum. This makes the job

-- for norm-out a lot easier.

)set message type off

)set output algebra off

testcase "equality in TBAGG (issue #412)"

R ==> Record(key: Symbol, entry: String)

T ==> AssociationList(Symbol, String)

t1 := construct([[x, "ix"]$R])$T

t2 := construct([[y, "iy"]$R])$T

assertNotEquals(t1, t2)

testcase "expose PartialFractionPackage (issue #309)"

-- note that we cannot easily test for equality of the result of

-- partialFraction itself: the result of partialFraction is of type Any, thus

-- we would have to provide type information when creating the other side of

-- the equation. But then, the bug won't manifest.

f := (x+a)/(x*(x**3+(b+c)*x**2+b*c*x))

assertEquals(numberOfFractionalTerms partialFraction(f, x), 3)

f := 2*x/(x^2-1)

assertEquals(numberOfFractionalTerms partialFraction(f, x), 2)

testcase "loop-bugs: resultants for finite integraldomains (issue #413)"

P ==> UP(x, PF 5)

R ==> Record(coef1: P,coef2: P,resultant: PF 5)

assertEquals(resultantEuclidean(x, x)$PRS(PF 5, P), [0,0,0]$R)

testcase "loop-bugs: ListMultiDictionary for non-ordered SetCategories (issue #414)"

b := empty()$LMDICT(PF 13)

insert!(0, b);

insert!(0, b);

assertEquals(#b, 2)

testcase "from open-axiom: Stack missing exports (issue #415)"

s := stack [1,2,3,1]

assertEquals(parts s, [1,2,3,1])

assertEquals(map(i +-> i^2, s), stack [1,4,9,1])

t := s; r := map!(i +-> i^2, s);

assertEquals(eq?(t, r), true)

)set output algebra on

statistics()

-------------------------------------------------------------------------------

The last command yields the interesting part of bugs2008.output:

Summary:

Testcase: equality in TBAGG (issue #412)

failed tests / total: 0 (1)

Testcase: expose PartialFractionPackage (issue #309)

failed tests / total: 0 (2)

Testcase: loop-bugs: resultants for finite integraldomains (issue #413)

failed tests / total: 0 (1)

Testcase: loop-bugs: ListMultiDictionary for non-ordered SetCategories (issue #414)

failed tests / total: 0 (1)

Testcase: from open-axiom: Stack missing exports (issue #415)

failed tests / total: 0 (3)

Here a demonstration what happens in case of a failed test:

testcase("T1")

assertEquals(1, 1)

assertEquals(sin x, cos x)

assertEquals(log exp x, x)

statistics()

yields

Summary:

Testcase: T1

failed tests / total: 1 (3)

failed tests were:

Index: 2

Output1: sin(x)

Output2: cos(x)

The main problems remaining are:

* how can we detect errors? I really want these to be reported, too. I.e.,

looking at the tail of bugs****.output should tell you how many fatalities

occurred. One possibility would be to say instead of

assertEquals(1, 1)

assertEquals(sin x, cos x)

beginTest(); assertEquals(1, 1); endTest()

beginTest(); assertEquals(sin x, cos x); endTest()

but I decided that this is impractical. Maybe norm-out should rather grep

for "Daly Bug" and list these separately. I think this would be reasonably

easy.

* what to do with lists of floats, etc. I guess the only practical solution

is to make the user transform them into a proper test by invoking round

appropriately, or some such.

* for some reason I could not discover, the pamphlet does not TeX properly.

Help would be greatly appreciated!

Can I commit, and then transform the input files as I go?

Martin

\documentclass{article}

\newcommand{\Axiom}{{\tt Axiom}}

\begin{document}

\title{unittest.spad}

\author{Martin Rubey}

\maketitle

\begin{abstract}

The packages defined in this file enable us to write unit tests for {\Axiom}s

library.

\end{abstract}

\tableofcontents

\section{Usage}

To make writing tests and checking them as easy as possible, we provide three

functions:

\begin{description}

\item[\spadfun{testcase}] introduces a new testcase,

\item[\spadfun{assertEquals}] checks equality of two elements and steps an

appropriate counter, depending on whether equality holds or not, and

\item[\spadfun{statistics}] that outputs for each testcase the number of tests

and the number and indices of failed tests.

\end{description}

Currently, because of limitations of the SPAD language and because we wanted

the interface to be as simple as possible, there is no way to detect whether a

test caused an error.

The system command \verb|)clear completely| resets all the statistics

counters, while \verb|)clear all| leaves them untouched.

Introducing a new testcase calls \verb|)clear all|, so it should not be

necessary to call it manually.
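For example, a minimal session exercising these functions could look as

follows (the title of the testcase is arbitrary):

\begin{verbatim}
testcase "gcd sanity checks"

assertEquals(gcd(12, 18), 6)

assertNotEquals(gcd(12, 18), 4)

statistics()
\end{verbatim}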

\section{package TESTCNT UnittestCount}

<<package TESTCNT UnittestCount>>=

)abbrev package TESTCNT UnittestCount

UnittestCount(): with

testcase: String -> Void

++ testcase s starts a new testcase with s as title. It also calls

++ \axiom{clear all}.

statistics: () -> Void

++ statistics() prints out a summary of the outcome of the testcases so

++ far. Use \axiom{clear completely} to reset the statistics.

incPass: () -> Void

++ incPass is an internal function, that steps the number of passed

++ tests.

incFail: (OutputForm, OutputForm) -> Void

++ incFail is an internal function, that steps the number of failed

++ tests and records their output.

incFail: () -> Void

++ incFail is an internal function, that steps the number of failed

++ tests.

-- incFatal: () -> Void

-- decFatal: () -> Void

== add

-- number of passed tests, indices of failed tests

FAILED ==> Record(index: Integer,

out1: OutputForm,

out2: OutputForm)

TESTCASE ==> Record(total: Integer,

-- fatal: Integer,

-- fatalities: List Integer,

fail: Integer,

failed: List FAILED)

tests: AssociationList(String, TESTCASE) := empty()

currentTestcase: String := ""

testcase s ==

systemCommand("clear all")$MoreSystemCommands

currentTestcase := s

insert!([currentTestcase, _

[0, 0, []]$TESTCASE]$Record(key: String, entry: TESTCASE), _

tests)

incPass() ==

cur := tests.currentTestcase

cur.total := cur.total + 1

incFail() ==

cur := tests.currentTestcase

cur.total := cur.total + 1

cur.fail := cur.fail + 1

cur.failed := cons([cur.total, empty(), empty()]$FAILED, _

cur.failed)

incFail(output1, output2) ==

cur := tests.currentTestcase

cur.total := cur.total + 1

cur.fail := cur.fail + 1

cur.failed := cons([cur.total, output1, output2]$FAILED, _

cur.failed)

-- incFatal() ==

-- cur := tests.currentTestcase

-- cur.total := cur.total + 1

--

-- cur.fatal := cur.fatal + 1

-- cur.fatalities := cons(cur.total, cur.fatalities)

-- decFatal() ==

-- cur := tests.currentTestcase

-- cur.total := cur.total - 1

--

-- cur.fatal := cur.fatal - 1

-- cur.fatalities := rest(cur.fatalities)

statistics() ==

messagePrint("Summary:")$OutputForm

for t in reverse parts(tests)@List Record(key: String, entry: TESTCASE) repeat

messagePrint("Testcase: " t.key)$OutputForm

messagePrint("failed tests / total: " _

string(t.entry.fail)$String " (" _

string(t.entry.total)$String ")")$OutputForm

-- messagePrint("Number of fatal tests: " string(t.entry.fatal)$String)$OutputForm

if t.entry.fail > 0 then

messagePrint("failed tests were: ")$OutputForm

for f in reverse(t.entry.failed) repeat

messagePrint("Index: " string(f.index))$OutputForm

output(hconcat("Output1: ", f.out1)$OutputForm)$OutputPackage

output(hconcat("Output2: ", f.out2)$OutputForm)$OutputPackage

-- if t.entry.fatal > 0 then

-- messagePrint("fatal tests were: ")$OutputForm

-- output((reverse!(t.entry.fatalities))::OutputForm)$OutputPackage

@

\section{package TESTUNIT Unittest}

<<package TESTUNIT Unittest>>=

)abbrev package TESTUNIT Unittest

Unittest(R: BasicType): with

assertEquals: (R, R) -> Boolean

++ assertEquals(ex1, ex2) states that ex1 and ex2 should be equal. To

++ sidestep the possibility that the equality function of the domain R

++ performs some simplifications, we convert ex1 and ex2 to

++ \axiom{InputForm}, if possible

assertNotEquals: (R, R) -> Boolean

++ assertNotEquals(ex1, ex2) states that ex1 and ex2 should be

++ different.

== add

if R has ConvertibleTo InputForm then

assertEquals(ex1, ex2) ==

if (convert(ex1)@InputForm=convert(ex2)@InputForm)@Boolean

then (incPass()$UnittestCount; true)

else

if R has CoercibleTo OutputForm

then incFail(ex1::OutputForm, ex2::OutputForm)$UnittestCount

else incFail()$UnittestCount

false

else

assertEquals(ex1, ex2) ==

if (ex1=ex2)@Boolean

then (incPass()$UnittestCount; true)

else

if R has CoercibleTo OutputForm

then incFail(ex1::OutputForm, ex2::OutputForm)$UnittestCount

else incFail()$UnittestCount

false

assertNotEquals(ex1, ex2) ==

if (ex1=ex2)@Boolean

then

if R has CoercibleTo OutputForm

then incFail(ex1::OutputForm, ex2::OutputForm)$UnittestCount

else incFail()$UnittestCount

false

else (incPass()$UnittestCount; true)

@

\section{License}

<<license>>=

--Copyright (c) 2006-2007, Martin Rubey <Martin...@univie.ac.at>

--

--Redistribution and use in source and binary forms, with or without

--modification, are permitted provided that the following conditions are

--met:

--

-- - Redistributions of source code must retain the above copyright

-- notice, this list of conditions and the following disclaimer.

--

-- - Redistributions in binary form must reproduce the above copyright

-- notice, this list of conditions and the following disclaimer in

-- the documentation and/or other materials provided with the

-- distribution.

--

--THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS

--IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED

--TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A

--PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER

--OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,

--EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,

--PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR

--PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF

--LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING

--NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS

--SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

@

<<*>>=

<<license>>

<<package TESTCNT UnittestCount>>

<<package TESTUNIT Unittest>>

@

\end{document}

Jan 11, 2008, 6:27:32 PM1/11/08

to fricas...@googlegroups.com

Martin Rubey wrote:

>

> I refined my package a little, see below for the code. I have also adapted the

> Makefiles, and have a fricas that uses the input file below.

>

> I believe its reasonable good to use now and would like to see it included in

> the fricas distribution. As an example, I converted bugs2008.input to use it:

>


Martin, this is a move in a good direction. But I think that before we

start large-scale conversion of test files we should improve the

package. Today I am tired; I will give you some details tomorrow.

--

Waldek Hebisch

heb...@math.uni.wroc.pl