I could now remove all the blacklisted domains in the Aldor interface.
https://github.com/hemmecke/fricas/tree/fix-blacklisted-domains
In fact, Aldor helped to discover a bug in two SPAD files which the
SPAD compiler actually should have refused to compile.
See the explanation in the commit message (attached at the end of this mail):
https://github.com/hemmecke/fricas/commit/e68b728e0a5d7491e07f4a45db82e5cf2acf02cd
Although I personally don't like a type that involves #vl, a proper fix
would involve a change in the interface of
DistributedMultivariatePolynomial. That seems to be too involved for the
moment, and maybe isn't wanted by other people.
I would like to make 3 commits as given below. I only have a ChangeLog
entry in the last commit, but I hope it's OK that way.
May I commit?
Ralf
=============================================================
commit 303577fa7632cfe6fc9f14d54ce5611e50043e14 (HEAD, origin/fix-blacklisted-domains)
Author: Ralf Hemmecke <ra...@hemmecke.de>
Date: Sat Dec 31 17:25:22 2011 +0100
forget about blacklisted domains
ChangeLog | 6 ++++++
src/aldor/Makefile.in | 9 ---------
2 files changed, 6 insertions(+), 9 deletions(-)
commit e68b728e0a5d7491e07f4a45db82e5cf2acf02cd
Author: Ralf Hemmecke <ra...@hemmecke.de>
Date: Sun Feb 5 23:57:54 2012 +0100
fix: remove wrong parameter nv
IdealDecompositionPackage and QuasiAlgebraicSet2 used to require two
parameters where the first parameter was a list of variables and the
second parameter was supposed to be the length of this list.
This additional parameter for the list length was seemingly necessary
because of a weakness of the compiler. It no longer is.
In fact, that parameter was conceptually wrong, and compilation should
have been rejected by the SPAD compiler. Formerly we had
F ==> Fraction Integer
Var ==> OrderedVariableList vl
Expon ==> DirectProduct(nv,NNI)
Dpoly ==> DistributedMultivariatePolynomial(vl,F)
QALG ==> QuasiAlgebraicSet(F, Var, Expon, Dpoly)
The 4th parameter of QuasiAlgebraicSet is required to be of type
PolynomialCategory(F,Expon,Var). However, the type of
DistributedMultivariatePolynomial(vl,F) is
    PolynomialCategory(F, DirectProduct(#vl, NonNegativeInteger),
                       OrderedVariableList(vl)) with
        reorder: (%, List Integer) -> %
This was a mismatch in the third parameter, namely the provided
DirectProduct(nv, NNI) vs. the expected
DirectProduct(#vl, NonNegativeInteger).
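After the fix the macros read as follows (a sketch; the exact change is
in the files listed below):

    F     ==> Fraction Integer
    Var   ==> OrderedVariableList vl
    Expon ==> DirectProduct(#vl, NNI)
    Dpoly ==> DistributedMultivariatePolynomial(vl, F)
    QALG  ==> QuasiAlgebraicSet(F, Var, Expon, Dpoly)

Now the third parameter of QuasiAlgebraicSet matches the exponent type
that DistributedMultivariatePolynomial(vl, F) actually provides.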
src/algebra/idecomp.spad.pamphlet | 6 ++----
src/algebra/qalgset.spad.pamphlet | 7 +++----
2 files changed, 5 insertions(+), 8 deletions(-)
commit 9449d68bfd108e70d85378c5d5c3d833f802d8d2
Author: Ralf Hemmecke <ra...@hemmecke.de>
Date: Sun Feb 5 21:51:28 2012 +0100
fix definition of RationalNumber
src/algebra/random.spad.pamphlet | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
OK. However, one remark: ATM the handling of DirectProduct(#vl, ...)
in the Spad compiler is buggy; in particular, the compiler rejects
valid code. I am a bit surprised that the new version compiles,
but this seems to be the principle of bug cancellation in action:
the compiler (wrongly) does not check the types of arguments to types,
so it does not have the opportunity to (wrongly) reject the code.
BTW, rejecting valid code like DirectProduct(#vl, ...) in some
contexts I consider probably the worst currently unfixed bug
in the Spad compiler.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
Well, as I said, I would never design such code. For me the actually
offending place is not the one fixed by my patch but rather
https://github.com/hemmecke/fricas-svn/blob/master/src/algebra/gdpoly.spad.pamphlet#L37
GeneralDistributedMultivariatePolynomial(vl,R,E): public == private where
    vl: List Symbol
    R: Ring
    E: DirectProductCategory(#vl,NonNegativeInteger)
and also
DistributedMultivariatePolynomial(vl,R): public == private where
    vl : List Symbol
    R : Ring
    E ==> DirectProduct(#vl,NonNegativeInteger)
If I had to design this, then I would certainly remove vl from the
arguments and just require E. E then may or may not provide names for the
variables. In fact, polynomials would just be a monoid ring R[E] (ring R,
monoid E) where E happens to have a particular structure that makes it
possible to extract variables and so on.
It's not that I consider the above code invalid, I just don't like that
the return type of DMP depends not only on vl but on a function applied
to vl.
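To sketch what I mean (hypothetical code, not a worked-out design; the
domain name and exports are made up):

    -- Hypothetical: the variable list disappears from the arguments;
    -- E alone parametrizes the domain and may itself know its
    -- variable names.
    MyDMP(R: Ring, E: OrderedAbelianMonoidSup): Ring with
            monomial : (R, E) -> %     -- monoid ring R[E]: R-linear
            coefficient : (%, E) -> R  -- combinations of elements of E
        == add
            ...                        -- implementation omitted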
> I am a bit surprised that the new version compiles,
As you can see in the patch, there was even a comment that some code
would not compile without the nv parameter.
> BTW, rejecting valid code like DirectProduct(#vl, ...) in some
> contexts I consider probably the worst currently unfixed bug
> in the Spad compiler.
Well, maybe. But since I don't like such code anyway, I don't care too much.
Ralf
Just an add-on... I don't like a function in this place, because we
could then potentially have something like "random(vl)", i.e., the type
would not be deterministic.
I think in SPAD/Aldor the "type language" is considered to be functional.
Ralf
I do not understand your objection to the use of a function here. It
seems to me that the type is "constant in context" as required for
compilation of static types in both Axiom and Aldor - even if the
function on whose result the type depends behaves
non-deterministically. There seems nothing remarkable about this. It
does not somehow make the type language "non-functional".
As regards style and what "looks good" and what does not, that is
another matter, and I do agree that there are probably better ways to
write this code. But of course this applies to a lot of code in the
library.
Regards,
Bill Page.
Let me illustrate the issue.
X ==> IntegerMod(random n)

Foo(n: Integer): with
    foo: () -> X
  == add
    foo(): X == 1$X
I've deliberately used a macro for X. So we have in fact 3 places where
random might or might not be called. In a functional context, the X in
all places would be identical, so there would not be an issue and a type
check can be done on purely syntactical grounds.
If it's not functional, then it is not even clear whether Foo(..)
actually implements the function it exports. Nor is it even clear
whether the type of 1$X is what is expected on the lhs of == in the add
part.
And suppose we accept that the code above should compile.
Now we compute
x := foo()$X
y := foo()$X
Should I be able to add x and y, or will they have different types due to
different invocations of random?
As far as I understand, Aldor leaves the latter case unspecified, since
it doesn't say how often the type X in a "type context" is evaluated.
http://www.aldor.org/docs/HTML/chap7.html#3
I'm not an expert in these things, but I somehow fear that dependent types
with executed functions in their arguments defeat the "static type"
property.
Ralf
foo()$X makes no sense above, because that would require IntegerMod to have 'foo' as an export.
You probably mean:
x := foo()$Foo(10000)
y := foo()$Foo(10000)
^^^^^
or some other fixed number.
> Should I be able to add x and y, or will they have different types due to
> different invocations of random?
>
> As far as I understand, Aldor leaves the latter case unspecified, since
> it doesn't say how often the type X in a "type context" is evaluated.
>
> http://www.aldor.org/docs/HTML/chap7.html#3
>
> I'm not an expert in these things, but I somehow fear that dependent types
> with executed functions in their arguments defeat the "static type"
> property.
Yes. Of course, I would like to forbid 'random' but I am
afraid it is not possible in general without also forbidding
the functions which make non-type arguments to types useful. More
precisely, it is the user's responsibility to ensure that what they
use is a true function and not something with state-dependent
behaviour. The compiler could catch some obvious violations
(like your random) and could verify some simple cases, but
in general it is up to the user.
Why do I not want to forbid functions? Actually, the
DirectProduct case is already an example: DirectProduct
fixes the size of the product. Without a fixed size all
operations would have to check that the sizes match.
But sometimes you have slightly more complicated
situations. In the case of DMP the number of variables
must match the size of the product. For DirectProduct
one natural operation is concatenation, but
this takes arguments of sizes n and m and produces
a result of size n + m. For kroneckerProduct of
matrices we need multiplication on sizes. Once
you allow addition and multiplication it is very
unnatural to reject polynomials.
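In signatures (hypothetical, just to show the arithmetic on the
size parameters; these operations are not current library code):

    concat : (DirectProduct(n, R), DirectProduct(m, R))
                 -> DirectProduct(n + m, R)
    kroneckerProduct : (SquareMatrix(n, R), SquareMatrix(m, R))
                 -> SquareMatrix(n * m, R)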
To say it in different words: without functions in
type parameters it is impossible to provide various
interesting operations for fixed-size domains, so
you are forced to use an arbitrary-size domain and
check sizes at runtime. Of course, if type
parameters are given by functions you can finish
typechecking only at runtime. But IMHO it still
has advantages:
- some common cases can be checked at compile time
  (by comparing parse trees of expressions)
- even if you need runtime checking (not implemented
  in the current compiler), checking in types should be
  more efficient than checking in each operation, and
  checks in types can be automatically generated by the
  compiler, while for arbitrary-sized domains the programmer
  needs to insert checks by hand (and may forget some...).
--
Waldek Hebisch
heb...@math.uni.wroc.pl
I am not sure if I would design polynomials as they are now. But
I am afraid what you propose just shifts the problem to another place:
there are variables and we need a DirectProduct of size matching the
number of variables. As long as we want to have a separate DirectProduct
and reuse it for exponents, at some moment we need to match the
size of the product with the number of variables. We can request
that sizes are provided separately, but then either it will
be completely unchecked or we need special code in the domain
to check at runtime. Alternatively, we could create
DirectProductWithVariableNames and pass vl as an argument to
it instead of the size. All of the above IMHO is very inelegant.
Also, note that for users it would be quite inconvenient
to specify E, and you would need a wrapper domain which takes
variables and produces E.
Actually I would think of a change in the opposite direction. Currently
we have:

    PolynomialCategory(R: Ring, E: OrderedAbelianMonoidSup,
                       VarSet: OrderedSet): Category ==
        ...

However, E is determined by the domain, so we could add 'exponentMonoid'
to polynomial domains and have something like:

    PolynomialWithExponentsCategory(R: Ring, E: OrderedAbelianMonoidSup,
                                    VarSet: OrderedSet): Category == ...

    PolynomialCategory(R: Ring, VarSet: OrderedSet): Category == _
        PolynomialWithExponentsCategory(R, exponentMonoid()$%, VarSet) with
            exponentMonoid : () -> OrderedAbelianMonoidSup

That way users of polynomials would be spared most of the effort of
computing the correct E (since given something of PolynomialCategory
they could retrieve it from the domain).
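For example (hypothetical, since exponentMonoid is only proposed
above), a generic package could recover E from whatever polynomial
domain it is given instead of taking E as a parameter:

    -- Hypothetical sketch: E is retrieved from P via the proposed export.
    MyPackage(R: Ring, VarSet: OrderedSet,
              P: PolynomialCategory(R, VarSet)): with
            doSomething : P -> P
        == add
            E := exponentMonoid()$P    -- recover the exponent monoid
            doSomething(p: P): P == p  -- placeholder body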
--
Waldek Hebisch
heb...@math.uni.wroc.pl
Ah, yes, of course.
> Yes. Of course, I would like to forbid 'random' but I am
> afraid it is not possible in general without also forbidding
> the functions which make non-type arguments to types useful.
Well, yes. I think I would already be satisfied with the specification
that whenever someone uses a function in a type (similar to random in
my example), the programmer must ensure (or in other words the compiler
assumes) that this function is a pure function without side effects.
This assumption must be written somewhere in big letters.
> In the case of DMP the number of variables
> must match the size of the product.
As written before, I would put the variables into the "exponent" parameter.
> For DirectProduct
> one natural operation is concatenation, but
> this takes arguments of sizes n and m and produces
> a result of size n + m.
You would have an operation

    DPC ==> DirectProductCategory
    concat: (DPC(m, R), DPC(n, R)) -> DPC(m+n, R)

(assuming R is known from somewhere).
But look more closely at the definition of DirectProductCategory. The
dim parameter is not at all used in the "with" part. It's completely
useless to carry it around in the category argument.
So we would have

    concat: (DPC(R), DPC(R)) -> DPC(R)

and the resulting domain exports a function

    dimension: () -> NonNegativeInteger

from which one could extract the size of the product.
> ... you are forced to use an arbitrary-size domain and
> check sizes at runtime.
Of course I understand the usefulness of types like arrays of a fixed
size or vectors of a fixed dimension (different from Vector -- which is
the union over all finite dimensional vector spaces).
> Of course, if type
> parameters are given by functions you can finish
> typechecking only at runtime. But IMHO it still
> has advantages:
> - some common cases can be checked at compile time
>   (by comparing parse trees of expressions)
> - even if you need runtime checking (not implemented
>   in the current compiler), checking in types should be
>   more efficient than checking in each operation, and
>   checks in types can be automatically generated by the
>   compiler, while for arbitrary-sized domains the programmer
>   needs to insert checks by hand (and may forget some...).
Oh, I am not at all against a mild compile-time evaluation that enables
Foo(2+3) to be identified with Foo(4+1) or Foo(5). But I guess it
needs some thought to properly specify how much computation
can be allowed at compile time.
Ralf
> Ralf, after your commit tests in 'ideal.input.pamphlet' fail.
That's annoying, but this failure is due to the evaluation of types
that happens in the interpreter.
Explicit package calling can resolve the problem. It's annoying, though,
and I don't know exactly what the best short-term solution is.
I tend to blame the interpreter for not being able to resolve (evaluate)
types: checking that the input type of
primaryDecomp$IdealDecompositionPackage([x,y,z]) matches the type of ide
is only possible after the functions that appear in the type have been
evaluated. I think that is exactly the trap one can fall into if
functions are allowed in types.
Ralf
(1) -> )version
Value = "FriCAS f1c11f524fc460d47e99d77b484e922c75bfe5d1 compiled at
Wednesday February 8, 2012 at 21:44:34 "
(1) -> l: List DMP([x,y,z],FRAC INT)
Type: Void
(2) -> l:=[x^2+2*y^2,x*z^2-y*z,z^2-4]
2 2 2 2
(2) [x + 2y ,x z - y z,z - 4]
Type:
List(DistributedMultivariatePolynomial([x,y,z],Fraction(Integer)))
(3) -> ide := ideal l
2 2 2 2
(3) [x + 2y ,x z - y z,z - 4]
Type:
PolynomialIdeal(Fraction(Integer),DirectProduct(3,NonNegativeInteger),OrderedVariableList([x,y,z]),DistributedMultivariatePolynomial([x,y,z],Fraction(Integer)))
(4) -> ld := primaryDecomp(ide)$IdealDecompositionPackage([x,y,z])
1 2 1 2
(4) [[x + - y,y ,z + 2],[x - - y,y ,z - 2]]
2 2
Type:
List(PolynomialIdeal(Fraction(Integer),DirectProduct(3,NonNegativeInteger),OrderedVariableList([x,y,z]),DistributedMultivariatePolynomial([x,y,z],Fraction(Integer))))
> Explicit package calling can resolve the problem.
Attached is a patch that would make ideal.input run without problems on
trunk@1329.
The question is whether we want to solve the problem in this way or
revert my recent commits.
I am rather for package calling since I believe that my commits actually
fixed an issue of incompatible types in SPAD. The problem is in the
interpreter, methinks.
In a SPAD program one has to import from IdealDecompositionPackage
anyway (or package call). In the interpreter it's a nuisance to juggle
with the types, but in this case I think it's a minor one, and as in
some other places one can simply argue that the interpreter doesn't
understand what the user wants and thus a package call is necessary to
help the interpreter.
May I commit the attached patch?
Ralf
(1) -> )r ideal.input
--Copyright The Numerical Algorithms Group Limited 1994.
)clear all
All user variables and function definitions have been cleared.
(n,m) : List DMP([x,y],FRAC INT)
Type: Void
m := [x^2+y^2-1]
2 2
(2) [x + y - 1]
Type:
List(DistributedMultivariatePolynomial([x,y],Fraction(Integer)))
n := [x^2-y^2]
2 2
(3) [x - y ]
Type:
List(DistributedMultivariatePolynomial([x,y],Fraction(Integer)))
id := ideal m + ideal n
2 1 2 1
(4) [x - -,y - -]
2 2
Type:
PolynomialIdeal(Fraction(Integer),DirectProduct(2,NonNegativeInteger),OrderedVariableList([x,y]),DistributedMultivariatePolynomial([x,y],Fraction(Integer)))
zeroDim? id
(5) true
Type:
Boolean
zeroDim?(ideal m)
(6) false
Type:
Boolean
dimension ideal m
(7) 1
Type:
PositiveInteger
(f,g):DMP([x,y],FRAC INT)
Type: Void
f := x^2-1
2
(9) x - 1
Type:
DistributedMultivariatePolynomial([x,y],Fraction(Integer))
g := x*(x^2-1)
3
(10) x - x
Type:
DistributedMultivariatePolynomial([x,y],Fraction(Integer))
relationsIdeal [f,g]
2 3 2 2 3
(11) [- %B + %A + %A ] | [%A= x - 1,%B= x - x]
Type:
SuchThat(List(Polynomial(Fraction(Integer))),List(Equation(Polynomial(Fraction(Integer)))))
l: List DMP([x,y,z],FRAC INT)
Type: Void
l:=[x^2+2*y^2,x*z^2-y*z,z^2-4]
2 2 2 2
(13) [x + 2y ,x z - y z,z - 4]
Type:
List(DistributedMultivariatePolynomial([x,y,z],Fraction(Integer)))
ID3==>IdealDecompositionPackage([x,y,z])
Type: Void
ld:=primaryDecomp(ideal l)$ID3
1 2 1 2
(15) [[x + - y,y ,z + 2],[x - - y,y ,z - 2]]
2 2
Type:
List(PolynomialIdeal(Fraction(Integer),DirectProduct(3,NonNegativeInteger),OrderedVariableList([x,y,z]),DistributedMultivariatePolynomial([x,y,z],Fraction(Integer))))
reduce(intersect,ld)
1 2 2
(16) [x - - y z,y ,z - 4]
4
Type:
PolynomialIdeal(Fraction(Integer),DirectProduct(3,NonNegativeInteger),OrderedVariableList([x,y,z]),DistributedMultivariatePolynomial([x,y,z],Fraction(Integer)))
reduce(intersect,[radical(ld.i)$ID3 for i in 1..2])
2
(17) [x,y,z - 4]
Type:
PolynomialIdeal(Fraction(Integer),DirectProduct(3,NonNegativeInteger),OrderedVariableList([x,y,z]),DistributedMultivariatePolynomial([x,y,z],Fraction(Integer)))
radical(ideal l)$ID3
2
(18) [x,y,z - 4]
Type:
PolynomialIdeal(Fraction(Integer),DirectProduct(3,NonNegativeInteger),OrderedVariableList([x,y,z]),DistributedMultivariatePolynomial([x,y,z],Fraction(Integer)))
quotient(ideal l,y)
2
(19) [x,y,z - 4]
Type:
PolynomialIdeal(Fraction(Integer),DirectProduct(3,NonNegativeInteger),OrderedVariableList([x,y,z]),DistributedMultivariatePolynomial([x,y,z],Fraction(Integer)))
(20) ->
I think we need to look at the problem more carefully. The basic question
is why the tests worked before your change. It is possible (and in
fact quite likely) that the interpreter got it right "by accident"; then
adding package calling is a reasonable solution. But it is also
possible that your change broke some of the assumptions hardcoded in
the interpreter; then adding a package call would just mask a deeper
problem.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
> I think we need to look at the problem more carefully. The basic question
> is why the tests worked before your change. It is possible (and in
> fact quite likely) that the interpreter got it right "by accident"; then
> adding package calling is a reasonable solution. But it is also
> possible that your change broke some of the assumptions hardcoded in
> the interpreter; then adding a package call would just mask a deeper
> problem.
I hope you agree that my commits *fixed* a bug in the source code.
Whether package calling masks or doesn't mask a problem in the
interpreter or some assumption is a matter of putting a link to this
thread into the bugtracker. The interpreter problem is there with or
without my commits.
My suggestion is to also commit the package-calling patch and not let
the release be delayed by a long search through the interpreter details.
I think that search must be done carefully and will probably cost you
more time than you planned for 1.1.6.
All of this concerns just 3 files, which are probably not used
by anyone before the 1.1.7 release.
cd .../fricas/src/algebra
grep '\(PolynomialIdeal\|QuasiAlgebraic\|IdealDecomp\)' * | \
    sed 's/[+][+].*//' | grep -v ':$' | sed 's/:.*//' | uniq
exposed.lsp.pamphlet
ideal.spad.pamphlet
idecomp.spad.pamphlet
qalgset.spad.pamphlet
Ralf
PS: I've no idea how to look at this interpreter issue, but if you feel
I can help you in some way, I'd be happy to invest some time to get this
issue nicely resolved.
The compiler assumes that evaluating a function a second time with
the same argument gives the same result. Some side effects are
OK, for example caching or updating performance-monitoring
counters.
> > In the case of DMP the number of variables
> > must match the size of the product.
>
> As written before, I would put the variables into the "exponent" parameter.
But what type would "exponent" have, and how would you build it?
> > For DirectProduct
> > one natural operation is concatenation, but
> > this takes arguments of sizes n and m and produces
> > a result of size n + m.
>
> You would have an operation
> DPC ==> DirectProductCategory
>
> concat: (DPC(m, R), DPC(n, R)) -> DPC(m+n, R)
>
> (assuming R is known from somewhere).
>
> But look more closely at the definition of DirectProductCategory. The
> dim parameter is not at all used in the "with" part. It's completely
> useless to carry it around in the category argument.
Yes, currently what you can do with parameters to categories is
quite limited. But IMHO either we allow more ways to use
non-type parameters or we will end up with an almost useless feature.
To make clearer what I mean: working with power series I have
several times hit problems due to the expansion point: I could
calculate the correct expansion point but the Spad compiler would
nevertheless reject the code. I fixed some of the worst compiler
problems; in other cases I used workarounds or made the code less
capable (compared to the version rejected by the compiler). I have
a small piece of code which tries to use Groebner bases, but the
compiler rejects it. More precisely, it works fine with fixed types
but the generic version is rejected. Again, the problem is
that I need to build appropriate types at runtime, and
due to bugs and limitations the compiler rejects them.
The point is that real code computes sizes, variable lists,
etc. via functions, and the compiler should cope with that.
Otherwise types parametrized by variables are just a toy,
somewhat usable in the interpreter (because the interpreter fully
evaluates types), but not for use from Spad code.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
> The point is that real code computes sizes, variable lists,
> etc. via functions, and the compiler should cope with that.
Well, as I said, my approach is to design things in a way that does not
involve functions. Unfortunately, one has to cope with the current
Algebra library, which is not perfect in this respect.
Anyway, the usual approach would be to remove such potential
dependencies; function parameters have to be constant in context. The
latter can usually be achieved by introducing a function layer that
gets all the type parameters as arguments.
Of course, that doesn't solve the problem that in some cases there is an
inherent dependency between the arguments. My approach would be to
improve the library and remove such dependencies whenever possible. But
that is a huge task, and already your opinion and mine differ.
I think I'm going to invest some more time in improving the test
framework in FriCAS (before modifying Algebra). Only yesterday I learned
how you seem to test for regressions via the .output files. All of that
should become a simple "make check" that returns a summary of which tests
succeeded and which failed.
Ralf
The problem is that in interesting cases you get dependencies
between arguments. Then you have the problem of how to ensure that
the needed relations hold. You may leave this unchecked (but
apparently you call such a case a "bug"). So having a smaller
number of independent parameters and computing the others
via functions seems better.
> Of course, that doesn't solve the problem that in some cases there is an
> inherent dependency between the arguments. My approach would be to
> improve the library and remove such dependencies whenever possible. But
> that is a huge task and already your an my opinion differs.
IME dependencies are essential, and by disallowing them you limit
yourself to toy problems.
> I think I'm going to invest some more time in improving the test
> framework in FriCAS (before modifying Algebra). Only yesterday I learned
> how you seem to test for regressions via the .output files. All of that
> should become a simple "make check" that returns a summary of which tests
> succeeded and which failed.
I wrote a few years ago about the 'norm-out' script. You give it
as arguments the path to the previous build directory and the path to
the current build directory, and it prints the differences in the
.output files. If the builds are done using different Lisps there is a
lot of numerical noise, but when using the same Lisp the normal
differences are quite small. Anything more needs an explanation (usually
more differences mean bugs, but sometimes they may be due to a
deliberate change).
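I.e., roughly (the paths here are placeholders):

    norm-out /path/to/previous-build /path/to/current-build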
--
Waldek Hebisch
heb...@math.uni.wroc.pl
I have now a reasonably good idea what the problem is: basically,
interpreter support for functions in type parameters is very
limited; the interpreter cannot see that the type
with concrete values (which is obtained after evaluation)
is equal to the type containing functions inside. The second
case occurs naturally during typechecking. The bad news
is that it will take some time to fix. OTOH the breakage
is that instead of a type that the interpreter can handle we
got a type which is unhandled. Given that there is a workaround
using a package call, I think that we can keep your fixes and
update ideal.input.pamphlet to use a package call (so that the
tests pass). But we also need a test for this kind of call,
that is a new bugs2012 file and a short test marked as an
expected failure.
More generally, if a fix triggers a problem we need to think
twice before committing the fix, even if the core reason for the new
problem is outside of the fix. Simply put, type discipline is there
to get more reliable code in the end. If we allow breakage
to get in, at the end of the day it may happen that our "better"
code actually works worse than the original. Note also that
broken code is harder to test, so more breakage is likely
to follow because of limited testing.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
Does that mean I should commit my package call patch?
http://groups.google.com/group/fricas-devel/msg/efebd3dec4ae18b1
> But we also need a test for this kind of call, that is a new bugs2012
> file and a short test marked as an expected failure.
Or do you want to do this yourself in connection with adding bugs2012?
> More generally, if a fix triggers a problem we need to think twice
> before committing the fix, even if the core reason for the new problem
> is outside of the fix. Simply put, type discipline is there to get more
> reliable code in the end. If we allow breakage to get in, at the end
> of the day it may happen that our "better" code actually works worse
> than the original. Note also that broken code is harder to test, so
> more breakage is likely to follow because of limited testing.
I'm sorry that I missed that failing test. I promise to run norm-out
next time. I agree that "better" is not necessarily better.
Ralf
Both things should go together -- I will add them.
--
Waldek Hebisch
heb...@math.uni.wroc.pl