(1) 1
Type: Integer
Shouldn't that complain, because X is not defined?
I also find the following somewhat problematic (although it is not
dramatic, since there is an easy workaround).
---rhxBEGIN tst.spad
)abbrev domain MEX MExpression
Z ==> Integer
X ==> Expression Z
MExpression: with
    0: () -> %
    coerce: % -> OutputForm
  == add
    Rep := X
    -- auxiliary functions
    rep(x: %): Rep == x pretend Rep
    per(x: Rep): % == x pretend %
    0: % == per((0$Z)::X)
    coerce(x: %): OutputForm == coerce(rep x)$X
---rhxEND tst.spad
The addition of "$X" in the last line is superfluous, since the input
type for coerce in (coerce rep x) is Rep and the return type could be
extracted from the left-hand side. So it should be clear which coerce to
select. Unfortunately, without the "$X" the code compiles, but then
asking for
0$MEX
seems to run forever.
Ralf
Apparently you can avoid triggering this bug by writing:
Rep == X
(using == instead of :=).
Perhaps this avoids triggering the automatic coercions that normally make
rep and per unnecessary in Spad.
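For reference, here is Ralf's capsule with that change applied (an
untested sketch; only the Rep line differs from tst.spad above, and
Ralf later reports that "Rep ==> X" works just as well in FriCAS):

MExpression: with
    0: () -> %
    coerce: % -> OutputForm
  == add
    Rep == X          -- constant definition instead of Rep := X
    rep(x: %): Rep == x pretend Rep
    per(x: Rep): % == x pretend %
    0: % == per((0$Z)::X)
    coerce(x: %): OutputForm == coerce(rep x)$X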
Regards,
Bill Page.
This should be an FAQ because we have discussed it several times :-)
The short answer is that, in all flavours of AXIOM, if you assign to Rep
then you are asking for implicit coercions between % and Rep -- and that
can lead to unfunny situations as above.
The proper way is to define Rep as a constant, i.e. Rep == X.
-- Gaby
Oh, I thought that only OpenAxiom had changed this to using "Rep == X". If I
understood correctly, in FriCAS "==" does not mean constant definition
but rather "delayed assignment".
>grep '^[ \t]*Rep *:=' *|wc -l
239
>grep '^[ \t]*Rep *==' *
ffrac.as.pamphlet: Rep == Record(num : X, den : X) ; -- representation
herm.as.pamphlet: Rep ==> Vector R;
interval.as.pamphlet: Rep ==> Record(Inf:R, Sup:R);
newdata.spad.pamphlet: Rep ==> VTB
newdata.spad.pamphlet: Rep ==> A
newpoly.spad.pamphlet: Rep ==> List Term
regset.spad.pamphlet: Rep ==> LP
sregset.spad.pamphlet: Rep ==> LP
triset.spad.pamphlet: Rep ==> LP
triset.spad.pamphlet: Rep ==> LP
So in the FriCAS sources, there is only *one* instance with ==. And
interestingly, this is not SPAD, but an Aldor program.
So I assume that in FriCAS I will have to use ==> rather than ==.
I've tried both, and for ==> as well as for ==, I can do without $X.
Waldek, what would you suggest? I definitely want rep and per. It would be
good if they were defined automatically, but for the moment I can live
with defining them myself. Using rep and per just helps me to keep track
of the difference between % and Rep. Automatic coercion should not be
done for SPAD programs, not even between % and Rep.
Ralf
| > This should be an FAQ because we have discussed it several times :-)
| >
| > The short answer is that, in all flavours of AXIOM, if you assign to Rep
| > then you are asking for implicit coercions between % and Rep -- and that
| > can lead to unfunny situations as above.
| >
| > The proper way is to define Rep as a constant, i.e. Rep == X.
|
| Oh, I thought, that just OpenAxiom changed it to using "Rep==X". If I
| understood correctly, in FriCAS "==" does not mean constant definition
| but rather "delayed assignment".
We are talking about the compiler.
In all flavours of AXIOM, if you have a toplevel or capsule-level definition
of the form
xyz == expr
and xyz does not have a modemap or a local type declaration, then that is
interpreted as a local constant implemented as a macro.
See isMacro and how it is used in doIt.
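For instance (a hypothetical capsule-level snippet, not taken from the
library), the two spellings below are treated quite differently:

    three == 3            -- no declaration: a local constant implemented as
                          -- a macro; the right-hand side is substituted
                          -- wherever 'three' is used
    four : Integer == 4   -- declared: compiled as a genuine constant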
OpenAxiom went further and added special support when xyz is Rep.
-- Gaby
I would avoid '=='. Currently, when '==' defines a function with
no arguments and no mode, it is converted to a macro ('==>').
Given that the semantics of '==>' is quite different from the _expected_
semantics of '==', this is a rather weird irregularity.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
| I would avoid '=='. Currently, when '==' defines a function with
| no arguments and no mode, it is converted to a macro ('==>').
| Given that the semantics of '==>' is quite different from the _expected_
| semantics of '==', this is a rather weird irregularity.
I would think the opposite: if there is no modemap or type
declaration, then it surely can only be observed locally and it has all
the semantics of a local constant, or a macro! I suspect that -may-
also have been the reasoning behind Aldor's choice.
-- Gaby
This is a reasonable choice if you _have to_ give meaning to '=='
in such cases. But signaling an error seems better.
For example, look at:
dummy == new()$SE :: F
in combfunc.spad.pamphlet. This is _very_ unlike constants.
IMHO explicitly writing
dummy ==> new()$SE :: F
would be much clearer.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
| Gabriel Dos Reis wrote:
| >
| > Waldek Hebisch <heb...@math.uni.wroc.pl> writes:
| >
| > | I would avoid '=='. Currently, when '==' defines a function with
| > | no arguments and no mode, it is converted to a macro ('==>').
| > | Given that the semantics of '==>' is quite different from the _expected_
| > | semantics of '==', this is a rather weird irregularity.
| >
| > I would think the opposite: if there is no modemap or type
| > declaration, then it surely can only be observed locally and it has all
| > the semantics of a local constant, or a macro! I suspect that -may-
| > also have been the reasoning behind Aldor's choice.
| >
|
|
| This is reasonable choice if you _have to_ give meaning to '=='
| in such cases.
Indeed, all flavours of AXIOM support the following style
SomeFunctor(): Public == Private where
Public == some category expression
Private == some domain expression
I am unconvinced of the value of requiring that to be written as
SomeFunctor(): Public == Private where
Public ==> some category expression
Private ==> some domain expression
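For concreteness, a minimal (toy) instance of the first style, defining
a functor Foo whose exports are given by the category expression Ring
and whose implementation is the domain Integer, would be:

    Foo() : Public == Private where
        Public  == Ring
        Private == Integer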
From my point of view, a macro is a `papering over some pesky imperfection'.
The definition of a domain or a category is fundamental, and macros should
not be the first thing people have to encounter.
And when I want to attract people used to languages with similar syntax
(hello Haskell!), I don't even want to give them incentives to ridicule
the syntax.
| But signaling error seems better.
|
| For example, look at:
|
| dummy == new()$SE :: F
|
| in combfunc.spad.pamphlet. This is _very_ unlike constants.
In fact, I think this is an example that is illustrative of the constant
vs. nullary function dichotomy.
In this case `dummy' is an abstraction that is supposed to be
generative, i.e. evaluated every time to generate a new value. That is
called a function and it should have been written as
dummy(): F == new()$SE::F
The inlining machinery will take care of making sure a call to dummy()
is in fact inlined (which pretty much is the case for all flavours of
AXIOM for such a simple expression).
| IMHO explicitly writing
|
| dummy ==> new()$SE :: F
|
| would be much clearer.
I would argue that the function definition is much more reflective of
the intent and more elegant. Writing it as a macro only hides the
original intent, it does not make it clearer.
-- Gaby
On Tue, Jan 17, 2012 at 12:59 PM, Gabriel Dos Reis wrote:
> ...
> From my point of view, a macro is a `papering over some pesky
> imperfection'. The definition of domain or a category is fundamental
> and macros should not be the first thing people have to encounter.
+1
> And when I want to attract people used to languages with similar
> syntax (hello Haskell!), I don't even want to give them incentives to
> ridicule the syntax.
> ...
I have a similar "lack of respect" for macro usage. SPAD should
consistently be a high level language. Macros belong (perhaps) at a
lower level in Boot and in Lisp. If there is something that you think
requires a macro in SPAD, I would take that as a criticism of the
language. It seems to me that even the use of the ... where clause
should encourage us to think not in terms of macros but rather in terms
of locally defining a context.
> |
> | For example, look at:
> |
> | dummy == new()$SE :: F
> |
> | in combfunc.spad.pamphlet. This is _very_ unlike constants.
>
I think this usage is what Ralf referred to as "delayed assignment".
> In fact, I think this is an example that is illustrative of the constant
> vs. nullary function dichotomy.
> In this case `dummy' is an abstraction that is supposed to be
> generative, e.g. evaluated every time and generate new value. That is
> called a function and it should have been written as
>
> dummy(): F == new()$SE::F
> ...
> I would argue that the function definition is much more reflective of
> the intent and more elegant. Writing it as a macro only hides the
> original intent, it does not make it clearer.
>
I think these semantics are obviously more economical than thinking of
nullary functions as "delayed assignment".
Regards,
Bill Page.
Well, inside a toplevel 'where' FriCAS blindly converts '==' to '==>'
(because other uses of '==' make no sense there). But that is
clearly a hack. IMHO normal constant definitions should be
typechecked. But above we need to "expand" the where to see what
type is defined (the 'some category expression' part). Worse,
frequently we have things like:
SomeFunctor(A : CA, B : CB) : Public == Private where
CA ==> cata
CB ==> catb
...
that is, even to get the types of the parameters we need to analyse the 'where'.
Macros give clear semantics for such constructs, without the
need for special hacks. I must admit that properly
giving types to 'Public' and 'Private' above seems tricky,
given that both contain references to other parts of the
algebra. Probably some fixpoint definition with a sufficiently
lazy implementation would do, but this is very far from the
current implementation. To tell the truth, even with
macro expansion before typechecking, the parameters of
constructors lead to "interesting" dependencies (the current
compiler avoids the problem by not checking parameters
in definitions at all).
OTOH the untyped treatment of 'Public == ...' is an irregularity
that I prefer to avoid.
> From my point of view, a macro is a `papering over some pesky imperfection'.
Well, that is my feeling about the untyped use of '=='.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
| It is good to see this exchange of ideas.
Language design and evolution is a fertile field of interesting and
complementary/diverging points of view :-)
[...]
| I think these semantics are obviously more economical than thinking of
| nullary functions as "delayed assignment".
In the library and compiler, OpenAxiom does not have a notion of
'delayed assignment'.
-- Gaby
| Gabriel Dos Reis wrote:
| >
| > Waldek Hebisch <heb...@math.uni.wroc.pl> writes:
| >
| > | This is reasonable choice if you _have to_ give meaning to '=='
| > | in such cases.
| >
| > Indeed, all flavours of AXIOM support the following style
| >
| > SomeFunctor(): Public == Private where
| > Public == some category expression
| > Private == some domain expression
| >
| > I am unconvinced of the value of requiring that to be written as
| >
| > SomeFunctor(): Public == Private where
| > Public ==> some category expression
| > Private ==> some domain expression
|
| Well, inside toplevel 'where' FriCAS blindly converts '==' to '==>'
| (because other uses of '==' make no sense here). But that is
| clearly a hack. IMHO normal constant definitions should be
| typechecked.
or the definition could have its type inferred (as opposed to type checked)...
| But above we need to "expand" where to see what
| type is defined (the 'some category expression' part). Worse,
| frequently we have things like:
|
| SomeFunctor(A : CA, B : CB) : Public == Private where
| CA ==> cata
| CB ==> catb
I am not sure this is as complicated as it sounds.
I believe all flavours of AXIOM do (or at least used to do) the
following for this where-expression:
1. elaborate the side conditions.
This essentially means that the environment is augmented with
the constant/macro definitions.
2. in the resulting environment, pull out the types and names from the
parameter declarations.
3. elaborate the type declarations in the resulting environment.
4. elaborate SomeFunctor(A,B) == Private
Did FriCAS change that sequence of events?
-- Gaby
Yes. A toplevel 'where' is macro expanded (and the '==' and ':'
contained there are converted to macros). So when the compiler sees
the code there is _no_ where at the toplevel. The reason is
that currently the compiler needs a substantial amount of type
information. Normally this information is taken from the
databases, but during bootstrap the databases contain almost
no data. FriCAS collects the needed information directly
from the constructor definitions, but for that it needs the
macro-expanded form.
BTW: The ability to bootstrap is the main reason for the approach, but
the FriCAS way also fixes some problems with macros being expanded
too late (I forgot the details and it would take some time to find the
report, but it appeared on the mailing list).
--
Waldek Hebisch
heb...@math.uni.wroc.pl
Look at DoubleFloatEllipticIntegrals and FloatEllipticFunctions,
both in special2.spad.pamphlet. The real and complex versions
are very similar. In the DoubleFloat case we want to use a fast
version of square root for the real version (it leads to much faster
code) but the regular square root for the complex version. In the Float
case we need to raise errors in the real version for arguments that
are legal in the complex version; OTOH the complex version needs
trickier handling of periods. So I have parts which are exactly
the same and parts that differ. I use macros to have a single
definition for the common part of the code. In principle I could
create a common generic package for the shared parts, but:
- the code is not generic, there are just two versions (real and
  complex)
- I prefer to keep the code together (the shared parts are strongly
  related to the non-shared ones)
- there is little gain in efficiency from using non-generic code.
Or look at 'addm!' in poly.spad.pamphlet. This routine is
performance critical -- the speed of polynomial multiplication
(and consequently of most of our symbolic operations) depends on
it. There are two versions of the code: one uses machine
integers, the other general ones. Machine integers can
be used in most cases and this version is much faster than
the other one (using generic operations), but we want to
correctly handle the general case. I use a macro to
share most of the body of 'addm!', so the versions just
differ in declarations and a few operations which explicitly
involve types.
I do not think there is another way in Spad to have fast
specialized versions and share code...
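To illustrate the pattern (a toy sketch with hypothetical names, not the
actual 'addm!' code): a macro holds the shared body, and the two
definitions differ only in their declarations:

    )abbrev package TOYSHR ToyShared
    DOUBLE_BODY ==> x + x   -- shared body, textually expanded below
    ToyShared() : with
        doubleFast : SingleInteger -> SingleInteger
        doubleGen : Integer -> Integer
      == add
        doubleFast(x : SingleInteger) : SingleInteger == DOUBLE_BODY
        doubleGen(x : Integer) : Integer == DOUBLE_BODY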
Of course, for each specific use case one can find a
language feature which handles that case. But the point
is that macros form a simple and general mechanism.
I do not see why we should reject them when they
solve real problems.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
When I see the prefixes DoubleFloat and Float in the names of these
packages I think immediately that something must be wrong with this
design. DoubleFloat and Float are names of domains. Somehow here the
programmer expects to make an implicit association with these domains.
In fact it seems like this must be a post facto extension of these
domains. Why are these functions not exported by the domains?
I realize that this is not your original design but most probably
patterned after the older code in special.spad.pamphlet. Perhaps this
design had some advantages when it took a long time to re-compile
Axiom and memory space was limited, but that does not seem to be the case
today.
If there really is some advantage to be gained from having this code
in a separate package then why not write it in a generic manner,
something like this:
EllipticIntegrals(F:Join(Field, TranscendentalFunctionCategory) ): with
ellipticRC : (F, F) -> F
...
if F has arbitraryPrecision then -- Float
...
else -- DoubleFloat
if F has ComplexCategory(F) then -- Complex
...
Given the often ad hoc-seeming design of some other parts of the
library this might not be quite so straightforward, but it seems
sensible to me to work towards this.
Would this sort of code really be significantly slower than your
performance optimized version?
> In DoubleFloat case we want to use fast
> version of square root for real version (it leads to much faster
> code) but regular square root for complex version. In Float
> case we need to raise errors in real version for arguments that
> are legal in complex version, OTOH complex version needs more
> tricky handling of periods. So I have parts which are exactly
> the same and parts that differ. I use macros to use single
> definition for common part of code. In principle I could
> create a common generic package for shared parts, but:
>
> - code is not generic, there are just two versions (real and
> complex)
That some parts of the code are common seems like a reasonable
definition of "generic" to me.
> - I prefer to keep code together (shared parts are strongly
> related to no-shared ones)
It might also make sense for something like SpecialFunctionCategory to
provide default code for many of these functions. Where there are
domain-specific optimizations, only those would then be given in the
domain to override the generic versions.
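A hypothetical sketch of that idea (toy names, not the actual
SpecialFunctionCategory): the category supplies a default body, and a
domain that can do better overrides it:

    )abbrev category TOYSPC ToySpecialCat
    ToySpecialCat() : Category == with
        myOp : % -> %
      add
        myOp(x : %) : % == x          -- generic default

    )abbrev domain TOYDOM ToyDom
    ToyDom() : Join(Ring, ToySpecialCat) == Integer add
        myOp(x : %) : % == x + x      -- domain-specific override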
> - there is little gain in efficiency from using non-generic code.
>
Perhaps you meant to claim that there is little to gain from generic code?
Using a macro seems like sort of the ultimate in generic code -
arbitrary text with textual substitutions. I think the goal is not so
much to be generic as it is to express the formal abstract structure
of the mathematics.
> Or look at 'addm!' in poly.spad.pamphlet. This routine is
> performance critical -- speed of polynomial multiplication
> (and conseqently of most our symbolic operations) depend on
> it. There are two versions of code, one that uses machine
> integers, the other general ones. Machine integers can
> be used in most cases and this version is much faster than
> the other one (using generic operations), but we want
> correctly handle general case. I use macro to
> share most of body of 'addm!', so the versions just
> differ in declarations and few operations which explicitly
> involve types.
I don't much like looking at ADDM_BODY. It seems to me that very
little is gained by factoring it out.
>
> > I do not think there is another way in Spad to have fast
> specialized versions and share code...
>
I am tempted to spend some time trying to compare a more generic
version of the code in special2 to the "optimized" version. I am not
convinced that there must be such a great difference. But I must
admit that you do obviously have a great deal more experience than I
do in these matters.
> Of course, for each specific use case one can find a
> language feature which handles such case. But the point
> is that macros form a simple and general mechanism.
> I do not see why we should reject them when they
> solve real problems.
>
Assembler language (or for that matter Lisp) also solves very real
problems, but we discourage its use in the Axiom library for what I
think are some very good reasons.
Regards,
Bill Page.
My sense is that it is possible to do that kind of bootstrapping without
changing the compilation scheme above. At any rate, can you tell me
whether FriCAS has invalidated the description (i.e. including syntax)
of the AXIOM book in section 13.2, which uses '==' to define exports and
implementations instead of '==>' or 'macro'?
-- Gaby
Most of the functions _are_ exported by the domains. But
DoubleFloat and Float are long, and the code to implement special functions
is long, so it makes sense to create a separate package.
> I realize that this is not your original design but most probably
> patterned after the older code in special.spad.pamphlet. Perhaps this
> design had some advantages when it took a long time to re-compile
> Axiom and memory space was limited but it does not seem like that to
> me today.
My motivation is the convenience of humans reading/extending this code.
Too many small pieces make things hard, because with small pieces
we need many of them. But pieces that are too big are also inconvenient.
I am trying to find the best split. In particular, in those packages the
functions (and their implementations) are closely related, while their
relations to other parts are weak.
> If there really is some advantage to be gained from having this code
> in a separate package then why not write them in a generic manner
> something like this:
>
> EllipticIntegrals(F:Join(Field, TranscendentalFunctionCategory) ): with
> ellipticRC : (F, F) -> F
> ...
> if F has arbitraryPrecision then -- Float
> ...
> else -- DoubleFloat
>
> if F has ComplexCategory(F) then -- Complex
> ...
>
> Of given the often adhoc seeming design of some other parts of the
> library this might not be quite so straight forward but it seems
> sensible to me to work towards this.
>
Did you notice that the DoubleFloat... package uses a quite different
method of computation and exports a different set of functions
than the Float... package?
> Would this sort of code really be significantly slower than your
> performance optimized version?
>
You could write code like
NotSoGenericPackage(F : Field) :
if F is DoubleFloat then
...
else if F is Float then
...
but IMHO this would be misleading.
> > In DoubleFloat case we want to use fast
> > version of square root for real version (it leads to much faster
> > code) but regular square root for complex version. In Float
> > case we need to raise errors in real version for arguments that
> > are legal in complex version, OTOH complex version needs more
> > tricky handling of periods. So I have parts which are exactly
> > the same and parts that differ. I use macros to use single
> > definition for common part of code. In principle I could
> > create a common generic package for shared parts, but:
> >
> > - code is not generic, there are just two versions (real and
> >   complex)
>
> That some parts of the code is common seems like a reasonable
> definition of "generic" to me.
>
> > - I prefer to keep code together (shared parts are strongly
> > =A0related to no-shared ones)
>
> It might also make sense for something like SpecialFunctionCategory to
> provide default code for many of these function? Where there are some
> domain specific optimizations only these would then be given in the
> domain to override the generic versions.
No. Exact computations with special functions are quite
different from numeric computations, which again are quite
different from series expansions.
> > - there is little gain in efficiency from using non-generic code.
> >
>
> Perhaps you meant to claim that there is little to gain from generic code?
I meant 'there is a small but nonzero gain in efficiency from using
non-generic code'.
> > Or look at 'addm!' in poly.spad.pamphlet. This routine is
> > performance critical -- speed of polynomial multiplication
> > (and consequently of most of our symbolic operations) depends on
> > it. There are two versions of code, one that uses machine
> > integers, the other general ones. Machine integers can
> > be used in most cases and this version is much faster than
> > the other one (using generic operations), but we want
> > correctly handle general case. I use macro to
> > share most of body of 'addm!', so the versions just
> > differ in declarations and few operations which explicitly
> > involve types.
>
> I don't much like looking at ADDM_BODY. It seems to me that very
> little is gained by factoring it out.
>
You mean you prefer to keep two copies?
> >
> > I do not think there is another way in Spad to have fast
> > specialized versions and share code...
> >
>
> I am tempted to spend some time trying to compare a more generic
> version of the code in special2 to the "optimized" version. I am not
> convinced that there must be such a great difference. But I must
> admit that you do obviously have a great deal more experience than I
> do in this matters.
For ellipticRF and DoubleFloat arguments there is a large difference.
In other cases it depends. But you missed the point that
the real versions and the complex ones sometimes use exactly the
same code and sometimes need different code. AFAICS a "generic"
version would need more packages (splitting the code in IMHO
unnatural ways) and would be no more generic than the
current one.
--
Waldek Hebisch
heb...@math.uni.wroc.pl
I agree that convenience is important and that "packaging" related
code is one way to achieve this. But in FriCAS we have other
(better?) ways. For example we can inherit default code from
categories and we have "add" inheritance for domains. It seems to me
(although perhaps easier said than done) that one can bundle related
code and reduce code duplication by careful design of categories and
domains. As you implied in another thread, actual use of PACKAGE in
FriCAS should mostly be limited to those places where it is essential.
>> If there really is some advantage to be gained from having this code
>> in a separate package then why not write them in a generic manner
>> something like this:
>>
>> EllipticIntegrals(F:Join(Field, TranscendentalFunctionCategory) ): with
>> ellipticRC : (F, F) -> F
>> ...
>> if F has arbitraryPrecision then -- Float
>> ...
>> else -- DoubleFloat
>>
>> if F has ComplexCategory(F) then -- Complex
>> ...
>>
>> Of given the often adhoc seeming design of some other parts of the
>> library this might not be quite so straight forward but it seems
>> sensible to me to work towards this.
>>
>
> Did you notice that DoubleFloat... package used quite different
> method of computation and exports different set of functions
> than Float... package?
>
Yes, that is the reason I used
Join(Field, TranscendentalFunctionCategory)
since that seemed like the best we could do given the current
categorical design. It seems likely to me that the differences in
terms of exports between Float, DoubleFloat, Complex Float and Complex
DoubleFloat, etc. could be further minimized. But of course the actual
method of computation (implementation) must often be determined
specifically by the domain. So I would rather have this tied directly to
the domain than separated out in a package.
> ...
>>
>> I don't much like looking at ADDM_BODY. It seems to me that very
>> little is gained by factoring it out.
>>
>
> You mean you prefer to keep two copies?
>
In this case, yes.
>... But you missed the point that
> real versions and complex ones sometimes use exactly the
> same code, sometimes need different code. AFAICS "generic"
> version would need more packages (splitting code in IMHO
> unnatural ways) and would be not more generic than
> current one.
>
Yes, I basically agree with you. That is the reason why I think the
best approach is a more fundamental re-design of the domains and
categories.
Regards,
Bill Page.
For FriCAS, 13.2 is valid at least to the degree it is valid for the
original Axiom. But 13.2 lacks details and is very imprecise.
Given the 'first class type' propaganda one could expect the following
to work:
if true then
-- define constructor
....
AFAIK this form is rejected by all Axiom flavours.
Similarly:
Foo : X == Y where
Z ==> f : Integer -> Integer
X ==> with Z
Y ==> add (f(x) == x)
The following apparently worked in the original Axiom, but no
longer works in FriCAS:
(Foo : X == Y where X ==> Ring) where Y ==> Integer
OTOH the following works in FriCAS, but failed in the original:
Foo : X == Y where
Z ==> Integer -> Integer
X ==> with f : Z
Y ==> add (f(x) == x)
If you think about elaborating declarations in an environment,
then the failures above are bugs. But examining the implementation
we see that the implemented model is different and the pattern
of working and non-working constructs is explained by
the implemented model.
--
Waldek Hebisch
heb...@math.uni.wroc.pl