I'm not sure multimethods alone are enough to solve issues with Sage's
type system (e.g. coercion, where the result of a+b may happen in a
completely new domain) but they could certainly help.
class multi_dispatch(object):
    def __init__(self):
        self.dispatch = []

    def register_function(self, signature, function):
        self.dispatch.append((signature, function))

    def register(self, *signature):
        def register_decorator(function):
            self.register_function(signature, function)
            return function
        return register_decorator

    def __call__(self, *args):
        # Try registered signatures in order; the first match wins.
        for signature, function in self.dispatch:
            if len(args) == len(signature) and all(isinstance(a, t) for a, t in zip(args, signature)):
                return function(*args)
        raise TypeError("Signature not implemented")
T = multi_dispatch()

@T.register(int, int)
def int_int(a, b):
    return "called with integers (%d,%d)" % (a, b)

@T.register(int, float)
def int_float(a, b):
    return "called with integer and float (%d,%f)" % (a, b)

@T.register(float, float)
def float_float(a, b):
    return "called with floats (%f,%f)" % (a, b)
>>> T(1,2)
'called with integers (1,2)'
>>> T(1,2.0)
'called with integer and float (1,2.000000)'
>>> T(1.0,2.0)
'called with floats (1.000000,2.000000)'
>>> T(1.0,2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "dispatch.py", line 15, in __call__
raise TypeError("Signature not implemented")
TypeError: Signature not implemented
Magma uses "multiple dispatch" extensively to deal with what our coercion model code addresses, and it's definitely a step back in my experience.
Magma uses "multiple dispatch" extensively to deal with what our coercion model code addresses, and it's definitely a step back in my experience.
I'm not sure multimethods alone are enough to solve issues with Sage's type system (e.g. coercion, where the result of a+b may happen in a completely new domain) but they could certainly help.
In a dynamically typed language, multiple dispatch has a rather high cost. LISP MOP implementations have a lot of experience.
+(x::Number, y::Number) = +(promote(x,y)...)
julia> @code_llvm 1 + 2.5

define double @"julia_+;19364"(i64, double) {
top:
  %2 = sitofp i64 %0 to double, !dbg !1885
  %3 = fadd double %2, %1, !dbg !1885
  ret double %3, !dbg !1885
}

julia> @code_native 1 + 2.5
    .section __TEXT,__text,regular,pure_instructions
Filename: promotion.jl
Source line: 158
    push RBP
    mov RBP, RSP
Source line: 158
    vcvtsi2sd XMM1, XMM0, RDI
    vaddsd XMM0, XMM1, XMM0
    pop RBP
    ret

function +{T,S,m,kg,s,A,K,mol,cd}(x::SIQuantity{T,m,kg,s,A,K,mol,cd},y::SIQuantity{S,m,kg,s,A,K,mol,cd})
    val = x.val+y.val
    SIQuantity{typeof(val),m,kg,s,A,K,mol,cd}(val)
end

julia> using SIUnits, SIUnits.ShortUnits # allow `s` for `seconds`

julia> 1s + 2.5s
3.5 s

julia> @code_native 1s + 2.5s
    .section __TEXT,__text,regular,pure_instructions
Filename: /Users/stefan/.julia/SIUnits/src/SIUnits.jl
Source line: 139
    push RBP
    mov RBP, RSP
Source line: 139
    vcvtsi2sd XMM1, XMM0, RDI
    vaddsd XMM0, XMM1, XMM0
Source line: 140
    pop RBP
    ret

julia> (7//5)m/s + (12873918273918273918273981273981273981273980 + 2im)m/s
64369591369591369591369906369906369906369907//5 + 2//1*im m s⁻¹

julia> typeof(ans)
SIQuantity{Complex{Rational{BigInt}},1,0,-1,0,0,0,0} (constructor with 1 method)
In the end it's just a little helper tool, though. It doesn't really do things you can't otherwise do.
Yes, but here T must explicitly implement multiple dispatch. One
can't take an existing object and "add a method" based on the type of
the second argument.
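To make the contrast concrete, here is a small sketch using the multi_dispatch helper quoted above (the float_int name is just for illustration): new signatures can be registered on T after the fact, but only because T itself was written as a dispatcher.

@T.register(float, int)
def float_int(a, b):
    # Handles the T(1.0, 2) case that raised TypeError above.
    return "called with float and integer (%f,%d)" % (a, b)

An ordinary existing function or method, by contrast, cannot be extended this way based on the type of its second argument.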
We also have @coerce_binop
https://github.com/sagemath/sage/blob/6.2/src/sage/structure/element.pyx#L3260
which is nice.
Here we're conflating the issue of types and Parents
(where the latter can be viewed, somewhat, as dynamically created
parameterized types).
However, Julia multimethods are backed up by a powerful coercion
system, so I do not understand the "step back" criticism.
Consider A+B where A is a polynomial in ZZ[x,y] and B is a power series in F_q[[x]] (finite field with q elements).
Do you expect your CAS to make sense of that addition? Sage does. It returns an answer in F_q[[x]][y] (i.e., a polynomial in y over power series in x over F_q) . You can argue whether it's desirable for a system to try to be that smart, but all computer algebra systems I know are a "step back" relative to this. Programming languages do not tend to have type models that would even allow you to try and make sense of this kind of question.
This can be pretty straightforwardly handled with multiple dispatch and promotion in a parametric type system. Similar things are done in real code regularly in Julia. Although I'm not going to try to define these particular types, the polynomial, power series, and finite field example is quite possible given appropriate type definitions – people have done similar things. An example that already works, and is similar in spirit, is figuring out that a Complex{BigInt} plus a Vector{Rational{Int}} should be a Vector{Complex{Rational{BigInt}}}. This falls out of a surprisingly small set of generic function methods and promotion rules (which are just methods):

julia> big(3)^100*im + [1, 1//2, 2]
3-element Array{Complex{Rational{BigInt}},1}:
 1//1+515377520732011331036461129765621272702107522001//1*im
 1//2+515377520732011331036461129765621272702107522001//1*im
 2//1+515377520732011331036461129765621272702107522001//1*im
If Julia has shown anything, it's that you *can* have ubiquitous multiple dispatch in a dynamic language and have it be very fast – it is commonly a zero-cost abstraction in Julia.
On Friday, July 18, 2014 4:01:10 PM UTC-7, Stefan Karpinski wrote:
> If Julia has shown anything, it's that you *can* have ubiquitous multiple dispatch in a dynamic language and have it be very fast – it is commonly a zero-cost abstraction in Julia.

I find this an interesting claim. Do you have some easy, high level pointers to what ideas are used to get to zero-cost? (I realize you have a "commonly" disclaimer here, so perhaps I'm hoping for too much).
I hope that all makes some sense. I'm not sure if doing this kind of code specialization in Sage is realistic, but type inference on Python code is certainly possible. As you've pointed out, there is no obvious way to make a completely generic version of code using multiple dispatch really fast (although polymorphic inline caches might help a lot).
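As a rough illustration of the inline-cache idea, here is a sketch building on the multi_dispatch helper from earlier in the thread (cached_dispatch is a made-up name, not anything from Sage or Julia):

class cached_dispatch(multi_dispatch):
    def __init__(self):
        multi_dispatch.__init__(self)
        self._cache = {}  # maps tuples of concrete argument types to resolved functions

    def __call__(self, *args):
        key = tuple(type(a) for a in args)
        function = self._cache.get(key)
        if function is None:
            # Slow path: linear scan over the registered signatures, as before.
            for signature, candidate in self.dispatch:
                if len(args) == len(signature) and all(isinstance(a, t) for a, t in zip(args, signature)):
                    function = self._cache[key] = candidate
                    break
            if function is None:
                raise TypeError("Signature not implemented")
        return function(*args)

Repeated calls with the same concrete argument types then skip the isinstance scan entirely; a real polymorphic inline cache does this per call site and in compiled code, which is where most of the win comes from.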
It certainly seems unlikely we'd be able to utilize these ideas without significantly altering python/cython. I guess "Sulia" or "Juxiom" might be the next generation computer algebra system.
So any optimizations along the lines you're suggesting would only be possible at a JIT compiler level, where any compiled code caches would probably have to be invalidated when the type/parent/coercion graph gets modified.
Yes, you can definitely do most of this with template metaprogramming, although, as you say it's pretty tricky.
On Saturday, July 19, 2014 5:43:57 AM UTC-7, defeo wrote:
> However, Julia multimethods are backed up by a powerful coercion
> system, so I do not understand the "step back" criticism.
That comment wasn't made with respect to Julia, because that would be comparing the coercion facilities of a CAS to those of a programming language. Coercion in a CAS tends to be a *lot* more complicated than what programming languages are designed for. As an example:
Consider A+B where A is a polynomial in ZZ[x,y] and B is a power series in F_q[[x]] (finite field with q elements).
Do you expect your CAS to make sense of that addition? Sage does.
It returns an answer in F_q[[x]][y] (i.e., a polynomial in y over power series in x over F_q) . You can argue whether it's desirable for a system to try to be that smart, but all computer algebra systems I know are a "step back" relative to this. Programming languages do not tend to have type models that would even allow you to try and make sense of this kind of question.
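For readers who want to try it, a session along these lines illustrates the point (a sketch from memory, not a verbatim transcript; printing details vary between Sage versions):

sage: R.<x,y> = ZZ[]                  # A lives in ZZ[x,y]
sage: A = x*y + 2
sage: S.<x> = PowerSeriesRing(GF(7))  # rebinds x; B lives in F_7[[x]]
sage: B = 1 + x + O(x^5)
sage: (A + B).parent()                # expected: polynomials in y over F_7[[x]]

Neither operand was constructed with the other in mind; the coercion model discovers the common parent F_q[[x]][y] on its own.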
On Wed, Jul 23, 2014 at 5:47 PM, rjf <fat...@gmail.com> wrote:
>
>
> On Saturday, July 19, 2014 8:22:39 AM UTC-7, Nils Bruin wrote:
>>
>> On Saturday, July 19, 2014 5:43:57 AM UTC-7, defeo wrote:
>>>
>>> However, Julia multimethods are backed up by a powerful coercion
>>> system, so I do not understand the "step back" criticism.
>>>
>> That comment wasn't made with respect to Julia, because that would be
>> comparing the coercion facilities of a CAS to those of a programming
>> language. Coercion in a CAS tends to be a *lot* more complicated than what
>> programming languages are designed for. As an example:
>>
>> Consider A+B where A is a polynomial in ZZ[x,y] and B is a power series in
>> F_q[[x]] (finite field with q elements).
>>
>> Do you expect your CAS to make sense of that addition? Sage does.
>
> A CAS that has representations for those two objects will very likely make
> sense of that addition, so Sage is hardly unique.
Show me one.
>> It returns an answer in F_q[[x]][y] (i.e., a polynomial in y over power
>> series in x over F_q) . You can argue whether it's desirable for a system to
>> try to be that smart, but all computer algebra systems I know are a "step
>> back" relative to this. Programming languages do not tend to have type
>> models that would even allow you to try and make sense of this kind of
>> question.
>
>
> A harder question is when the coercion is not so obvious, As a simple
> example, is
> the polynomial x^0 coerced to the integer 1?
The *polynomial* x^0, yes.
> How do you add two bigfloats of different precision?
Return the result in lower precision.
> Or add a float to an exact rational?
Coerce to a rational (same rule as above).
> Do you allow 1/(1+i) or do you coerce by rationalizing the denominator?
That depends on what you're going to do with it. Viewed as elements of
C, or Q[i], yes, rationalize the denominator.
> The speed of dispatch and specialization has probably been explored in the
> context of compiling Lisp, ad nauseum.
Which is one of the reasons it didn't take decades to get to where it is.
[1] http://maxima.sourceforge.net/docs/manual/en/maxima_29.html
> Perhaps Axiom and Fricas have such odd Taylor series objects; they don't
> seem to be
> something that comes up much, and I would expect they have some
> uncomfortable
> properties.
They do come up frequently in algebraic geometry, number theory,
representation theory, combinatorics, coding theory and many other
areas of pure and applied mathematics.
> Note that even adding 5 mod 13 and the integer 10 is potentially
> uncomfortable,
sage: a = Mod(5,13); a
5
sage: parent(a)
Ring of integers modulo 13
sage: type(a)
<type 'sage.rings.finite_rings.integer_mod.IntegerMod_int'>
sage: a + 10
2
sage: parent(a+10)
Ring of integers modulo 13
That was really comfortable.
It's basically the same in Magma and
GAP. In PARI it's equally comfortable.
? Mod(5,13)+10
%2 = Mod(2, 13)
> and the rather common operation of Hensel lifting requires doing arithmetic
> in a combination
> of fields (or rings) of different related sizes.
>
>>
>>
>> >> It returns an answer in F_q[[x]][y] (i.e., a polynomial in y over power
>> >> series in x over F_q) . You can argue whether it's desirable for a
>> >> system to
>> >> try to be that smart, but all computer algebra systems I know are a
>> >> "step
>> >> back" relative to this. Programming languages do not tend to have type
>> >> models that would even allow you to try and make sense of this kind of
>> >> question.
>> >
>> >
>> > A harder question is when the coercion is not so obvious, As a simple
>> > example, is
>> > the polynomial x^0 coerced to the integer 1?
>>
>> The *polynomial* x^0, yes.
>
>
> That means that 1 is not a polynomial. Adds to some checking, if you assume
> that
> a polynomial has a main variable, but 1 does not.
> As does the representation of zero as a polynomial with no terms, or a
> polynomial
> with one term, say 0*x^0.
There are many 1's.
>> > How do you add two bigfloats of different precision?
>>
>> Return the result in lower precision.
>
>
> I would say that is wrong. Macsyma converts both to the globally set
> precision.
You can't be serious -- that's a total recipe for disaster in a
complicated system. Unbelievable.
> There is no reason to believe that a "low precision" representation is
> inaccurate, and that one can raise the precision by adding (binary) zeros on
> the right.
>
> Or not.
>
>>
>>
>> > Or add a float to an exact rational?
>>
>> Coerce to a rational (same rule as above).
>
>
> That's not the rule used in most programming languages, where the result is
> returned as a float. Like FORTRAN did, I guess. Any float can be converted
> to an exact rational, but do you want to do that? Sometimes. That's
> why it is sticky.
It's not what Sage does. Robert wrote the wrong thing (though his
"see above" means he just got his words confused).
In Sage, one converts the rational to the parent ring of the float,
since that has lower precision.
sage: 1.3 + 2/3
1.96666666666667
Systematically, throughout the system we coerce from higher precision
(more info) naturally to lower precision. Another example is
Z ---> Z/13Z
If you add an exact integer ("high precision") to a number mod 13
("less information"), you get a number mod 13 (as above).
>
>>
>>
>> > Do you allow 1/(1+i) or do you coerce by rationalizing the denominator?
>>
>> That depends on what you're going to do with it. Viewed as elements of
>> C, or Q[i], yes, rationalize the denominator.
>
>
> So now you are going to require users to have a background in, what,
> modern algebra?
Here's what happens:
sage: 1/(1+i)
-1/2*I + 1/2
If a user does have a background in "modern" algebra (which was
developed in the 1800's by some French kid?), then they have more
options available. At least way, way more options than they will
ever have in your CAS (Macsyma).
Just because Sage is capable of doing X, and X is only something that
a user with a background in quasi-functional triads can understand,
does not mean that every user of Sage is assumed to have a background
in quasi-functional triads.
On Sat, Aug 2, 2014 at 11:16 PM, rjf <fat...@gmail.com> wrote:
> On Wednesday, July 30, 2014 10:23:03 PM UTC-7, William wrote:
>> [1] http://maxima.sourceforge.net/docs/manual/en/maxima_29.html
>>
>> > Perhaps Axiom and Fricas have such odd Taylor series objects; they
>> > don't
>> > seem to be
>> > something that comes up much, and I would expect they have some
>> > uncomfortable
>> > properties.
>>
>> They do come up frequently in algebraic geometry, number theory,
>> representation theory, combinatorics, coding theory and many other
>> areas of pure and applied mathematics.
>
>
> You can say
> whatever you want about some made-up computational problems
> in "pure mathematics" but I think you are just bluffing regarding
> applications.
Let me get this straight -- you honestly thinking I'm bluffing regarding applications of:
"power series in one variable over a finite field"
Are you seriously claiming that "power series over a finite field" have no applications?
>> > Note that even adding 5 mod 13 and the integer 10 is potentially
>> > uncomfortable,
>>
>> sage: a = Mod(5,13); a
>> 5
>> sage: parent(a)
>> Ring of integers modulo 13
>> sage: type(a)
>> <type 'sage.rings.finite_rings.integer_mod.IntegerMod_int'>
>> sage: a + 10
>> 2
>> sage: parent(a+10)
>> Ring of integers modulo 13
>>
>> That was really comfortable.
>
>
> Unfortunately, it (a) looks uncomfortable, and (b) got the answer wrong.
>
> The sum of 5 mod13 and 10 is 15, not 2.
>
> The calculation (( 5 mod 13)+10) mod 13 is 2.
>
> Note that the use of mod in the line above, twice, means different things.
> One is a tag on the number 5 and the second is an operation akin to
> remainder.
The answer from Sage is correct, and your remarks to the contrary are just typical of rather shallow understanding of mathematics.
>> It's basically the same in Magma and
>> GAP. In PARI it's equally comfortable.
>>
>> ? Mod(5,13)+10
>> %2 = Mod(2, 13)
>>
>>
>> > and the rather common operation of Hensel lifting requires doing
>> > arithmetic
>> > in a combination
>> > of fields (or rings) of different related sizes.
>
>
> If you knew about Hensel lifting, I would think that you would comment here,
> and that
> you also would know why 15 rather than 2 might be the useful answer.
The answer output by Pari is neither 15 nor 2. It is "Mod(2, 13)".
> coercing the integer 10 into something like 10 mod 13 or perhaps -3
> mod 13 is
> a choice, probably the wrong one.
It is a completely canonical choice, and the only possible sensible one to make in this situation.
That's why every single math software system with any real algebraic sophistication has made this (same) choice.
> presumably one could promote your variable a to have a value in Z and do
> the addition again. Whether comfortably or not.
sage: a = Mod(5,13); a
5
sage: a + 10
2
sage: b = a.lift(); parent(b)
Integer Ring
sage: b + 10
15
>> There are many 1's.
>
>
> that's a choice; it may make good sense if you are dealing with a raft of
> algebraic structures
> with various identities.
We are.
> I'm not convinced it is a good choice if you are dealing with the bulk of
> applied analysis and
> scientific computing applications amenable to symbolic manipulation.
Sage is definitely not only dealing with that.
>> It's not what Sage does. Robert wrote the wrong thing (though his
>> "see above" means he just got his words confused).
>> In Sage, one converts the rational to the parent ring of the float,
>> since that has lower precision.
>
>
> It seems like your understanding of "precision" is more than a little fuzzy.
> You would not be alone in that, but writing programs for other people to
> use sometimes means you need to know enough more to keep from
> offering them holes to fall into.
>
> Also, as you know (machine) floats don't form a ring.
>
> Also, as you know some rationals cannot be converted to machine floats.
>
> ( or do you mean software floats with an unbounded exponent?
> Since these are roughly the same as rationals with power-of-two denominators
> / multipliers).
Yes, Sage uses http://www.mpfr.org/
> What is the precision of the parent ring of a float?? Rings with
> precision???
sage: R = parent(1.399390283048203480923490234092043820348); R
Real Field with 133 bits of precision
sage: R.precision()
133
>
>>
>> sage: 1.3 + 2/3
>> 1.96666666666667
>
>
> arguably, this is wrong.
>
> 1.3, if we presume this is a machine double-float in IEEE form, is
> about
> 1.300000000000000044408920985006261616945266723633....
> or EXACTLY
> 5854679515581645 / 4503599627370496
>
> note that the denominator is 2^52.
>
> adding 2/3 to that gives
>
> 26571237801485927 / 13510798882111488
>
> EXACTLY.
>
>
> which can be written
> 1.9666666666666667110755876516729282836119333903...
>
> So you can either do "mathematical addition" by default or do
> "addition yada yada".
>
In Sage both operands are first converted to a common structure in which the operation can be performed, and then the operation is performed.
>
>>
>> Systematically, throughout the system we coerce from higher precision
>> (more info) naturally to lower precision. Another example is
>>
>> Z ---> Z/13Z
>>
>> If you add an exact integer ("high precision") to a number mod 13
>> ("less information"), you get a number mod 13 (as above).
>
>
> This is a choice but it is hardly defensible.
> Here is an extremely accurate number 0.5
> even though it can be represented in low precision, if I tell you it is
> accurate to 100 decimal digits, that is independent of its representation.
>
> If I write only 0.5, does that mean that 0.5+0.04 = 0.5? by your rule of
> precision, the answer can only have 2 digits, the precision of 0.5, and so
> correctly rounded,
> the answer is 0.5. Tada. [Seriously, some people do something very close
> to this.
> Wolfram's Mathematica, using software floats. One of the easy ways of
> mocking it.]
The string representation of a number in Sage is not the same thing as that number.
Here's an example of how to make the image of 1/2, but stored with very few bits of precision, then add 0.04 to it:
sage: RealField(2)(0.5) + 0.04
0.50
If one needs more precision tracking regarding exactly what happens when doing arithmetic (and using functions) with real or complex numbers, Sage also have efficient interval arithmetic, based on, e.g., http://gforge.inria.fr/projects/mpfi/.
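For example (a sketch, not a verbatim session):

sage: R = RealIntervalField(53)
sage: x = R(0.5) + R(0.04)
sage: x.endpoints()   # an enclosure guaranteed to contain the true sum

Intervals make the rounding error explicit instead of silently absorbing it into the representation.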
>
> I think that numbers mod 13 are perfectly precise, and have full
> information.
>
> Now if you were representing integers modulo some huge modulus as
> nearby floating-point numbers, I guess you would lose some information.
>
> There is an excellent survey on What Every Computer Scientist Should Know
> About Floating Point...
> by David Goldberg. easily found via Google.
>
> I recommend it, often.

Paul Zimmermann writes many useful things about what mathematicians should know about floating point -- http://www.loria.fr/~zimmerma/papers/.
>> That's why every single math software system with any real algebraic
>> sophistication has made this (same) choice.
>
>
> I think the issue comes up as to the definition of a system with "real
> algebraic sophistication" and for that
> matter, whether you are familiar with all of them.
>
> While you are probably aware that I am not a big fan of Mathematica, it is
> quite popular. This is what it
> has in its documentation.
>
> Modulus->n
> is an option that can be given in certain algebraic functions to specify
> that integers should be treated modulo n.
>
> It seems that Mathematica does not have built-in functionality to manipulate
> objects such as Mod(2,13) .
Nope. Or function fields. Or p-adics. Let alone more complicated
objects like points on algebraic varieties.
> In fact, your whole premise seems to be that mathematically sophisticated
> <whatever> must be based on
> some notion of strongly-typed hierarchical mathematical categories, an idea
> that has proven unpopular
> with users, even if it has seemed to system builders, repeatedly and
> apparently incorrectly, as the golden fleece,
> the universal solvent, etc.
>
> I think the historical record shows not only the marketing failure of Axiom,
> but also mod Reduce, and some subsystem
> written in Maple. Also the Newspeak system of a PhD student of mine,
> previously mentioned.
> Now you may think that (for example) Magma is just what you? the world?
> needs (except only that it is not free).
> That does not mean it should be used for scientific computing.
I can just see you sitting there wondering why Sage hasn't died the
tragic death your crystal ball said it would ages ago...
>> > presumably one could promote your variable a to have a value in Z and do
>> > the addition again. Whether comfortably or not.
>>
>> sage: a = Mod(5,13); a
>> 5
>> sage: a + 10
>> 2
>> sage: b = a.lift(); parent(b)
>> Integer Ring
>> sage: b + 10
>> 15
>
>
> Thanks. Can the lifting be done to another computational structure, say
> modulo 13^(2^n)?
Sure.
sage: b = Integers(13^1024)(a); b
5
sage: b^(4*13^1023)
1
We consider the rationals to have infinite precision; our
real "fields" as specified have finite precision. This lower precision
is represented in the output, similar to how significant figures are
used in any other scientific endeavor.
This is similar to 10 + (5 mod 13), where the right hand side has
"less precision" (in particular there's a canonical map one way, many
choices of lift the other).
Also, when a user writes 1.3, they are more likely to mean 13/10 than
5854679515581645 / 4503599627370496, but by expressing it in decimal
form they are asking for a floating point approximation. Note that
real literals are handled specially to allow immediate conversion to a
higher-precision parent.
sage: QQ(1.3)
13/10
sage: (1.3).exact_rational()
5854679515581645/4503599627370496
sage: a = RealField(200)(1.3); a
1.3000000000000000000000000000000000000000000000000000000000
sage: a.exact_rational()
522254864384171839551137680010877845819715972979407671472947/401734511064747568885490523085290650630550748445698208825344
> (There is a difference in Mathematica. Mathematica claims to be the best
> system for scientific computing. :)
Few systems don't make that claim about themselves :).
- Robert
They are two representations of the same canonical object.
>>
> And what structure is that? Does Sage know about Z_{nonprime} ?
Of course, as illustrated.
sage: Integers(13^1024)
Ring of integers modulo 4764...1
> I'm still confused. Is the term "Real Field" in Sage the (or some) real
> field?
>
> If it is an approximation to a field, but not a field, why are you calling
> it a field?
Because it's shorter to type and easier to find/discover than
ApproximateRealField or something like that.
> If that doesn't get you in trouble, why doesn't it? Does Real Field inherit
> from
> Field? Does every non-zero element have an inverse?
Of course it suffers from the same issues that (standard) floating
point numbers do in any language, user beware (and we at least
document that).
> Does Sage have other um, approximations, in its nomenclature?
Sure. RealField(123)[x]. Power series rings. P-adics.
>
>> It is more conservative to convert operands to the domain with less
>> precision.
>
> Why do you say that? You can always exactly convert a float number in
> radix b to
> an equal number of higher precision in radix b by appending zeros.
> So it is more conserving (of values) to do so, rather than clipping off
> bits from the other.
Clipping bits (or digits) is exactly how one is taught to deal with
significant figures in grade school, and follows the principle of
least surprise (though floating point numbers like to play surprises
on you no matter what). It's also what floating point arithmetic does
when the exponent is different.
>> We consider the rationals to have infinite precision, our
>> real "fields" a specified have finite precision. This lower precision
>> is represented in the output, similar to how significant figures are
>> used in any other scientific endeavor.
>
> Thanks for distinguishing between "field" and field. You don't seem
> to understand the concept of precision though.
That's a bold claim. My Ph.D. thesis depended on understanding issues
of precision. I'll admit explaining it to a layman can be difficult.
> You seem to think
> that a number of low precision has some inaccuracy or uncertainty.
> Which it doesn't. 0.5 is the same number as 0.500000.
> Unless you believe that 0.5 is the interval [0.45000000000....01,
> 0.549999..........9]
> which you COULD believe -- some people do believe this.
> But you probably don't want to ask them about scientific computing.
No, I don't think that at all. Sage also has the concept of real
intervals distinct from real numbers.
There's 0.5 the real number equal to one divided by two. There's also
0.5 the IEEE floating point number, which is a representative for an
infinite number of real numbers in a small interval.
>> This is similar to 10 + (5 mod 13), where the right hand side has
>> "less precision" (in particular there's a canonical map one way, many
>> choices of lift the other).
>>
>> Also, when a user writes 1.3, they are more likely to mean 13/10 than
>> 5854679515581645 / 4503599627370496, but by expressing it in decimal
>> form they are asking for a floating point approximation. Note that
>> real literals are handled specially to allow immediate conversion to a
>> higher-precision parent.
>
> What do you know when a user writes 1.3, really?
Quite a bit. They probably didn't mean pi. If they really cared, they
could have been more specific. At least we recognize this ambiguity
but we can't let it paralyze us.
> You want the user
> to believe that Sage uses decimal arithmetic? Seriously? How far
> are you going to try to carry that illusion?
If they don't immediately specify a new domain, we'll treat it as
having 53 bits. It's syntactic sugar.
> You imagine a user who understands rings and fields, and knows that
> Real Field is not a field, but knows so little about computers that he
> thinks 1.3 is 13/10 exactly? (By the way, I have no problem if
> 1.3 actually produces 13/10, and to get a float, you have to try to
> convert it explicitly, and the conversion might even come up with an
> interval
> or an error bound or something that leads to "reliable" computing.
> Rather than some guessy-how-many-bits stab in the dark thing that
> prints as 1.3
Again, it's syntactic sugar.
>> sage: QQ(1.3)
>> 13/10
>> sage: (1.3).exact_rational()
>> 5854679515581645/4503599627370496
>> sage: a = RealField(200)(1.3); a
>> 1.3000000000000000000000000000000000000000000000000000000000
>> sage: a.exact_rational()
>> 522254864384171839551137680010877845819715972979407671472947/401734511064747568885490523085290650630550748445698208825344
>
> I assume it is possible to calculate all kinds of things if you carefully
> specify them,
> in Sage. After all, it has all those programs to call, including sympy.
> The
> issue iwe have been discussing is really what does it do "automatically".
I don't think any answer is going to be right for everyone, given the
diversity of users we have. I personally think we've found a nice
happy medium, but you're free to disagree.
>> sage: a = 0.1000000000000000000000000000000000
>> sage: a.precision()
>> 113
>
>
> So 0.1- 0.10000000000000000000000000000....
> is 0.0????????????????? where ? = undetermined?
sage: .1- 0.10000000000000000000000000000
0.000000000000000
sage: parent(_)
Real Field with 53 bits of precision
> and anyone who writes x+0.1 is relegating that sum to 1 digit "precision"?
As I mentioned previously, we default to the (somewhat arbitrary)
minimum of 53 bits of precision.
>> > Inventing fast methods is fun, although (my opinion) multiplying
>> > integers
>> > of astronomical size is hardly
>> > mainstream scientific computing.
>> >
>> > Not to say that someone might claim that
>> > this problem occurs frequently in many computations in pure and applied
>> > mathematics...
>>
>> I've personally "applied" multiplying astronomically sized integers before
>> (though the result itself is squarely in the domain of pure math):
>> http://www.nsf.gov/news/news_summ.jsp?cntn_id=115646/
>
> Assume there is an application that involves multiplication of polynomials.
> You can multiply polynomials by encoding them as large integers,
> multiplying, and decoding. Sometimes called Kronecker's Trick.
>
> So there are lots of applications. Are they stupid tricks? Probably.
You claim "this idea has practical implications for efficient
programs" in http://www.cs.berkeley.edu/~fateman/papers/polysbyGMP.pdf
Now you claim it's stupid. Maybe it's both.
On Thu, Aug 7, 2014 at 9:02 AM, rjf <fat...@gmail.com> wrote:
>
>
> On Wednesday, August 6, 2014 8:11:21 PM UTC-7, Robert Bradshaw wrote:
>>
>>
>>
>> They are two representations of the same canonical object.
>
>
> The (computer algebra) use of the term, as in "simplified to a canonical
> form" means
> the representation is canonical. It doesn't make much sense to claim that
> all these
> are canonical: 1+1, 2, 2*x^0, sin(x)^2+cos(x)^2 + exp(0).
The point was that there's a canonical domain in which to do the computation.
We also have an object called the ring of integers, but really it's
the ring of integers that fits into the memory of your computer.
Should we not call it a Ring?
>> > Does Sage have other um, approximations, in its nomenclature?
>>
>> Sure. RealField(123)[x]. Power series rings. P-adics.
>
> These approximations are approximations by their nature. If you are
> computing with a power series, the concept inherently includes an error term
> which you are aware of. Real Field is (so far as I know) a concept that
> should have the properties of a field. The version in Sage does not.
> It's like saying someone isn't pregnant. well only a little pregnant.
They're no more approximate by nature than the real numbers.
The p-adic numbers form a field. For any choice of representation some
of them can be represented exactly on a computer, most can't. When
doing computations with p-adic numbers one typically chooses a
precision (e.g. how many digits, not unlike a choice of number of
bits) to use.
Power series (to make things concrete, say the power series in one
variable over the integers) form a ring. For any choice of
representation some of them can be represented exactly on a computer,
most can't. When doing computations with power series one typically
chooses a precision (e.g. how many terms, not unlike a choice of
number of bits) to use.
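Concretely, in Sage (a sketch; the default_prec value is just an illustration, and the printed output is from memory):

sage: S.<t> = PowerSeriesRing(QQ, default_prec=5)
sage: 1/(1 - t)
1 + t + t^2 + t^3 + t^4 + O(t^5)

The O(t^5) marker plays the same role for power series that the 53rd bit plays for doubles: it marks where the finite representation stops.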
Real numbers form a field. For any choice of representation some of
them can be represented exactly on a computer, most can't. When doing
computations with real numbers...
>> >> It is more conservative to convert operands to the domain with less
>> >> precision.
>> >
>> > Why do you say that? You can always exactly convert a float number in
>> > radix b to
>> > an equal number of higher precision in radix b by appending zeros.
>> > So it is more conserving (of values) to do so, rather than clipping off
>> > bits from the other.
>>
>> Clipping bits (or digits) is exactly how one is taught to deal with
>> significant figures in grade school, and follows the principle of
>> least surprise (though floating point numbers like to play surprises
>> on you no matter what). It's also what floating point arithmetic does
>> when the exponent is different.
>
>
> It is of course also taught in physics and chemistry labs, and I used this
> myself in the days when slide-rules were used and you could read only
> 3 or so significant figures. That doesn't make it suitable for a computer
> system. There are many things you learn along the way that are simplified
> versions of the more fully elaborated systems of higher math.
> What did you know about the branch cuts in the complex logarithm
> or log(-1) when you were first introduced to log?
Only being able to store 53 significant bits is completely analogous
to only being able to read 3 significant (decimal) figures.
I think
the analogy is very suitable for a computer system. It can clearly be
made much more rigorous and precise.
Or are you seriously proposing when adding 3.14159 and 1e-100 it makes
more sense, by default, to pad the left hand side with zeros (whether
in binary or decimal) and return 3.1415900000...0001 as the result?
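For comparison, this is exactly what ordinary double-precision arithmetic already does (plain Python, nothing Sage-specific):

>>> 3.14159 + 1e-100   # the small term falls entirely below the last bit
3.14159
>>> a = float(2**53)   # 9007199254740992.0, exactly representable
>>> (a + 1.0) - a      # the 1.0 was clipped during the addition
0.0

Padding with zeros would not change this; a 53-bit result simply has nowhere to keep the low-order bits.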
>> >> We consider the rationals to have infinite precision, our
>> >> real "fields" a specified have finite precision. This lower precision
>> >> is represented in the output, similar to how significant figures are
>> >> used in any other scientific endeavor.
>> >
>> > Thanks for distinguishing between "field" and field. You don't seem
>> > to understand the concept of precision though.
>>
>> That's a bold claim. My Ph.D. thesis depended on understanding issues
>> of precision. I'll admit explaining it to a layman can be difficult.
>
> Is your thesis available online? I would certainly look at it and see
> how you define precision.
Yes, it's online.
I define precision (e.g. computing a value to a given precision) as
the negated log of the absolute (respectively relative) error, but
most importantly it's something you can have more or less of, and lose
due to rounding, etc. Perhaps I could have used the term "accuracy"
instead. I use the actual errors themselves in my analysis.
Generally, however, precision is something attached to a real "field"
and specifies the number of bits used to represent its elements (and,
consequently, exactly what rounding errors are introduced when doing
arithmetic).
They're representatives for the actual (either unknown or impossible
to represent) elements of your data. And, yes, they're also exact
rational numbers--once again they can be both--but the point is that
the input and output floating point numbers are viewed as
perturbations of the (actual) real field elements you care about.
Here they also talk about "loss of precision" similar in spirit to the
precision (of a value) as above.
- Robert
On 9 Aug 2014 02:57, "rjf" <fat...@gmail.com> wrote:
> On Thursday, August 7, 2014 10:55:37 PM UTC-7, Robert Bradshaw wrote:
>> The point was that there's a canonical domain in which to do the computation.
>
>
> I have not previously encountered the term "canonical domain". There is
> a CAS literature which includes the concept of simplification to a canonical form.
Richard,
Is it really necessary or desirable to quote such large sections of a message?
I have not counted the number of words in your email, but it fills 18 screen fills on my mobile phone. I am sure that you could trim a large amount out, whilst preserving sufficient that someone can follow the context.
Dave.
On Fri, 8 Aug 2014 18:57:21 -0700 (PDT), rjf <fat...@gmail.com> wrote:
> On Thursday, August 7, 2014 10:55:37 PM UTC-7, Robert Bradshaw wrote:
>
> > On Thu, Aug 7, 2014 at 9:02 AM, rjf <fat...@gmail.com> wrote:
> >
> > > On Wednesday, August 6, 2014 8:11:21 PM UTC-7, Robert Bradshaw
> > > wrote:
...
> Just that for a mathematician to call something a field when it isn't
> would, I think, be a serious mistake. You seem to think that it is
> ok for a computer system to do so. I certainly agree that there are
> more important issues with floating point computation than the
> fact that these numbers do not constitute a real closed field.
In my experience computer scientists are often more precise than
mathematicians. :-)
It seems quite common to me to (intentionally) confuse mathematical
objects with their imperfect realizations. For instance when I use my
compasses to draw something that looks a lot like a circle, I would call
what I drew a circle. Yet I am sure that the points that I drew do not
have equal distance to the center.
....
Similarly I would call RealField(prec) a field, because it represents
the field of real numbers (e.g. Dedekind cuts). At the same time I am
sure that some mathematical properties that I know from the mathematical
field of real numbers will not hold in RealField(prec).
Luckily Sage
readily admits that RealField(prec) is not an exact representation of
the mathematical field of real numbers:
sage: F = RealField(100)
sage: F.is_exact()
False
I agree that it is imprecise that when using F.is_field(), we are
asking whether the mathematical object of real numbers is a field, while
when using F.is_exact() we are asking whether the representation is
exact.
By the way, there also is AA or AlgebraicRealField() in Sage, which is
an exact representation of the real closed field of algebraic real
numbers.
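A quick sketch of the difference (from memory, not a verbatim session):

sage: AA(2).sqrt()^2 == 2   # exact, even though sqrt(2) is irrational
True

AA pays for exactness with potentially expensive arithmetic, but it really is a field.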
...
> The domain of arbitrary-precision integers is an excellent model of
> the ring of integers. It is true that one can specify a computation
> that would fill up the memory of all the computers in existence. or
> even all the atoms in the (known?) universe. Presumably a
> well-constructed support system will give an error message on much
> smaller examples. I assume that your Real Field operation of
> division would give an error if the result is inexact.
RealField(prec) is an inexact approximation of a field (as witnessed by
is_exact), so I never expect the division to be exact.
In fact I don't
expect addition, negation, multiplication, or subtraction to be exact
either. Indeed, they are not exact:
sage: a = 1e-58
sage: a
1.00000000000000e-58
sage: (1+a)-1
0.000000000000000
> > > > > Does Sage have other um, approximations, in its nomenclature?
> > > >
> > > > Sure. RealField(123)[x]. Power series rings. P-adics.
> > >
> > > These approximations are approximations by their nature. If you
> > > are computing with a power series, the concept inherently includes
> > > an error term which you are aware of. Real Field is (so far as I
> > > know) a concept that should have the properties of a field. The
> > > version in Sage does not. It's like saying someone isn't pregnant.
> > > well only a little pregnant.
> >
> > They're no more approximate by nature than the real numbers.
>
> Huh? How so? I was not aware that real numbers (at least the ones
> that you can construct) are approximate by nature. But maybe you
> can direct me to some reference on this.
What do you mean by "can construct"? The computable reals perhaps? Those
would indeed form an actual field. I am very unsure about how practical
they are though.
Might we then not also have a ring of computable power series? That is,
those power series given by an algorithm that given n returns the n'th
term in finite time?
...
> > Power series (to make things concrete, say the power series in one
> > variable over the integers) form a ring.
> > For any choice of
> > representation some of them can be represented exactly on a
> > computer, most can't. When doing computations with power series one
> > typically chooses a precision (e.g. how many terms, not unlike a
> > choice of number of bits) to use.
>
> Truncated power series with coefficients that are typically exact
> rational numbers (but could be, you say , elements of a finite field)
> form a computation structure that has exact operations. Just because
> you associate some other concept that is outside the computer with
> that TPS doesn't mean the TPS is approximate.
I would argue that a ring of truncated power series with rational
coefficients is an inexact representation of the ring of (untruncated)
power series with rational coefficients, just as RealField(prec) is an
inexact representation of the field of real numbers.
It is important to realize that my use of "inexact" here refers to
TPS/RealField(prec) as a representation of the ring of (untruncated)
power series with rational coefficients/the field of real numbers, and
not to TPS/RealField(prec) itself.
The object PowerSeriesRing(QQ, 'x') in Sage is not a ring of truncated
power series QQ[x]/x^n. It is not a ring at all. In fact == is not even
transitive:
sage: R.<x> = PowerSeriesRing(QQ, 'x')
sage: a = O(x^20)
sage: a in R
True
sage: (0 == x^30 + a) and (x^30 + a == x^30)
True
sage: 0 == x^30
False
> > Real numbers form a field. For any choice of representation some of
> > them can be represented exactly on a computer, most can't. When
> > doing computations with real numbers...
>
> So you would say that 3 is approximate, because maybe it is pi, but pi
> cannot be represented as an integer. This point of view seems to me
> to be peculiar.
Indeed, I would argue that 3 in RealField(100) is approximate.
For
instance the (computable) real number 3.00...001, where the ellipsis is
1000 zeroes, gets mapped to 3 in RealField(100), but is not equal to 3
as an actual real number.
When speaking of exact/inexact I think it is important, to keep track of
what is to be represented.
Hence the element 3 of RealField(100) is an
inexact approximation of a real number since there are many reals that
become 3 in RealField(100).
The element 3 of IntegerRing() is an exact
approximation of an integer, since there is only one integer that
becomes 3 in IntegerRing().
....
> > to only being able to read 3 significant (decimal) figures.
>
> Actually this analogy is false. The 3 digits (sometimes 4) from a
> slide rule are the best that can be read out because of the inherent
> uncertainty in the rulings and construction of the slide rule, the
> human eye reading the lines, etc. So if I read my slide rule and say
> 0.25 it is because I think it is closer to 0.25 than 0.24 or 0.26
> There is uncertainty there. If a floating point number is computed as
> 0.25, there is no uncertainty in the representation per se. It is
> 1/4, exactly a binary fraction, etc. Now you could use this
> representation in various ways, e.g. 0.25+-0.01 storing 2 numbers
> representing a center and a "radius" or an interval or ..../ But the
> floating point number itself is simply a computer representation of a
> particular rational number aaa x 2^bbb Nothing more, nothing less.
> And in particular it does NOT mean that bits 54,55,56... are
> uncertain. Those bits do not exist in the representation and are
> irrelevant for ascertaining the value of the number aaa x 2^bbb.
>
> So the analogy is false.
I think it is again important to remember what is to be represented. If
you use floats to express numbers of the form aaa x 2^bbb (with
appropriate conditions on aaa and bbb), then sure, floats are an exact
representation. However there are many real numbers that are not of
this form.
So, if you use floats for real numbers, then they are an
inexact representation. In that case the bits 54, 55, 56, ... of the
real number are lost in the 53 bit floating point representation. Hence
given the 53 bit floating point representation, the bits 54, 55,
56, ... of the real number that it is supposed to represent, are indeed
uncertain. (This is simplified since the bit-numbering depends on the
exponent.)
If you happen to know that the real number that is represented is of the
form aaa x 2^bbb (with appropriate conditions on aaa and bbb), then you
can reconstruct all those extra bits, and the representation is arguably
exact.
But how should Sage know that the represented real number is of
this form?
...
> > Or are you seriously proposing when adding 3.14159 and 1e-100 it
> > makes more sense, by default, to pad the left hand side with zeros
> > (whether in binary or decimal) and return 3.1415900000...0001 as the
> > result?
>
> If you did so, you would preserve the identity (a+b)-a = b
How would the real number pi be represented in this system? Do you have
to explicitly say that you want the first n digits? Or is the padding
scheme smart enough to add pi's digits? In the latter case, how do you
keep checking (in)equality efficient?
What does (1/3)*3 give? Does it compare equal to 1? I mean, I would
imagine 1/3 to be 0.333..., with more digits 3 as I need them, coming
from larger and larger 0 paddings of 1 and 3. Supposing this, (1/3)*3
would be 0.999..., with more digits 9 as I need them. Will comparing
with 1 ever finish?
...
> Now if the user specified the kind of arithmetic explicitly, or even
> implicitly by saying "use IEEE754 binary floating point arithmetic
> everywhere" then I could go along with that.
I think you can specify this to some extent. At least
sage: RealField?
mentions opportunities to tweak precision and rounding.
You can
use AA if you want exact arithmetic (of real algebraic numbers). I don't
think there is a field of computable reals in Sage.
...
> > > > > You seem to think that a number of low precision has some
> > > > > inaccuracy or uncertainty. Which it doesn't. 0.5 is the same
> > > > > number as 0.500000. Unless you believe that 0.5 is the
> > > > > interval [0.45000000000....01, 0.549999..........9] which you
> > > > > COULD believe -- some people do believe this.
> > > > > But you probably don't want to ask them about scientific
> > > > > computing.
Again, I think you should keep track of ambient sets here. For sure, the
real numbers 0.5 and 0.5000000 are the same, and real numbers have no
uncertainty (when representing real numbers). However, users do not type
real numbers. They type finite strings of characters that they probably
mean to represent some real number. The _strings_ 0.5 and 0.5000000 are
not the same. Hence it might be reasonable to assign different meaning
to these strings, as long as the user also has a way to precisely
express what is meant. Yet, I'm no expert on scientific computing, so I
will leave this be.
...
> > > > There's 0.5 the real number equal to one divided by two. There's
> > > > also 0.5 the IEEE floating point number, which is a
> > > > representative for an infinite number of real numbers in a small
> > > > interval.
> > >
> > > Can you cite a source for that last statement? While I suppose
> > > you can decide anything you wish about your computer, the
> > > (canonical?) explanation is that the IEEE ordinary floats
> > > correspond to a subset of the EXACT rational numbers equal to
> > > <some integer> X 2^<some integer> which is not an interval at all.
> > > (there are also inf and nan things which are not ordinary in that
> > > sense)
> > >
> > > So, citation? (And I don't mean citing a physicist, or someone
> > > who learned his arithmetic in 4th grade and hasn't re-evaluated it
> > > since. A legitimate numerical analyst.)
I am not a numerical analyst, so perhaps the following is naive, but I
would really like know where it is flawed or why my assumptions are
unreasonable.
I call a real number exactly representable when it occurs as an exact
value of a float with some bounded number of bits for its exponent.
Then when using floats to represent real numbers, you have to decide
for each real number x by which exactly representable real number f(x)
it should be represented.
I can think of two desirable properties:
* exactly representable real numbers should be represented by
themselves,
* the relation <= should be preserved.
These properties yield that for each exactly representable real number y
the set {x real : f(x) = y} is an interval around y.
Moreover this
interval is contained in every interval (a, b) where a and b are exactly
representable real numbers with a < y and y < b.
Okay, we need some condition on the mantissa and exponent, for if the
mantissa and exponent are allowed to be arbitrary integers, a function
with these properties fails to exist: Let f be a partial function with
the required properties, then the intervals {x real : f(x) = y}, being
contained in the intervals (a,b) with a and b exactly representable
with a < y < b, necessarily contain only y. Since there are only
countable many exactly representable real numbers, each with preimage
containing only itself, the domain of definition of f is countable. In
particular f is not a total function.
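One can poke at these intervals directly; a small illustration in plain Python (math.nextafter needs Python 3.9 or later):

import math

y = 0.5
lo = math.nextafter(y, 0.0)  # largest double below 0.5
hi = math.nextafter(y, 1.0)  # smallest double above 0.5
# Under round-to-nearest, the reals that map to the double 0.5 form
# (roughly) the open interval ((lo + y)/2, (y + hi)/2) -- exactly the
# kind of interval around y described above.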
...
> What this means is that the numbers in the system A and b often come
> from the real world and you don't know their true values, just the
> values that you measured.
> When you enter the data into the computer you put the values for A and
> b that are the closest numbers to what you believe. A major question
> in computational linear algebra is how to measure the sensitivity of
> the computed solution to slight changes in the initial data. The
> numbers used in solving the system are exactly the represented
> floating-point numbers.
>
> > They're representatives for the actual (either unknown or impossible
> > to represent) elements of your data. And, yes, they're also exact
> > rational numbers--once again they can be both--but the point is that
> > the input and output floating point numbers are viewed as
> > perturbations of the (actual) real field elements you care about.
>
> That does not give you or anyone else permission to say "oh, this
> data might not be right so instead of solving the system as best as
> I can, I will use some off-hand rule to clip bits off, since it's just
> a perturbation of something". No, you compute the best you can
> with the data you have.
You cannot just assume either that the data is exact.
Also it makes more sense to me not to do the best you can, but the best
you can do efficiently. Trying to do the best you can is of no use if it
means running out of memory or taking a year. If doing things
efficiently involves some clipping, then that's okay with me,
especially if the clipping is configurable.
Regards,
Erik Massop
You didn't tell us the answers to the riddles. Or did I miss them?
Oh what a shame I missed the party. And I don't have time to read all the wonderfully eloquent nonsense in this thread.
But I think Julia is good for a few years yet. It's one of the few truly 21st century programming languages available.
* I think Sage's RealField(prec) is more accurately called
fixed-precision floating-point than arbitrary-precision floating-point,
since the ranges of mantissa and exponent are fixed once prec is known.
* I should not be participating in threads like these. They take too
much time that should be spent on work.
On Wednesday, August 20, 2014 4:54:35 AM UTC+2, Bill Hart wrote:
> Oh what a shame I missed the party. And I don't have time to read all the wonderfully eloquent nonsense in this thread.
So, since we both are the only remaining hangover crowd, I've one last question about Julia:
But I think Julia is good for a few years yet. It's one of the few truly 21st century programming languages available.
All your arguments center on technical merits: speed, efficient maintenance of "exponentially" growing complexities in codebases, etc. Those are without question desirable properties. What's missing for me is the other side for the actual humble user of this.
Python's roots are in ABC (which does not stand for abstract base classes, but for the effort to produce a human-friendly programming language as a successor to BASIC). Does Julia have such goals?
Second question is about multi-dispatch, and goes in the same direction: Given I have an object behind x, is there a x.[TAB] which lists me what I can "do" with it?
Or does this playful way of discovering functionalities work differently?
-- H
Thus the language is formally extensible at no cost and in an extremely simple way that the user can easily do themselves.
julia> getindex(a::Array{Int, 1}, b::Int) = print("test")
getindex (generic function with 140 methods)
Incidentally, the whole argument type-dependent function dispatch is at the core of the GAP language (though you have to recompile the GAP kernel to change it). And GAP predates K&R C.
But it does X nonetheless, whereas programming Sage
to do X from scratch would take the right expert about 2 years. This
illustrates how Sage is failing to be a viable alternative to Magma.
If Magma does X, and Sage doesn't, and people care about X, then Sage
is not a viable alternative.
Unfortunately, I don't see "Sage switching to Julia", or Sage
having better support for "(jit, dependent typing, metaprogramming,
type checking and inference, etc.)", as in itself being enough of a
reason for people to have the option to choose Sage over Magma.
Such advantages don't address the **real difficulty** in implementing
a first version of X that works,
since implementing X fundamentally
takes reading and understanding papers, working out the right
algorithms, etc. -- it's almost entirely a human expert and design
problem.
At least that's my experience watching/making this happen
many times over the last decade.
Please keep in mind that much of
your (Bill Hart's) implementation work has been looking at
implementations of various algorithms and coming up with a new
approach that is better in certain ways -- that's certainly very
important and useful, but it is a different problem than doing the
first general implementation of code for computing with some class of
mathematical objects. It's like the difference between building the
first ever car and inventing a better wheel.
Here are two concrete examples of X's where I've seen this exact
situation in the last few months, and for which there is no open
source implementation in any language, despite a demand for one over
the last 10 years:
- (a) quaternion algebras over number fields (topic of Sage Days
61 next week -- https://sites.google.com/site/sagedays61/)
- (b) function fields, mainly Florian Hess's Riemann-Roch spaces
algorithm; this supports basic algebraic geometry constructions, which
Magma has and Sage doesn't.
There are dozens of other similar examples. They are hard to get
funded with grants, because in all cases the proposal basically says:
"replicate something that's already been done... years ago". Also,
the type of funding one wants is for a person to work full time on the
problem for 1-2 years. A grant proposal that says "We plan to do Y,
and [lots about Y], but we need X [a little about X]," has a chance to
get funded (depending on Y). However, there are many X's (like a and
b above) where I certainly can't make a compelling case for any Y
except "make sage a viable alternative".
If we actually had "real money" to support open source math software
development, I'm certain there could be useful open source
implementations of (a) and (b) above within 2 years. I can think of
specific people who would love to carry out these core implementation
problems, but will instead be teaching a lot of Calculus, and
probably not enjoying it.
Anyway, a lack of funding is a big obstruction to creating a viable
open source free alternative to the Ma's for all.
-- William
Oh, perhaps you mean that Magma has switched over to Julia? :-)
>
> As I have already mentioned, the technology behind Julia can in theory do
> generic programming even faster than C or C++. So it is a strictly superior
> technology to what has been available in the past. It's a genuine
> innovation, whether the Julia designers see it that way or not.
Wait, are you saying there is an optimising Julia compiler available?
Without it, it sounds a bit like the claims by Java people that their
JIT compilers are so great that they beat C compiled with gcc...
My cursory look into Julia only turned up JIT stuff, not a real compiler...
>
> The Julia developers are not like me. They like Python and embrace it (many
> of them are pythonistas). They would never go around claiming Julia is in
> any way superior to Python. But I am not one of them and can go around
> making such claims, because in the sense that I mean it, it is true.
What personally put me off Julia when it was announced was the
"We are power Matlab users" at the top of the blog post announcing the
Julia project back in Feb 2012. I thought "oh yes, swell, we do need a better
Matlab nchoosek() and 'everything is a matrix' greatness :-)"...
> My cursory look into Julia only turned up JIT stuff, not a real compiler...
Static compilation is coming to Julia. But not because it will speed things up. It already has all the speed of an optimising compiler at the console, as you type!
I thought we would have another argument about real numbers this morning, but I see the topic has changed back to Julia. And I have to say that I find the Nemo project very interesting, in spite of the over-aggressive publicity it is receiving here.
You don't often get to see a new piece of software in its infancy, with the opportunity to request features from the developers so early on. So here's a suggestion for Bill Hart; you write:
>Nemo is Julia, in the same way as Sage is Python
I'm not sure if I fully understand this, but I'm afraid I do. And I don't think it's a good decision. Let me explain.
I'm very jealous of some colleagues in numerical analysis here at my university whose ecosystem is ipython+numpy+matplotlib (pretty much; I may be forgetting something). All three can be installed with easy_install at the prompt; it's modular (don't load matplotlib if you don't plan to plot); it's lightweight.
Why shouldn't Nemo be a Julia module that people load explicitly?
There are many good reasons why Sage cannot just be a module for Python. However I remember conversations in this forum years ago when people were musing about how great it would be if, by some miracle, Sage could indeed be a (collection of) module(s): it would be so simple and healthy. (Wasn't the Purple Sage project started with this "simplicity" in mind?) And it would be much easier to explain to people what Sage is. This may be a minor point, but if you've followed the conversation on reddit a few hours ago:
http://www.reddit.com/r/math/comments/2e3qla/william_stein_says_sage_has_overall_failed/
then you will see how many misconceptions people have about Sage, and how scary the technology appears to some.
Again there are very good reasons why Sage is what it is, but I see no reason why Nemo should be compelled to take over Julia like Sage has had to take over Python. You even say that no preparsing is needed! I'm not sure if, for example, you want to insist that integer constants be interpreted automatically as elements of ZZ. It seems to me a minor nuisance to have to write a few extra ZZ's, if in exchange the system can be that much simpler. I'm wondering what other people think.
Oh, and I ain't trying Nemo myself until installation is along the lines of "julia --easy_install nemo", sorry... I would love to try it on SageMathCloud if it's available.
best,
Pierre
Thanks for the examples. However, given that in Julia you can re-define about everything (cf. your example with the function extracting elements from a list, replaced by " print 'test' "), surely you could change, say, the behaviour of the function / upon loading the Nemo module? (without mention of a bare-module mode or anything) This in order to make the ordinary machine words behave consistently in comparison with bignums. (There would be the occasional problem when someone has written code specifically expecting a/b to be a float, but will you ever mix code involving floats with your own?) This would be the exact reverse of the "from __future__ import division" which you see at the beginning of so many python+numpy scripts.
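Concretely, the sort of override meant here could look like this (a sketch only, not anything Nemo actually does; it globally changes the meaning of Int/Int, which is exactly the caveat above):

import Base: /          # extend the built-in operator

# exact division on machine integers: return a Rational instead of a Float64
/(a::Int, b::Int) = a // b

After such a definition is loaded, 1/3 evaluates to 1//3 rather than 0.3333333333333333.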
I don't think the factorization of a = 2*3*5^2*17*71*65537*123456543234565433489*3249861239487619238746032103420132947612384712340813246341 is a good example. You can expect the user to know that this will fail, and wrap things in ZZ. (Do not underestimate the users: there are zillions of casual C programmers out there who realize that "int" is not the same as "double"; understanding the difference between Int64 and ZZ is no harder.) Of course Python or Sage users don't have to worry about that. But isn't Julia about better performance at the cost of specifying types?
As for :
>It's also a pain in the neck to take output from one system, such as Pari/GP and cut and paste a long polynomial, having to put ZZ(...) around every bignum. It's just not practical.
I tend to disagree. A little function parse_PARI(string) or whatever would be easy to write.
I also have an unrelated question, still about Nemo. I think in Julia's type system, when A <: B, then the intuition is that elements of type A are also of type B, or am I wrong? Like elements of type Int64 are also of type Integer (or whatever it's called). If so, your supertype of ZZ should not be called Ring, but RingElement. For an element of ZZ is not a ring. Well, abstract types are not types, they are more like type classes, but still with the machine types the logic is as I've said.
Pierre
On Fri, Aug 8, 2014 at 6:57 PM, rjf <fat...@gmail.com> wrote:
>
>
> On Thursday, August 7, 2014 10:55:37 PM UTC-7, Robert Bradshaw wrote:
>>
>> On Thu, Aug 7, 2014 at 9:02 AM, rjf <fat...@gmail.com> wrote:
>> >
>> >
>> > On Wednesday, August 6, 2014 8:11:21 PM UTC-7, Robert Bradshaw wrote:
>> >>
>> >>
>> >>
>> >> The are two representations of the same canonical object.
>> >
>> >
>> > The (computer algebra) use of the term, as in "simplified to a canonical
>> > form" means
>> > the representation is canonical. It doesn't make much sense to claim
>> > that
>> > all these
>> > are canonical: 1+1, 2, 2*x^0, sin(x)^2+cos(x)^2 + exp(0).
>>
>> The point was that there's a canonical domain in which to do the
>> computation.
>
> I have not previously encountered the term "canonical domain". There is
> a CAS literature which includes the concept of simplification to a canonical
> form.
> There is also a useful concept of a zero-equivalence test, whereby E1-E2
> can be shown to be zero, although there is not necessarily a simplification
> routine that will "canonically simplify" E1 to E3 and also E2 to E3.
You have to think beyond just the limited domain of a computer *algebra* system.
If I want to do arithmetic between a \in Z and b \in Z/nZ, I could
either lift b to Z or push a down to Z/nZ. Only one of these maps is
canonical.
>> We also have an object called the ring of integers, but really it's
>> the ring of integers that fits into the memory of your computer.
>> Should we not call it a Ring?
>
> The domain of arbitrary-precision integers is an excellent model of the
> ring of integers. It is true that one can specify a computation that would
> fill up the memory of all the computers in existence. or even all the atoms
> in the (known?) universe. Presumably a well-constructed support system
> will give an error message on much smaller examples. I assume
> that your Real Field operation of division would give an error if the
> result is inexact.
Such a system would be pedantic to the point of being useless.
….snip...
> or log(-1) when you were first introduced to log?
>>
>> Only being able to store 53 significant bits is completely analogous
>> to only being able to read 3 significant (decimal) figures.
>
>
> Actually this analogy is false. The 3 digits (sometimes 4) from a slide
> rule are the best that can be read out because of the inherent uncertainty
> in the rulings and construction of the slide rule, the human eye reading
> the lines, etc. So if I read my slide rule and say 0.25 it is because I
> think
> it is closer to 0.25 than 0.24 or 0.26. There is uncertainty there.
> If a floating point number is computed as 0.25, there is no uncertainty in
> the representation per se. It is 1/4, exactly a binary fraction, etc.
> Now you could use this representation in various ways, e.g.
> 0.25+-0.01 storing 2 numbers representing a center and a "radius"
> or an interval or .... But the floating point number itself is simply
> a computer representation of a particular rational number aaa x 2^bbb
> Nothing more, nothing less. And in particular it does NOT mean
> that bits 54,55,56... are uncertain. Those bits do not exist in the
> representation
> and are irrelevant for ascertaining the value of the number aaa x 2^bbb.
>
> So the analogy is false.
I would argue that most floating point numbers are either (1)
real-world measurements or (2) intermediate results, both of which are
(again, likely) approximations to the value they're representing.
When
they are measured/stored, they are truncated due to the "construction
of the [machine], the [sensor] reading the [values], etc." Thus the
analogy.
> On the other hand, the
>
>>
>> I think
>> the analogy is very suitable for a computer system. It can clearly be
>> made much more rigorous and precise.
>
> What you are referring to is sometimes called significance arithmetic,
> and it has been thoroughly discredited.
> Sadly, Wolfram the physicist put it in Mathematica.
Nope, that's not what I'm referring to.
>> Or are you seriously proposing when adding 3.14159 and 1e-100 it makes
>> more sense, by default, to pad the left hand side with zeros (whether
>> in binary or decimal) and return 3.1415900000...0001 as the result?
>
>
> If you did so, you would preserve the identity (a+b)-a = b
>
> If you round to some number of bits, say 53, with a=3.14159 and b=1e-100,
> the left side is 0, and the right side is 1e-100. The relative error in
> the answer
> is, um, infinite.
>
> Now if the user specified the kind of arithmetic explicitly, or even
> implicitly
> by saying "use IEEE754 binary floating point arithmetic everywhere" then
> I could go along with that.
You would suggest that IEEE754 be requested by the user
(perhaps globally) before using it? Is that how Maxima works? (I think
not.)
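(As an aside, the rounding effect being argued about is easy to reproduce at a Julia prompt; a quick illustration, not taken from the thread:)

julia> a = 3.14159; b = 1e-100;

julia> (a + b) - a        # with 53-bit rounding, b is lost entirely
0.0

So (a+b)-a comes out as 0.0 while b is 1e-100: the identity fails, and the relative error of the computed b is indeed total.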
>> > So it sounds like you actually read the input as 13/10, because only
>> > then
>> > can
>> > you approximate it to higher precision than 53 bits or whatever. Why
>> > not
>> > just admit this instead of talking
>> > about 1.3.
>>
>> In this case the user gives us a decimal literal. Yes, this literal is
>> equal to 13/10. We defer interpreting this as a 53-bit binary floating
>> point number long enough for the user to tell us to interpret it
>> differently. This prevents surprises like
>>
>> sage: RealField(100)(float(1.3))
>> 1.3000000000000000444089209850
>>
>> or, more subtly
>>
>> sage: sqrt(RealField(100)(float(1.3)))
>> 1.1401754250991379986106491649
>>
>> instead of
>>
>> sage: sqrt(RealField(100)(1.3))
>> 1.1401754250991379791360490256
>>
>> When you write 1.3, do you really think 5854679515581645 /
>> 4503599627370496, or is your head really thinking "the closest thing
>> to 13/10 that I can get given my choice of floating point
>> representation?" I bet it's the latter, which is why we do what we do.
>
> I suspect it is not what python does.
It is, in the degenerate case that Python only has one native choice
of floating point representation. It's also what (to bring things full
circle) Julia does too.
> It is what Macsyma does if you write 1.3b0 to indicate "bigfloat".
You're still skirting the question of what *you* mean when you write 1.3.
- Robert
You're (intentionally?) missing the point.
Given the linear system Ax = b, where A and b are given in terms of
floating point numbers, one could
(1) Return the x' that is closest (according to some chosen rounding
scheme) to the exact solution x of Ax = b, treating the entries of A
and b as exact rational numbers.
(2) Solve for x approximately, in such a way that x is the exact
solution to a perturbed problem, and one is able to bound the error
wrt the exact solution in terms of condition numbers, etc.
Approach (2) is saying, to quote you, "oh, this data might not be
right so instead of solving the system as best as I can, I will use
some off-hand rule to clip bits off" where the "clipping" here is not
treating the unrepresented lower order bits as all 0. The input is not
taken to be exact (meaning exactly representing the system you really
want to solve).
All (floating point) linear algebra software that I'm aware of takes
approach (2).
- Robert
RJF said...
I don't know about canonical maps. The term "canonical representation" makes sense to me.
He means this. In algebra Z/nZ is actually a ring modulo an ideal. Z is the ring, nZ is the ideal. The elements of Z/nZ are usually written a + nZ. It means precisely this:

a + nZ = { x in Z such that x = a + nk for some k in Z }

So it is a *set* of numbers (actually, in algebra it is called a coset). When you write Mod(a, n) you really mean a + nZ if you are an algebraist, i.e. the *set* of numbers congruent to a modulo n.

So Z/nZ consists of (co)sets of the form C = Mod(a, n) = a + nZ. If you like, Z/nZ is a set of sets (actually a group of cosets in algebra).

There is a canonical map from Z to Z/nZ: we map b in Z to the (co)set C in Z/nZ that contains b. If a = b mod n then C = Mod(a, n) is the *only* one of these sets which contains b. So the map is canonical. We only have one choice.

But there is no canonical map going the other way. For any such (co)set C, which element of Z are you going to pick? Any of the elements of C will do. You can make an arbitrary choice, e.g. pick the unique element of C in the range [0, n). Or you could pick the unique element in the range (-n/2, n/2]. But that is not a canonical choice. You had to make an arbitrary choice. Someone else may have made a different choice. (Indeed some people use the latter choice because some algorithms, such as gcd and Jacobi symbols, run faster with this choice.)

So canonical map means there is only one way you could define the map. You don't need to tell anyone what your map is, because they *have* to make the same choice as you, as there are no alternatives. That's the meaning of canonical in mathematics.

It's probably not a terminology they teach in Computer Algebra,
but it is taught to undergraduates around the world in pure mathematics.
The whole argument here is that because only one direction gives a canonical map, coercion must only proceed in that direction. Otherwise your computer algebra system is making a choice that someone else's computer algebra system does not.
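A few lines of Julia make the asymmetry concrete (my own sketch with made-up helper names, not code from Nemo or the thread):

# canonical map Z -> Z/nZ: an integer b lands in the unique coset that contains it
to_residue(b::Integer, n::Integer) = mod(b, n)          # representative stored in [0, n)

# the reverse direction requires an arbitrary choice of representative
lift_nonneg(a::Integer, n::Integer) = a                              # the coset element in [0, n)
lift_symmetric(a::Integer, n::Integer) = a > div(n, 2) ? a - n : a   # the coset element in (-n/2, n/2]

Both lifts are perfectly usable, but neither is forced on you; that is why Z -> Z/nZ is canonical and the map back is not.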
On 8/21/14, 18:08, Bill Hart wrote:
> In theory, I can do generic programming in Julia that is faster (at
> runtime) than you can do in *any* C compiler. And I don't just mean a
> little bit. I mean a lot. I don't even think the Julia people fully
> realise this yet (I might be wrong about that, I don't know). And it's
> not been realised in practice for many cases. But it is true.
Are you talking about the sort of things illustrated in this julia issue
with the various macros they talk about?
https://github.com/JuliaLang/julia/issues/7033
Thanks,
Jason
P.S. I've been following Julia for a while and I'm very intrigued by it.
I've also started keeping an eye on Nemo recently---very interesting!
On 8/21/14, 11:36, Bill Hart wrote:
> You can define the A.b syntax in Julia if you should so desire. It's
> essentially just another kind of method overload in Julia.
>
> And then, it supports A.<tab> giving a list of all the things that could
> follow the dot, just as Python would.
Are you saying you can overload the dot operator? I thought that was
still an open issue (https://github.com/JuliaLang/julia/issues/1974).
> A <: B either tells you if a type A is of abstract type B or if abstract type A is of abstract type B.
So, whether or not you use RingElement or Ring depends on whether you consider an element of ZZ to be a ring element or whether you think the integers ZZ form a ring. With the machine types, we really want a :: T where T <: Integer <: Ring. Again with the baremodule idea, we might be able to do this, but it's not possible if you use the standard provided modules, as far as I can tell.
I guess to me the only names that would really make sense would be ZZElement <: RRElement <: RingElement (or else rename the builtin types to something else: it's "Integer" that's not the name of a set!! Rename it Integers and everything makes sense). Anyway, I probably need to learn to think differently about this.
I do wonder about one thing. Since it is not possible in Julia to have instances of abstract types, and you have ZZ <: RR with the possibility of doing ZZ(3) as well as RR(3), does it mean that you've created supplementary concrete types, say ZZ_concrete <: ZZ and RR_concrete <: RR or similar, together with a bunch of constructors to hide everything?
If so, I find this quite complicated. Here good old OOP treats this much more elegantly. But perhaps I am mistaken?
Pierre
Thanks again for the explanations. I realize that there were many things about Julia/Nemo which I had misunderstood. Let me explain a little, so I can (i) embarrass myself a little more, and (ii) perhaps give an indication of common mistakes, for anyone who would write a tutorial about Julia/Nemo :-)
Mistake (1): taking it for OOP.
When hearing about the way Julia worked, I jumped when I saw it wasn't OO, and then naturally was looking for ways to emulate OOP with Julia (rather than make the effort of a paradigm shift). And with multi-dispatch, I thought OK, if I want to write code that applies to objects of type A, B and C at once, just declare them A <: X, B <: X and C <: X and define f(x :: X). However this gave me the wrong intuition "A <: X is the same as A inherits from X".
At this point, the fact that abstract types could not be instantiated was seen as a limitation compared to OOP.
And somehow, it is.
Except at the same time, the type system does many new things. Apples and oranges, really.
Mistake (2): taking types for sets.
This little discussion about names reveals another mistake of mine: thinking of types as sets. I thought ZZ <: QQ for example.
That's not even consistent with the previous mistake. In Sage, say, I'm guessing QQ is of type Field or something, which inherits from a type Ring or something -- but not the other way around!
So mistake (1) should have led me to believe QQ <: ZZ. Oh well.
Mistake (3): thinking about Sage too much. In my defense, ZZ in Sage is an object,
like 1/2, not a type or anything.
This has unconsciously contributed to my confusion greatly. Also the fact that there is a coercion from ZZ to QQ, and somewhere I must have taken A <: B to mean "there is a coercion from A to B".
Pierre
Julia is OOP!
or use it as a parameter
(e.g. the domain of a Map object). Sometimes the Objects (in the
categorical sense) are more interesting than their elements.
If your Objects are parameterized types,
I'm curious how you handle in
Julia the fact that R[x] is a Euclidean domain iff R is a field.
(As
you mention, trying to use the type hierarchy in Python to model
categorical relationships in Sage has generally lead to pain which is
why we're moving away from that...)
Julia is OOP!
If it is,
how do you do the following in Julia: define a class (I'm using Python parlance) Foo
with some attributes,
say if x is an instance of Foo it has an integer x.n; and then define a class Bar that inherits from Foo, that is, has all the attributes from Foo, and some more, for example if y is an instance of Bar it has y.n because it is an instance of Foo, and say it also has a string y.name. In Python:
class Foo:
    n = 0

class Bar(Foo):
    name = ""
These classes don't even have methods, so it's not exactly complicated OOP. Yet, I'm not sure how to do this (emulate this?) the Julia way. Presumably, not with the <: relation at all. I'm wondering how multi-dispatch comes in, given there are no methods involved.
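For comparison, a rough Julia equivalent (my own sketch, written in the 0.3-era syntax used elsewhere in this thread; later Julia spells the keywords "abstract type" and "mutable struct"): fields are not inherited, so the shared behaviour hangs off an abstract type and each concrete type declares its own data.

abstract AbstractFoo

type Foo <: AbstractFoo
    n::Int
end

type Bar <: AbstractFoo
    n::Int             # fields are repeated (or a Foo is embedded), not inherited
    name::String
end

# anything written against AbstractFoo works for both, via dispatch
double_n(x::AbstractFoo) = 2 * x.n

So the "is-a" part of the Python example lives in the abstract type, while the data layout is spelled out per concrete type.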
On Mon, Aug 25, 2014 at 1:24 PM, Bill Hart <goodwi...@googlemail.com> wrote:
>> > Correct. But classes are essentially objects in Python because
>> > everything is
>> > an object, pretty much. This really muddies the water. Better to think
>> > of
>> > classes and objects as being separate concepts.
>>
>> I, for one, find it very convenient (especially compared to using
>> reflection and other hacks that are needed to reason about type at
>> runtime in C++ or Java). Not that some languages aren't even better.
>>
>> > Also better to think of ZZ as a class in Sage and elements of ZZ as
>> > objects,
>> > even though actually ZZ happens to be an object too because Python.
>>
>> No. ZZ is an object because you might want to do things with it, like
>> ask for its properties (is it infinite?)
>
> Sorry if I have that wrong. But when I type ZZ?? into Sage, it pulls up the
> definition of a class.
If a is an instance of class A, typing a?? will give you the definition of A.
> Anyway, it's irrelevant because Python.
>
> In Julia types are also able to be passed around as first class objects.
> Which means you can define multimethods which take types as parameters. I do
> this all the time in Nemo.
>
> So I can send a type to a function which gives me some properties of the
> type, if I so wish. I thought about adding properties like is_infinite to
> types in Nemo. But I found that I had been thinking the wrong way about
> things all along.
Good, because I was worried there for a minute you were praising Julia
for not treating types as first class objects.
>> or use it as a parameter
>> (e.g. the domain of a Map object). Sometimes the Objects (in the
>> categorical sense) are more interesting than their elements.
>
> If you are a category theorist yes. Most mathematicians think categories are
> just sets.
>
>>
>> If your Objects are parameterized types,
>>
>> I'm curious how you handle in
>> Julia the fact that R[x] is a Euclidean domain iff R is a field.
>
> I'm curious, is the category of Euclidean domains a full subcategory of
> Rings?
I think so; the Euclidean property doesn't place any constraint on the
morphisms.
I am still curious how you would express that
UnivariatePolynomialRing{QQ} <: EuclideanDomains but the same doesn't
hold for UnivariatePolynomialRing{ZZ} in Julia.
>> (As
>> you mention, trying to use the type hierarchy in Python to model
>> categorical relationships in Sage has generally lead to pain which is
>> why we're moving away from that...)
>
> Oh I can't think why.
I buy that the type system is more powerful in Julia (yay), but I'm
still skeptical that it is the best way to express the relationship
between objects and categories, and at the same time elements and
objects, and attach useful information to those categories and
objects.
- Robert
myfun(a :: Union(EuclideanPoly, Poly))

or

typealias AnyPoly Union(EuclideanPoly, Poly)
myfun(a :: AnyPoly)
I am still curious how you would express that
UnivariatePolynomialRing{QQ} <: EuclideanDomains but the same doesn't
hold for UnivariatePolynomialRing{ZZ} in Julia.
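For what it's worth, here is a toy way to get the practical effect without literally making UnivariatePolynomialRing{QQ} a subtype of a EuclideanDomain type: push the "coefficients form a field" requirement into a parametric method constraint. This is my own sketch with made-up names (0.3-era syntax; later Julia writes the constraint as where {T<:Field}), not Nemo's actual hierarchy.

abstract Ring
abstract Field <: Ring

immutable QQ <: Field end      # stand-ins for coefficient rings
immutable ZZ <: Ring end

# hypothetical polynomial type parameterised by its coefficient ring
type Poly{T<:Ring}
    coeffs::Vector{Int}        # placeholder representation
end

# gcd is only offered when the coefficient ring is a field ...
mygcd{T<:Field}(a::Poly{T}, b::Poly{T}) = "Euclidean gcd"

# ... while addition is available over any coefficient ring
myadd{T<:Ring}(a::Poly{T}, b::Poly{T}) = "generic add"

With this, mygcd(Poly{QQ}([1]), Poly{QQ}([2])) dispatches fine while the ZZ version is a MethodError, even though no EuclideanDomain supertype appears anywhere in the hierarchy.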