
James Harris

Oct 6, 2021, 12:01:00 PM
Simple question: What are the differences between Ada's
Universal_Integer and a typical bigint type?

By bigint I mean a signed integer which expands and contracts to be as
wide as needed.

I read a comment that one cannot define an Ada object as being of type
Universal_Integer but I wondered why not. Wouldn't it make sense to have
Ada programs (or programs in another language, for that matter) treat
integer constants and expressions as of type bigint?

One additional point: In an expression which combines a constant (or
constant expression) with a declared object the value of the undeclared
constant expression would be automatically converted. For example, in

x + 4

the undeclared bigint 4 would be automatically converted to the type of
x as long as it was in range.

Would there be any practical problems with treating integer literals in
that way?
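The rule proposed above can be sketched in Python, whose built-in int is already an arbitrary-precision bigint. The helper coerce_literal is hypothetical, purely to illustrate the "convert if in range, reject otherwise" check a compiler would perform:

```python
# Sketch of the proposed literal rule: an undeclared integer literal is
# a bigint, implicitly converted to the declared type of the other
# operand as long as it fits that type's range.
# coerce_literal is a hypothetical illustration, not any real language's API.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def coerce_literal(literal: int, lo: int, hi: int) -> int:
    """Convert an undeclared bigint literal to a bounded type,
    rejecting out-of-range values at 'compile time'."""
    if not (lo <= literal <= hi):
        raise ValueError(f"literal {literal} out of range [{lo}, {hi}]")
    return literal

x = 100                                            # imagine x declared 32-bit
y = x + coerce_literal(4, INT32_MIN, INT32_MAX)    # '4' fits, so x + 4 is legal
print(y)                                           # 104

try:
    coerce_literal(2**40, INT32_MIN, INT32_MAX)    # too wide for a 32-bit x
except ValueError as e:
    print(e)                                       # rejected at 'compile time'
```

The interesting design question is exactly the one posed: whether this conversion can always be decided statically, or whether some literal expressions force a runtime check.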

--
James Harris

Dmitry A. Kazakov

Oct 6, 2021, 12:27:36 PM
On 2021-10-06 18:00, James Harris wrote:
> Simple question: What are the differences between Ada's
> Universal_Integer and a typical bigint type?

The upcoming standard will have a bigint package (Ada.Numerics.Big_Numbers.Big_Integers) in the standard library.

> By bigint I mean a signed integer which expands and contracts to be as
> wide as needed.
>
> I read a comment that one cannot define an Ada object as being of type
> Universal_Integer but I wondered why not.

The reason was not to burden small targets.

But the main difference is that a universal type is considered a member
of each type hierarchy, e.g. each integer type is a subtype of
Universal_Integer. This is why these are all correct:

A : Array (1..10) of Boolean;
I : Integer := A'Length; -- Universal_Integer becoming Integer
J : Integer_64 := A'Length; -- Universal_Integer becoming Integer_64

> Wouldn't it make sense to have
> Ada programs (or programs in another language, for that matter) treat
> integer constants and expressions as of type bigint?

No, because of the above. Bigint is a normal type, so you have to explicitly
convert to and from it.

> One additional point: In an expression which combines a constant (or
> constant expression) with a declared object the value of the undeclared
> constant expression would be automatically converted. For example,  in
>
>   x + 4
>
> the undeclared bigint 4 would be automatically converted to the type of
> x as long as it was in range.

That would require mechanics Ada does not have, namely ad-hoc sub- and
supertypes. The closest thing to that is C++'s type conversion operators.

<rant on> IMO, the major problem that prevented C++ from becoming a
great language was templates. Instead of investing in the type system,
e.g. user-defined conversions etc., they buried themselves in that mess.
Ada has a similar problem with generics.
<rant off>

> Would there be any practical problems with treating integer literals in
> that way?

The Ada community is very resistant towards OO, or better to say, towards
advanced type systems. They are like Bart and you on these issues.

In any case that would be a huge language change with consequences
extremely difficult to foresee. If there were another language to try
this stuff, Ada could learn from it. Unfortunately, language designers
are busy solving imaginary problems and chasing ghosts of last century.
You know these people... (:-))

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

anti...@math.uni.wroc.pl

Oct 7, 2021, 5:38:23 PM
James Harris <james.h...@gmail.com> wrote:
> Simple question: What are the differences between Ada's
> Universal_Integer and a typical bigint type?
>
> By bigint I mean a signed integer which expands and contracts to be as
> wide as needed.
>
> I read a comment that one cannot define an Ada object as being of type
> Universal_Integer but I wondered why not. Wouldn't it make sense to have
> Ada programs (or programs in another language, for that matter) treat
> integer constants and expressions as of type bigint?

For Ada there are problems, mostly due to the original design
constraints. For other languages there are no essential problems.
I program every day in Spad, where the type 'Integer' is in fact
a bigint type. There is also a type 'SingleInteger' which represents
machine-sized integers (due to particulars of the implementation,
the range of 'SingleInteger' is slightly smaller than that of pure
machine integers).

> One additional point: In an expression which combines a constant (or
> constant expression) with a declared object the value of the undeclared
> constant expression would be automatically converted. For example, in
>
> x + 4
>
> the undeclared bigint 4 would be automatically converted to the type of
> x as long as it was in range.

ATM Spad needs an explicit conversion to the smaller type.

> Would there be any practical problems with treating integer literals in
> that way?

In general, when using integers of differing sizes there is a correctness
versus efficiency tradeoff. Large types mean that there is little
(or no) possibility of overflow. However, when smaller types are
large enough they are frequently much more efficient. Consider

f() == max()$SingleInteger + 2

Currently the Spad compiler generates code based on the result type: if
the return type of 'f' is declared as 'Integer' (that is, bignum), the
compiler generates bignum addition; if the return type is
declared as 'SingleInteger', then the Spad compiler generates the
addition in 'SingleInteger', which will overflow. For this
example Spad's rules in fact work reasonably well, but in general
it is tricky to decide which type to use (Spad tends to err
on the side of correctness and use bignums). In Spad there is the
possibility to manually choose types, that is, you can write:

x +$SingleInteger qconvert(2)@SingleInteger

where '+$SingleInteger' means '+' for type 'SingleInteger' and
'qconvert(2)@SingleInteger' effectively changes the type of the constant.
Note that Spad is used mostly for mathematical computation
and there is a tendency for users to choose values close to
type limits. So there is a quite nontrivial risk that
addition or multiplication in a fixed-width type will overflow.
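The correctness-versus-efficiency tradeoff above can be sketched in Python, whose built-in int is a true bignum. The helper wrap64 is a hypothetical simulation of signed 64-bit two's-complement arithmetic, not Spad's actual 'SingleInteger' semantics:

```python
# Near the top of a fixed-width type's range, bignum arithmetic stays
# correct while fixed-width arithmetic silently wraps around.
# wrap64 is a hypothetical helper, not part of any real compiler.

def wrap64(n: int) -> int:
    """Reduce n to a signed 64-bit two's-complement value."""
    n &= (1 << 64) - 1
    return n - (1 << 64) if n >= 1 << 63 else n

MAX64 = 2**63 - 1          # largest signed 64-bit value

print(MAX64 + 2)           # bignum result: correct, 9223372036854775809
print(wrap64(MAX64 + 2))   # fixed-width result: wraps to -9223372036854775807
```

This is exactly why a compiler that picks the narrow type for efficiency must either prove the values stay in range or accept silent overflow.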

--
Waldek Hebisch

James Harris

Feb 12, 2022, 12:40:22 PM
On 06/10/2021 17:27, Dmitry A. Kazakov wrote:
> On 2021-10-06 18:00, James Harris wrote:
>> Simple question: What are the differences between Ada's
>> Universal_Integer and a typical bigint type?
>
> The incoming standard will have bigint package in the standard library.
>
>> By bigint I mean a signed integer which expands and contracts to be as
>> wide as needed.
>>
>> I read a comment that one cannot define an Ada object as being of type
>> Universal_Integer but I wondered why not.
>
> The reason was not to burden small targets.

I don't see the problem. Couldn't expressions involving /only/ literals
be carried out at compile time?

Similarly, couldn't compile-time expressions (including literals) which
are combined with an identifier be implicitly 'converted' at compile
time into the type of the identifier?

If all the BigInt stuff can be carried out at compile time I cannot see
how it would be a burden to small targets.
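The compile-time scheme described above can be sketched in Python. The helper fold_and_check is hypothetical: it stands in for a compiler pass that evaluates a literal-only expression with unbounded integers and then range-checks the result against the target type, so no bigint code ever reaches the target machine:

```python
# Sketch: fold a literal-only expression at 'compile time' using
# arbitrary-precision arithmetic, then confirm the result fits the
# declared target type. fold_and_check is a hypothetical compiler helper.

import ast

def fold_and_check(expr: str, lo: int, hi: int) -> int:
    """Evaluate a literal-only arithmetic expression with unbounded
    ints, then confirm the result fits the target type's range."""
    node = ast.parse(expr, mode="eval")
    # Reject anything that isn't pure literal arithmetic (no names, calls...).
    for n in ast.walk(node):
        if not isinstance(n, (ast.Expression, ast.BinOp, ast.UnaryOp,
                              ast.Constant, ast.operator, ast.unaryop)):
            raise ValueError("not a literal-only expression")
    value = eval(compile(node, "<literal>", "eval"))
    if not (lo <= value <= hi):
        raise ValueError(f"value {value} does not fit [{lo}, {hi}]")
    return value

print(fold_and_check("2**20 + 5", -2**31, 2**31 - 1))   # 1048581: fits int32

try:
    fold_and_check("2**40", -2**31, 2**31 - 1)          # folds fine, but
except ValueError as e:                                 # fails the range check
    print(e)
```

Under this scheme the bigint arithmetic lives entirely in the compiler, which is consistent with the point that small targets need not carry any bigint runtime.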

>
> But the main difference is that a universal type is considered a member
> of each type hierarchy, e.g. each integer type is a subtype of
> Universal_Integer. This is why these are all correct:
>
>    A : Array (1..10) of Boolean;
>    I : Integer    := A'Length; -- Universal_Integer becoming Integer
>    J : Integer_64 := A'Length; -- Universal_Integer becoming Integer_64
>
>> Wouldn't it make sense to have Ada programs (or programs in another
>> language, for that matter) treat integer constants and expressions as
>> of type bigint?
>
> No, because of above. Bigint is a normal type, so you have to explicitly
> convert from and to it.

I am thinking to specify (in future) that all integer literals are of
BigInt type but to have them combinable with identifiers by implicit
compile-time conversions. For example,

int A := 90   ; the 90 would be of type BigInt but converted to type int

I guess you wouldn't much care for that. :-)

--
James Harris