Background: The current version of the LRM defines the minimum
required implementation range for the type INTEGER as -(2**31 - 1) to
2**31 - 1. Many tool vendors extend that by one on the negative side,
to incorporate the full 32 bit two's complement range of values that
most computer hardware supports.
The VHDL standard operators for INTEGER type all promote the results
to the full range of INTEGER, regardless of the subtypes supplied as
operands. Simulation carries and stores in memory the full 32 bit
signed value, regardless of the range defined by the variable's
subtype. Synthesis carries the full 32 bit signed value through
intermediate results, then truncates it for storage in a
variable/signal of a lesser subtype. Synthesis then optimizes away the
intermediate-result bits that do not influence the bits that are
ultimately visible in registers, outputs, etc.
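A small self-contained example of that promotion behavior (the entity
and subtype names are just illustrative):

  entity promo_demo is
  end entity;

  architecture sim of promo_demo is
    subtype nibble is integer range 0 to 15;
  begin
    process
      variable a, b : nibble  := 15;
      variable s    : integer;
    begin
      -- "a + b" is evaluated over the full INTEGER range, so the
      -- intermediate value 30 is fine even though both operands are
      -- nibble; a range check would only occur if the result were
      -- stored back into a nibble-constrained object.
      s := a + b;
      report "s = " & integer'image(s);
      wait;
    end process;
  end architecture;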
The way I see it, we have three options for implementing a larger
integer style quantity in VHDL:
1) We could simply extend the minimum required range of INTEGER for
compliant implementations.
2) Or, we could redefine INTEGER as a same-sized subtype of some new,
larger super-integer base type.
3) Or, we could define new base_type(s) that are larger than INTEGER.
Each of these options has side effects in usability and performance,
unless we significantly alter the strong typing mechanisms inherent in
VHDL, which I do not advocate.
I am hoping an open discussion of these side effects will lead to a
consensus for the path forward. Of course we also need to discuss what
the range of the new integer should be, since that may weigh in the
discussion of side effects, particularly performance.
Andy
I generally expect INTEGER to simulate quickly. To me, that implies
that the best expanded type size would be 64 bits, so that math
operations on it can still be performed natively on 64 bit processors.
I'd say that, in order to avoid breaking any current assumptions as to
the size of integer, I'd want to have a LONG_INTEGER of +/-(2**63 - 1)
size. Then the LRM could redefine INTEGER as being a subtype of
LONG_INTEGER, but tool vendors could choose whether to work with it
as a subtype implementation or to add dedicated speed-up code for it at
their discretion.
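Something like the following sketch, written as comments because the
64 bit bounds are exactly what the current LRM minimums do not
guarantee (illustration only, not code any tool must accept today):

  -- Hypothetical declarations in STANDARD:
  -- type    LONG_INTEGER is range -(2**63 - 1) to 2**63 - 1;
  -- subtype INTEGER      is LONG_INTEGER range -(2**31 - 1) to 2**31 - 1;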
--
Rob Gaddi, Highland Technology
Email address is currently out of order
I would strongly advocate defining an integer base type with
arbitrarily large bounds. This avoids the current situation
where designers have to use unsigned/signed artificially
when they really would like to use a large integer.
For the rationale and a prototype, see:
http://www.jandecaluwe.com/hdldesign/counting.html
Quoting from the essay:
"""
The MyHDL solution can serve as a prototype for other HDLs.
In particular, the definition of VHDL's integer could be
changed so that its range becomes unbound instead of
implementation-defined. Elaborated designs would always
use integer subtypes with a defined range, but these would
now be able to represent integers with arbitrarily large values.
"""
For backward compatibility, the best would probably be to
define an abstract base type with a new name. The current
integer would become a subtype of this, but with bounds
larger than the currently required minimum. Implementation-defined
bounds should be eradicated in any case.
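As a rough sketch of what I mean (pseudo-declarations; an unbounded
scalar base type does not exist in VHDL today, so the names and syntax
here are purely hypothetical):

  -- type    UNBOUNDED_INTEGER is range <>;  -- hypothetical: no fixed bounds
  -- subtype INTEGER is UNBOUNDED_INTEGER range -(2**31 - 1) to 2**31 - 1;
  --
  -- Elaborated designs would always use subtypes with defined bounds,
  -- but those bounds could be arbitrarily large, for example:
  -- subtype frame_count_t is UNBOUNDED_INTEGER range 0 to 2**128;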
Jan
--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
> I generally expect INTEGER to simulate quickly. To me, that implies
> that the best expanded type size would be 64 bits, so that math
> operations on it can still be performed natively on 64 bit processors.
>
> I'd say that, in order to avoid breaking any current assumptions as to
> the size of integer, I'd want to have a LONG_INTEGER of +/-(2**63 - 1)
> size. Then the LRM could redefine INTEGER as being a subtype of
> LONG_INTEGER, but tool vendors could choose whether to work with it
> as a subtype implementation or to add dedicated speed-up code for it at
> their discretion.
I would call this line of reasoning premature optimization, which
is the source of all evil as we know. The evil being that we are
forced to use low level types such as signed/unsigned artificially
as soon as the representation's bit width exceeds 32 or 64 bits.
Why should low level computer architecture considerations have
such a drastic influence on my modeling abstractions?
Even a language like Python used to distinguish between
"normal" and "long" integers, until they realized that the
distinction is artificial to users and got rid of it.
I don't see a conceptual technical difficulty in having a base
integer with arbitrarily large bounds, and simply requiring that
only subtypes with defined bounds can be used at elaboration time.
A compiler should have little difficulty finding out
which integer subtypes can be optimized for speed and how.
The current VHDL scalar subtyping mechanism would not permit a
compiler to optimize performance based on the subtype range, because
that information is not available to the operators and subprograms that
receive these operands/arguments. You can query an argument's base
type, but not its subtype or that subtype's range. Changing the scalar
subtyping mechanism for just integer would smell like a hack.
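A minimal example of the asymmetry (the package and procedure names
are just illustrative):

  library ieee;
  use ieee.numeric_std.all;

  package subtype_visibility is
    procedure show_bounds (v : in unsigned; i : in integer);
  end package;

  package body subtype_visibility is
    procedure show_bounds (v : in unsigned; i : in integer) is
    begin
      -- The array actual's size is visible inside the body...
      report "v'length = " & integer'image(v'length);
      -- ...but nothing reveals the subtype of the actual mapped to i;
      -- only the formal's type INTEGER (via INTEGER'LOW and INTEGER'HIGH)
      -- is visible, even if the caller passed a NATURAL or a narrower
      -- subtype.
    end procedure;
  end package body;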
Whether or not we attempt to expand the scalar subtyping model in VHDL
is a discussion we should have, but it should be clear that preserving
backwards compatibility with the current integer type and its subtypes
would not permit this.
Andy
In a word: simulation performance!
I understand your point about the virtues of arbitrarily sized
numerical representations, but we already have that, in the fixed
point package, with only a few shortfalls*, one of which is the
horrible simulation performance (compared to integers).
Every modeler is forced to make a choice between scalability and
performance in their model. If you want extreme scalability, you
sacrifice performance, and if you want extreme performance, you
sacrifice scalability. This will be true for as long as we are
modelling things the size of the very machines that must execute these
models (as the hardware performance of the computers running the
simulation increases, so does the complexity of the models we will be
simulating). VHDL is no different from any other modelling and
simulation language in this respect.
What I would like to do is expand the scalability of integer somewhat,
while retaining the significant performance advantage of integers over
vector based, arbitrary length representations. I believe that would
put the upper bound on the maximum size of integer somewhere between
64 and 256 bits signed. At 256 bits signed, most real-world structures
(particularly those where performance is required) are representable,
yet the performance would easily exceed that of 4 bit sfixed.
* The fixed point packages support arithmetic completeness with one
exception: ufixed - ufixed = ufixed (why did they bother to expand the
result by one bit, but not make it signed?) Also, the lack of the
ability to override the assignment operator means that manual size
adjustment prior to storage is almost always required. While the
former does not require changes to the language, the latter certainly
does, and is a whole argument in and of itself. A performance middle
ground between integer and vector based representations could even be
staked out. We could create either an optimized version of the fixed
point package for std.bit (a la numeric_bit), or a fixed point approach
using a base type that is an unconstrained array of integers. Both of these may
be implemented without changing the language; however, the same
restrictions on resizing prior to assignment would apply.
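For illustration, a minimal VHDL-2008 example of the resize burden
(the entity and signal names are just examples):

  library ieee;
  use ieee.fixed_pkg.all;

  entity resize_demo is
  end entity;

  architecture sim of resize_demo is
    signal a, b : ufixed(7 downto 0) := (others => '0');
    signal diff : ufixed(7 downto 0);
  begin
    -- "a - b" returns ufixed(8 downto 0): one bit wider, but still
    -- unsigned, so a true negative difference cannot be represented,
    -- and the result must be explicitly resized before it can be
    -- stored back into diff.
    diff <= resize(a - b, diff'high, diff'low);
  end architecture;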
Andy
>There is a conversation going on via email about the best way to
>expand the range of the current integer type in VHDL in the next
>version of the LRM. So I thought I would submit some options to this
>group to see what would be preferred.
>
>Background: The current version of the LRM defines the minimum
>required implementation range for the type INTEGER as -(2**31 - 1) to
>2**31 - 1. Many tool vendors extend that by one on the negative side,
>to incorporate the full 32 bit two's complement range of values that
>most computer hardware supports.
And your testbenches die horribly when you port to a simulator that doesn't...
More painful for me is that NATURAL is a 31-bit type, but I can see the
rationale for not wanting Natural values that cannot be converted to Integer.
>The way I see it, we have three options for implementing a larger
>integer style quantity in VHDL:
>
>1) We could simply extend the minimum required range of INTEGER for
>compliant implementations.
>
>2) Or, we could redefine INTEGER as a same-sized subtype of some new,
>larger super-integer base type.
>
>3) Or, we could define new base_type(s) that are larger than INTEGER.
A lot of what VHDL got right in the first place came from Ada.
Now, I'd never used Ada until recently, being put off in the early days by its
"complexity" (it's much simpler than C++) and ability to bring a 1980's
workstation to its knees.
But now that I finally tried it I like it a lot, and VHDL looks more cut-down
and restrictive than it used to, by comparison. And Ada has developed nicely
over the years, the Ada-2005 dialect could make a strong case for being "C++
done right". Ada-2005 would be a good place to start for adding object-based
concepts in a clean, safe and efficient manner to VHDL for example. But that's
another story.
Relevant here is the suggestion that you could do a lot worse than extending
VHDL by following the approach in Ada's integer type system.
This could unify the three approaches.
In Ada, you are encouraged to define your own integer (as well as other) types.
There is a hypothetical "universal_integer" type with essentially infinite
range; it is never directly used but it forms the base type for all integer
types (which are subtypes of it with declared ranges).
If I declare an integer type the compiler will implement it using an appropriate
base type (e.g. 8-bit or 64-bit integer) or return a compilation error if it
can't (e.g if I declare a 65536 bit integer). The point is that the limit isn't
hard-wired into the standard (which merely guarantees you get at least 16
bits!).
From the user point of view, just write it. It'll either work or fail to
compile; it won't surprise you at runtime. (Unless you add pragmas to explicitly
suppress overflow checks, then fly it in Ariane-5).
If nobody can compile anything useful, the market demands better compilers.
(I doubt there's been a 16-bit Ada compiler in a while...)
http://www.adaic.com/standards/05rm/html/RM-3-5-4.html
or Chapter 6 in John Barnes "Programming in Ada 2005" (Addison-Wesley)
There may be nothing wrong with requiring the range that an implementation
must support for user-declared integer types to be at least 64 bits
(approach 1 above). Beyond 64 bits, let the market decide, at least until
VHDL-2015.
Ada has a standard INTEGER type for convenience. Its definition is not part of
the standard but declared in the standard library (you can discover the range
using attributes, just as in VHDL). It isn't an upper limit on the range
accepted by the compiler, though some compilers probably treat it as such.
If this approach is adopted, STANDARD.INTEGER would be identical with the
current VHDL integer for compatibility - approach (2) above.
There would be nothing wrong with adding a longer integer type to the standard
library (approach 3) though I'd rather not see long_integer, long_long_integer,
long_long_long_integer and short_integer, thank you very much; there's enough of
that elsewhere. I would prefer Integer_32 and Integer_64 (etc) with Integer
then declared to be an alias for Integer_32.
(incidentally, VHDL "alias" is somewhat similar to the Ada "renames" keyword,
only not as useful)
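Something along those lines could be sketched as an ordinary package,
so that the 32 bit half compiles today (the 64 bit type is the part
that needs the LRM change, and the names here are only suggestions):

  package integer_names is
    subtype Integer_32 is integer range -2147483647 to 2147483647;
    -- type Integer_64 is range -(2**63 - 1) to 2**63 - 1;  -- needs >32 bit
    --                                                      -- tool support
    alias Int is Integer_32;  -- in STANDARD the alias would be named Integer
  end package;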
>Each of these options has side effects in usability and performance,
>unless we significantly alter the strong typing mechanisms inherent in
>VHDL, which I do not advocate.
(shudder) NOOO....
>I am hoping an open discussion of these side effects will lead to a
>consensus for the path forward. Of course we also need to discuss what
>the range of the new integer should be, since that may weigh in the
>discussion of side effects, particularly performance.
As far as performance is concerned, in this approach the compiler selects the
most appropriate base type for your declaration, so you only take the hit for
128-bit integers if you declared them, and nowhere else.
- Brian
With this approach, is there not a worry that we get an integer equivalent
of the std_logic_arith package, which is non-IEEE-standard but becomes a
de-facto standard?
The hypothetical "universal integer" type is also in the VHDL LRM,
though it has a range defined as "implementation defined".
> If I declare an integer type the compiler will implement it using an appropriate
> base type (e.g. 8-bit or 64-bit integer) or return a compilation error if it
> can't (e.g if I declare a 65536 bit integer). The point is that the limit isn't
> hard-wired into the standard (which merely guarantees you get at least 16
> bits!).
>
How does that differ from VHDL, except that "you get at least 16 bits"
is replaced by "you get at least a bit less than 32 bits"?
Tools are already free to provide a larger integer range. Cadence
Incisive supports 64 bit integers I believe.
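For example, nothing in the language stops you from writing the
following (the type name is made up); whether it elaborates depends
purely on the tool's integer capacity:

  package wide_int_try is
    type sample_count_t is range 0 to 2**40;  -- rejected by a tool whose
                                              -- universal integer is 32 bits,
                                              -- accepted by one with 64
  end package;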
> From the user point of view, just write it. It'll either work or fail to
> compile; it won't surprise you at runtime.
Again how is that different from VHDL?
The only issue I can see is that tool vendors have chosen not to
implement larger integers than the minimum required in the standard. I
guess they don't see any competitive advantage in doing so.
regards
Alan
P.S. Perhaps the answer is to reduce the minimum required range to 16
bits like Ada :-;
--
Alan Fitch
Senior Consultant
Doulos - Developing Design Know-how
VHDL * Verilog * SystemVerilog * SystemC * PSL * Perl * Tcl/Tk * Project
Services
Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24
1AW, UK
Tel: + 44 (0)1425 471223 Email: alan....@doulos.com
Fax: +44 (0)1425 471573 http://www.doulos.com
------------------------------------------------------------------------
This message may contain personal views which are not the views of
Doulos, unless specifically stated.
It might be possible. But I don't think so, because (a) we're not talking about
adding new types, just effectively adding new subtypes of universal_integer, so
there are no opportunities for overloading operators; (b) consequently there's no
vacuum to fill with a non-standard package; and (c) hopefully, people have
learned from last time (which may be optimism on my part!).
And besides as Alan points out, VHDL adopted more of the Ada type system than I
remembered.
- Brian
>Brian Drummond wrote:
>> In Ada, you are encouraged to define your own integer (as well as other) types.
>> There is a hypothetical "universal_integer" type with essentially infinite
>> range; it is never directly used but it forms the base type for all integer
>> types (which are subtypes of it with declared ranges).
>>
>
>The hypothetical "universal integer" type is also in the VHDL LRM,
>though it has a range defined as "implementation defined".
I stand corrected; VHDL adopted more of the Ada approach than I remember.
I should have gone back to the LRM before posting.
>> If I declare an integer type the compiler will implement it using an appropriate
>> base type (e.g. 8-bit or 64-bit integer) or return a compilation error if it
>> can't (e.g if I declare a 65536 bit integer). The point is that the limit isn't
>> hard-wired into the standard (which merely guarantees you get at least 16
>> bits!).
>
>How does that differ from VHDL, except that "you get at least 16 bits"
>is replaced by "you get at least a bit less than 32 bits"?
Perhaps it doesn't in theory.
However it doesn't seem to present as much of a problem in practice.
Possibly the market, though fairly small, has enough clout to get what it wants?
Probably the GNAT Ada compiler, riding on the back of GCC, gets 64-bits for
free, and the commercial compilers don't want to be outdone by a freebie...
>Tools are already free to provide a larger integer range. Cadence
>Incisive supports 64 bit integers I believe.
Good...
>> From the user point of view, just write it. It'll either work or fail to
>> compile; it won't surprise you at runtime.
>
>Again how is that different from VHDL?
If a value such as -2**31 is NOT part of the range of integer, then a
calculation that produces it will abort on all conforming systems (raise the
"constraint error" exception in Ada), not "work" on Modelsim (propagating
potential trouble down the chain) and abort on Xilinx ISE Simulator.
(I'm not advocating adding exceptions to VHDL here; that would be a huge step.
Exceptions are just the means Ada uses to abort; VHDL has other means)
In other words, Integer does what it says; if you have a problem with that, use
another type. If you want your integer types to overflow silently, Ada offers
modular types, though these are effectively unsigned.
In VHDL you may be free to use other types, but (if they are > 32 bits) you
only have one (so far?) choice of vendor.
>The only issue I can see is that tool vendors have chosen not to
>implement larger integers than the minimum required in the standard. I
>guess they don't see any competitive advantage in doing so,
The question then is whether a change in the standard (or a reinforcement in its
wording) is the best way to provide the pressure that is currently lacking.
Maybe it is.
>P.S. Perhaps the answer is to reduce the minimum required range to 16
>bits like Ada :-;
heh, I *hope* that wasn't what I was suggesting!
Although... a ridiculously small limit does encourage vendors to comfortably
exceed it, or be laughed at.
regards,
- Brian
Let's be clear; a vendor can conform to the current standard AND
provide > 32 bit capacity integers. The standard requires they support
AT LEAST the almost 32 bit two's complement range.
The competitiveness (or apparent lack thereof) in the VHDL tools
business is based on the fact that none of the vendors can expect
their tool to be used to the exclusion of all others. I applaud
Cadence if they support 64 bit integers, but what good does it do them
if no other simulation or synthesis tool also supports it? Can anyone
really use such features in the real world (e.g. the world that
provides most of the users and buyers of these tools) unless they are
supported by other tools too? I suppose that if a user only uses the
64 bit integers in their testbench, and only ever uses the Cadence
simulator that supports them, then they might be tempted to use it. But
most of the big customers do not limit themselves to one simulator
(hey, simulator bugs happen, and you sometimes have to switch
simulators to help find or confirm the bug).
This is a far different thing than a synthesis tool vendor adding new
features to their tool like recognizing certain existing constructs
and patterns in a particular way to create optimal implementations.
They are not doing anything that will harm compatibility with any
simulators, and if it does not work with their competitors' synthesis
tool (yet!), so much the better. To date, the most effective tool in
getting a synthesis vendor to add a feature is to tell them "your
competitor B does it." Maybe that would work with getting a synthesis
tool to adopt 64 bit integers "because the Cadence simulator supports
it", but that tends to be a slow, trickle-down effect for which we
have been waiting far too long.
This is why the simplest way to get >32 bit integers is to simply
expand the minimum required range in the LRM. Every supplier knows
they need to be able to tell their customers that they are compliant
to the standard, and this would just be part of it.
However, whether the simplest way is also the best way, that is the
heart of the matter...
Andy
I wonder whether Jan's idea is more likely to succeed. You could add a
new integer type to the language called e.g. biginteger which has
unlimited size. The tool vendors can then compete on performance for any
"reasonable" size (e.g. 64 bits), but the user would be allowed to
create any size integer they wanted.
If the minimum is increased to 64 bits, then presumably that will be a
limitation eventually and we'll all wish there were 128 bit integers (or
whatever).
Creating unlimited integers would be more similar to SystemC, where there
are two sorts of integers - up to 64 bits, and "big integers" of any
size. Big integers simulate more slowly, but different vendors optimize
their performance if you use fewer than 64 bits.
regards
Alan
The concept of "unlimited" already exists with unconstrained vectors,
but VHDL prohibits assignment between different-sized subtypes of the
same base array type.
VHDL provides for communicating the sizes of vector arguments to
subprograms (and the sizes of vector ports to entities). This
information is not provided for scalar subtypes. For
example, if I have a subprogram or entity with an integer type formal
argument or port, and the user maps a natural subtyped signal to that
argument/port, I cannot tell inside the body what the range of that
signal really is.
Perhaps the concept of an unconstrained integer type could borrow the
ability that vectors have to communicate their actual range to an
unconstrained port. It would be sort of a hybrid between a scalar and
an array.
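For reference, the array-side mechanism I'm referring to looks like
this today (entity and signal names are illustrative):

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity generic_counter is
    port ( clk : in  std_ulogic;
           q   : out unsigned );  -- unconstrained: width set by the actual
  end entity;

  architecture rtl of generic_counter is
    signal count : unsigned(q'range) := (others => '0');
  begin
    q <= count;
    process (clk)
    begin
      if rising_edge(clk) then
        count <= count + 1;  -- count'range follows the actual mapped to q
      end if;
    end process;
  end architecture;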
The operators for such an unconstrained integer type should expand the
results to capture the true arithmetic intent of the operator (similar
to the fixed point package, except to include unsigned - unsigned =
signed), rather than silently roll over or truncate, etc.
The assignment operators for this unconstrained integer type should
also automatically resize (with errors if numeric accuracy were
compromised) the RHS expression result to the capacity of the
constrained object on the LHS.
If we had such an "unconstrained integer type", should we expand it to
support fixed point (i.e. unconstrained integer would be a subset and
special case of unconstrained fixed point)? I know Ada has a fixed
point scalar type system (and a syntax for specifying precision).
Perhaps we could borrow that? I have advocated for a while that ufixed
could supplant numeric_std.unsigned quite easily.
This still leaves on the table the existing scalar integer capacity.
Given that current computer architectures are starting to support 128
or even 256 bit primitive data widths, I would expect that requiring
their support now would not hinder simulation performance excessively
on computers that would likely be used in situations requiring high
performance. A limitation to 64 bit two's complement, while maximizing
performance on a wide variety of computers, does not allow 64 bit
unsigned, which would be necessary in many applications. I say again,
even a 32 bit machine can run 256 bit integer arithmetic faster than 8
bit ufixed or unsigned.
Andy
>On Mar 4, 5:56 am, Brian Drummond <brian_drumm...@btconnect.com>
>wrote:
>> If a value such as -2**31 is NOT part of the range of integer, then a
>> calculation that produces it will abort on all conforming systems
>
>Let's be clear; a vendor can conform to the current standard AND
>provide > 32 bit capacity integers. The standard requires they support
>AT LEAST the almost 32 bit two's complement range.
Sorry if I was unclear; my comment above was NOT a statement about VHDL...
>The competitiveness (or apparent lack thereof) in the VHDL tools
>business is based on the fact that none of the vendors can expect
>their tool to be used to the exclusion of all others. I applaud
>Cadence if they support 64 bit integers, but what good does it do them
>if no other simulation or synthesis tool also supports it?
Not much unless they are also in the synthesis business...
>This is why the simplest way to get >32 bit integers is to simply
>expand the minimum required range in the LRM.
I believe you are right that this has to be at least a part of the changes.
- Brian