On Tuesday, October 27, 2020 at 9:11:28 AM UTC-4, KJ wrote:
> On Sunday, October 25, 2020 at 6:07:33 AM UTC-4, gnuarm.del...@gmail.com wrote:
> > Looking for a good way to support initialized block rams in my design I found that VHDL-2008 includes some new predefined array types such as integer_vector, an array of integers.
> > Integers are either 32 or even 64 bits, so that would be a rather wide memory. Is there a way to restrict the range of the integer of such an array?
> Use a subtype, for example: subtype MyInteger is natural range 0 to 255; You can then define MyIntegerVector to be an array of MyInteger.
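In VHDL-2008 that suggestion might look like the following (a minimal sketch; the package and constant names are illustrative, the subtype names are from the example above):

```vhdl
-- Minimal sketch of the subtype approach; my_mem_types and INIT_IMAGE
-- are illustrative names, not part of any standard package.
package my_mem_types is
  subtype MyInteger is natural range 0 to 255;  -- values that fit in 8 bits
  type MyIntegerVector is array (natural range <>) of MyInteger;

  -- An initialized 4-entry memory image:
  constant INIT_IMAGE : MyIntegerVector(0 to 3) := (16#DE#, 16#AD#, 16#BE#, 16#EF#);
end package;
```

Any literal outside 0 to 255 in INIT_IMAGE is a range violation, which is the error-catching behavior discussed below.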
> > I'd like to have a single file for the definition of a block ram because it will require a synthesis attribute which likely won't port well. So rather than scatter this issue around the design, a single module seems like a good idea. The memory module also needs to support initialization. Not sure of the best way to support that. I'm thinking a constant array passed in through a generic.
> Yes, passing the integer array in as a generic will work.
> > I'm still fuzzy on the details of how to make this all work together. I guess I could have a file with the constant array definition. Then the application code can instantiate the memory array with the constant initialization array as a generic.
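One way to arrange that (a hypothetical sketch; the package, entity, and generic names are assumptions, not a vendor template) is to keep the image in a package in its own file and pass it in at instantiation:

```vhdl
-- One file holding the memory image (names are illustrative):
package rom_image_pkg is
  constant ROM_IMAGE : integer_vector(0 to 7) :=
    (16#01#, 16#02#, 16#04#, 16#08#, 16#10#, 16#20#, 16#40#, 16#80#);
end package;

-- Then, in the application code, assuming a memory module named block_ram
-- with an integer_vector generic named INIT:
--
--   u_ram : entity work.block_ram
--     generic map (INIT => work.rom_image_pkg.ROM_IMAGE)
--     port map ( ... );  -- ports elided
```

This keeps the synthesis attribute and the initialization mechanics inside the one memory module, while each application supplies only its own constant image.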
> > I've always heard that memory is best written as an array of integers for simulation efficiency. But that requires limiting the range of the integer. If the integer_vector type is used it would seem the word size is fixed at whatever the synthesis vendor provides in their defaults. Or is there a way to restrict the range of the integers without a special type for each width?
> As I mentioned you can use an integer subtype to limit the range of the integers. However, another way is to use integer and then limit the range when you go to fit it into the actual memory which likely has a std_logic_vector interface to it. That means you will limit the range when you convert the integer to an unsigned when you tell it the number of bits.
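That second approach, sketched below with assumed entity and port names, keeps the integers unconstrained and bounds them only at the conversion onto the std_logic_vector interface:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical sketch: entity and port names are assumptions.
entity conv_demo is
  port (
    value : in  integer;                      -- unconstrained integer
    word  : out std_logic_vector(7 downto 0)  -- 8-bit memory interface
  );
end entity;

architecture rtl of conv_demo is
begin
  -- The width is constrained here, at the conversion, not by an integer
  -- subtype. In simulation, numeric_std warns if 'value' does not fit in
  -- 8 bits, and the result wraps modulo 2**8.
  word <= std_logic_vector(to_unsigned(value, word'length));
end architecture;
```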
Yes, I understand the nature of VHDL. The issue is that if the data width is specified in the memory's type definition, the memory code is specific to that width, and a separate module would be required for each data width, so that is not the best option.
> The 'best' approach will depend on exactly what your memory block interface looks like but here are a couple of considerations:
> - If you use a subtype but you accidentally include a number that is out of range, such as 256 for subtype MyInteger, then the compiler will flag an error.
I believe the simulator will flag the error at run time. In fact it is a fatal error and stops the simulation.
> - If you use std_logic_vector(to_unsigned(...)) to fit it into the memory then the numeric_std library will toss a warning when it tries to convert 256 to an 8 bit value, but convert it to 0. Then you're depending on your ability to notice the warning to find the design error.
The best way I've found is to leave the memory declaration as unbounded integers; in fact, I use the predefined type integer_vector. The I/O ports are vectors for address and data, and the conversions between those vectors and the memory's integers define the word widths, so the tools set the implemented size of the memory without the integer range ever being specified directly.
Once I realized this was already happening in the conversions, it was simple to control the memory width through the generic that specifies the I/O data width.
So the solution is to do nothing with the integers making up the memory and let the conversions set the word widths.
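Putting the pieces together, a sketch of that approach might look like this (entity, generic, and port names are my assumptions; INIT is assumed to hold 2**ADDR_WIDTH elements):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical sketch of the approach described above: the memory stays an
-- unconstrained integer_vector (VHDL-2008), and the conversions at the
-- vector I/O set the word width from the DATA_WIDTH generic.
entity inferred_ram is
  generic (
    ADDR_WIDTH : positive := 10;
    DATA_WIDTH : positive := 8;
    INIT       : integer_vector   -- initialization image, passed in as a constant
  );
  port (
    clk  : in  std_logic;
    we   : in  std_logic;
    addr : in  std_logic_vector(ADDR_WIDTH-1 downto 0);
    din  : in  std_logic_vector(DATA_WIDTH-1 downto 0);
    dout : out std_logic_vector(DATA_WIDTH-1 downto 0)
  );
end entity;

architecture rtl of inferred_ram is
  -- Plain integers; no subtype constrains them. Assumes INIT has
  -- 2**ADDR_WIDTH elements.
  signal mem : integer_vector(0 to 2**ADDR_WIDTH - 1) := INIT;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        -- This conversion bounds the stored values to DATA_WIDTH bits.
        mem(to_integer(unsigned(addr))) <= to_integer(unsigned(din));
      end if;
      -- And this conversion fixes the output word width.
      dout <= std_logic_vector(to_unsigned(mem(to_integer(unsigned(addr))), DATA_WIDTH));
    end if;
  end process;
end architecture;
```

Whether a given synthesis tool infers block RAM from an integer_vector signal (and honors the initial value) is tool-dependent, which is where the vendor-specific synthesis attribute mentioned earlier comes in.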