Rocket-chip Fixed Point on accelerators


victor araújo

Nov 13, 2023, 11:50:13 AM
to RISC-V SW Dev
Hi all.
I'm currently building an accelerator for RISC-V using rocket-chip, and I'm facing the following error:

[error] /home/victortoon/Documentos/chipyard/chipyard/generators/rocket-chip/src/main/scala/tile/LazyRoCC.scala:149:44: value asDouble is not a member of chisel3.UInt
[error]   val angle = FixedPoint.fromDouble(addend.asDouble, 16.W, 8.BP)
[error] 

addend is the value that comes from register RS1, and it's a UInt. I need it to be a FixedPoint value, but I'm not able to convert it that easily. I searched a lot and found the asDouble method, but it looks like the compiler is not recognizing it. What could I use in this case?

Joseph Tarango (Joe)

Nov 14, 2023, 4:21:13 PM
to RISC-V SW Dev, araujo.v...@gmail.com
Hello,

The challenge looks to be an issue with which methods are available/imported.

I would recommend creating custom hardware or Scala code to convert to the fixed-point format. Depending on the type of fixed-point mathematics, you will need to create different versions, e.g. Q16.16 and UQ16.16 (https://en.wikipedia.org/wiki/Q_(number_format)). Also, as a word of caution when performing mixed-precision and non-standard mathematics: make sure IEEE float errors are propagated and exceptions are thrown/caught. There are numerous edge cases, and the range, precision, and rounding will need to be taken into account. When adding custom hardware, it may be useful to take a look at my RISC-V repository https://github.com/jtarango/freedom for how I used Makefiles and created custom components. Below are some useful notes from code I have written for a dynamic-point library that is not yet published.
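
As a minimal sketch for the Chisel side of the original question (assuming a Chisel version that still provides chisel3.experimental.FixedPoint, or the standalone fixedpoint package): FixedPoint.fromDouble expects an elaboration-time Scala Double literal, not a hardware UInt, which is why asDouble is not available on UInt. To treat the RS1 bits as a fixed-point number in hardware, reinterpreting them with asFixedPoint is usually enough:

import chisel3._
import chisel3.experimental._

// Hypothetical stand-alone module; `addend` plays the role of the rs1 value
// from the LazyRoCC.scala snippet in the original post.
class Rs1ToFixed extends Module {
  val io = IO(new Bundle {
    val addend = Input(UInt(16.W))
    val angle  = Output(FixedPoint(16.W, 8.BP))
  })
  // Reinterpret the raw bits with an 8-bit binary point (two's-complement view).
  io.angle := io.addend.asFixedPoint(8.BP)
}

If the software side writes a plain integer into RS1 and you want that integer value in the fixed-point format (rather than a raw-bit reinterpretation), shift it up by the binary point first, e.g. (io.addend << 8).asFixedPoint(8.BP).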

In fixed-point representation, the first number gives the bits to the left-hand side of the binary point and the second gives the bits to the right-hand side, e.g. Q2.3 = 00.000.
Bit sizes for fixed point types
TYPE   | Sign | Exponent | Significand | Total Bits | Range          | Precision
Q8.8   |   0  |     0    |      16     |     32     | [-2^8,  2^7 ]  | +- 2^-8
Q16.16 |   0  |     0    |      32     |     64     | [-2^16, 2^15]  | +- 2^-16
Q32.32 |   0  |     0    |     128     |    128     | [-2^32, 2^31]  | +- 2^-32

Bit sizes for floating point types
TYPE        | Sign | Exponent | Significand | Total Bits | Range                              | Precision
bfloat      |   1  |     8    |       7     |     16     | [-  126*log10(2),   127*log10(2)] | +- log10(2^-8)
float       |   1  |     8    |      23     |     32     | [-  126*log10(2),   127*log10(2)] | +- log10(2^-24)
double      |   1  |    11    |      52     |     64     | [- 1022*log10(2),  1023*log10(2)] | +- log10(2^-53)
long double |   1  |    15    |     112     |    128     | [-16382*log10(2), 16383*log10(2)] | +- log10(2^-113)

Precision of the representation for printing values:
1) A 16-bit, brain-precision IEEE754 number has 8 mantissa bits, which gives about (7+1) * log10(2) = 2.41 ~ 3 digits of precision.
2) A 32-bit, single-precision IEEE754 number has 24 mantissa bits, which gives about (23+1) * log10(2) = 7.22 ~ 8 digits of precision.
3) A 64-bit, double-precision IEEE754 number has 53 mantissa bits, which gives about (52+1) * log10(2) = 15.95 ~ 16 digits of precision.
4) A 128-bit, long-double-precision IEEE754 number has 113 mantissa bits, which gives about (112+1) * log10(2) = 34.02 ~ 35 digits of precision.
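
These digit estimates can be double-checked with a throwaway Scala snippet (not part of the library mentioned above):

// Decimal digits of precision ~ (stored mantissa bits + 1 implicit bit) * log10(2)
val mantissa = Seq("bfloat16" -> 7, "float" -> 23, "double" -> 52, "binary128" -> 112)
mantissa.foreach { case (name, storedBits) =>
  println(f"$name%-9s ${(storedBits + 1) * math.log10(2)}%6.2f decimal digits")
}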

Example C methods

q16f16_t DPtoQ16(const dp_t val) {
    /* Scale by 2^16 to produce a Q16.16 value.
       Note: the intermediate cast to int64_t drops any fractional part of
       val when dp_t is a floating-point type. */
    int64_t shift = ((int64_t)1 << (int64_t)16);
    int64_t dValue = (int64_t) val;
    q16f16_t result = (q16f16_t)(dValue * shift);
    return result;
}

dp_t Q16toDP(const q16f16_t val) {
    /* Undo the scaling: divide the raw Q16.16 value by 2^16. */
    return (dp_t) val / 0x00010000U;
}
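
For host-side reference models or unit tests, the same Q16.16 pack/unpack can be sketched in plain Scala (assuming the dynamic-point value fits in a Double; the rounding here is a choice, whereas the C version above truncates):

// Software model of Q16.16 conversion, using a 64-bit container like the C code.
def doubleToQ16(x: Double): Long = math.round(x * 65536.0) // scale by 2^16, round to nearest
def q16ToDouble(q: Long): Double = q.toDouble / 65536.0    // undo the 2^16 scaling

// Example: doubleToQ16(1.5) == 0x18000, and q16ToDouble(0x18000) == 1.5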