I am working on a Z80 project and I need 72 bits of precision for 64 constants of the form log2(1+2^-i) (so log2(3/2), log2(5/4), ...). I needed to convert these to hexadecimal, and that worked until I tried the following for i=56:
int(256*log(1+2^-56,2))
This returns 1, when in fact it should be 0 (the true value of 256*log2(1+2^-56) is roughly 4.9e-15). And it isn't specific to 256: multiplying by 65536, or 600, or many other numbers instead also returns 1 for the integer part.
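For comparison, plain Python (outside Sage's symbolic layer) gives the expected 0. A minimal sketch, using math.log1p because adding 2^-56 to 1 directly would round away at double precision:

```python
import math

# log1p(x) computes log(1+x) accurately for tiny x; a plain
# math.log(1 + 2**-56) would see exactly 1.0, since a double's
# 52 fractional bits cannot hold the 2^-56 term.
val = 256 * math.log1p(2 ** -56) / math.log(2)

print(val)       # on the order of 5e-15
print(int(val))  # 0, as expected
```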
As a note, I used RealField(80) as my precision.
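As a cross-check for the table itself, here is a sketch of the computation using only the stdlib decimal module (not Sage; the function name and prec choice are my own). 72 fractional bits need about 22 significant decimal digits, so prec=40 leaves a comfortable guard margin, though this does not prove correct rounding in the very last bit:

```python
from decimal import Decimal, getcontext

# ~22 digits are needed for 72 bits; 40 leaves guard digits.
getcontext().prec = 40

LN2 = Decimal(2).ln()

def log2_1p_fixed(i, frac_bits=72):
    """Truncated fixed-point value of log2(1 + 2^-i) with frac_bits fractional bits."""
    x = Decimal(1) + Decimal(2) ** -i   # exact at this precision for i <= ~129
    val = x.ln() / LN2                  # log2(1 + 2^-i)
    return int(val * 2 ** frac_bits)    # truncate to the fixed-point integer

# The troublesome case: the integer part of 256*log2(1+2^-56) is 0.
print(int(256 * (Decimal(1) + Decimal(2) ** -56).ln() / LN2))  # 0

# Hex table entry, e.g. i = 1 corresponds to log2(3/2) = 0.5849625...
print(hex(log2_1p_fixed(1)))
```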