You're doing nothing wrong. What you're seeing is a "feature" of IEEE 754
floating-point numbers.
There's no exact binary representation for decimal numbers like 25.89,
0.1, 0.2 or 55.98 (and many others). So when such a number is converted
to double and then back to decimal with precision 17, you get a
different result.
Precision 17 is required because with fewer digits some numbers would
not round-trip exactly. For example:
printf("%.17g", 1.0000000000000002)
-> 1.0000000000000002
(this is what e.g. json_dumps uses internally)
But:
printf("%.16g", 1.0000000000000002)
-> 1
(%g also strips trailing zeros, so the rounded value 1.000000000000000
prints as just "1")
So if Jansson used less precision, you would get a rounded result.
Petri