A flaw was found in BigDecimal that could be exploited to cause the
interpreter to segfault because it mishandled certain large input
values. As ActiveRecord uses BigDecimal, this issue could potentially
affect many Rails installations, though it is not a Rails-specific
issue. The result is that an attacker could potentially mount a
denial-of-service attack against many applications. This vulnerability
has been assigned the CVE identifier CVE-2009-1904.
This issue affects all Ruby 1.8.6 versions prior to pl369. New
versions can be found at ruby-lang.org:
* ftp://ftp.ruby-lang.org/pub/ruby/1.8/ruby-1.8.6-p369.tar.gz
MD5(ruby-1.8.6-p369.tar.bz2)= c3c1f3dd0dfbd2e17a04e59c2f12cfc8
SHA256(ruby-1.8.6-p369.tar.bz2)= fb6974ab8a0de52511e846eacf113432b5227a867e3c9741d65775f162e13715
SIZE(ruby-1.8.6-p369.tar.bz2)= 3981771
MD5(ruby-1.8.6-p369.tar.gz)= 8c140ae28b4c3947b92dfad69109d90b
SHA256(ruby-1.8.6-p369.tar.gz)= edd3707a24dcf53d36a8722fe5addeb0d9077481a272451a9e662374863ba42c
SIZE(ruby-1.8.6-p369.tar.gz)= 4578207
MD5(ruby-1.8.6-p369.zip)= 139d301368533207919befed9ddd92e7
SHA256(ruby-1.8.6-p369.zip)= fde0c634f98a08ba4c1fb012a22122579f22154134d281ef4e22692a1c1c6ff4
SIZE(ruby-1.8.6-p369.zip)= 5620911
Thank you,
Kirk Haines
Engine Yard
kha...@engineyard.com
irc: wyhaines
Hmm...so here's my exploration in JRuby:
With a really large BigDecimal:
* to_f runs forever
* to_i runs forever
The latter appears to affect 1.9.1 as well...
➔ ruby19 -rbigdecimal -e "p BigDecimal.new('5E6999999').to_i"
<runs forever>
I have not tested the patched 1.8.6/1.8.7 versions to see if to_i has
this issue.
I'll have fixes for both JRuby issues on master and jruby-1_3 branches shortly.
- Charlie
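(For reference, the to_f case described above can be reproduced with the
analogous one-liner; on a JRuby of that era it likewise never returns:)

    ➔ jruby -rbigdecimal -e "p BigDecimal.new('5E6999999').to_f"
    <runs forever>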
Well, to_f can shortcut such cases and generate Inf, but to_i is expected to
generate a correspondingly large Bignum, so there may be difficulties in solving
the to_i slowness for those cases.
1.9.1 appears to produce Float::MAX or Float::MIN for exponents
outside float range, not Inf. It seems like the possibility of it
running forever trying to produce an impossibly large Bignum should be
addressed in some similar way.
- Charlie
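(To put a rough number on "impossibly large": 5E6999999 written out as an
integer is a 5 followed by 6,999,999 zeros, i.e. a 7,000,000-digit number.
At log2(10) ≈ 3.32 bits per decimal digit that is about 23 million bits, or
roughly 2.9 MB, for the resulting Bignum alone, and the conversion has to
generate every one of those zero digits along the way.)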
In message "Re: [Security] Ruby 1.8.6-pl369 released"
on Thu, 11 Jun 2009 01:28:06 +0900, Charles Oliver Nutter <hea...@headius.com> writes:
|1.9.1 appears to produce Float::MAX or Float::MIN for exponents
|outside float range, not Inf. It seems like the possibility of it
|running forever trying to produce an impossibly large Bignum should be
|addressed in some similar way.
1.9.2 BigDecimal#to_f produces Inf for numbers that are too large. I am
not sure what we can do for BigDecimal#to_i yet.
matz.
So Inf is official behavior then? I'm in the process of at least getting
JRuby up to 1.9.x behavior for these cases.
Perhaps we can examine the size of the exponent for to_i to determine
whether it's within a reasonable range?
- Charlie
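(A minimal sketch of the exponent check being floated here, purely for
illustration; the cutoff value and the helper name are invented, not part
of any real patch:)

    require 'bigdecimal'

    # Hypothetical cutoff: beyond this decimal exponent, refuse to expand
    # the value into a Bignum instead of grinding away at it.
    REASONABLE_EXPONENT = 1_000_000

    def guarded_to_i(value)
      if value.exponent.abs > REASONABLE_EXPONENT
        raise RangeError, "exponent #{value.exponent} out of range for to_i"
      end
      value.to_i
    end

    begin
      guarded_to_i(BigDecimal.new("5E6999999"))
    rescue RangeError => e
      puts e.message   # fails fast instead of running forever
    end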
In message "Re: [Security] Ruby 1.8.6-pl369 released"
on Thu, 11 Jun 2009 01:55:07 +0900, Charles Oliver Nutter <hea...@headius.com> writes:
|> 1.9.2 BigDecimal#to_f produces Inf for numbers that are too large. I am
|> not sure what we can do for BigDecimal#to_i yet.
|
|So Inf is official behavior then? I'm in the process of at least getting
|JRuby up to 1.9.x behavior for these cases.
Yes, if it doesn't cause any serious issue.
|Perhaps we can examine the size of the exponent for to_i to determine
|whether it's within a reasonable range?
Could be, if we can define "reasonable range".
matz.
Perhaps it could be set by a variable (like $KCODE).
It might be useful to make it somewhat $SAFE dependent, too.
However, what is the issue here, exactly?
Is it that a BigDecimal can have a large exponent for which zeroes
have to be generated while converting to a Bignum?
Thanks
Michal
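(What a $KCODE-style knob might look like, as a sketch only; the accessor
name is hypothetical and nothing here shows the enforcement that to_i
itself would need:)

    require 'bigdecimal'

    class BigDecimal
      class << self
        # Hypothetical process-wide limit on the exponent that conversions
        # are willing to expand; nil keeps today's unrestricted behavior.
        attr_accessor :max_expansion_exponent
      end
    end

    BigDecimal.max_expansion_exponent = 100_000  # an application opting in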
In message "Re: [Security] Ruby 1.8.6-pl369 released"
on Thu, 11 Jun 2009 07:19:45 +0900, Michal Suchanek <hram...@centrum.cz> writes:
|> Could be, if we can define "reasonable range".
|
|Perhaps it could be set by a variable (like $KCODE).
I, sort of, hesitate to introduce a new global state to the library.
|It might be useful to make it somewhat $SAFE dependent, too.
Possible.
|However, what is the issue here, exactly?
|
|Is it that a BigDecimal can have a large exponent for which zeroes
|have to be generated while converting to a Bignum?
Basically. A huge BigDecimal could occupy a large amount of memory
and consume a lot of computational time. It is fundamental, but
sometimes bothersome.
matz.
The only way to work around this is with a global notion of a "reasonably
large exponent", which can be either fixed or configurable at runtime.
If that solution is accepted, it would be better to make it configurable. It
could even be set to Inf by default for compatibility, and people who
anticipate this could be a problem, or who were already bitten by the issue,
can set it lower.
>
> |It might be useful to make it somewhat $SAFE dependent, too.
>
> Possible.
>
> |However, what is the issue here, exactly?
> |
> |Is it that a BigDecimal can have a large exponent for which zeroes
> |have to be generated while converting to a Bignum?
>
> Basically. A huge BigDecimal could occupy a large amount of memory
> and consume a lot of computational time. It is fundamental, but
> sometimes bothersome.
>
Then introducing an exponent to Bignum could probably resolve this
particular issue.
Truncating a BigDecimal should be easy, and shifting the exponent so
that no fractional digits remain should be reasonably fast as well.
This would, however, mean an incompatible change to Bignum, because it
would have to track the number of zeroes which were left out. And
adding 1 to the resulting Bignum would still require the zeroes to be
generated.
Thanks
Michal
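(The "Bignum with an exponent" idea can be pictured as a small wrapper that
keeps the trailing zeros implicit; a hypothetical sketch of the shape of
that idea, not a workable change to Bignum itself:)

    # Illustration only: an integer held as mantissa * 10**trailing_zeros,
    # so truncating a huge BigDecimal never has to write the zeros out.
    class ScaledInt
      attr_reader :mantissa, :trailing_zeros

      def initialize(mantissa, trailing_zeros)
        @mantissa, @trailing_zeros = mantissa, trailing_zeros
      end

      # Anything that needs the plain integer (adding 1, printing, ...)
      # still has to expand the zeros, which is the catch noted above.
      def to_int
        @mantissa * 10 ** @trailing_zeros
      end
    end

    n = ScaledInt.new(5, 6_999_999)   # 5E6999999 truncated, a few machine words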
FWIW, the equivalent Java BigDecimal "to_f"/"to_i" behavior runs forever
too, so they've never opted to add a safeguard in all these years.
JRuby may just defer to that behavior, but I did add an exponent check
in to_f for now.
- Charlie
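(The to_f-side guard mentioned above lives in JRuby's Java code; purely as
an illustration, the idea approximated in plain Ruby would be roughly:)

    require 'bigdecimal'

    # If the decimal exponent exceeds what a Float can represent (about
    # 10**308), skip the conversion and return an infinity right away.
    def to_f_with_guard(value)
      if value.exponent > Float::MAX_10_EXP
        return value.sign < 0 ? -1.0 / 0.0 : 1.0 / 0.0   # +/-Infinity
      end
      value.to_f
    end

    p to_f_with_guard(BigDecimal.new("5E6999999"))   # => Infinity, immediately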
I like that idea.
-greg