I was reading through the gob source again (kudos to whoever
architected it — it's beautiful!), and it looks to me like it might be
vulnerable to fairly easy resource exhaustion attacks when decoding
slices.
The decodeSlice method allocates an underlying array of whatever size
the stream specifies. Thus if you accept gob-encoded requests from untrusted
sources, e.g. over the internet, it would be pretty easy to exhaust
memory on the receiving server.
One solution I can think of is to specify an allocation limit on the
decoder when dealing with untrusted sources and keep a running tally
of allocated storage.
Or perhaps I've just misread the problem. Thoughts?
--
All the best, tav
plex:espians/tav | t...@espians.com | +44 (0) 7809 569 369
http://tav.espians.com | http://twitter.com/tav | skype:tavespian
You're right - there are certainly DoS vectors to servers that blindly
decode gob streams. It's something that should probably be addressed,
but it will require some careful thought.
To solve this problem right now you could create a pass-through Reader
implementation that understands the gob format (at a basic level), and
puts a hard limit on the size of objects that can be sent. It should
be sufficient to simply close the connection to any client that sends
an object that is too big.
If you wrote the client code (even if you don't trust it), you should
be able to choose a maximum payload size.
Andrew
It would be trivial to add a method to the Decoder to limit the size it accepts, since all messages have a header giving their length.
But not now.
-rob