The directives decodeRequest and decodeRequestWith, which handle compressed request data, do not limit the amount of uncompressed data flowing out of them. In combination with common request directives like entity(as), toStrict, or formField, this can lead to excessive memory usage, ultimately resulting in an out-of-memory situation when highly compressed data is received (a so-called “Zip Bomb”).
Any code that uses decodeRequest or decodeRequestWith is likely to be affected.
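To make the impact concrete, here is a hypothetical sketch of an affected route (the route and names are illustrative, not taken from Akka HTTP itself): decodeRequest decompresses the request body without any bound, and entity(as[String]) then buffers the entire decompressed payload in memory, so a small but highly compressed request can expand into an arbitrarily large string.
Scala:
import akka.http.scaladsl.server.Directives._

// Hypothetical affected route: nothing limits how much data the decompression
// step may produce before entity(as[String]) buffers it in memory.
val vulnerableRoute =
  post {
    decodeRequest {
      entity(as[String]) { body ⇒
        complete(s"received ${body.length} characters")
      }
    }
  }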
Until we publish a fix that limits memory usage by default, you can use the following custom directive instead of decodeRequest to guard against excessive amounts of decompressed data:
Scala:
def safeDecodeRequest(maxBytes: Long): Directive0 =
  decodeRequest & mapRequest(_.mapEntity {
    // wrap chunked entities so that withSizeLimit also applies to them
    case c: HttpEntity.Chunked ⇒ c.copy(chunks = HttpEntity.limitableChunkSource(c.chunks))
    case e                     ⇒ e
  }) & withSizeLimit(maxBytes)
And replace all decodeRequest usages with safeDecodeRequest(maxDecompressedBytesToSupport), as sketched in the example below.
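For illustration, here is how the hypothetical route from above could be rewritten. The name maxDecompressedBytesToSupport and the 8 MB value are assumptions; choose a limit that covers the largest decompressed payload your service legitimately expects:
Scala:
// Hypothetical usage of the guarded directive; the limit value is only an example.
val maxDecompressedBytesToSupport: Long = 8 * 1024 * 1024

val guardedRoute =
  post {
    safeDecodeRequest(maxDecompressedBytesToSupport) {
      entity(as[String]) { body ⇒
        complete(s"received ${body.length} characters")
      }
    }
  }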
Java:
RequestEntity chunkedWithLimit(RequestEntity entity) {
  if (entity.isChunked())
    // wrap the chunked data source so that withSizeLimit also applies to it
    return HttpEntities.createChunked(
      entity.getContentType(),
      akka.http.scaladsl.model.HttpEntity.limitableByteSource(
        ((HttpEntity.Chunked) entity).getDataBytes().asScala()
      ).asJava()
    );
  else
    return entity;
}
Route safeDecodeRequest(long maxBytes, Supplier<Route> inner) {
  return decodeRequest(() ->
    mapRequest(req -> req.withEntity(chunkedWithLimit(req.entity())), () ->
      withSizeLimit(maxBytes, inner)
    )
  );
}
And replace all decodeRequest(innerRoute) usages with safeDecodeRequest(maxDecompressedBytesToSupport, innerRoute).
See https://gist.github.com/jrudolph/2be2e6fcde5f7f395b1dacdb6b70baf7 for full code including imports.
The CVSS score of this vulnerability is 7.3 (High), based on vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:H/RL:W/RC:C.
Rationale for the score:
All released Akka HTTP versions are affected.
We will release fixed versions as soon as possible.