required buffer size for LZ4_decompress_safe_partial

Zbigniew Jędrzejewski-Szmek

Dec 11, 2015, 12:21:31 AM
to lz...@googlegroups.com
Hi,

we encountered an unexpected decompression failure when using
LZ4_decompress_safe_partial to "peek" into the compressed data.
The problem was found in systemd's journalctl, but I prepared a
simplified test case, attached.

The input data was "HUGE=x...", with the "x" repeated until the
data is 4096*1024 bytes long. This compresses to
"\x6fHUGE=x\x01\x00\xff...\xff\x22\x50xxxxx", a total of 16464 bytes.

It can be decompressed using LZ4_decompress_safe or
LZ4_decompress_safe_partial, but only when the output buffer is
big enough. We are interested in reading the first ~20 bytes,
but LZ4_decompress_safe_partial only succeeds when the output
buffer is long enough to fit all of the uncompressed data;
otherwise it fails, returning -16459.
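
For illustration, here is a minimal sketch of the failing call
(a simplification, not the attached test case; the 4 KiB peek
buffer is an arbitrary choice):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <lz4.h>

    int main(void)
    {
        const int srcSize = 4096 * 1024;
        const int bound = LZ4_compressBound(srcSize);
        char *src = malloc(srcSize);
        char *compressed = malloc(bound);

        /* Build the pathological input: "HUGE=" followed by 'x' bytes. */
        memset(src, 'x', srcSize);
        memcpy(src, "HUGE=", 5);

        int cSize = LZ4_compress_default(src, compressed, srcSize, bound);
        printf("compressed %d -> %d bytes\n", srcSize, cSize);

        /* Peek at the first 20 bytes using a small output buffer. */
        char peek[4096];
        int r = LZ4_decompress_safe_partial(compressed, peek, cSize,
                                            20, (int)sizeof(peek));
        printf("partial decompression returned %d\n", r);  /* negative */

        free(src);
        free(compressed);
        return 0;
    }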

Is there a way around this, i.e. to be able to decompress
partial data with a buffer that is not longer than a few kilobytes,
even with pathological data?

Thanks,
Zbyszek
Attachment: test-decompress-partial.c

Yann Collet

Dec 11, 2015, 5:48:52 AM
to LZ4c
Hi Zbigniew,


Alas, this is the way LZ4_decompress_safe_partial() works for now.

LZ4_decompress_safe_partial() will stop decoding as soon as it has decoded "enough" data, as instructed through its interface.
However, it doesn't guarantee to stop exactly there:
it will complete its current "sequence" before testing the size condition.
So, in worst-case situations, it may indeed decode the full block before returning.
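
As a sketch of what that means in practice (variable names assume a
setup like the test case above; this is illustrative, not a
recommendation):

    /* Give the decoder a full-size destination. The call succeeds,
       and the return value shows how far past the 20 requested
       bytes decoding actually ran before the size test fired. */
    char *full = malloc(srcSize);
    int r = LZ4_decompress_safe_partial(compressed, full, cSize,
                                        20, srcSize);
    /* With the single giant sequence in this input, r can be the
       entire 4096*1024 bytes. */
    printf("asked for 20 bytes, decoded %d\n", r);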

There is currently no public solution to this.
A new function would be required to fulfill this requirement.


Regards