On variations of compression ratio in new LZ4_compress_fast()


Francesc Alted

May 11, 2015, 12:07:03 PM
to lz...@googlegroups.com
Hi Yann,

I have been trying r129 (aka 1.7.0) and I can confirm that the new codebase has sped up my benchmarks by as much as 15%.  I am using GCC 4.9.1 here, so I suppose this is due to the vectorization recently introduced, which really makes a difference.

Now, I have been experimenting with the new LZ4_compress_fast() function, and I must say that it can accelerate things by another 20% with respect to the new baseline (with lower compression ratios, as expected).

However, I have seen that the compression ratio varies quite significantly depending on the acceleration parameter used.  For example, my Blosc compressor uses compression levels from 1 (minimum compression) to 9 (maximum compression), and using the formula 'accel = (int)((1. / clevel) * 100) - 10' to compute the acceleration, I am getting this:

$ bench/bench lz4 single 4
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
List of supported compressors in this build: blosclz,lz4,lz4hc,snappy,zlib
Supported compression libraries:
  BloscLZ: 1.0.4
  LZ4: 1.7.0
  Snappy: 1.1.1
  Zlib: 1.2.8
Using compressor: lz4
Running suite: single
--> 4, 2097152, 8, 19, lz4
********************** Run info ******************************
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
Using synthetic data with 19 significant bits (out of 32)
Dataset size: 2097152 bytes     Type size: 8 bytes
Working set: 256.0 MB           Number of threads: 4
********************** Running benchmarks *********************
memcpy(write):            518.9 us, 3854.1 MB/s
memcpy(read):             248.2 us, 8058.7 MB/s
Compression level: 0
comp(write):      281.3 us, 7108.8 MB/s   Final bytes: 2097168  Ratio: 1.00
decomp(read):     212.3 us, 9419.8 MB/s   OK
Compression level: 1
comp(write):      418.8 us, 4775.4 MB/s   Final bytes: 855696  Ratio: 2.45
decomp(read):     307.3 us, 6509.3 MB/s   OK
Compression level: 2
comp(write):      362.5 us, 5516.5 MB/s   Final bytes: 623504  Ratio: 3.36
decomp(read):     254.2 us, 7868.4 MB/s   OK
Compression level: 3
comp(write):      372.4 us, 5370.9 MB/s   Final bytes: 691856  Ratio: 3.03
decomp(read):     243.1 us, 8226.7 MB/s   OK
Compression level: 4
comp(write):      332.5 us, 6014.6 MB/s   Final bytes: 489528  Ratio: 4.28
decomp(read):     249.8 us, 8006.7 MB/s   OK
Compression level: 5
comp(write):      385.3 us, 5191.1 MB/s   Final bytes: 433104  Ratio: 4.84
decomp(read):     273.4 us, 7314.5 MB/s   OK
Compression level: 6
comp(write):      477.9 us, 4184.9 MB/s   Final bytes: 248764  Ratio: 8.43
decomp(read):     348.7 us, 5735.6 MB/s   OK
Compression level: 7
comp(write):      563.1 us, 3551.7 MB/s   Final bytes: 182880  Ratio: 11.47
decomp(read):     467.5 us, 4278.2 MB/s   OK
Compression level: 8
comp(write):      615.0 us, 3252.1 MB/s   Final bytes: 220464  Ratio: 9.51
decomp(read):     537.5 us, 3721.0 MB/s   OK
Compression level: 9
comp(write):      603.1 us, 3316.3 MB/s   Final bytes: 132154  Ratio: 15.87
decomp(read):     646.4 us, 3093.9 MB/s   OK

Round-trip compr/decompr on 7.5 GB
Elapsed time:       3.4 s, 5010.3 MB/s

while using 'LZ4_compress_default()' I am getting a much smoother increase in the compression ratio (due to Blosc assigning larger blocks to higher compression levels).  Here is an example:

$ bench/bench lz4 single 4 2097152 8 19
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
List of supported compressors in this build: blosclz,lz4,lz4hc,snappy,zlib
Supported compression libraries:
  BloscLZ: 1.0.4
  LZ4: 1.7.0
  Snappy: 1.1.1
  Zlib: 1.2.8
Using compressor: lz4
Running suite: single
--> 4, 2097152, 8, 19, lz4
********************** Run info ******************************
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
Using synthetic data with 19 significant bits (out of 32)
Dataset size: 2097152 bytes     Type size: 8 bytes
Working set: 256.0 MB           Number of threads: 4
********************** Running benchmarks *********************
memcpy(write):            529.9 us, 3774.5 MB/s
memcpy(read):             244.8 us, 8171.4 MB/s
Compression level: 0
comp(write):      286.1 us, 6989.8 MB/s   Final bytes: 2097168  Ratio: 1.00
decomp(read):     213.7 us, 9358.4 MB/s   OK
Compression level: 1
comp(write):      661.8 us, 3021.9 MB/s   Final bytes: 417200  Ratio: 5.03
decomp(read):     334.0 us, 5988.7 MB/s   OK
Compression level: 2
comp(write):      592.7 us, 3374.6 MB/s   Final bytes: 417200  Ratio: 5.03
decomp(read):     309.4 us, 6463.4 MB/s   OK
Compression level: 3
comp(write):      589.1 us, 3395.0 MB/s   Final bytes: 417200  Ratio: 5.03
decomp(read):     305.7 us, 6542.9 MB/s   OK
Compression level: 4
comp(write):      544.4 us, 3674.1 MB/s   Final bytes: 307168  Ratio: 6.83
decomp(read):     375.6 us, 5324.8 MB/s   OK
Compression level: 5
comp(write):      539.7 us, 3705.6 MB/s   Final bytes: 307168  Ratio: 6.83
decomp(read):     377.5 us, 5298.3 MB/s   OK
Compression level: 6
comp(write):      535.1 us, 3737.8 MB/s   Final bytes: 251108  Ratio: 8.35
decomp(read):     388.8 us, 5144.7 MB/s   OK
Compression level: 7
comp(write):      615.8 us, 3247.9 MB/s   Final bytes: 217632  Ratio: 9.64
decomp(read):     517.9 us, 3861.8 MB/s   OK
Compression level: 8
comp(write):      597.8 us, 3345.4 MB/s   Final bytes: 217632  Ratio: 9.64
decomp(read):     502.2 us, 3982.4 MB/s   OK
Compression level: 9
comp(write):      602.2 us, 3321.3 MB/s   Final bytes: 132154  Ratio: 15.87
decomp(read):     684.9 us, 2920.0 MB/s   OK

Round-trip compr/decompr on 7.5 GB
Elapsed time:       4.0 s, 4224.9 MB/s

I suppose the variation in the compression ratio with the new LZ4_compress_fast() is probably due to the new 'sampling' method you introduced, and that it is quite difficult to tune it to produce smoother compression ratio variations, but I wanted to confirm.

At any rate, these are very nice improvements in r129; I am very excited about them and plan to incorporate them into Blosc very soon.

Thanks!

--
Francesc Alted

Yann Collet

May 11, 2015, 12:24:15 PM
to lz...@googlegroups.com, fal...@gmail.com, fal...@gmail.com
Thanks for the feedback, Francesc. These are extremely interesting results, indeed.


Regarding the usage of LZ4_compress_fast(),
the first thing that catches my attention is that Blosc with the baseline (LZ4_compress_default())
is already so incredibly fast that it's questionable how much can still be squeezed out, given RAM speed limits.
I mention this mainly to underline that the benefits of the new "acceleration" parameter can have a hard time showing up when starting from such a high speed.

Second, I suspect the acceleration levels in your tests are a bit high.
OK, it works, but to be fair, I did not really expect such large values to be used.
I would therefore recommend testing with smaller ones,
for example starting with the trivial: acceleration = 10 - compressionLevel;
and then maybe 19 - 2*compressionLevel, etc.

I suspect the improvement should prove smoother.
And I admit that beyond values of 20, I have not investigated thoroughly;
with larger values, I expect the sampling results to become more and more dependent on pure luck...


Regards

Francesc Alted

May 11, 2015, 1:24:21 PM
to Yann Collet, lz...@googlegroups.com
Ok, so using 'accel = 10 - clevel' gives quite a smooth increase in compression ratios:

$ bench/bench lz4 single 4
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
List of supported compressors in this build: blosclz,lz4,lz4hc,snappy,zlib
Supported compression libraries:
  BloscLZ: 1.0.4
  LZ4: 1.7.0
  Snappy: 1.1.1
  Zlib: 1.2.8
Using compressor: lz4
Running suite: single
--> 4, 2097152, 8, 19, lz4
********************** Run info ******************************
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
Using synthetic data with 19 significant bits (out of 32)
Dataset size: 2097152 bytes     Type size: 8 bytes
Working set: 256.0 MB           Number of threads: 4
********************** Running benchmarks *********************
memcpy(write):            489.7 us, 4084.0 MB/s
memcpy(read):             240.6 us, 8312.1 MB/s
Compression level: 0
comp(write):      292.3 us, 6841.6 MB/s   Final bytes: 2097168  Ratio: 1.00
decomp(read):     207.1 us, 9657.2 MB/s   OK
Compression level: 1
comp(write):      432.2 us, 4627.0 MB/s   Final bytes: 554512  Ratio: 3.78
decomp(read):     289.5 us, 6909.3 MB/s   OK
Compression level: 2
comp(write):      456.6 us, 4380.5 MB/s   Final bytes: 498960  Ratio: 4.20
decomp(read):     307.1 us, 6512.9 MB/s   OK
Compression level: 3
comp(write):      444.7 us, 4497.9 MB/s   Final bytes: 520824  Ratio: 4.03
decomp(read):     257.9 us, 7754.1 MB/s   OK
Compression level: 4
comp(write):      437.4 us, 4572.8 MB/s   Final bytes: 332112  Ratio: 6.31
decomp(read):     305.7 us, 6542.9 MB/s   OK
Compression level: 5
comp(write):      423.2 us, 4726.4 MB/s   Final bytes: 327112  Ratio: 6.41
decomp(read):     296.9 us, 6736.0 MB/s   OK
Compression level: 6
comp(write):      512.0 us, 3906.2 MB/s   Final bytes: 226308  Ratio: 9.27
decomp(read):     346.8 us, 5767.1 MB/s   OK
Compression level: 7
comp(write):      622.8 us, 3211.3 MB/s   Final bytes: 211880  Ratio: 9.90
decomp(read):     498.6 us, 4011.4 MB/s   OK
Compression level: 8
comp(write):      608.7 us, 3285.9 MB/s   Final bytes: 220464  Ratio: 9.51
decomp(read):     518.8 us, 3854.8 MB/s   OK
Compression level: 9
comp(write):      605.8 us, 3301.4 MB/s   Final bytes: 132154  Ratio: 15.87
decomp(read):     660.9 us, 3026.3 MB/s   OK

Round-trip compr/decompr on 7.5 GB
Elapsed time:       3.6 s, 4728.9 MB/s

while 'accel = 20 - 2 * clevel - 1' gives a small decrease only at clevel=8, but with a noticeable overall increase in speed:

$ bench/bench lz4 single 4
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
List of supported compressors in this build: blosclz,lz4,lz4hc,snappy,zlib
Supported compression libraries:
  BloscLZ: 1.0.4
  LZ4: 1.7.0
  Snappy: 1.1.1
  Zlib: 1.2.8
Using compressor: lz4
Running suite: single
--> 4, 2097152, 8, 19, lz4
********************** Run info ******************************
Blosc version: 1.6.2.dev ($Date:: 2015-05-06 #$)
Using synthetic data with 19 significant bits (out of 32)
Dataset size: 2097152 bytes     Type size: 8 bytes
Working set: 256.0 MB           Number of threads: 4
********************** Running benchmarks *********************
memcpy(write):            433.5 us, 4613.4 MB/s
memcpy(read):             236.7 us, 8449.9 MB/s
Compression level: 0
comp(write):      281.8 us, 7097.5 MB/s   Final bytes: 2097168  Ratio: 1.00
decomp(read):     222.7 us, 8981.9 MB/s   OK
Compression level: 1
comp(write):      420.7 us, 4753.5 MB/s   Final bytes: 576480  Ratio: 3.64
decomp(read):     313.9 us, 6371.8 MB/s   OK
Compression level: 2
comp(write):      376.8 us, 5307.2 MB/s   Final bytes: 575888  Ratio: 3.64
decomp(read):     246.9 us, 8101.3 MB/s   OK
Compression level: 3
comp(write):      381.1 us, 5248.2 MB/s   Final bytes: 641808  Ratio: 3.27
decomp(read):     254.8 us, 7849.7 MB/s   OK
Compression level: 4
comp(write):      356.3 us, 5613.3 MB/s   Final bytes: 478968  Ratio: 4.38
decomp(read):     251.0 us, 7967.8 MB/s   OK
Compression level: 5
comp(write):      368.6 us, 5425.6 MB/s   Final bytes: 422664  Ratio: 4.96
decomp(read):     299.8 us, 6670.2 MB/s   OK
Compression level: 6
comp(write):      467.9 us, 4274.1 MB/s   Final bytes: 267444  Ratio: 7.84
decomp(read):     348.4 us, 5741.2 MB/s   OK
Compression level: 7
comp(write):      547.7 us, 3651.5 MB/s   Final bytes: 188112  Ratio: 11.15
decomp(read):     461.0 us, 4338.8 MB/s   OK
Compression level: 8
comp(write):      662.9 us, 3017.0 MB/s   Final bytes: 211880  Ratio: 9.90
decomp(read):     505.4 us, 3956.9 MB/s   OK
Compression level: 9
comp(write):      593.8 us, 3368.0 MB/s   Final bytes: 132154  Ratio: 15.87
decomp(read):     666.1 us, 3002.7 MB/s   OK

Round-trip compr/decompr on 7.5 GB
Elapsed time:       3.4 s, 5030.6 MB/s

Using 'accel = 30 - 3 * clevel - 2' is also pretty smooth again (by pure chance, I suppose), but the increase in overall speed is negligible, so I think I am going to stay with 'accel = 20 - 2 * clevel - 1' for the time being.

Finally, I must say that I find these sentences in the documentation for LZ4_compress_fast() a bit misleading:


--
Francesc Alted

Francesc Alted

May 11, 2015, 1:37:02 PM
to Yann Collet, lz...@googlegroups.com
Ok, I pressed the wrong button and sent the message too soon.  So, these are the sentences that I find misleading:

    An acceleration value of "0" means "use Default value" (see lz4.c)
    An acceleration value of "1" is the same as regular LZ4_compress_default()

My benchmarks say that both 0 and 1 give approximately the same results.  Could you explain the difference more clearly?  Also, it might be a good idea to add the explanation right in lz4.h so that users do not have to go into the code (lz4.c) to understand the difference.  Finally, negative values for acceleration are silently set to 0; maybe raising a warning would be better (a user setting a bad acceleration parameter?).

And a final request (I know I am asking too much already :)  I became aware of the r129 release only because I saw your tweet about it (which I could easily have missed), but I would expect at least an announcement on the list (that is much harder for me to miss).

Thanks,

Francesc

--
Francesc Alted

Yann Collet

May 11, 2015, 2:49:01 PM
to lz...@googlegroups.com, fal...@gmail.com, fal...@gmail.com, yann.co...@gmail.com
Just a quick note on your results :

I note that, since on table 2, acceleration = 20 - 2*cLevel - 1,
and on table 1, acceleration = 10 - cLevel,
we should have :

Table 1 Clevel 9 = Table2 Clevel 9
Table 1 Clevel 7 = Table2 Clevel 8
Table 1 Clevel 5 = Table2 Clevel 7
Table 1 Clevel 3 = Table2 Clevel 6
Table 1 Clevel 1 = Table2 Clevel 5

but :

Table 1 Clevel 9 = 15.87 vs Table2 Clevel 9 = 15.87 => OK
Table 1 Clevel 7 = 9.90 vs Table2 Clevel 8 = 9.90 => OK
Table 1 Clevel 5 = 6.41 vs Table2 Clevel 7 = 11.15 => Not equal, large difference !
Table 1 Clevel 3 = 4.03 vs Table2 Clevel 6 = 7.84 => Not equal, fairly large difference
Table 1 Clevel 1 = 3.78 vs Table2 Clevel 5 = 4.96 => Not equal

So I suspect there may be some other parameters at play.
Otherwise, the differences would be surprising.


>     An acceleration value of "0" means "use Default value" (see lz4.c)
>     An acceleration value of "1" is the same as regular LZ4_compress_default()
> My benchmarks are saying that both 0 and 1 give approximately the same results.  Could you explain what the difference is with more clarity?  

OK, you are right.

"0" means "default", which is a value set at the top of lz4.c (tuning parameters).
By default, this value is now 1.
Hence "0" ==> "1"

Initially, the default value was different: it was 17.
The idea was that, since there was already LZ4_compress(), LZ4_compress_fast() was supposed to be a faster variant, hence the "default" value was large enough for casual users to feel a difference.

This choice changed later on.

For clarity, the entire API has been modified to adopt the LZ4_compress_fast() naming convention everywhere.
Older variants have been classified "obsolete", to reduce the confusion created by multiplying the number of "almost equivalent" prototypes.
They are still supported, though, so no worry: no existing program will complain.

So now, the recommended prototypes are :
LZ4_compress_fast()
LZ4_compress_fast_extState()
LZ4_compress_fast_continue()

They replace previous functions such as LZ4_compress_continue(),
which is not secure enough, because the size of the destination buffer is implied by documentation instead of enforced through the interface,
or LZ4_compress_limitedOutput_continue(), which has a good prototype but a horrible name.
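For readers following along, the recommended prototypes look roughly like this (written from memory; check lz4.h for the exact parameter names):

```c
int LZ4_compress_default(const char* source, char* dest,
                         int sourceSize, int maxDestSize);
int LZ4_compress_fast(const char* source, char* dest,
                      int sourceSize, int maxDestSize, int acceleration);
int LZ4_compress_fast_extState(void* state, const char* source, char* dest,
                               int inputSize, int maxDestSize, int acceleration);
int LZ4_compress_fast_continue(LZ4_stream_t* streamPtr, const char* source,
                               char* dest, int inputSize, int maxOutputSize,
                               int acceleration);
```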


But the new prototypes now feature an "acceleration" parameter.

Since this parameter is uncommon, I wanted to keep one "simple" prototype, the top one, without this complexity,
so that casual users don't need to understand or manipulate this new concept.
Hence:
LZ4_compress_default()
(which is basically LZ4_compress_limitedOutput() with a nicer name).

By now, default==17 no longer sounds like a good idea.
So it has been changed to default==1, because this way it mimics the behavior of LZ4_compress_default(). Which sounds logical.


And therefore, I can update the comments so that the final choice sounds clearer for the reader.


> negative values for acceleration are silently set to 0

Good point. I shall update the documentation to cover it.


> I have got aware of the the r129 release just because I have seen your tweet about it (but I could easily miss it), but I would expect at least an announcement in the list

That's a very good point. Announcing a release can certainly be improved.
When you say "the list", which list do you refer to? Do you mean this board (lz4c)?


Regards



Francesc Alted

May 12, 2015, 11:52:54 AM
to Yann Collet, lz...@googlegroups.com
Hi Yann,

2015-05-11 20:49 GMT+02:00 Yann Collet <yann.co...@gmail.com>:
Just a quick note on your results :

I note that, since on table 2, acceleration = 20 - 2*cLevel - 1,
and on table 1, acceleration = 10 - cLevel,
we should have :

Table 1 Clevel 9 = Table2 Clevel 9
Table 1 Clevel 7 = Table2 Clevel 8
Table 1 Clevel 5 = Table2 Clevel 7
Table 1 Clevel 3 = Table2 Clevel 6
Table 1 Clevel 1 = Table2 Clevel 5

but :

Table 1 Clevel 9 = 15.87 vs Table2 Clevel 9 = 15.87 => OK
Table 1 Clevel 7 = 9.90 vs Table2 Clevel 8 = 9.90 => OK
Table 1 Clevel 5 = 6.41 vs Table2 Clevel 7 = 11.15 => Not equal, large difference !
Table 1 Clevel 3 = 4.03 vs Table2 Clevel 6 = 7.84 => Not equal, fairly large difference
Table 1 Clevel 1 = 3.78 vs Table2 Clevel 5 = 4.96 => Not equal

So I suspect there may be some other parameters at stake.

Yes, I already mentioned that Blosc selects different blocksizes (i.e. buffers to be compressed independently) for different compression levels, so these differences are expected.
 
Otherwise, the differences would be surprising.


>     An acceleration value of "0" means "use Default value" (see lz4.c)
>     An acceleration value of "1" is the same as regular LZ4_compress_default()
> My benchmarks are saying that both 0 and 1 give approximately the same results.  Could you explain what the difference is with more clarity?  

OK, you are right.

"0" means "default", which is a value set at the top of lz4.c (tuning parameters).
By default, this value is now 1.
Hence "0" ==> "1"

Initially, the default value used to be different, it was 17.
The idea was, since there was already LZ4_compress(), the LZ4_compress_fast() was supposed to be a faster variant, hence the "default" value was large enough for casual users to feel a difference.

This choice changed later on.

For clarity, the entire API has been modified, to adopt LZ4_compress_fast() naming convention everywhere.
Older variants have been classified "obsolete", to reduce confusion related to demultiplying the number of "almost equivalent" prototypes.
They are still supported though, so no worry, no existing program will complain.

So now, the recommended prototypes are :
LZ4_compress_fast()
LZ4_compress_fast_extState()
LZ4_compress_fast_continue()

They replace previous functions, such as LZ4_compress_continue(), 
which is not secure enough, because the size of destination buffer is implied by documentation, instead of enforced through interface,
or LZ4_compress_limitedOutput_continue(), which has a good prototype, but an horrible name.

Ok, glad to see the API simplification happening.  This increases consistency as well, which is cool.  Maybe you should mention which prototypes are recommended somewhere right in lz4.h?  That would be helpful for new users.
 


But the new prototypes now feature an "accelerator" parameter.

Since this parameter is uncommon, I wanted to keep one "simple" prototype, the top one, without this complexity, 
so that casual users don't need to understand nor manipulate this new concept.
Hence :
LZ4_compress_default()
(which is basically LZ4_compress_limitedOutput() with a nicer name).

By now, default==17 does no longer sounds like a good idea.
So it is changed to default==1, because this way, it mimics the behavior of LZ4_compress_default(). Which sounds logical.


And therefore, I can update the comments so that the final choice sounds clearer for the reader.

Excellent.
 


> negative values for acceleration are silently set to 0

Good point. I shall update the documentation to cover it.


> I have got aware of the the r129 release just because I have seen your tweet about it (but I could easily miss it), but I would expect at least an announcement in the list

That's a very good point. Announcing a release can certainly be improved.
When you say "the list", which list do you refer to ? Do you mean this board (lz4c) ?

Yeah, sorry.  I tend to see Google Groups as mailing lists and take them as kind of the preferred channel for talking to users, but my view may be a bit skewed by my previous experience in other projects, and I understand that there are many ways to announce releases.  What is your 'preferred' way to announce them?

Thanks,

--
Francesc Alted

Yann Collet

May 12, 2015, 12:15:47 PM
to lz...@googlegroups.com, fal...@gmail.com, yann.co...@gmail.com, fal...@gmail.com
> Maybe you should mention which are the recommended prototypes somewhere right into the lz4.h?

I expect lz4.h to be relatively clear about that:

all recommended prototypes are clearly commented,

non-recommended prototypes are below the "Obsolete" delimiter:

Some of them even trigger compilation warnings (when they are old enough).


> And therefore, I can update the comments so that the final choice sounds clearer for the reader.



> Which is your 'preferred' way to announce them?

I would suggest setting up an Atom feed on :

It will track & notify every release (and only releases).


Regards



Francesc Alted

May 12, 2015, 12:28:04 PM
to Yann Collet, lz...@googlegroups.com
2015-05-12 18:15 GMT+02:00 Yann Collet <yann.co...@gmail.com>:
> Maybe you should mention which are the recommended prototypes somewhere right into the lz4.h?

I expect lz4.h to be relatively clear for that :

all recommended prototypes are clearly commented,

Yep.  Perhaps I would move `LZ4_compress_fast()` to the 'Simple Functions' section, but that's up to you, indeed.
 

non-recommended prototypes are below the limit "Obsolete" :

Some of them even trigger compilation warnings (when they are old enough).


And therefore, I can update the comments so that the final choice sounds clearer for the reader.

Comment update available here :


> Which is your 'preferred' way to announce them?

I would suggest setting an Atom feed on :

It will track & notify every release (and only releases).

Great.  Will do.

Best,
--
Francesc Alted