Compare to other


brule....@gmail.com

unread,
Sep 19, 2015, 7:00:25 AM9/19/15
to Brotli
Hello,

I wish to integrate it.
It's not clear how to compile it or how to use it to compress/decompress; is there no binary?

Cheers,

Evgenii Kliuchnikov

unread,
Sep 19, 2015, 5:59:44 PM9/19/15
to brule....@gmail.com, Brotli
Hello.

Building is easy:
cd brotli/bro
export CFLAGS="-O2"
export CXXFLAGS="-O2"
make

The "bro" tool can compress and decompress files.
You can read or modify its sources to make your own measurements.



Eugene Klyuchnikov | SW Engineer | eus...@google.com | Fight fire with fire!

--
You received this message because you are subscribed to the Google Groups "Brotli" group.
To unsubscribe from this group and stop receiving emails from it, send an email to brotli+un...@googlegroups.com.
To post to this group, send email to bro...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/brotli/25e0bf07-d6f8-454a-b9ec-062a377de0db%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

BRULE Herman

unread,
Sep 19, 2015, 8:00:17 PM9/19/15
to Evgenii Kliuchnikov, Brotli

cd brotli/tools ?

This should be in the README.

See my test file attached; I need to test with typical packet data too.

The preliminary test is good: compression similar to xz, with fast compression.

My target is network protocol, mostly this:

http://catchchallenger.first-world.info/wiki/Base_protocol_messages#C203

http://catchchallenger.first-world.info/wiki/Login_protocol_messages#Reply_0205
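A quick way to see why "typical packet data" needs its own testing (a sketch using zlib from the Python standard library as a stand-in for brotli, since the point holds for any general-purpose codec): per-message framing overhead can make very small packets grow instead of shrink.

```python
import zlib

# A small protocol message; the 30-byte size is an arbitrary stand-in
# for the packet formats linked above.
packet = bytes(range(30))  # 30 distinct bytes: essentially incompressible

compressed = zlib.compress(packet, 9)
print(len(packet), len(compressed))
# For payloads this small the container overhead dominates, so the
# "compressed" output is larger than the input.
assert len(compressed) > len(packet)
```

A shared dictionary (brotli ships a built-in one) is the usual mitigation, which is why per-packet results can differ a lot from whole-file benchmarks.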

 

Cheers,

 

--

alpha_one_x86/BRULE Herman <alpha_...@first-world.info>

Main developer of Supercopier/Ultracopier/CatchChallenger, Esourcing and server management

IT, OS, technologies, research & development, security and business department

datapack.tar.xz
signature.asc

jdde...@gmail.com

unread,
Sep 22, 2015, 10:39:50 AM9/22/15
to Brotli, brule....@gmail.com
Minor request for the bro command... Can brotli be made as close as possible to a drop-in replacement for gzip/bzip2/zopfli, in terms of command-line options?

i.e.:
$ bro DontTazeMe

   ... compresses and outputs a file DontTazeMe.bro

Also, adding abbreviated options: -f (--force), -k ('keep', i.e. don't delete the compressed file after decompressing), -d (decompress), and -v (verbose), plus a built-in description of the quality and repeat parameters, would be GREATLY appreciated.

These minor changes in functionality mean that I could use bro as a drop-in replacement for other tools when prepping data for publishing on the web.

Thanks!
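The gzip-style file handling being requested can be sketched in a few lines. This is a hypothetical wrapper, not the actual bro tool: the function name and the .bro suffix are illustrative, and zlib stands in for the brotli library.

```python
import os
import zlib

def compress_file(path, keep=False, force=False, decompress=False):
    """gzip-like semantics: foo -> foo.bro, or foo.bro -> foo with decompress=True."""
    if decompress:
        if not path.endswith(".bro"):
            raise ValueError("unknown suffix: " + path)
        out, transform = path[:-len(".bro")], zlib.decompress
    else:
        out, transform = path + ".bro", (lambda data: zlib.compress(data, 9))
    if os.path.exists(out) and not force:
        raise FileExistsError(out)  # -f / --force would override this
    with open(path, "rb") as f:
        data = f.read()
    with open(out, "wb") as f:
        f.write(transform(data))
    if not keep:
        os.remove(path)  # like gzip: replace the input file unless -k is given
    return out
```

The key behavior being asked for is the implicit output name and the delete-the-input default, which is what lets such a tool slot into existing gzip-based pipelines.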

neura...@gmail.com

unread,
Sep 23, 2015, 12:14:19 PM9/23/15
to Brotli, brule....@gmail.com, jdde...@gmail.com
Hi,

We had high hopes for Brotli. I've long advocated for browsers to adopt LZMA, but it didn't seem like that was going to happen, so I was excited when I saw Brotli.

But unfortunately, we are noticing that Brotli is slow and inefficient compared to LZMA on stuff like trimesh binary data streams that are common in 3D computer graphics/WebGL applications.

I've posted details of this here with a representative test file:


It may be that Brotli should not be described as general purpose but rather focused on text compression or structured text compression?  As it stands now, the comparisons with LZMA in its introduction and readme are somewhat inaccurate unless we are seeing a bug.

Best regards,
Ben Houston

Evan Nemerson

unread,
Sep 23, 2015, 12:46:54 PM9/23/15
to neura...@gmail.com, Brotli
This may be a bit off-topic for this list, but…

I'm putting together a new corpus for benchmarking compression
algorithms (see [1]), and I would like to include some 3D data ([2]).
I've never heard of trimesh, but TBH it's not really my area. If
you're willing I'd like to discuss your use case in a bit more detail
at [2].

As for whether Brotli should be considered general-purpose, it does
perform well on many non-text files in my benchmark ([3]). It's really
not fair to assume that Brotli is only meant for text compression based
on results from one type of data which the authors may not yet have
even looked at optimizing for. Even if Brotli can't be optimized for
that type of data, that doesn't make it a special-purpose codec. All
the codecs I've tried have some files where their performance is much
better than others relative to other codecs.


[1] https://github.com/nemequ/squash-corpus
[2] https://github.com/nemequ/squash-corpus/issues/14
[3] https://quixdb.github.io/squash-benchmark/

-Evan

dud...@gmail.com

unread,
Sep 24, 2015, 8:43:18 AM9/24/15
to Brotli, brule....@gmail.com

Brotli's strongest competitor is probably lzturbo 1.2 (unfortunately closed source, https://sites.google.com/site/powturbo/home ) - here are some benchmarks from
http://encode.ru/threads/2313-Brotli?p=44970&viewfull=1#post44970

 

“Input:
812,392,384 bytes, HTML top 10k Alexa crawled (8,998 with HTML response)

Output:
152,477,067 bytes, 2.128 sec., 0.973 sec., lzturbo -30 -p1
152,477,067 bytes, 2.132 sec., 0.971 sec., lzturbo -30 -p1 -b4
150,773,269 bytes, 2.151 sec., 1.047 sec., lzturbo -30 -p1 -b16
150,219,553 bytes, 2.332 sec., 1.204 sec., lzturbo -30 -p1 -b800
141,215,050 bytes, 2.751 sec., 0.938 sec., lzturbo -31 -p1 -b4
140,211,060 bytes, 6.970 sec., 1.775 sec., bro -q 1
138,051,401 bytes, 2.761 sec., 1.001 sec., lzturbo -31 -p1 -b16
137,211,547 bytes, 7.808 sec., 1.712 sec., bro -q 2
136,523,335 bytes, 2.830 sec., 1.094 sec., lzturbo -31 -p1
135,723,691 bytes, 8.318 sec., 1.677 sec., bro -q 3
135,656,476 bytes, 2.972 sec., 1.153 sec., lzturbo -31 -p1 -b800
131,871,664 bytes, 14.052 sec., 0.899 sec., lzturbo -32 -p1 -b4
131,401,865 bytes, 9.917 sec., 1.677 sec., bro -q 4
127,045,472 bytes, 9.549 sec., 0.957 sec., lzturbo -32 -p1 -b16
123,169,077 bytes, 8.472 sec., 1.090 sec., lzturbo -32 -p1
122,480,456 bytes, 19.411 sec., 1.645 sec., bro -q 5
119,969,489 bytes, 27.663 sec., 1.602 sec., bro -q 6
116,217,001 bytes, 40.748 sec., 1.630 sec., bro -q 7
115,985,847 bytes, 386.192 sec., 0.850 sec., lzturbo -39 -p1 -b4
115,163,486 bytes, 55.523 sec., 1.614 sec., bro -q 8
114,602,026 bytes, 8.403 sec., 1.218 sec., lzturbo -32 -p1 -b800
114,345,025 bytes, 78.418 sec., 1.594 sec., bro -q 9
109,425,530 bytes, 436.824 sec., 0.893 sec., lzturbo -39 -p1 -b16
105,933,030 bytes, 571.280 sec., 5.168 sec., lzturbo -49 -p1 -b4
104,094,380 bytes, 2313.780 sec., 1.693 sec., bro -q 10
100,898,120 bytes, 534.627 sec., 1.097 sec., lzturbo -39 -p1
99,699,129 bytes, 625.001 sec., 4.902 sec., lzturbo -49 -p1 -b16
92,303,359 bytes, 727.475 sec., 4.729 sec., lzturbo -49 -p1
90,239,627 bytes, 680.976 sec., 1.170 sec., lzturbo -39 -p1 -b800
82,891,405 bytes, 882.513 sec., 4.597 sec., lzturbo -49 -p1 -b800

Used:
lzturbo 1.2 Aug 11 2014”

 

They should also be compared on different types of files. Sadly, the Squash benchmark includes only open-source compressors.

Evan Nemerson

unread,
Sep 24, 2015, 1:11:37 PM9/24/15
to dud...@gmail.com, Brotli
On Thu, 2015-09-24 at 02:52 -0700, dud...@gmail.com wrote:
> The strongest Brotli's competitor is probably lzturbo 1.2
> (unfortunately
> closed source, https://sites.google.com/site/powturbo/home ) - here
> are
> some benchmarks from
> http://encode.ru/threads/2313-Brotli?p=44970&viewfull=1#post44970

The fact that it is closed-source pretty much eliminates it as a
possible solution for a *lot* of applications, probably most of them. I
certainly can't see Firefox considering it as a content-encoding.

Even discounting the philosophical implications of using non-free
software, the fact that lzturbo is proprietary means that it simply
isn't a competitor to Brotli, even on purely technical grounds:

* AFAIK lzturbo also isn't distributed as a library, only an
executable.  In order to use it software needs to fork()/exec().
* lzturbo doesn't work on ARM, MIPS, or any other architecture the
author hasn't decided to compile it for.
* lzturbo doesn't work on Linux, BSD, or any other operating system
the author hasn't decided to compile it for.
* If you use it you are hoping the author will continue to provide bug
fixes for all eternity; you can't really do it yourself without the
source code.

On more political grounds, it is worth noting that there is also
something of a cloud hanging over lzturbo; there were allegations of
GPL violations (copying code from FreeArc). The allegations are denied
by the lzturbo author, and I'm not convinced of their veracity, but TBH
I would be very wary of using lzturbo in software I distribute for fear
of opening myself up to a lawsuit. IMHO the only way to alleviate
these concerns would be for the author to open up the source code so
people can verify that it doesn't contain anything copied from copyleft
sources.

> "Input:
> 812,392,384 bytes, HTML top 10k Alexa crawled (8,998 with HTML
> response)

A more realistic test for this would be a small program which
compresses/decompresses each of those 10k files individually vs. a
script which executes lzturbo 10k times.

> Unfortunately squash benchmark require open-source,

That's not true. I just posted some information about this to
<https://tinyurl.com/nnbof27>.

> maybe there should be
> also compare on other types of files?

I would like to. That's what the first part of the e-mail you replied
to was about. I'm trying to put together a new corpus at
<https://github.com/nemequ/squash-corpus>. If you have suggestions I
would be happy to discuss them in the issue tracker for that project.

I don't want to hijack Brotli's mailing list to discuss Squash. I'm
happy to continue discussing the lzturbo vs. brotli thing here, but if
you want to talk about Squash more we should do so at one of the places
listed here: <https://quixdb.github.io/squash/#support>. If you want
to talk about the corpus, we should use the GitHub issue tracker.

-Evan

dud...@gmail.com

unread,
Sep 24, 2015, 5:40:15 PM9/24/15
to Brotli, dud...@gmail.com
Hi Evan,

I think Squash is a truly amazing benchmark, exactly what is needed. I only mentioned lzturbo because I think it needs the thorough tests and comparisons (also with Brotli) that Squash provides, as it is probably one of the most interesting compressors today, which is what this thread seems to be about.
The test file from the benchmark I cited seems highly relevant for two standard uses:

dynamically generated:

135,723,691 bytes, 8.318 sec., 1.677 sec., bro -q 3
114,602,026 bytes, 8.403 sec., 1.218 sec., lzturbo -32 -p1 -b800

prepacked:

104,094,380 bytes, 2313.780 sec., 1.693 sec., bro -q 10
90,239,627 bytes, 680.976 sec., 1.170 sec., lzturbo -39 -p1 -b800

This suggests that lzturbo would not only reduce bandwidth by an additional 15%, but is also substantially faster.
Sure, it might not handle some data well, like DNA ( http://encode.ru/threads/1890-Benchmarking-Entropy-Coders?p=44461&viewfull=1#post44461 ) or lots of small files, but it seems at least worth considering and testing.
And remember that it's just a one-man project; given time and people, it can probably be further improved.

Sure, a closed-source compressor shouldn't be a candidate for a standard, but if these differences hold, we are talking about huge world-wide savings. Google has bought companies much larger than one person; even if the author wanted e.g. $1M, it would be nothing compared to the savings ... and probably much less than the cost of developing Brotli itself to approach lzturbo's performance ...

We are talking about a standard for many years to come; shouldn't being the best possible be the main priority?

Best,
Jarek

Evan Nemerson

unread,
Sep 24, 2015, 7:21:02 PM9/24/15
to dud...@gmail.com, Brotli
On Thu, 2015-09-24 at 14:40 -0700, dud...@gmail.com wrote:
> Hi Evan,
>
> I think Squash is truly amazing benchmark, exactly what is needed

Thanks, I'm glad you like it :)

> - I only
> mentioned it because I think it provides throughout tests and
> comparisons
> (also with Brotli) lzturbo needs as probably one of the most
> interesting
> compressors today - what this thread seems to be about.
> The test file from benchmark I have cited seems highly relevant and
> for two
> standard uses:

> dynamically generated:
> 135,723,691 bytes, 8.318 sec., 1.677 sec., bro -q 3
> 114,602,026 bytes, 8.403 sec., 1.218 sec., lzturbo -32 -p1 -b800
>
> prepacked:
> 104,094,380 bytes, 2313.780 sec., 1.693 sec., bro -q 10
> 90,239,627 bytes, 680.976 sec., 1.170 sec., lzturbo -39 -p1 -b800
>
> suggests that lzturbo not only would allow to additionally reduce
> bandwidth
> 15%, but is also essentially faster.
> Sure it might not handle well some data like DNA (
> http://encode.ru/threads/1890-Benchmarking-Entropy-Coders?p=44461&vie
> wfull=1#post44461
> ) or lots of small files, but it seems at least worth considering,
> testing.

I think lots of small files is the *far* more interesting test. How
large is the average file on the web? Remember, web pages aren't
delivered as one large bundle containing all the HTML, CSS, JavaScript,
images, and other resources. Each of those files is delivered
individually. And what about RPC requests/responses, which are
probably less than a kilobyte on average?

To be a good fit for the web, a compression codec needs to perform well
with relatively small pieces of data. And yes, the current corpora are
quite terrible at representing this use case. I really need to get on
with creating mine…

I don't have any data to back this up, but my guess is that for every
large file which is compressed/decompressed, tens of thousands of small
snippets are quietly processed.
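The small-payload point is easy to demonstrate (a sketch with zlib from the Python standard library as a stand-in; the effect is the same for any LZ-family codec): compressing many small, similar files individually loses the cross-file redundancy that a single large benchmark file exposes.

```python
import zlib

# 200 small, similar "files" -- think RPC responses or tiny JSON snippets.
files = [b'{"user": %d, "status": "ok", "retries": 0}' % i for i in range(200)]

individual = sum(len(zlib.compress(f, 9)) for f in files)
bundled = len(zlib.compress(b"".join(files), 9))

print(individual, bundled)
# Per-file compression cannot reference bytes from the other files,
# so the individual total is far larger than the bundled result.
assert individual > 2 * bundled
```

This is exactly why a corpus of large tarballs says little about how a codec will behave on the web's many tiny responses.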

The largest pieces of data which (I'm guessing) are *commonly*
processed are game data: things like textures and 3D models, and AFAIK
these are usually only a few megabytes each, nothing like the 100 MiB
files you are talking about.

The only real exception I can think of is software and software
updates, which can get fairly large, but usually don't need to be.

I'm not saying lzturbo would be a bad fit for this type of thing but
unless the author releases the source code, or at least a shared
library, it's difficult to judge.

> And remember that it's just one man's project - give him time, people
> and
> it probably can be further improved.
>
> Sure closed-source compressor shouldn't be a candidate for a
> standard, but
> if these differences maintain, we are talking about huge world-wide
> savings
> - Google has bought much larger than one-person companies, event if
> he
> would want e.g 1M$, it would be nothing comparing to savings ... and
> probably much less than the cost of development of Brotli itself and
> approaching lzturbo performance ...

I'm not sure it is fair to ask Google to buy lzturbo then turn around
and release the source code for free. They tried that with On2, and
I'm not sure they got their money's worth; people still use h.264. I'm
hopeful that things will improve for h.265 vs. VP9 (largely thanks to
the new h.265 patent pool), but is it really fair to count on Google
for this sort of thing?

Anyways, that decision is not ours, it is Google's. All we can really
do is make the data available.

> We are talking about the standard for many years to come - shouldn't
> being
> the best possible be the main priority?

Brotli is still a fairly new codec, and I'm sure there are lots of
optimizations still waiting to be made. The fact that it is open
source means other people can help with this… look at the great work
Cloudflare, Intel, and Google have done with zlib compression lately
(the zlib and zopfli libraries).


-Evan

powt...@gmail.com

unread,
Sep 30, 2015, 1:13:50 PM9/30/15
to Brotli, dud...@gmail.com
Hi,
glad to see the comparison brotli vs. LzTurbo.

I've also tested the latest brotli version and several other compressors. See: "Compressor Benchmark"
The benchmark is an "in memory benchmark" without any I/O involved, using two different file types: a "binary application" file and
a "text" file. Brotli's compression ratio is only good for small or text files. Decoding is only fast on text files, because of
cache effects from using a dictionary. Further testing on mobile devices is necessary for a precise conclusion.

Additionally, brotli compression is too slow, making it practically unusable for dynamically generated content.
Only LzTurbo has a broad range of compression options, from faster than LZ4 to better compression than LZMA.
LzTurbo can also be improved by using a static/dynamic dictionary or preprocessing for text files.


> * lzturbo doesn't work on ARM, MIPS, or any other architecture the 
>   author hasn't decided to compile it for. 
> * lzturbo doesn't work on Linux, BSD, or any other operating system 
>   the author hasn't decided to compile it for.
Since the start, the LzTurbo package has included Windows and Linux versions.
No assembly instructions are used in LzTurbo, so compiling
for other environments or CPUs is not a problem.



powt...@gmail.com

unread,
Oct 6, 2015, 8:27:03 AM10/6/15
to Brotli, dud...@gmail.com, powt...@gmail.com
Internet Scenario Benchmark

LzTurbo compresses 5 times faster and decompresses 3 times faster than brotli.
LzTurbo decompresses more than 6 times faster than lzham.

                 size  ratio%   C MB/s       D MB/s     MB=1.000.000
             15180334    15.2     0.43       482.07    brotli 11  v0.2.0
             15309122    15.3     2.27       127.23    lzma 9  v15.08
             16541706    16.5     2.07      1463.39    lzturbo 39  v1.3
             16921859    16.9     2.96       230.54    lzham 4  v1.0
             17860382    17.9    43.51       495.78    zlib 9  v1.2.8
             18033576    18.0   135.62      1454.31    lzturbo 32  v1.3
            100000000   100.0  5984.00      6043.00    libc memcpy
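The claimed factors can be read straight off the table (a quick check using the C MB/s and D MB/s columns above):

```python
# Encode/decode speeds copied from the table above (MB/s).
brotli_c, brotli_d = 0.43, 482.07      # brotli 11
lzturbo_c, lzturbo_d = 2.07, 1463.39   # lzturbo 39
lzham_d = 230.54                       # lzham 4

print(lzturbo_c / brotli_c)  # ~4.8x faster compression than brotli
print(lzturbo_d / brotli_d)  # ~3.0x faster decompression than brotli
print(lzturbo_d / lzham_d)   # ~6.3x faster decompression than lzham
```

Note these factors hold at roughly comparable ratios (15.2% vs. 16.5%); brotli 11 still produces the smallest output in this run.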

Ben Houston

unread,
Oct 6, 2015, 8:39:31 AM10/6/15
to powt...@gmail.com, Brotli, Jarek Duda
Hi Pow Turbo,

You have impressive numbers. You should figure out a way to get your
tool out into the world. I'd follow the pattern established by the 7z
website: have an SDK, a command-line utility for both Windows
and Linux, and a unique extension for the compressed results. You
could even fork the 7z SourceForge repository and add *.lzt support
to the various 7z tools while renaming it to lzt.exe or whatnot.
You just have to give appropriate credit.

Right now your website is bare bones and I believe this hurts
adoption. I think you could potentially open source your tool while
having a way to monetize it via the ability to let users purchase
commercially supported licenses. This is a classic monetization
scheme. I would try to get a team to help with this aspect of things
if you are mostly a C++ coder. Having at least a decent website,
which LZTurbo doesn't yet have, will help. Just contract it out.
Best regards,
Ben Houston (Cell: 613-762-4113, Skype: ben.exocortex, Twitter: @exocortexcom)
https://Clara.io - Online 3D Modeling and Rendering

dud...@gmail.com

unread,
Oct 6, 2015, 10:53:07 AM10/6/15
to Brotli, dud...@gmail.com
Brotli was added to Squeeze Chart benchmark for various file types:
http://www.squeezechart.com/

dud...@gmail.com

unread,
Oct 28, 2015, 3:50:43 PM10/28/15
to Brotli, dud...@gmail.com
Yann's ZSTD_HC is coming (open source!) - initial tests by Przemysław Skibiński:
http://encode.ru/threads/2345-zstd-HC

zstd    v0.2          427 ms (239 MB/s), 51367682,  189 ms (541 MB/s)
zstd_HC v0.2 -1       996 ms (102 MB/s), 48918586,  244 ms (419 MB/s)
zstd_HC v0.2 -2      1152 ms  (88 MB/s), 47193677,  247 ms (414 MB/s)
zstd_HC v0.2 -3      2810 ms  (36 MB/s), 45401096,  249 ms (411 MB/s)
zstd_HC v0.2 -4      4009 ms  (25 MB/s), 44631716,  243 ms (421 MB/s)
zstd_HC v0.2 -5      4904 ms  (20 MB/s), 44529596,  244 ms (419 MB/s)
zstd_HC v0.2 -6      5785 ms  (17 MB/s), 44281722,  244 ms (419 MB/s)
zstd_HC v0.2 -7      7352 ms  (13 MB/s), 44108456,  241 ms (424 MB/s)
zstd_HC v0.2 -8      9378 ms  (10 MB/s), 43978979,  237 ms (432 MB/s)
zstd_HC v0.2 -9     12306 ms   (8 MB/s), 43901117,  237 ms (432 MB/s)
brotli 2015-10-11 level 0    1202 ms (85 MB/s), 47882059,  489 ms (209 MB/s)
brotli 2015-10-11 level 3    1739 ms (58 MB/s), 47451223,  475 ms (215 MB/s)
brotli 2015-10-11 level 4    2379 ms (43 MB/s), 46943273,  477 ms (214 MB/s)
brotli 2015-10-11 level 5    6175 ms (16 MB/s), 43363897,  528 ms (193 MB/s)
brotli 2015-10-11 level 6    9474 ms (10 MB/s), 42877293,  463 ms (221 MB/s)

tomb...@gmail.com

unread,
Nov 6, 2015, 11:40:28 AM11/6/15
to Brotli, brule....@gmail.com
I'd be interested in a comparison with "Zstandard"
https://github.com/Cyan4973/zstd

fast lossless compression algorithm, targeting real-time compression scenarios at zlib-level compression ratio
not yet reached "stable format" status
now safe to test Zstandard even within production environments

Some things like compression-level flags are in flux, and there are frequent small tweaks to compression, but it seems the large sweeping changes are over.

powt...@gmail.com

unread,
Nov 16, 2015, 2:05:32 PM11/16/15
to Brotli, dud...@gmail.com, powt...@gmail.com
Now, make your own benchmark with your data and compare >50 codecs

TurboBench Compressor Benchmark incl. LzTurbo

- The only benchmark program including LzTurbo
- 100% in-memory benchmark, no I/O overhead
- Includes (>50) almost all popular, latest, or fastest compressors in one compiled package
- Minimum plugin call overhead
- Set one, a group, or several compressors to benchmark at the command line
- Multiple input files with recursive directories
- Concatenate multiple small files into one multiblock file
- Benchmark a multiblock file as one large block, with each block processed separately
- Avoids the cache scenario found in other benchmarks with small files
- Set block size, file size limit, ...
- Set number of iterations, number of runs/benchmarks, max. time per run
- Automatic sort by compressed length
- 64-bit Linux and Windows versions (gcc 5.2 / mingw64 gcc 5.2)
- Benchmarking of entropy coders, LZ77, and BWT compressors
- Text, html, csv, markdown, and other output formats
- 100% C/C++, without inline assembly
- Enable/disable groups or individual codecs
- No other compressor benchmark includes more codecs or offers more precision and features
- Transfer speed sheet for different connections or devices: GPRS, 3G, 4G, DSL, Network, HDD, SSD, RAM

Benchmark program source code: TurboBench Compressor Benchmark
Download TurboBench

Benchmark: app3.tar (binary, Portable Apps Suite)
Code:

   C Size  ratio%     C MB/s     D MB/s   Name              GPRS 56    2G 256    2G 456    3G 752     3G 1M    DSL 2M     4G 4M  WIFI 30M  CAB 100M  ETH 1000 HDD 150MB SSD 550MB   SSD 1GB   SSD 2GB     4GB/s     8GB/s  File   (bold = pareto)  MB=1.000.000
 32798929    32.8       2.87      64.79   lzma 9              0.021     0.098     0.173     0.286     0.379     0.754     1.491     9.727    24.011    55.385    56.759    62.384    63.445    64.111    64.450    64.620  app3.tar
 32925788    32.9       1.63      69.89   lzturbo 49          0.021     0.097     0.173     0.285     0.378     0.752     1.488     9.802    24.617    59.036    60.606    67.090    68.324    69.100    69.495    69.694  app3.tar
 33761620    33.7       2.60     272.82   lzham 4             0.021     0.095     0.169     0.278     0.370     0.739     1.474    10.683    32.628   157.141   169.090   233.717   249.830   260.818   266.683   269.716  app3.tar
 34104666    34.1       2.23    1345.13   lzturbo 39          0.021     0.094     0.167     0.276     0.367     0.733     1.466    10.917    35.714   288.258   331.694   733.731   922.395  1094.357  1206.854  1272.245  app3.tar
 35638896    35.6       1.20    1139.73   zstd 20             0.020     0.090     0.160     0.264     0.351     0.702     1.403    10.436    34.059   268.406   307.599   655.849   810.742   947.491  1034.758  1084.711  app3.tar
 37025201    37.0      67.45    1373.21   lzturbo 32          0.019     0.087     0.154     0.254     0.338     0.676     1.350    10.064    32.982   271.199   313.073   713.905   910.655  1095.090  1218.479  1291.223  app3.tar
 37313258    37.3       2.44    2172.46   lzturbo 29          0.019     0.086     0.153     0.252     0.335     0.670     1.340    10.014    33.023   290.493   339.512   878.687  1200.376  1546.337  1806.690  1972.765  app3.tar
 41668560    41.6       0.22     247.07   brotli 11           0.017     0.077     0.137     0.226     0.300     0.599     1.195     8.692    26.774   135.544   146.571   208.146   224.028   234.985   240.876   243.933  app3.tar
 45799999    45.8      28.07     347.45   brotli 6            0.015     0.070     0.125     0.205     0.273     0.546     1.089     8.007    25.328   152.940   168.678   269.539   299.789   321.864   334.167   340.678  app3.tar
 46304388    46.3      38.46     342.36   brotli 5            0.015     0.069     0.123     0.203     0.270     0.540     1.077     7.919    25.045   151.020   166.532   265.816   295.551   317.237   329.319   335.711  app3.tar
 46480016    46.4     188.78    1162.12   lzturbo 31          0.015     0.069     0.123     0.202     0.269     0.538     1.076     8.020    26.310   218.568   252.773   586.594   754.808   915.191  1023.979  1088.685  app3.tar
 46875269    46.8      50.33     944.34   zstd 9              0.015     0.068     0.122     0.201     0.267     0.534     1.067     7.941    25.959   208.105   239.184   523.458   654.782   773.347   850.334   894.877  app3.tar
 48836109    48.8     117.72     879.33   zstd 5              0.014     0.066     0.117     0.193     0.256     0.512     1.024     7.620    24.896   198.402   227.803   494.003   615.346   724.027   794.159   834.580  app3.tar
 49324183    49.3     141.35     332.13   brotli 1            0.014     0.065     0.116     0.191     0.253     0.507     1.012     7.440    23.567   143.825   158.833   255.965   285.419   307.008   319.076   325.472  app3.tar
 49860700    49.8      16.93     294.78   zlib 9              0.014     0.064     0.114     0.189     0.251     0.501     1.000     7.341    23.126   135.550   148.961   232.662   257.034   274.614   284.338   289.462  app3.tar
 49915412    49.9     299.15    1048.86   lzturbo 30a         0.014     0.064     0.114     0.188     0.251     0.501     1.002     7.467    24.482   202.318   233.763   537.612   688.666   831.427   927.570   984.492  app3.tar
 49962678    49.9      34.50     293.32   zlib 6              0.014     0.064     0.114     0.188     0.250     0.500     0.998     7.325    23.073   135.092   148.438   231.654   255.860   273.312   282.962   288.047  app3.tar
 50027825    50.0      51.46    1935.63   lzturbo 22          0.014     0.064     0.114     0.188     0.250     0.500     1.000     7.474    24.692   221.488   259.839   701.593   983.853  1304.599  1558.669  1726.818  app3.tar
 50311200    50.3     323.12    1055.79   lzturbo 30          0.014     0.064     0.113     0.187     0.249     0.497     0.994     7.409    24.298   201.284   232.670   537.343   689.761   834.397   932.126   990.110  app3.tar
 50337788    50.3       6.93    1439.88   lz5 9               0.014     0.064     0.113     0.187     0.249     0.497     0.994     7.419    24.435   211.974   247.094   621.569   835.154  1057.147  1219.182  1320.373  app3.tar
 52597358    52.5     260.06    2085.20   lzturbo 21          0.013     0.061     0.108     0.179     0.238     0.476     0.951     7.112    23.521   213.529   251.092   696.891   994.999  1347.168  1636.838  1834.015  app3.tar
 52928477    52.9      68.88     276.17   zlib 1              0.013     0.061     0.108     0.178     0.236     0.472     0.942     6.914    21.776   127.371   139.937   218.227   240.978   257.375   266.440   271.216  app3.tar
 53112430    53.1     319.53     940.51   zstd 1              0.013     0.060     0.107     0.177     0.236     0.471     0.941     7.015    22.983   188.393   217.363   493.102   627.412   752.701   836.192   885.290  app3.tar
 54265487    54.2       2.01    3861.28   lzturbo 19          0.013     0.059     0.105     0.173     0.231     0.461     0.922     6.905    22.921   217.583   258.190   803.435  1248.281  1886.644  2534.780  3060.477  app3.tar
 55400947    55.3     465.65    1913.55   lzturbo 20a         0.013     0.058     0.103     0.170     0.226     0.452     0.903     6.752    22.322   202.008   237.397   654.070   929.322  1251.061  1512.962  1689.839  app3.tar
 55764172    55.7     413.75    1532.40   lz5 1               0.013     0.057     0.102     0.169     0.224     0.449     0.897     6.702    22.114   195.721   229.015   600.432   826.675  1073.976  1262.872  1384.640  app3.tar
 55923645    55.9     141.72    3696.98   lzturbo 12          0.013     0.057     0.102     0.168     0.224     0.447     0.895     6.700    22.239   210.971   250.309   777.434  1206.015  1818.731  2438.059  2938.353  app3.tar
 57606731    57.6     268.38    3476.04   lzturbo 11          0.012     0.056     0.099     0.163     0.217     0.434     0.869     6.504    21.585   204.429   242.462   749.599  1158.502  1737.821  2317.184  2780.707  app3.tar
 59090242    59.0     637.36    2101.90   lzturbo 20          0.012     0.054     0.097     0.159     0.212     0.423     0.847     6.333    20.964   192.370   226.694   645.548   938.015  1297.150  1604.260  1819.669  app3.tar
 60016380    60.0     597.00    3368.99   lzturbo 10a         0.012     0.053     0.095     0.157     0.208     0.417     0.834     6.243    20.720   196.332   232.884   721.003  1115.576  1676.133  2238.547  2689.823  app3.tar
 61460109    61.4     707.63    3475.58   lzturbo 10          0.011     0.052     0.093     0.153     0.204     0.407     0.814     6.097    20.240   192.319   228.257   712.211  1108.995  1681.465  2266.440  2743.702  app3.tar
 61938605    61.9     668.09    2998.54   lz4 1               0.011     0.052     0.092     0.152     0.202     0.404     0.808     6.048    20.066   189.261   224.282   685.616  1050.120  1555.491  2048.384  2434.022  app3.tar
100098564   100.0    8852.86    8528.82   memcpy              0.007     0.032     0.057     0.094     0.125     0.250     0.500     3.748    12.482   123.194   147.407   516.681   895.055  1620.090  2722.945  4127.976  app3.tar
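The per-connection columns above appear to follow the usual effective-throughput model: total time = compressed size / link speed + original size / decompression speed, with the column reporting original size / total time. This is an assumption about how TurboBench derives the sheet, but it reproduces the printed numbers; here is a check against the lzma row:

```python
def effective_mbps(csize, orig, dspeed_mbps, link_mbits):
    """Effective delivery throughput in MB/s (MB = 1,000,000 bytes)."""
    link_mbps = link_mbits / 8.0                            # Mbit/s -> MB/s
    total_s = csize / 1e6 / link_mbps + orig / 1e6 / dspeed_mbps
    return orig / 1e6 / total_s

# lzma 9 row: 32,798,929 bytes compressed, 64.79 MB/s decode speed,
# 100,098,564 bytes original; "ETH 1000" = 1000 Mbit/s link.
print(effective_mbps(32798929, 100098564, 64.79, 1000))   # table: 55.385
print(effective_mbps(32798929, 100098564, 64.79, 0.056))  # GPRS 56; table: 0.021
```

This explains the table's shape: on slow links ratio dominates (lzma and lzturbo 49 lead), while on fast local media decompression speed dominates (lzturbo and memcpy lead).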
