Add new ch_type value: ELFCOMPRESS_ZSTD


Fangrui Song

Jun 27, 2022, 2:15:43 AM6/27/22
to Generic System V Application Binary Interface
zstd has impressive compression and decompression speed while usually compressing better than zlib (https://facebook.github.io/zstd/#benchmarks).
Many folks are investigating whether it can replace zlib for compressed debug info sections.

* https://discourse.llvm.org/t/rfc-zstandard-as-a-second-compression-method-to-llvm/63399
* https://github.com/facebook/zstd/issues/2832

(I noted on https://maskray.me/blog/2021-12-19-why-isnt-ld.lld-faster
"No format except zlib is standard. This is a big ecosystem issue. ld.lld is part of the ecosystem and can drive it, but with significant buy-in from debug information consumers. As a generic ELF feature, a new format needs a generic-abi discussion.")

I propose that we add a row below "Figure 4-13: ELF Compression Types, ch_type"

    ELFCOMPRESS_ZSTD  2

Then add this description:

    ELFCOMPRESS_ZSTD
    The section data is compressed with the Zstandard algorithm. The compressed Zstandard data bytes begin with the byte immediately following the compression header, and extend to the end of the section. Additional documentation for Zstandard may be found at http://www.zstandard.org


I hope that we can reserve a generic ABI value for it. Otherwise, I will seek a GNU ABI value in the range ELFCOMPRESS_LOOS - ELFCOMPRESS_HIOS. The value will be used by Linux and likely Fuchsia. I'll need to notify FreeBSD/NetBSD/OpenBSD/etc. folks.
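(For readers who want to see where the value lands, here is the proposed constant next to the existing definitions, written out as a C excerpt. The Elf64_Chdr layout and ELFCOMPRESS_ZLIB are already in the gABI and in a typical <elf.h>; only the last line is new. This is an illustration, not additional proposal text.)

```c
#include <stdint.h>

typedef uint32_t Elf64_Word;    /* gABI: 4-byte unsigned integer */
typedef uint64_t Elf64_Xword;   /* gABI: 8-byte unsigned integer */

/* Existing 64-bit compression header from the gABI; it sits at the start
   of every SHF_COMPRESSED section and is followed by the compressed stream. */
typedef struct {
    Elf64_Word  ch_type;        /* one of the ELFCOMPRESS_* values below */
    Elf64_Word  ch_reserved;
    Elf64_Xword ch_size;        /* size of the uncompressed data */
    Elf64_Xword ch_addralign;   /* alignment of the uncompressed data */
} Elf64_Chdr;

#define ELFCOMPRESS_ZLIB 1      /* existing: zlib (DEFLATE) stream follows */
#define ELFCOMPRESS_ZSTD 2      /* proposed: Zstandard frame follows */
```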

Szabolcs Nagy

Jun 27, 2022, 9:03:34 AM6/27/22
to 'Fangrui Song' via Generic System V Application Binary Interface
The 06/26/2022 23:15, 'Fangrui Song' via Generic System V Application Binary Interface wrote:
> zstd has impressive compression and decompression speed while usually
> compressing better than zlib (https://facebook.github.io/zstd/#benchmarks).
> Many folks are investigating whether it can replace zlib compressed debug
> info sections.
>
> *
> https://discourse.llvm.org/t/rfc-zstandard-as-a-second-compression-method-to-llvm/63399
> * https://github.com/facebook/zstd/issues/2832
>
> (I noted on https://maskray.me/blog/2021-12-19-why-isnt-ld.lld-faster
> "No format except zlib is standard. This is a big ecosystem issue. ld.lld
> is part of the ecosystem and can drive it, but with significant buy-in from
> debug information consumers. As a generic ELF feature, a new format needs a
> generic-abi discussion.")
>
> I propose that we add a row below "Figure 4-13: ELF Compression Types,
> ch_type"
>
> ELFCOMPRESS_ZSTD 2
>
> Then add this description:
>
> ELFCOMPRESS_ZSTD
> The section data is compressed with the Zstandard algoritm. The
> compressed Zstandard data bytes begin with the byte immediately following
> the compression header, and extend to the end of the section. Additional
> documentation for Zstandard may be found at http://www.zstandard.org

i don't know much about zstd so ignore if not relevant:

several modern compression formats can be extended or used with custom
dictionaries which can create hidden dependency on private information.
(i would try to avoid this for ELF, but it's also possible to allow
extensions, however i think the general expectation should be stated
as it can cause portability issues).


about speed:

i've seen compression formats fine tuned for a particular generation of
a particular architecture, so benchmark claims on a single cpu may not
be relevant to everybody.


another common failure i've seen is lack of documentation about worst
case resource usage (e.g. zlib was designed so decompress works with
very small memory footprint). this is probably not the job of ELF to
fix, but eventually users will want to know if there is a dos attack
surface.


>
>
> I wish that we can reserve a generic ABI value for it. Otherwise, I will
> seek for a GNU ABI value in the range ELFCOMPRESS_LOOS - ELFCOMPRESS_HIOS.
> The value will be used by Linux and likely Fuchsia. I'll need to notify
> FreeBSD/NetBSD/OpenBSD/etc folks.
>

Gregory Szorc

Jun 27, 2022, 1:00:44 PM6/27/22
to gener...@googlegroups.com
The privacy issue only surfaces if you pre-define shared dictionaries baked into all readers. The complexity and benefits of global/shared dictionaries probably aren't merited for ELF. However, per-ELF dictionaries could provide compelling size and performance wins.

If you use dictionaries with zstd, the reader needs to explicitly handle their existence. i.e. you can't just inline a dictionary into an RFC 8878 zstd frame and expect the reader to use it. So if ELF were to define ELFCOMPRESS_ZSTD as "an RFC 8878 Section 3.1.1 frame" an inline dictionary could not be present.
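(As a side illustration of that constraint: the stable libzstd API already lets a reader check cheaply whether a frame declares an external dictionary, so a consumer could insist on self-contained frames. ZSTD_getDictID_fromFrame is the real zstd.h function; the wrapper name is a hypothetical sketch.)

```c
#include <stdbool.h>
#include <stddef.h>
#include <zstd.h>

/* Hypothetical helper: a frame whose declared dictionary ID is 0 either
 * needs no dictionary or does not record one, i.e. it can be decoded by
 * ZSTD_decompress() with no external input. */
static bool zstd_frame_is_self_contained(const void *frame, size_t size)
{
    return ZSTD_getDictID_fromFrame(frame, size) == 0;
}
```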

However, there's nothing stopping someone from inventing a mechanism for storing dictionaries elsewhere in ELF and automatically using them if a zstd frame references them. This mechanism could be defined by this group or deferred to others. If it were defined by this group later, you'd presumably need a new ch_type variant to maintain backwards compatibility with readers not aware of dictionary support. If there are compelling reasons to support dictionaries today, it might be worth defining the dictionary lookup mechanism now so zstd support in ELF isn't forever fragmented across multiple ch_type values a few years from now. That mechanism is probably along the lines of defining zstd dictionaries in a new section type.
 

about speed:

i've seen compression formats fine tuned for a particular generation of
a particular architecture, so benchmark claims on a single cpu may not
be relevant to everybody.


Since specific ELF files target a specific architecture, ELF writers can choose the compression format that works best for that architecture. The benchmarks I've seen indicate that zstd has compelling performance and size advantages on at least x86-64 and aarch64.
 

another common failure i've seen is lack of documentation about worst
case resource usage (e.g. zlib was designed so decompress works with
very small memory footprint). this is probably not the job of ELF to
fix, but eventually users will want to know if there is a dos attack
surface.

Excessive memory usage is a valid concern with zstd. The compressor chooses values like the window size, which map to how much contiguous memory a decompressor needs to allocate. zstd supports 1+ GB values here, but at default/lower compression levels you are talking hundreds to thousands of kilobytes: highly reasonable for modern general-purpose computers, but potentially problematic for e.g. embedded devices.

Because the upper bound of memory usage is large, compressors/writers need to be smart about choosing a value compatible with their target audience, and decompressors/readers should leverage zstd APIs or logic to define a maximum allowed memory threshold. The zstd frame header has metadata about memory requirements, so readers know about memory issues after reading only ~10 bytes.
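(For concreteness, a minimal sketch of such a capped reader using the stable libzstd API; the 1 << 27 byte (~128 MiB) window limit is an arbitrary example value, not a recommendation from this thread.)

```c
#include <stddef.h>
#include <zstd.h>

/* Sketch: decompress src into dst, rejecting frames whose declared window
 * would require more than 1 << 27 bytes of history. Returns the number of
 * decompressed bytes, or a zstd error code (check with ZSTD_isError). */
static size_t bounded_decompress(void *dst, size_t dst_cap,
                                 const void *src, size_t src_size)
{
    ZSTD_DCtx *dctx = ZSTD_createDCtx();
    if (dctx == NULL)
        return (size_t)-1;                        /* generic error code */
    ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 27);
    size_t n = ZSTD_decompressDCtx(dctx, dst, dst_cap, src, src_size);
    ZSTD_freeDCtx(dctx);
    return n;
}
```

ZSTD_getFrameContentSize() on the first bytes of a frame likewise reports the decompressed size up front (when the writer recorded it), which is how a reader can learn about memory requirements before committing to any allocation.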

I believe at least one Linux distro using zstd for package compression initially chose a too-large compression level and effectively prevented some memory constrained devices from working correctly. So this is a very real footgun.

>
>
> I wish that we can reserve a generic ABI value for it. Otherwise, I will
> seek for a GNU ABI value in the range ELFCOMPRESS_LOOS - ELFCOMPRESS_HIOS.
> The value will be used by Linux and likely Fuchsia. I'll need to notify
> FreeBSD/NetBSD/OpenBSD/etc folks.
>

ali_e...@emvision.com

Jun 27, 2022, 1:14:16 PM6/27/22
to gener...@googlegroups.com
Hi,

When I designed SHF_COMPRESSED, I had several concerns:

Longevity:
It would stink if ELF accumulated support
for compression algorithms that don't make it for the
long haul. I expect to be able to use the ELF objects
I create today forever, without being forced to recompile
them, and I really don't like seeing dead features
accumulate, though of course, some of that is inevitable.

Simplicity:
More choices do not always make things better.
Compression is useful, but the incremental differences
between the various options are not as important, as
the mere fact of being able to compress somehow.

Another angle on simplicity is the question of what
options are allowed for a given choice. A binary on/off
is simple. Multiple options for using something are not.

Fragmentation:
Every platform supports ZLIB. If there are more possibilities,
it seems clear that not every platform will get all the
supported options, and that might make building FOSS that
uses these features messier.

ZLIB is nearly perfect for ELF. It has passed the test of time.
It does reasonably well on all inputs, with reasonable overhead.
It does not have a plethora of options that would somehow need to
be documented, and be passed through from the link-editor (ld)
command line interface. And very importantly, its creators
invented it with the stated goal of not changing it, and they've
lived up to that promise over many years.

In fact, I considered not including the ch_type field
in the compression header, codifying that ELF compression
is always ZLIB. In the end, I decided that wasn't a good
idea, because I knew that the pressure for more options
would be irresistible and would force yet another format
change someday, but also because I didn't want to preclude
genuine improvements.

My initial reaction to this proposal was "Oh no, the day
where ELF compression splinters has arrived." However, I've
done a small google search, and immediately found some
reasons for optimism:

1) The ZLIB folks, whom I trust, have given it some
encouragement:

https://www.zlib.net/

* Zstandard, a better compression algorithm
(not tested by us, but appears to be a better
alternative to zlib in both dimensions of
compression and speed, as well as decompression speed)

2) The authors seem to have a proper appreciation of ZLIB:

https://engineering.fb.com/2016/08/31/core-data/smaller-and-faster-data-compression-with-zstandard/

3) It has a C API, it seems to be stable, and there seems
to be a simple zlib-like API that doesn't require many
options:

https://raw.githack.com/facebook/zstd/release/doc/zstd_manual.html

So I'm nervous, but not opposed. If it's going to happen, it
should probably be in the gABI.

There are some options, such as the ability to create an initial
dictionary, or to set compression levels that I would not be happy
to need to support. I don't want the interface to be more than this
(Solaris) example:

% cc foo.c -z compress-sections=zstd

If that's your intent, then that's good. If you're thinking
of adding more knobs than that, then I'd like to hear all
the details, but my bias is strongly against complicating
sub-options.

Overall, I think this is a worthwhile experiment. If the gABI
had an "experimental" designation, by which we might enable
initial experiments, while holding back the long term commitment
for a few years, I think that would be a good approach. Ideally,
we'd see the following before we make a final commitment:

1) The format has remained 100% backward compatible.

2) There is active ongoing use, and nothing else is
coming along to usurp it.

3) The compression benefits are real and significant,
even when tricks like precomputed dictionaries
are not used.

Note that we already knew all of these for ZLIB before we
formally adopted it.

Thanks.

- Ali

ali_e...@emvision.com

Jun 27, 2022, 1:26:03 PM6/27/22
to gener...@googlegroups.com
On 6/27/22 11:00 AM, Gregory Szorc wrote:
> i dont know much about zstd so ignore if not relevant:

Same here.

>
> several modern compression formats can be extended or used with custom
> dictionaries which can create hidden dependency on private information.
> (i would try to avoid this for ELF, but it's also possible to allow
> extensions, however i think the general expectation should be stated
> as it can cause portability issues).
>
>
> The privacy issue only surfaces if you pre-define shared dictionaries baked into all readers. The complexity and benefits of global/shared dictionaries probably aren't merrited for ELF. However, per
> ELF dictionaries could provide compelling size and performance wins.
>
> If you use dictionaries with zstd, the reader needs to explicitly handle their existence. i.e. you can't just inline a dictionary into an RFC 8878 zstd frame and expect the reader to use it. So if ELF
> were to define ELFCOMPRESS_ZSTD as "an RFC 8878 Section 3.1.1 frame" an inline dictionary could not be present.
>
> However, there's nothing stopping someone from inventing a mechanism for storing dictionaries elsewhere in ELF and automatically using them if a zstd frame references them. This mechanism could be
> defined by this group or deferred to others. If it were defined by this group later, you'd presumably need a new ch_type variant to maintain backwards compatibility with readers not aware of
> dictionary support. If there are compelling reasons to support dictionaries today, it might be worth defining the dictionary lookup mechanism now so zstd support in ELF isn't forever fragmented across
> multiple ch_type a few years from now. That mechanism is probably along the lines of defining zstd dictionaries in a new section type.

These are exactly the kind of "complicating sub-options"
that I mentioned in my previous reply. I could imagine
having a special ZSTD dictionary section, or other options,
but I would not be happy to see them added. If zstd can't
beat zlib in average use by significant margins, without
needing that sort of complexity, that would be a reason
to avoid it, for me.

- Ali

Roland McGrath

Jun 27, 2022, 2:11:08 PM6/27/22
to gener...@googlegroups.com
I don't think issues relating to the desirability of a particular compression algorithm are germane.  The format inherently supports user selection of compression algorithm, and it's up to users to decide what they want to use, and up to particular implementations to decide what algorithms they will support and what their default selections will be.  The subject that warrants wide consensus and discussion in this forum is simply the identification of which algorithm and compression format standard we are talking about, and how ELF headers will indicate that format.

On Mon, Jun 27, 2022 at 6:03 AM Szabolcs Nagy <szabol...@arm.com> wrote:

Fangrui Song

Jun 27, 2022, 4:59:44 PM6/27/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
Thanks for the positive feedback.
Cc Yann Collet (zstd maintainer) to comment on these aspects.

Just to be clear, do you mean that `ELFCOMPRESS_ZSTD  2` is fine with you?
Cc Cary Coutant (de facto maintainer of generic-abi).

>There are some options, such as the ability to create an initial
>dictionary, or to set compression levels that I would not be happy
>to need to support. I don't want the interface to be more than this
>(Solaris) example:
>
> % cc foo.c -z compress-sections=zstd
>
>If that's your intent, then that's good. If you're thinking
>of adding more knobs than that, then I'd like to hear all
>the details, but my bias is strongly against complicating
>sub-options.

For GCC/Clang: perhaps we can introduce a new driver option -gz=zstd.
This is the only user-facing driver option we need. If the ecosystem
builds well, in 5 or 10 years, perhaps we can change -gz to default to
-gz=zstd.

For other tools, we need (https://sourceware.org/pipermail/gnu-gabi/2022q2/000498.html):

* GNU assembler: add --compress-debug-sections=zstd
* GNU ld: add --compress-debug-sections=zstd
* (dwp (part of binutils): ugh, there are currently no compression-related options)
* objcopy/llvm-objcopy: add --compress-debug-sections=zstd
* readelf -x: recognize SHF_COMPRESSED using zstd
* gdb and DWARF consumers (breakpad, dwz, etc): support Zstandard
* elfutils: support Zstandard

See my analysis at the end. zstd appears superior to zlib in all
metrics.

>Overall, I think this is a worthwhile experiment. If the gABI
>had an "experimental" designation, by which we might enable
>initial experiments, while holding back the long term commitment
>for a few years, I think that would be a good approach. Ideally,
>we'd see the following before we make a final commitment:
>
> 1) The format has remained 100% backward compatible.

I think this is satisfied. Cc Yann Collet as a zstd maintainer.

> 2) There is active ongoing use, and nothing else is
> coming along to usurp it.

Satisfied. On the LLVM side there is strong enough motivation, and Cole
Kissane (sorry, I don't know their email address!) is actively working on
adopting Zstandard in various parts (ELF .debug*, clang AST serialization).

On the GNU side, the substantial changes (gas, ld, objcopy
--(de)?compress-debug-sections, readelf -x) will be in binutils. Cc
Nick Clifton.

> 3) The compression benefits are real and significant,
> even when tricks like precomputed dictionaries
> are not used.

I just did some rudimentary analysis on clang 14.0 builds, with DWARF v4
and DWARF v5. I used a locally modified llvm-objcopy --(de)?compress-debug-sections
with zstd. No dictionary is used.

## DWARF v4

compress zlib 28.510 s ± 0.149 s
compress zstd 8.077 s ± 0.067 s
uncompress zlib 5.761 s ± 0.047 s
uncompress zstd 4.134 s ± 0.028 s
pure copy 2.187 s ± 0.047 s

size (zlib vs zstd). zstd is smaller in most sections
```
% ~/projects/bloaty/Release/bloaty clang.zstd -- clang.zlib
FILE SIZE VM SIZE
-------------- --------------
+8.0% +5.24Ki [ = ] 0 .debug_loc
-17.5% -147Ki [ = ] 0 .debug_abbrev
-11.0% -318Ki [ = ] 0 .debug_ranges
-17.7% -5.83Mi [ = ] 0 .debug_str
-43.5% -14.9Mi [ = ] 0 .debug_line
-11.3% -18.3Mi [ = ] 0 .debug_info
-6.9% -39.5Mi [ = ] 0 TOTAL
```

I am not clear about the memory footprint of zstd in general, but for the
binary I tested, zstd uses less memory for both compression and decompression.

compress: wall time/maximum RSS

zlib 28.66 sec 2239372 KiB
zstd 8.51 sec 2158624 KiB

uncompress

zlib 6.29 sec 2316900 KiB
zstd 4.59 sec 2298432 KiB

## DWARF v5

compress zlib 12.719 s ± 0.057 s
compress zstd 4.901 s ± 0.179 s
uncompress zlib 4.408 s ± 0.017 s
uncompress zstd 3.742 s ± 0.023 s

size (zlib vs zstd). zstd is smaller in most sections
```
% ~/projects/bloaty/Release/bloaty clang.zstd -- clang.zlib
FILE SIZE VM SIZE
-------------- --------------
+5.1% +515Ki [ = ] 0 .debug_frame
+11% +19.2Ki [ = ] 0 .debug_loclists
+3.8% +2.54Ki [ = ] 0 .debug_str_offsets
+22% +16 [ = ] 0 .debug_ranges
+6.2% +8 [ = ] 0 .debug_gdb_scripts
+0.0% +5 [ = ] 0 .gdb_index
-1.2% -2 [ = ] 0 .comment
-1.0% -184 [ = ] 0 .debug_abbrev
-8.4% -4.44Ki [ = ] 0 .debug_line_str
-7.6% -58.6Ki [ = ] 0 .debug_info
-30.9% -75.3Ki [ = ] 0 .debug_str
-4.7% -422Ki [ = ] 0 .debug_rnglists
-25.5% -907Ki [ = ] 0 .debug_addr
-45.3% -26.0Mi [ = ] 0 .debug_line
-2.9% -27.0Mi [ = ] 0 TOTAL
```



Fangrui Song

Jun 27, 2022, 6:26:59 PM6/27/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
(For CCed folks who may not know this forum (discussing the generic ABI for the ELF format).
The context is https://groups.google.com/g/generic-abi/c/satyPkuMisk .
You may reply by email instead of by the web interface. If you haven't joined the forum,
your message will be moderated (i.e. others may not see it immediately).)
To amend the previous message: the "## DWARF v5" numbers above are for the
executable of a -gsplit-dwarf build. The bulk of the debug info is in separate
.dwo files, so the section sizes are small.

Here are some statistics for a non-split DWARF v5 build, using zlib level 6,
zlib level 1, and zstd (default). zlib 1.2.11 and zstd 1.5.2.

compress zlib 6 41.381 s ± 0.028 s
compress zlib 1 19.962 s ± 0.109 s
compress zstd 11.021 s ± 0.035 s
uncompress zlib 6 8.385 s ± 0.074 s
uncompress zlib 1 8.835 s ± 0.027 s
uncompress zstd 5.803 s ± 0.019 s


compress: wall time/maximum RSS
zlib 1 20.46 sec 3460028 KiB
zstd 11.67 sec 3258104 KiB

uncompress: wall time/maximum RSS
zlib 1 9.08 sec 3287524 KiB
zstd 5.91 sec 3276272 KiB


In lld, we defaulted to zlib level 1 because the default zlib level is too
slow. With zstd, level 1 is still much faster than the default (3), but because
the performance is so impressive, perhaps we can keep the default and enjoy
more of the size benefits :)

```
% ~/projects/bloaty/Release/bloaty clang.zstd -- clang.zlib
FILE SIZE VM SIZE
-------------- --------------
+22% +16 [ = ] 0 .debug_ranges
+6.2% +8 [ = ] 0 .debug_gdb_scripts
+0.6% +1 [ = ] 0 .comment
-4.8% -8 [ = ] 0 .debug_aranges
-0.2% -1.05Ki [ = ] 0 .debug_loclists
-18.4% -5.46Ki [ = ] 0 .debug_gnu_pubtypes
-21.8% -13.4Ki [ = ] 0 .debug_gnu_pubnames
-28.3% -19.1Ki [ = ] 0 .debug_line_str
-2.0% -189Ki [ = ] 0 .debug_rnglists
-44.1% -1.01Mi [ = ] 0 .debug_abbrev
-26.2% -1.43Mi [ = ] 0 .debug_addr
-20.5% -2.67Mi [ = ] 0 .debug_frame
-5.9% -11.5Mi [ = ] 0 .debug_info
-36.0% -18.0Mi [ = ] 0 .debug_str
-64.5% -31.6Mi [ = ] 0 .debug_str_offsets
-50.8% -32.5Mi [ = ] 0 .debug_line
-10.1% -98.9Mi [ = ] 0 TOTAL
```

Fangrui Song

Jun 28, 2022, 2:20:36 PM6/28/22
to Yann Collet, gener...@googlegroups.com, Cary Coutant, Nick Clifton, Owen Anderson, David Blaikie, Felix Handte

There was a question in this mailing list regarding Zstandard stability, and I guess it's worthwhile clarifying that point.

The Zstandard format has been completely stable since v1.0, which was published in 2016, 6 years ago.

It means it's both forward and backward compatible:
any payload produced by an old version >= 1.0 can be decoded by the current version,
and the same holds in the other direction: payloads produced by newer versions are compatible with older decoders.

The format is completely defined in an IETF RFC document:
https://datatracker.ietf.org/doc/html/rfc8878 ,
so there is no room to "go fancy" and introduce breaking changes.
This is essentially the same status as deflate and follows the same established model.

Based on this RFC, new implementations of Zstandard are also available, beyond the reference one we provide.
For example, there is an excellent Go re-implementation by Klaus Post, at https://github.com/klauspost/compress/tree/master/zstd#zstd,
another one in Java by Martin Traverso, a Rust decoder by Moritz Borcherding,
or a pure JavaScript decoder by Arjun Barrett ( https://www.npmjs.com/package/fzstd ).
Hardware accelerators are starting to appear.
All this to say, the format itself is here to stay; there's an ecosystem around it.

There was another question around memory usage,
and while the Zstandard format allows high memory usage for specific scenarios,
let it be clear that it doesn't have to, and by default it doesn't.
There are also advanced options to control or even restrict memory usage to even lower levels whenever needed, notably for embedded scenarios, on both the compression and decompression sides.
And there are also build options to address the binary size if that's a concern.

For continued discussions on these advanced topics, I'm CCing my colleague Felix Handte, who can be helpful since he has a good grasp of and experience with these topics.

Best Regards

Yann Collet

ali_e...@emvision.com

Jun 28, 2022, 3:45:09 PM6/28/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
On 6/27/22 2:59 PM, 'Fangrui Song' via Generic System V Application Binary Interface wrote:
> On 2022-06-27, ali wrote:
>> So I'm nervous, but not opposed. If it's going to happen, it
>> should probably be in the gABI.
>
> Thanks for the positive feedback.
> Cc Yann Collet (zstd maintainer) to comment on these aspects.
>
> Just to be clear, do you mean that `ELFCOMPRESS_ZSTD  2` is fine to you?
> Cc Cary Coutant (de facto maintainer of generic-abi).


Yes, I meant that I'm OK with zstd being part of the gABI,
subject to meeting those requirements I listed (longevity,
simplicity, fragmentation).

I really don't want to see a long list of compression options
in ELF, but every generation or so, one might expect to see
an important advance that's worth hopping on board with.

>
>> There are some options, such as the ability to create an initial
>> dictionary, or to set compression levels that I would not be happy
>> to need to support. I don't want the interface to be more than this
>> (Solaris) example:
>>
>>    % cc foo.c -z compress-sections=zstd
>>
>> If that's your intent, then that's good. If you're thinking
>> of adding more knobs than that, then I'd like to hear all
>> the details, but my bias is strongly against complicating
>> sub-options.
>
> For GCC/Clang: perhaps we can introduce a new driver option -gz=zstd.
> This is the only user-facing driver option we need.  If the ecosystem
> builds well, in 5 or 10 years, perhaps we can change -gz to default to
> -gz=zstd.
>
>
<...details elided...>

I wasn't meaning to ask about actual GNU syntax for all those
tools. The point of my syntax was to show that there's nothing
specified other than 'zstd'. No optimization levels, no dictionary,
etc...

Earlier, the possibility of adding stuff like that later,
possibly with another ELF section, was mentioned. I'm
hoping that was just a discussion of theoretical possibilities,
and not the actual plan. Is that right? If so, then I'm
on board. If not, then I want to hear the details.


> I just did some rudimentary analysis on clang 14.0 builds, with DWARF v4
> and DWARF v5. I use a locally modified llvm-objcopy --(de)?compress-debug-sections
> with zstd. No dictionary is used.
>
> ## DWARF v4
>
> compress zlib      28.510 s ±  0.149 s
> compress zstd      8.077 s ±  0.067 s
> uncompress zlib    5.761 s ±  0.047 s
> uncompress zstd    4.134 s ±  0.028 s
> pure copy          2.187 s ±  0.047 s
>
> size (zlib vs zstd). zstd is smaller in most sections
> ```
> % ~/projects/bloaty/Release/bloaty clang.zstd -- clang.zlib
>     FILE SIZE        VM SIZE
>  --------------  --------------
>   +8.0% +5.24Ki  [ = ]       0    .debug_loc
>  -17.5%  -147Ki  [ = ]       0    .debug_abbrev
>  -11.0%  -318Ki  [ = ]       0    .debug_ranges
>  -17.7% -5.83Mi  [ = ]       0    .debug_str
>  -43.5% -14.9Mi  [ = ]       0    .debug_line
>  -11.3% -18.3Mi  [ = ]       0    .debug_info
>   -6.9% -39.5Mi  [ = ]       0    TOTAL

I'm not sure how to interpret this. Is it saying
that .debug_line compressed with zstd is 43.5% smaller
than when zlib is used?

And the overall win is 6.9% ?

- Ali

Fangrui Song

Jun 29, 2022, 3:28:33 PM6/29/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
On 2022-06-28, ali_e...@emvision.com wrote:
>On 6/27/22 2:59 PM, 'Fangrui Song' via Generic System V Application Binary Interface wrote:
>>On 2022-06-27, ali wrote:
>>>So I'm nervous, but not opposed. If it's going to happen, it
>>>should probably be in the gABI.
>>
>>Thanks for the positive feedback.
>>Cc Yann Collet (zstd maintainer) to comment on these aspects.
>>
>>Just to be clear, do you mean that `ELFCOMPRESS_ZSTD  2` is fine to you?
>>Cc Cary Coutant (de facto maintainer of generic-abi).
>
>
>Yes, I meant that I'm OK with zstd being part of the gABI,
>subject to meeting those requirements I listed (longevity,
>simplicty, fragmentation).
>
>I really don't want to see a long list of compression options
>in ELF, but every generation or so, one might expect to see
>an important advance that's worth hopping on board with.

I know the paradox of choice :) Certainly we should not add a
plethora of bzip2, xz, lzo, brotli, etc. to the generic ABI.
I don't think users are fond of using a different format for a slightly
different read/write/compression-ratio/memory-usage need.
They want to pick one format which performs well across a wide variety
of workloads. (Adding a new format also introduces complexity on the
build system side. They need to teach the DWARF consumers they use about
it. It's not a small undertaking.)

The venerable zlib does show its age in these metrics and zstd is indeed
a game-changer in this area, being superior to it in every metric. zstd
is used in a number of databases, file systems, and storage systems.
(See "Zstandard is used by" on the website). I hope that they are good
testimony to your criteria. zstd's adoption is impressive.

FWIW: Nick Alcock, who is working on the CTF format, mentioned that the
next CTF format will be compressed with zstd, too
(https://sourceware.org/pipermail/gnu-gabi/2022q2/000499.html).

>>
>>>There are some options, such as the ability to create an initial
>>>dictionary, or to set compression levels that I would not be happy
>>>to need to support. I don't want the interface to be more than this
>>>(Solaris) example:
>>>
>>>   % cc foo.c -z compress-sections=zstd
>>>
>>>If that's your intent, then that's good. If you're thinking
>>>of adding more knobs than that, then I'd like to hear all
>>>the details, but my bias is strongly against complicating
>>>sub-options.
>>
>>For GCC/Clang: perhaps we can introduce a new driver option -gz=zstd.
>>This is the only user-facing driver option we need.  If the ecosystem
>>builds well, in 5 or 10 years, perhaps we can change -gz to default to
>>-gz=zstd.
>>
>>
><...details elided...>
>
>I wasn't meaning to ask about actual GNU syntax for all those
>tools. The point of my syntax was to show that there's nothing
>specified other than 'zstd'. No optimization levels, no dictionary,
>etc...

The layout and interpretation of the data that follows the compression
header is specific to each algorithm. Technically a tool has the freedom to
provide a toggle. So far no ELF tool I know of provides an option to
customize compression. For lld we have discussed not introducing such an
option, to prevent the paradox of choice. That said, if someone wants to add a
toggle with sufficient justification, I'll not stop that.

>Earlier, the possibility of adding stuff like that later,
>possibly with another ELF section, was mentioned. I'm
>hoping that was just a discussion of theoretical possibilities,
>and not the actual plan. Is that right? If so, then I'm
>on board. If not, then I want to hear the details.

The dictionary-as-a-section idea is not part of this proposal, and I do
not recommend that people do this. I do not like it for two reasons.

First, adding dependencies among sections this way will make life difficult
for some binary manipulation tools. There is no good sh_info/sh_link to
indicate the dependency, and the lack of proper dependency tracking
breaks ELF's spirit.

Second, large debug sections have very different characteristics. I
don't think a dictionary stored as a section can help compress other debug
sections.

>>I just did some rudimentary analysis on clang 14.0 builds, with DWARF v4
>>and DWARF v5. I use a locally modified llvm-objcopy --(de)?compress-debug-sections
>>with zstd. No dictionary is used.
>>
>>## DWARF v4
>>
>>compress zlib      28.510 s ±  0.149 s
>>compress zstd      8.077 s ±  0.067 s
>>uncompress zlib    5.761 s ±  0.047 s
>>uncompress zstd    4.134 s ±  0.028 s
>>pure copy          2.187 s ±  0.047 s
>>
>>size (zlib vs zstd). zstd is smaller in most sections
>>```
>>% ~/projects/bloaty/Release/bloaty clang.zstd -- clang.zlib
>>     FILE SIZE        VM SIZE
>>  --------------  --------------
>>   +8.0% +5.24Ki  [ = ]       0    .debug_loc
>>  -17.5%  -147Ki  [ = ]       0    .debug_abbrev
>>  -11.0%  -318Ki  [ = ]       0    .debug_ranges
>>  -17.7% -5.83Mi  [ = ]       0    .debug_str
>>  -43.5% -14.9Mi  [ = ]       0    .debug_line
>>  -11.3% -18.3Mi  [ = ]       0    .debug_info
>>   -6.9% -39.5Mi  [ = ]       0    TOTAL
>
>I'm not sure how to interpret this. Is it saying
>that .debug_line compressed with zstd is 43.5% smaller
>than when zlib is used?
>
>And the overall win is 6.9% ?

The file size is 6.9% smaller if I recompress the zlib-compressed
(default level 6) `clang` executable with zstd (default level 3).
-43.5% means that zstd-compressed .debug_line is 43.5% smaller than
zlib-compressed .debug_line. Owen has asked the zstd maintainers whether
zstd can improve the compression ratio for sections like .debug_loc (it's
small anyway).

Ian Lance Taylor

Jun 29, 2022, 3:45:30 PM6/29/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
On Wed, Jun 29, 2022 at 12:28 PM 'Fangrui Song' via Generic System V Application Binary Interface <gener...@googlegroups.com> wrote:

 (Adding a new format also introduces complexity on their
build system side. They need to teach DWARF consumers they use. It's not
a small undertaking.)

 I want to emphasize this point.  There are a lot of DWARF consumers out there.  Just rippling through each new version of DWARF to all of those consumers takes years.

Ian

ali_e...@emvision.com

Jul 1, 2022, 12:32:06 AM7/1/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
I think we're largely in agreement, but let's
mop up the details.

> I know the paradox of choice:) Certainly that we should not add a
> plethora of bzip2, xz, lzo, brotli, etc to generic-abi.
> I don't think users are so fond of using a different format for a slight
> different read/write/compression ratio/memory usage need.
> They want to pick one format which performs well across a wide variety
> of workloads. (Adding a new format also introduces complexity on their
> build system side. They need to teach DWARF consumers they use. It's not
> a small undertaking.)

All of that is exactly right. Note though that the creators
of bzip2, xz, etc, are undoubtedly proud of their inventions,
and might have made claims very similar to the ones being
made for xstd here now. Those other things are in wide use,
and I wouldn't be shocked if someone wanted to add them also,
with similar justifications.

Hence the need to have asked why Zstd is different, and why
now is the time to act on it, rather than waiting a bit longer.

I'm more impressed by the ZLIB-like commitment to stability
and generality, than by the other (also impressive) compression
claims.


> The layout and interpretation of the data that follows the compression
> header is specific to each algorithm. Technically a tool has freedom to
> provide a toggle. So far no ELF tool I know provides an option to
> customize compression. For lld we have discussed not introducing an
> option to prevent paradox of choice. That said, if someone wants to add a
> toggle with sufficient justification, I'll not stop that.

This is why I've kept pressing the "what options are you
intending to use" question. I might be misunderstanding you,
but if any such toggle needs some extra information to be
recorded in the object, then the tools don't have much room
to move.

You proposed the following:

The section data is compressed with the Zstandard
algorithm. The compressed Zstandard data bytes begin
with the byte immediately following the compression
header, and extend to the end of the section.

Breaking this down:

- The compression header belongs to ELF (defined by the gABI).
Tools are not able to add fields to it without breaking
compatibility with the ABI.

- The data stream following the compression header
belongs to the Zstandard library. The zstd
folks could choose to encode options somehow in
this data, and in many ways, that would be the
best option, because ELF can be blissfully unaware.
But only Zstandard can do this, not ELF tools.
Any such additions by a tool would create unregulated
chaos for other tools that don't know about it. The
ABI exists to prevent that, among other reasons.

As defined, there is no allowance for tools to add any
additional information.

Are you saying that the tools are free to add toggles
that don't require extra information in order to decompress?
If so, then yes, I agree, that's not a problem. As long
as we can throw the compressed result to the Zstd library
and get the original back, there's no problem.

If the tools need to record extra stuff in order for the
data to be recoverable, then we need to add an additional
ELF_ZSTD header in between the compression header, and the
data stream. I can help you with that offline if that's the direction
you are heading in, but since that's the sort of complexity
I've been arguing against, I'll stop here for now and let you
clarify the above.


> The dictionary-as-a-section idea is not part of this proposal, and I do
> not recommend that people do this. I do not like it for two reasons.
<...reasons elided...>

Great. Me neither, for the same reasons, plus some others,
which are largely about wanting to keep it simple.


>>> size (zlib vs zstd). zstd is smaller in most sections
>>> ```
>>> % ~/projects/bloaty/Release/bloaty clang.zstd -- clang.zlib
>>>     FILE SIZE        VM SIZE
>>>  --------------  --------------
>>>   +8.0% +5.24Ki  [ = ]       0    .debug_loc
>>>  -17.5%  -147Ki  [ = ]       0    .debug_abbrev
>>>  -11.0%  -318Ki  [ = ]       0    .debug_ranges
>>>  -17.7% -5.83Mi  [ = ]       0    .debug_str
>>>  -43.5% -14.9Mi  [ = ]       0    .debug_line
>>>  -11.3% -18.3Mi  [ = ]       0    .debug_info
>>>   -6.9% -39.5Mi  [ = ]       0    TOTAL
>>
>> I'm not sure how to interpret this. Is it saying
>> that .debug_line compressed with zstd is 43.5% smaller
>> than when zlib is used?
>>
>> And the overall win is 6.9% ?
>
> The file size is 6.9% smaller if I recompress the zlib-compressed
> (default level 6) `clang` executable with zstd (default level 3).
> -43.5% means that zstd-compressed .debug_line is 43.5% smaller than
> zlib-compressed .debug_line . Owen has asked zstd maintainers whether
> zstd can improve compression ratio for sections like .debug_loc (it's
> small anyway).


The percentage of the whole is interesting. It would also
be interesting to see the overall result when one
considers only the compressed sections, since that
would shine a brighter light on how the two algorithms
compare on the same data. Throwing the uncompressed
size into the mix makes that harder to see.

We all have our own threshold, but to me, 5-10% wins
fall in the "nice but not a big deal" category, but
I start getting excited when I see numbers like 20%
or more. And I see several lines in the above that
make me think that the "compressed only" win is on
that order.

The fact that .debug_loc is larger by 5K doesn't seem
like something to worry much about. It's a rounding error
on the whole.

Thanks.

- Ali

Fangrui Song

Jul 8, 2022, 2:23:41 AM7/8/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
On 2022-06-30, ali_e...@emvision.com wrote:
> I think we're largely in agreement, but let's
>mop up the details.
>
>>I know the paradox of choice:) Certainly that we should not add a
>>plethora of bzip2, xz, lzo, brotli, etc to generic-abi.
>>I don't think users are so fond of using a different format for a slight
>>different read/write/compression ratio/memory usage need.
>>They want to pick one format which performs well across a wide variety
>>of workloads. (Adding a new format also introduces complexity on their
>>build system side. They need to teach DWARF consumers they use. It's not
>>a small undertaking.)
>
>All of that is exactly right. Note though that the creators
>of bzip2, xz, etc, are undoubtedly proud of their inventions,
>and might have made claims very similar to the ones being
>made for xstd here now. Those other things are in wide use,
>and I wouldn't be shocked if someone wanted to add them also,
>with similar justifications.
>
>Hence the need to have asked why is Zstd different, and why
>now the time to act on that, rather than waiting a bit more.
>
>I'm more impressed by the ZLIB-like commitment to stability
>and generality, than by the other (also impressive) compression
>claims.

My friend Riatre says that zstd is the first "universal" compression
algorithm, one which scales from low-ratio-very-fast to
high-ratio-pretty-slow. With zstd -1 we can get a better ratio than zlib
with 5x the (de)compression performance, or an xz-like ratio with
`zstd -9`, against a wide variety of input data. This is unlike lzo/lz4
(as fast as possible, but not a great ratio), lzma/xz (high ratio but
extremely slow), or bz2 (beats everyone on text).


Quick statistics on .debug_info from a clang executable.

llvm-objcopy --dump-section .debug_info=debug_info clang-14 /dev/null

i=debug_info
o=$i.out

# measure: time a compressor that writes $o itself; print wall time, maximum RSS, and output size.
measure() { echo ===$@; rm -f $o; /usr/bin/time -f '%e sec %M KiB' $@ && stat -c %s $o; }
# measure_no_o: the same, for compressors that write to stdout.
measure_no_o() { echo ===$@ \> $o; rm -f $o; /usr/bin/time -f '%e sec %M KiB' $@ > $o && stat -c %s $o; }
do_pigz() { measure pigz -fkz -p 1 -S .out $i; }
do_bzip2() { measure_no_o bzip2 -fc $@ $i; }
do_gzip() { measure gzip -fk -S .out $@ $i; }
do_lz4() { measure lz4 -fq $@ $i $o; }
do_lzop() { measure lzop -f $@ $i -o $o; }
do_xz() { measure_no_o xz -fc $@ $i; }
do_zstd() { measure zstd -fq $@ $i -o $o; }

do_pigz
do_bzip2 -1
do_gzip -1
do_gzip -3
do_lz4 --fast
do_lz4 -1
do_lz4 -3
do_lz4 -9
do_lzop -1
do_lzop -3
do_lzop -9
do_xz -0
do_xz -1
do_zstd --fast
do_zstd -1
do_zstd -3
do_zstd -9


% ./bench.sh
===pigz -fkz -p 1 -S .out debug_info
16.62 sec 2660 KiB
170159469
===bzip2 -fc -1 debug_info > debug_info.out
30.42 sec 2336 KiB
176073459
===gzip -fk -S .out -1 debug_info
7.37 sec 2112 KiB
182039904
===gzip -fk -S .out -3 debug_info
9.29 sec 2020 KiB
176894760
===lz4 -fq --fast debug_info debug_info.out
1.00 sec 9268 KiB
247057324
===lz4 -fq -1 debug_info debug_info.out
1.08 sec 8920 KiB
236745453
===lz4 -fq -3 debug_info debug_info.out
5.47 sec 8800 KiB
205950311
===lz4 -fq -9 debug_info debug_info.out
11.22 sec 8912 KiB
203155816
===lzop -f -1 debug_info -o debug_info.out
1.22 sec 2280 KiB
228433712
===lzop -f -3 debug_info -o debug_info.out
1.23 sec 2256 KiB
227210076
===lzop -f -9 debug_info -o debug_info.out
61.19 sec 2524 KiB
187900149
===xz -fc -0 debug_info > debug_info.out
22.53 sec 4988 KiB
136258600
===xz -fc -1 debug_info > debug_info.out
24.82 sec 10992 KiB
123116548
===zstd -fq --fast debug_info -o debug_info.out
1.12 sec 14232 KiB
201337963
===zstd -fq -1 debug_info -o debug_info.out
1.29 sec 13828 KiB
187108195
===zstd -fq -3 debug_info -o debug_info.out
2.22 sec 45364 KiB
159170590
===zstd -fq -9 debug_info -o debug_info.out
7.00 sec 95956 KiB
140573596


The pigz -z command exercises single-threaded zlib (-p 1). zstd -3 is 7x as
fast with a higher compression ratio.
At a ratio similar to xz, zstd's decompression speed is something like 8x.

bzip2/xz are too slow to be usable even with the lowest compression level (1).
lz4/lzo, while very fast, don't compress .debug_info very well.

>As defined, there is no allowance for tools to add any
>additional information.

Right.

>Are you saying that the tools are free to add toggles
>that don't require extra information in order to decompress?
>If so, then yes, I agree, that's not a problem. As long
>as we can throw the compressed result to the Zstd library
>and get the original back, there's no problem.

The tools are free to use different compression levels (I think 1 and 3
are quite good; higher levels are unlikely unless the tool very
aggressively optimizes for size).

They may provide toggles for users. The results are just lightly or
heavily compressed streams, which can be uncompressed by the zstd
library with no extra information (symbol/section/etc) from ELF.
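
(To make that concrete, here is a hedged sketch of the round trip using only the public one-shot zstd API; nothing beyond the frame itself and the original size, which ELF already records in ch_size, is needed. The function name is made up for this sketch.)

```c
#include <stdlib.h>
#include <zstd.h>

/* Sketch: compress at level 3 and decompress again, using nothing beyond
 * the frame itself and the original size. Returns a malloc'd copy of the
 * original data (caller frees), or NULL on failure. */
static void *zstd_roundtrip(const void *src, size_t src_size)
{
    size_t bound = ZSTD_compressBound(src_size);
    void *frame = malloc(bound);
    void *back = malloc(src_size);
    if (frame != NULL && back != NULL) {
        size_t csize = ZSTD_compress(frame, bound, src, src_size, 3);
        if (!ZSTD_isError(csize)) {
            size_t dsize = ZSTD_decompress(back, src_size, frame, csize);
            if (!ZSTD_isError(dsize) && dsize == src_size) {
                free(frame);
                return back;
            }
        }
    }
    free(frame);
    free(back);
    return NULL;
}
```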

>If the tools need to record extra stuff in order for the
>data to be recoverable, then we need to add an additional
>ELF_ZSTD header in between the compression header, and the
>data stream. I can help you with that offline if that's the direction
>you are heading in, but since that's the sort of complexity
>I've been arguing against, I'll stop here for now and let you
>clarify the above.

From a toolchain developer's perspective, there is really not much
difference from using zlib. zlib doesn't need extra information from
ELF. zstd doesn't need it, either.

The ELFCOMPRESS_ZSTD value just allows the various toolchain components
to quickly identify whether the compression format is supported or not.
Let me just provide some commands so that interested folks can try out different workloads :)

a() { llvm-objcopy --dump-section .$1=$1 clang-14 /dev/null; }; a debug_info; a debug_line; a debug_str; a debug_ranges
zz() { pigz -fkz $1; }; zz debug_info; zz debug_line; zz debug_str; zz debug_ranges
c() { zstd -fq $1; }; c debug_info; c debug_line; c debug_str; c debug_ranges

% stat -c %s clang-14
1149466952
% stat -c %s debug_{info,line,str,ranges} | awk '{s+=$1} END{print s}'
793016809
% stat -c %s debug_{info,line,str,ranges}.zz | awk '{s+=$1} END{print s}'
243689387
% stat -c %s debug_{info,line,str,ranges}.zst | awk '{s+=$1} END{print s}'
216344712

The default zstd -3 saves 26 MiB more than zlib's default. The single-threaded compression speed is 7x, as previously measured.


ali_e...@emvision.com

Jul 8, 2022, 12:39:06 PM7/8/22
to gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
On 7/8/22 12:23 AM, 'Fangrui Song' via Generic System V Application Binary Interface wrote:
>> As defined, there is no allowance for tools to add any
>> additional information.
>
> Right.

OK, I'm satisfied.

No one else has said much, so that either means that
they're all happy, or that they were all waiting for
me to finish. It's time to hear from others, and then
for Cary to take it up.


> The ELFCOMPRESS_ZSTD value just allows the various toolchain components
> to quickly identify whether the compression format is supported or not.

It's a useful approximation, but not perfect.

I don't think that the gABI requires that only supported
features be listed in elf.h. So while a missing ELFCOMPRESS_ZSTD
certainly means that it's not supported, the reverse might
not hold: The #define could be present, without the underlying
support, because the platform just wants their elf.h to contain
the full set of gABI definitions.

This doesn't seem like a big problem though. The worst that would
happen is that the user will get an "unknown compression type"
error at link time.

Thanks!

- Ali

Florian Weimer

Jul 8, 2022, 12:54:31 PM7/8/22
to ali_e...@emvision.com, gener...@googlegroups.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
* ali elfgabi:

>> The ELFCOMPRESS_ZSTD value just allows the various toolchain components
>> to quickly identify whether the compression format is supported or not.
>
> It's a useful approximation, but not perfect.
>
> I don't think that the gABI requires that only supported
> features be listed in elf.h. So while a missing ELFCOMPRESS_ZSTD
> certainly means that it's not supported, the reverse might
> not hold: The #define could be present, without the underlying
> support, because the platform just wants their elf.h to contain
> the full set of gABI definitions.

The new #define will be provided by glibc well before all the consumers
are ready for the new format. There's of course the inevitable compile
time vs run time difference. A typical GNU/Linux distribution ships
four link editors these days, and of course features like this one will
not land at the same time in all linkers.

There is significant cost in upgrading all the generic ELF consumers to
support a new compression format. The good thing is that after the
second or third compression format, internal interfaces tend to emerge
to make this more pluggable, so the cost for integration of further
compression formats should be greatly reduced. That's why format
proliferation isn't a major concern to me. A second format will be
added eventually, and we might as well start now with zstd.

Thanks,
Florian

Fangrui Song

Jul 8, 2022, 1:41:12 PM7/8/22
to gener...@googlegroups.com, ali_e...@emvision.com, Cary Coutant, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
>* ali elfgabi:
>
>>> The ELFCOMPRESS_ZSTD value just allows the various toolchain components
>>> to quickly identify whether the compression format is supported or not.
>>
>> It's a useful approximation, but not perfect.
>>
>> I don't think that the gABI requires that only supported
>> features be listed in elf.h. So while a missing ELFCOMPRESS_ZSTD
>> certainly means that it's not supported, the reverse might
>> not hold: The #define could be present, without the underlying
>> support, because the platform just wants their elf.h to contain
>> the full set of gABI definitions.
>
>The new #define will be provided by glibc well before all the consumers
>are ready for the new format. There's of course the inevitable compile
>time vs run time difference. A typical GNU/Linux distribution ships
>four link editors these days, and of course features like this one will
>not land at the same time in all linkers.

Right. Having `#define ELFCOMPRESS_ZSTD 2` in the glibc-provided elf.h
will allow a project using the macro (likely a DWARF
consumer) to be buildable with the newer glibc. If the project wants to
be buildable with older glibc, it can define the macro itself.

On the toolchain side, all the linkers (GNU ld, gold, lld, and mold) and
various DWARF consumers I know of test ELFCOMPRESS_ZLIB instead of blindly
assuming a SHF_COMPRESSED section is compressed with zlib. If the format
is unknown, they will report an "unknown format" style diagnostic instead
of a "failed to uncompress with zlib" style diagnostic.
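
(As a minimal illustration of that dispatch, a reader-side sketch only, not taken from any of the linkers mentioned, assuming a 64-bit object whose byte order matches the host and a glibc-style <elf.h>; byte-order handling and the zlib path are elided.)

```c
#include <elf.h>      /* Elf64_Chdr, ELFCOMPRESS_* */
#include <stdio.h>
#include <string.h>
#include <zstd.h>

#ifndef ELFCOMPRESS_ZSTD
#define ELFCOMPRESS_ZSTD 2    /* for older elf.h, as discussed above */
#endif

/* Decompress one SHF_COMPRESSED section into `out`, which the caller has
 * sized to ch_size. Returns 0 on success, -1 on unsupported or bad input. */
static int decompress_section(const unsigned char *sec, size_t sec_size,
                              unsigned char *out)
{
    Elf64_Chdr ch;
    if (sec_size < sizeof ch)
        return -1;
    memcpy(&ch, sec, sizeof ch);                  /* compression header */
    const unsigned char *payload = sec + sizeof ch;
    size_t payload_size = sec_size - sizeof ch;

    switch (ch.ch_type) {
    case ELFCOMPRESS_ZSTD: {
        size_t n = ZSTD_decompress(out, ch.ch_size, payload, payload_size);
        return (ZSTD_isError(n) || n != ch.ch_size) ? -1 : 0;
    }
    case ELFCOMPRESS_ZLIB:
        /* zlib path elided; uncompress() from <zlib.h> would go here. */
        return -1;
    default:
        fprintf(stderr, "unknown compression type %u\n", (unsigned)ch.ch_type);
        return -1;
    }
}
```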

>There is significant cost in upgrading all the generic ELF consumers to
>support a new compression format. The good thing is that after the
>second or third compression format, internal interfaces tend to emerge
>to make this more pluggable, so the cost for integration of further
>compression formats should be greatly reduced. That's why format
>proliferation isn't a major concern to me. A second format will be
>added eventually, and we might as well start now with zstd.
>
>Thanks,
>Florian

The adoption can be piecewise. For instance, once clang supports
ELFCOMPRESS_ZSTD output and lld supports ELFCOMPRESS_ZSTD input, the
group (if they use clang/lld, other compiler/linker combinations are
similar) can consider changing relocatable object files to use zstd.

The lack of support in DWARF consumers like gdb/lldb affects whether
a group wants to use zstd for executables/shared objects. Technically, once
tools like objcopy have been taught zstd, the group may start using zstd for
executables/shared objects and pay some conversion overhead when the debugging
need arises.

We may eventually add a third ELFCOMPRESS_* value to the generic ABI.
As the lengthy ongoing ecosystem discussions show, a new format needs to
convince various toolchain vendors and it will not be added lightly :)

Cary Coutant

Jul 22, 2022, 7:00:00 PM7/22/22
to Fangrui Song, Generic System V Application Binary Interface, ali_e...@emvision.com, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
I believe we have a consensus in favor of this new compression
algorithm. As long as we're not adding new options like this very
often, I think it's important to keep up with advances in the
technology, and the zstd compression format seems like a plus in all
respects.

I will add this to the ELF spec, as proposed below:
> I propose that we add a row below "Figure 4-13: ELF Compression Types,
> ch_type"
>
> ELFCOMPRESS_ZSTD 2
>
> Then add this description:
>
> ELFCOMPRESS_ZSTD
> The section data is compressed with the Zstandard algorithm. The
> compressed Zstandard data bytes begin with the byte immediately
> following the compression header, and extend to the end of the
> section. Additional documentation for Zstandard may be found at
> http://www.zstandard.org

I don't think we need to add anything regarding external dictionaries
and compression options, but please let me know if you think we
should.

Thanks, everyone, for all the thoughtful discussion on the topic!

-cary

Roland McGrath

Jul 22, 2022, 7:00:45 PM7/22/22
to gener...@googlegroups.com, Fangrui Song, ali_e...@emvision.com, Yann Collet, Nick Clifton, Owen Anderson, David Blaikie
Looks perfect to me!


Fangrui Song

Sep 11, 2022, 11:40:23 PM9/11/22
to Generic System V Application Binary Interface
Circling back.

FreeBSD and glibc have defined ELFCOMPRESS_ZSTD in their elf.h. musl will supposedly define the macro soon.

clang -gz=zstd, llvm-objcopy --compress-debug-sections=zstd, ld.lld --compress-debug-sections=zstd are now available. lldb/llvm-symbolizer/llvm-dwarfdump/etc support is coming soon.
mold author is notified.
I have filed feature requests for binutils, gdb, gcc driver, and elfutils.
