Question about wrapping generated pb.c/pb.h into a shared library for performance


Ofek Almog

Nov 25, 2025, 4:26:20 AM
to nanopb

Hi everyone,

I’m considering the following approach and would love your input:

After generating the *.pb.c / *.pb.h files, I’m thinking about wrapping their encode/decode functions inside my own interface layer, compiling them into a shared library (.so) with optimizations like -O2, and then dynamically loading that .so at runtime instead of linking the .pb.c files directly into my project.
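
To make the idea concrete, here is roughly the interface layer I have in mind. The message name (SensorReading), file names, and compile command are made-up placeholders for illustration; only the pb_* calls are actual nanopb API:

    /* sensor_proto.c -- hypothetical wrapper compiled into the .so, e.g.:
     *   gcc -shared -fPIC -O2 sensor_proto.c sensor.pb.c \
     *       pb_encode.c pb_decode.c pb_common.c -I. -o libsensor_proto.so
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>
    #include <pb_encode.h>
    #include <pb_decode.h>
    #include "sensor.pb.h"   /* hypothetical generated header */

    /* Encode into a caller-provided buffer; returns bytes written, 0 on error. */
    size_t sensor_encode(const SensorReading *msg, uint8_t *buf, size_t buf_size)
    {
        pb_ostream_t stream = pb_ostream_from_buffer(buf, buf_size);
        if (!pb_encode(&stream, SensorReading_fields, msg))
            return 0;
        return stream.bytes_written;
    }

    /* Decode from a caller-provided buffer; true on success. */
    bool sensor_decode(SensorReading *msg, const uint8_t *buf, size_t len)
    {
        pb_istream_t stream = pb_istream_from_buffer(buf, len);
        return pb_decode(&stream, SensorReading_fields, msg);
    }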

Two main motivations:

  1. Potential performance gains – Could compiling the generated nanopb code as a separate optimized shared library provide any measurable performance improvement compared to compiling it as part of the main project?

  2. Modular loading – If I have several different protobuf message types but my application only needs one specific "filter" or proto at a given moment, dynamically loading only the relevant .so (instead of all protos compiled together) could reduce memory footprint and startup time; a loader sketch follows below.
    Does nanopb (or protobuf C in general) benefit from this, or is the overhead the same?
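
For motivation 2, a minimal loader sketch; the library name, exported symbol, and signature are assumptions matching the hypothetical wrapper above, not anything nanopb itself provides (link with -ldl on older glibc):

    /* Load only the proto module that is needed right now. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>
    #include "sensor.pb.h"   /* still needed for the struct layout */

    typedef size_t (*encode_fn)(const SensorReading *msg, uint8_t *buf, size_t buf_size);

    int main(void)
    {
        void *handle = dlopen("./libsensor_proto.so", RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        encode_fn encode = (encode_fn) dlsym(handle, "sensor_encode");
        if (!encode) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        uint8_t buf[128];
        SensorReading msg = SensorReading_init_zero;  /* fill fields as needed */
        size_t written = encode(&msg, buf, sizeof(buf));
        printf("encoded %zu bytes\n", written);

        dlclose(handle);
        return 0;
    }

One thing I notice while sketching this: the main program still needs the generated header at compile time for the struct layout, so unless the wrapper exposes an opaque, buffer-only API, the modularity is only at the code level, not the ABI level.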

Would this architecture provide any real performance or modularity benefits, or is it essentially equivalent to compiling everything statically into the main binary?

Thanks!


Petteri Aimonen

Nov 25, 2025, 4:29:54 AM
to nan...@googlegroups.com
Hi,

My personal feeling is that for a project where dynamic loading of .so
files is a reasonable approach, you may be better served by e.g. Google's
own protobuf C++ library. Nanopb necessarily makes performance tradeoffs
to allow static allocation and operation with limited RAM.
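
As a rough illustration of what that tradeoff buys (the message and
field names below are made up, only the pb_* calls are nanopb API):
the generated struct has a fixed size, so decoding can run entirely
on the stack with no heap allocation.

    #include <pb_decode.h>
    #include "sensor.pb.h"   /* hypothetical generated header */

    bool decode_reading(const uint8_t *buf, size_t len)
    {
        /* Fixed-size struct, stack-allocated: no malloc anywhere. */
        SensorReading msg = SensorReading_init_zero;
        pb_istream_t stream = pb_istream_from_buffer(buf, len);
        return pb_decode(&stream, SensorReading_fields, &msg);
    }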

--
Petteri

