building generator/proto/nanopb.proto


Matthew Simmons

Feb 3, 2021, 9:08:31 AM
to nan...@googlegroups.com
What's the purpose of generator/proto/{__init__.py, _utils.py}? From looking at the source, and at where it's imported in nanopb_generator.py, it looks like the purpose is to recompile nanopb.proto when nanopb_generator.py is run. Is that right? If so, I'm curious: what's the advantage of doing that versus using the Makefile to (manually) trigger the recompilation? Is the idea to make it possible to run nanopb_generator.py without needing to build it first?

I ask because I'm trying my hand at PR #500 -- packaging up nanopb_generator so other Bazel users can invoke nanopb dynamically (i.e. from build rules) to compile protos.

Matt

Petteri Aimonen

Feb 3, 2021, 10:09:33 AM
to nan...@googlegroups.com
Hi,

Yeah, the purpose is to build it automatically when it gets updated.
Back before that, I got way too many bug reports when new fields were
added to it but people forgot to rebuild. It should be fine to
have a prebuilt nanopb_pb2.py there and then the __init__ code would do
nothing.
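(In outline, that rebuild decision is a staleness check. A minimal sketch, with a hypothetical helper name; the real generator/proto/__init__.py differs in detail:)

```python
import os

def nanopb_pb2_is_stale(proto_path, pb2_path):
    # Rebuild if the generated module is missing, or older than the
    # .proto it was generated from.
    # Hypothetical helper; the real generator/proto/__init__.py differs.
    if not os.path.exists(pb2_path):
        return True
    return os.path.getmtime(proto_path) > os.path.getmtime(pb2_path)
```

(With a prebuilt nanopb_pb2.py that is newer than nanopb.proto, this returns False and the __init__ code does nothing.)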

As for _utils, that checks if the grpcio-tools Python package is
installed and uses protoc from that. That way there is no need to have a
binary protoc installed in path, and everything can be installed from
pip.
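(The grpcio-tools detection boils down to an import probe. A sketch of the idea, with a function name of my choosing; the real generator/proto/_utils.py is more involved:)

```python
import sys

def find_protoc():
    # Prefer the protoc bundled with the grpcio-tools pip package;
    # otherwise fall back to a protoc binary expected on PATH.
    # Hypothetical sketch; the real generator/proto/_utils.py differs.
    try:
        import grpc_tools.protoc  # noqa: F401
        return [sys.executable, "-m", "grpc_tools.protoc"]
    except ImportError:
        return ["protoc"]
```

(Either way the caller gets an argv prefix it can extend with -I and --python_out options.)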

Ideally I'd like to solve issue #601 somehow by building the
nanopb_pb2.py dynamically, without storing it in the generator folder.
That is kind of done in branch dev_dynamic_nanopb_pb2, but I haven't
dared to merge it yet because it could break in weird ways in some weird
situation.

--
Petteri

Matthew Simmons

Feb 3, 2021, 4:15:21 PM
to nanopb
In the Bazel work I'm doing, the problem I'm running into is figuring out how to circumvent all of the code that tries to build the .py file: first because Bazel already knows how to build protos, and second because it reduces the set of things that need to be bundled together with the generator. Obviously it's your project, so do what you will, but personally I would do one of two things:

1. Check in the proto, and always build it dynamically via Make/Bazel/whatever. The generator simply does not run until it's been built, which includes compiling the proto.
2. Check in the prebuilt proto .py file, and always use it. Probably also check in a shell script that'll compile the proto for those who need it, but the generator only uses the prebuilt proto. Add some automation to ensure that the checked-in proto is in sync with the checked-in prebuilt .py file.

Both of those would allow for the removal of the code in proto/{__init__.py, _utils.py}, as well as (I think) the removal of a lot of scaffolding in the generator that seems to be there solely to rebuild the proto. Both would also shrink the set of generator dependencies. The second is the least "pure" from a proto point of view, but by far the least disruptive to your users, given how (I assume) they've grown accustomed to running the generator. Assuming you don't want to break that pattern, it might be the better way to go.

Petteri Aimonen

Feb 4, 2021, 12:12:07 AM
to nan...@googlegroups.com
Hi,

> 1. Check in the proto, and always build it dynamically via
> Make/Bazel/whatever. The generator simply does not run until it's been
> built, which includes compiling the proto.

Yeah, I agree that would be "clean", but it generated way too many
support tickets, even when the error message said "you need to run make
in folder generator/proto". It also produced difficult-to-debug tickets
when there were changes to the .proto that the generator depended on for
proper functioning.

> 2. Check in the prebuilt proto .py file, and always use it. Probably also
> check in a shell script that'll compile the proto for those who need it,
> but the generator only uses the prebuilt proto. Add some automation to
> ensure that the checked-in proto is in sync with the checked-in prebuilt
> .py file.

Maybe. But in the past the Python protobuf library has made incompatible
changes to the generated file, so that different systems required
regenerating the file to match their python-protobuf version.

It's critical for me to control the support workload, because even
with all my efforts to do that, unpaid support takes about 90% of the
time I put into nanopb.

For me the best solution seems to be that nanopb_generator.py already
requires a protoc anyway, and can use it to generate nanopb_pb2.py for
itself. I agree that in some cases writing it to the source code folder
is problematic, and it could be written to a temp folder instead.
After all, the nanopb.proto source file is also required in order to
compile .proto files that import it.
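(Loading the generated module from a temp folder is straightforward with importlib. A sketch, assuming protoc has already written nanopb_pb2.py to pb2_path; the helper name is mine, and the dev_dynamic_nanopb_pb2 branch differs in detail:)

```python
import importlib.util

def import_generated(pb2_path, name="nanopb_pb2"):
    # Load a generated *_pb2.py from an arbitrary location (e.g. a
    # temp dir) without writing anything into the source tree.
    # Hypothetical helper; the dev_dynamic_nanopb_pb2 branch differs.
    spec = importlib.util.spec_from_file_location(name, pb2_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```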

--
Petteri