Hi,
> 1. Check in the proto, and always build it dynamically via
> Make/Bazel/whatever. The generator simply does not run until it's been
> built, which includes compiling the proto.
Yeah, I agree that would be "clean", but it generated way too many
support tickets, even when the error message said "you need to run make
in folder generator/proto". It also made tickets difficult to debug
whenever the .proto files the generator depended on had changed.
> 2. Check in the prebuilt proto .py file, and always use it. Probably also
> check in a shell script that'll compile the proto for those who need it,
> but the generator only uses the prebuilt proto. Add some automation to
> ensure that the checked-in proto is in sync with the checked-in prebuilt
> .py file.
Maybe. But in the past the Python protobuf library has made incompatible
changes to the generated file, so that different systems required
regenerating the file to match their python-protobuf version.
It's critical for me to control the support workload, because even
with all my efforts to do that, unpaid support takes about 90% of the
time I put into nanopb.
For me the best solution seems to be that nanopb_generator.py requires
protoc anyway, so it can use it to generate nanopb_pb2.py for itself. I
agree that in some cases writing it to the source code folder is
problematic, and it could be written to a temp folder instead.
After all, the nanopb.proto source file is also required in order to be
able to compile .proto files that import it.
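The temp-folder approach could be sketched roughly like this (the
helper name compile_proto_to_temp, the directory layout, and the
module naming are my own assumptions for illustration, not nanopb's
actual code):

```python
import importlib
import os
import subprocess
import sys
import tempfile

def compile_proto_to_temp(proto_file, proto_dir):
    # Compile proto_dir/proto_file with protoc into a fresh temp
    # directory, then import the generated _pb2 module from there.
    # This keeps generated files out of the source tree entirely.
    tmpdir = tempfile.mkdtemp(prefix="nanopb_pb2_")
    subprocess.check_call([
        "protoc",
        "--proto_path=" + proto_dir,
        "--python_out=" + tmpdir,
        os.path.join(proto_dir, proto_file),
    ])
    sys.path.insert(0, tmpdir)
    # "nanopb.proto" -> "nanopb_pb2", per protoc's naming convention
    module_name = os.path.splitext(proto_file)[0] + "_pb2"
    return importlib.import_module(module_name)
```

The generator would call this once at startup, falling back to a clear
error message if protoc is missing, so the user never has to run a
separate build step by hand.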
--
Petteri