Protobuf version confusion.


Test Last

Sep 2, 2020, 9:33:19 AM
to Protocol Buffers
Hi All

I am getting errors where, past a certain amount of data, protobuf has_scalar_value() returns false.
But I am sure there is indeed more data, because I can see it in the response I received from the server.
The server is a Java implementation of Apache Calcite running Protobuf v3.6.1.
At first my C++ program was running Protobuf v3.13.0, and I kept getting errors once the data exceeded a certain size. I am not sure that is actually the cause, though; the error is very obscure at this stage.
So, to try to remedy the situation, I pulled the Protobuf v3.6.1 tag and compiled it.
I ran protoc on my .proto files and got the following error, which I did not get with 3.13.0, and also not on the Java server side running the same version. This is on a freshly installed Ubuntu 20 server.

protoc -I=. --cpp_out=. ./common.proto
protoc -I=. --cpp_out=. ./request.proto
protoc -I=. --cpp_out=. ./response.proto
g++ -g -fPIC -c common.pb.cc -L/usr/local/lib `pkg-config --cflags --libs protobuf` -Wl,--no-as-needed -lgrpc++_reflection -Wl,--as-needed -ldl -o common.o -std=c++14
In file included from /usr/include/x86_64-linux-gnu/bits/types/stack_t.h:23,
                from /usr/include/signal.h:303,
                from /usr/include/x86_64-linux-gnu/sys/param.h:28,
                from /usr/local/include/google/protobuf/stubs/port.h:64,
                from /usr/local/include/google/protobuf/stubs/common.h:46,
                from common.pb.h:9,
                from common.pb.cc:4:
common.pb.h:222:3: error: expected identifier before ‘__null’
 222 |   NULL = 24,
     |   ^~~~
common.pb.h:222:3: error: expected ‘}’ before ‘__null’

So my question is: how is it possible that the protobufs work on the Java server but not in my C++ implementation, with the same .proto files and the same version compiled for this OS?
Is this a bug in v3.6.1?


Adam Cozzette

Sep 2, 2020, 12:43:44 PM
to Test Last, Protocol Buffers
It looks like you have an enum value named NULL, which conflicts with the NULL macro in C++. I haven't looked into this, but it is possible that we added a workaround for it somewhere between 3.6.1 and 3.13.0. There is no need for your Java and C++ sides to run the same protobuf version, though, so if your C++ project was already building successfully on 3.13.0, I would recommend staying on 3.13.0 instead of downgrading to 3.6.1. If you can provide more details about the protobuf.has_scalar_value() problem, I can try to see what is going wrong.


Test Last

Sep 3, 2020, 1:00:21 PM
to Protocol Buffers
Hi

I couldn't upload a file here; no matter what type, it just comes back with "error occurred", so I have uploaded the files to a Google Drive instead.
In there are the protobufs I am using, the functions used to process the protobuf data, and the data itself.
The files Limit61.txt and limit60.txt contain the binary data exactly as I receive it from the server. If you compile the .proto files you can run those binary files through them and hopefully reproduce my results.
There are two separate versions because the limit61.txt one breaks: the object it is trying to process is smaller than it is supposed to be, and its value also reads NULL when has_scalar_value() is called.
After that, the next object simply doesn't exist, even though object.size() says there should be more.
The limit60.txt version does work. It does not seem specific to this data set, though; if I feed it enough of any data set it will eventually crash. What is the limit on the size of a protobuf object?

This happens on both Protobuf 3.12.3 (the version in the NuGet package manager for VS Code) and 3.13.0 (compiled by me for Ubuntu 18 and 20), although the Ubuntu builds seem to handle far less data. I have no idea of the inner workings of protobuf, so I am totally shooting in the dark as to the cause.

Please help; I really appreciate the assistance.
Thanks 
Laster

Test Last

Sep 7, 2020, 10:24:13 AM
to Protocol Buffers
Could it possibly be curl that somehow corrupts the data if there is too much of it?

Are there some curl flags I need to look out for, perhaps?

Thanks
Laster

Test Last

Sep 22, 2020, 8:50:47 AM
to Protocol Buffers
OK, I have tried everything I can with this problem.
I used different libraries, from curl to Poco etc., and got the same result.
I tried passing the byte array I receive into every possible container and function I could, but it was still the same issue.

We then implemented a Rust version of this using the protobuf libraries they provide, and that worked flawlessly.

I am really not sure why this happens; I only know it happens when a lot of data is being processed.