I think the main motivation is that there is no way to "seek" inside a protocol buffer: you must load the entire thing into memory in one go. Hence with really large messages you may need to allocate huge amounts of memory, once for the serialized buffer and again for the parsed protocol buffer object.
1 MB is just a recommendation, but there is also an internal default limit of 64 MB, added for security reasons: parsing an enormous message requires allocating a ton of RAM, so the limit helps prevent servers from running out of memory when handed a malicious or malformed input. If you have legitimately huge messages, you'll need to call the appropriate APIs to raise the limit.
https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.io.coded_stream#CodedInputStream.SetTotalBytesLimit.details
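As a rough sketch of what "calling the appropriate API" looks like in C++ (assuming a generated message type, here a placeholder `MyMessage`; note that older protobuf versions take a second warning-threshold argument to SetTotalBytesLimit):

```cpp
#include <string>
#include <google/protobuf/io/coded_stream.h>
#include <google/protobuf/io/zero_copy_stream_impl_lite.h>

// Parse a serialized message larger than the 64 MB default limit.
// "MyMessage" stands in for whatever generated message class you use.
bool ParseLargeMessage(const std::string& data, MyMessage* msg) {
  google::protobuf::io::ArrayInputStream raw(data.data(), data.size());
  google::protobuf::io::CodedInputStream coded(&raw);
  // Raise the total-bytes limit from the 64 MB default to 512 MB.
  coded.SetTotalBytesLimit(512 << 20);
  return msg->ParseFromCodedStream(&coded) &&
         coded.ConsumedEntireMessage();
}
```

The limit applies per CodedInputStream, so you only pay for the larger allowance on the streams where you expect big messages.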
Evan
--
http://evanjones.ca/