I don't see any mention of that in CWE 502, but I agree parsing untrusted data needs to be done carefully. I'll note that bounding arrays and strings is also very different from what is mentioned in CWE 502, as that can be done post-parse if you are only concerned with application behavior. The bigger concern with arrays and strings in my mind is memory consumption. In any case, this discussion is now quite specific to individual serialization formats and implementations and starts departing from what gRPC can provide.
The main bound for arrays and strings is the max message size. The default (4 MiB) is a bit too generous, but is still better than unbounded. You are free to reduce the max message size. That is pretty good protection for things like JSON decoded to Lists/Maps, and probably for Flatbuffers.
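For concreteness, a minimal sketch of tightening that limit on a gRPC Java server; `MyServiceImpl` here is a placeholder for your own service implementation, and 64 KiB is just an example value you'd tune for your workload:

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;

public final class BoundedServer {
  public static void main(String[] args) throws Exception {
    Server server = ServerBuilder.forPort(8080)
        // Lower the inbound limit from the 4 MiB default.
        .maxInboundMessageSize(64 * 1024)
        .addService(new MyServiceImpl()) // placeholder for your generated service impl
        .build()
        .start();
    server.awaitTermination();
  }
}
```

The client side has the analogous knob on `ManagedChannelBuilder.maxInboundMessageSize()`, which bounds responses from the server.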
But for Protobuf and JSON decoded to schema-specific types, that works only weakly for arrays. The problem with arrays is not the array itself. Instead, the concern is more in line with a compression attack: an array of messages whose type declares many fields. Each serialized element may contain only a single field (or none at all), but in memory the parsed object consumes memory for all of its fields, so the wire-size limit bounds far less memory than it appears to. The "fix" for this is complicated, although you can reduce the risk during schema design (which is obviously error-prone). You can audit schemas, though, and many of them are probably fine (assuming we're worried about attacks on a server and not untrusted servers), although you only need one with an issue to have a problem.
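A back-of-the-envelope sketch of the amplification, under loudly stated assumptions: an empty submessage element costs ~2 bytes on the wire (tag byte plus zero-length prefix), the element's message type declares 100 fields, and each field slot costs roughly 8 bytes in the parsed object. None of these numbers are measurements; they only illustrate the shape of the problem:

```java
public final class AmplificationSketch {
  public static void main(String[] args) {
    long wireBudget = 4L * 1024 * 1024;  // gRPC's default 4 MiB message limit
    long bytesPerElementOnWire = 2;      // tag + zero-length prefix for an empty submessage
    long fieldsPerMessage = 100;         // fields declared on the element's message type (assumed)
    long bytesPerFieldInMemory = 8;      // rough cost of one field slot in the parsed object (assumed)

    long elements = wireBudget / bytesPerElementOnWire;               // ~2M elements fit
    long inMemory = elements * fieldsPerMessage * bytesPerFieldInMemory;
    System.out.printf("%d MiB on the wire -> ~%d MiB in memory (%dx)%n",
        wireBudget >> 20, inMemory >> 20, inMemory / wireBudget);
    // Prints: 4 MiB on the wire -> ~1600 MiB in memory (400x)
  }
}
```

The point is that the amplification factor scales with the field count of the element type, which is exactly the schema property that a max-message-size check never sees.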