Funny how everyone and their dog complains about limits we have
imposed... but no one has an opinion when asked. :)
(yeah yeah, people who report breakages are not on this mailing list, I know)
I decided to test the performance aspects with a trivial test, and it
appears that names up to 1 or even 10 million bytes do not
significantly add to the immediate decoding cost (though there would
be other possible issues from memory retention, since the result is a
20 MB String in the 10-million-byte case).
So we have quite a bit of leeway I think.
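For reference, a minimal sketch of that kind of trivial test might look like the following (class and method names are made up for illustration; assumes jackson-databind on the classpath and a Jackson version that does not enforce a name-length constraint on the parser):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LongNameBench {
    // Build a JSON document with a single, very long (ASCII) property name.
    static String buildDoc(int nameLen) {
        StringBuilder sb = new StringBuilder(nameLen + 8);
        sb.append("{\"");
        for (int i = 0; i < nameLen; ++i) {
            sb.append('a');
        }
        sb.append("\":1}");
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        int nameLen = 10_000_000; // 10 million bytes, as in the test above
        String doc = buildDoc(nameLen);
        ObjectMapper mapper = new ObjectMapper();
        long start = System.nanoTime();
        JsonNode root = mapper.readTree(doc);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // The decoded name is retained as a Java String; at 2 bytes per
        // UTF-16 char that is roughly 20 MB of heap for the 10M-byte case.
        System.out.println("Parsed in " + elapsedMs + " ms; name length = "
                + root.fieldNames().next().length());
    }
}
```

(Obviously a real benchmark would use JMH, warm-up, and so on; this just shows the shape of the test.)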
But I was hoping that maybe someone, somewhere knew of limits other
JSON parsers impose; or if not, XML or YAML parsers.
Anything comparable for textual (or binary?) data formats with
names/symbols: what limits are others imposing?
Or conversely: is anyone processing JSON documents with Unusually
Long Names (something well beyond things like UUIDs)?
And if so, what is the longest such name you have observed?
-+ Tatu +-
>
> Regards,
> PJ
>