Using the remote debugging agent and Node.js, I'm seeing backtrace responses so large that the response isn't fully received even after several minutes with the Node process pegged at 100% CPU. I've attached a file containing a portion of such a response. Is there supposed to be some way to limit the size of the JSON responses sent by the debugger protocol?
There is currently no way to limit the response size. But that raises the question of how a large response should be handled. A JSON response chopped off at an arbitrary point is of only limited value, as it cannot be parsed reasonably.
Any suggestions on how this case could be handled?
The debugger agent already has mechanisms by which it reduces response size, such as omitting refs for nested object properties. Perhaps something like that but for objects with many properties (such as large arrays) might help here.
Another possibility is that the 'backtrace' command should not return complete frame objects, but rather only provide basic information such as sourceLineText, enclosing function, script, position, etc. The 'frame' command (or perhaps a new one: 'frames') could be used to obtain more information about specific frames of interest. Most of the time there's only a prefix of the top of the stack that represents the user's code that they care about, and they shouldn't be forced to suffer a slow (or broken) debugging experience because some low-level library function happened to have a large buffer allocated as a local variable.
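The two-step flow proposed above might look something like this. The envelope fields (seq, request_seq, type, command) mirror the existing protocol, but the reduced backtrace body and the 'frames' command are hypothetical, not part of the current protocol:

```javascript
// Step 1: 'backtrace' returns only summary info per frame -- no locals,
// no receiver, no refs that might drag in a huge array.
const backtraceResponse = {
  seq: 7,
  request_seq: 3,
  type: 'response',
  command: 'backtrace',
  success: true,
  body: {
    fromFrame: 0,
    toFrame: 2,
    totalFrames: 2,
    frames: [
      { index: 0, func: 'handleRequest', script: 'server.js',
        line: 42, sourceLineText: '  const buf = cache.get(key);' },
      { index: 1, func: 'emit', script: 'events.js',
        line: 96, sourceLineText: '    handler.call(this, arg);' },
    ],
  },
};

// Step 2: the client asks for full detail only for the frames it
// cares about -- typically the top few frames of user code.
const framesRequest = {
  seq: 4,
  type: 'request',
  command: 'frames',           // hypothetical command
  arguments: { indexes: [0] }, // only the top user-code frame
};

console.log(backtraceResponse.body.frames.length); // 2
```

This keeps the cost of a backtrace proportional to the stack depth rather than to the size of whatever happens to be in scope in each frame.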
Hi, I must plead the case for this bug. It's not merely a feature request; on the contrary, it makes the Debugger Protocol unusable. We've built an Emacs source-level debugging client (along the lines of edebug) for Node/V8, but until this problem is fixed, we can't use it.
Here's my understanding of the problem. V8 produces JSON describing the local state of each stack frame in a backtrace. When that local state includes a large array (or, probably, any object with too many properties), it doggedly keeps writing it out in JSON, far too much JSON to be handled, so the process hangs and there is no choice but to kill it and restart the session.
Such a debugger is unusable. There is no way to know before requesting a backtrace whether some large object is lurking beneath you in some stack frame and will blow up and hang the session if you issue the request. (This is particularly true if you're calling a library whose internals you know nothing about.) As it stands today, the Debugger Protocol's policy appears to be "Only use me with code that will never have any large arrays anywhere." This is an impossible burden for the user, so as things stand today, not only can we not release our client, we can't even use it ourselves.
The solution is to do what any remote debugging protocol must do when describing very large state: provide a description of only some of it (e.g. for an array, the first n values for reasonable n), plus some indication that m unprovided values remain. Additionally, one might offer a mechanism for requesting additional portions of the omitted state; but this is a nice-to-have, since the user can get at anything they want by evaluating an appropriate expression in the appropriate frame - that is, they can do so as long as the system hasn't become completely unresponsive! So, while "dole out large state in portions on demand" can fairly be described as a feature request, "don't hang forever" is surely a bug, and surely an important one.
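The capped-description scheme is simple to sketch. This is illustrative only (the function and field names are mine, not the protocol's): report the first n values of a large array plus a count of what was omitted, so the response size is bounded regardless of the value's size.

```javascript
// Minimal sketch of capped serialization for large values.
// Names here are illustrative, not part of the Debugger Protocol.
function describeValue(value, maxItems = 10) {
  if (Array.isArray(value) && value.length > maxItems) {
    return {
      type: 'array',
      length: value.length,
      items: value.slice(0, maxItems),   // the first n values
      omitted: value.length - maxItems,  // "m unprovided values remain"
    };
  }
  return { type: typeof value, value };
}

const big = new Array(100000).fill(0);
const desc = describeValue(big);
console.log(desc.items.length, desc.omitted); // 10 99990
```

With something like this in place, the serialized description of a 100,000-element array costs the same as that of an 11-element one.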
We really like the Debugger Protocol and are eager to use it. It makes things possible that are otherwise not possible. We've got this great debugging client that is about to make my programming life so much easier that it makes me slobber, so I fervently hope you guys will understand why this is important and please fix it.
This is a follow-up to comment 4, in case anyone has this issue in the future. The solution we found is not to use the Debugger Protocol at all but simply call into V8's debug object model (i.e., the in-process debugger) directly. This means reimplementing a subset of the protocol, since we have to format our own JSON messages to send to the client. But that isn't too hard to do, and it works. It also has the advantages that one can send only the information one is going to use on the other side, and can reduce the number of round trips in certain cases (e.g. if you want to do a step-out repeatedly until hitting a recognized script).
Hi, we are trying to build a debugger as well, and this issue renders the remote debugging scenario completely unusable. Commenters 2 and 4 both make reasonable suggestions. Please consider fixing this so we don't have to go through the same workaround of reimplementing the protocol ourselves.