All of the time can be accounted for in the execution of the
JavaScript code. I call the JS code from C++, and it calls into my
C++, but I have satisfied myself that the delays are not in the C++
code. The script code I'm running is marshaling and unmarshaling
objects to JSON. It appears that doing an eval on the JSON-encoded
data takes 3-4 seconds when I have a 1 MB stack. That accounts
for a lot of the extra time right there.
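For reference, the eval-based unmarshaling pattern from that era looks roughly like this (a minimal sketch; the JSON text and field names are made up for illustration). Wrapping the string in parentheses is needed so eval parses it as an object literal rather than a block statement:

```javascript
// Unmarshal a JSON-encoded string the pre-JSON.parse way:
// wrap in parentheses so eval sees an object literal, not a block.
var json = '{"id": 42, "name": "widget"}';
var obj = eval("(" + json + ")");
console.log(obj.id, obj.name);
```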
Has anyone else seen anything like this? I searched the list archives
and didn't see anything that looked similar.
Here is a related question: Does anyone have any stack size
guidelines? I have read js_InternalInvoke and it sets up the stack
frame size as ((argc + 2) * 4) bytes, and I also see that js_Invoke
allocates space for local variables and missing arguments on the
stack. I might be fine with a 100K stack, but my customers will be
writing the scripts, so I don't really have any control over what they
do.
Any help would be greatly appreciated.
- Rush
Hi Rush,
Short answer: Use 8192 and don't worry. :)
Long answer: The parameter to JS_NewContext() is not the stack size.
The documentation in the wiki is wrong. The parameter is actually the
chunk size of the stack pool--an obscure memory management tuning
knob.
Which knob *should* you be tweaking, then? Probably none.
SpiderMonkey's stack behavior is not what you might expect. When a
JavaScript function calls another JavaScript function, it doesn't use
any C stack at all: JavaScript stack frames are allocated from the
stack pool (heap memory).
The limit for recursion depth among JavaScript functions is 1000 calls
(MAX_INTERP_LEVEL, defined in jsinterp.c). That is plenty high. Every
time I've ever hit this limit, it was a bug in my JavaScript code.
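You can probe that limit empirically with a sketch like this. Note the cap is engine-specific: SpiderMonkey stops at MAX_INTERP_LEVEL (1000), while other engines (this sketch runs on Node/V8) throw at a different, usually larger depth, but the symptom is the same "too much recursion" error:

```javascript
// Recurse until the engine refuses, then report how deep we got.
function depth(n) {
  try {
    return depth(n + 1);
  } catch (e) {
    // SpiderMonkey: InternalError "too much recursion";
    // V8: RangeError "Maximum call stack size exceeded".
    return n;
  }
}
console.log("max depth reached:", depth(0));
```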
By the way--the slowness you're seeing only happens in debug builds.
In a debug build, each time arena memory is freed, SpiderMonkey does a
memset() across the entire unused portion of the arena chunk. (See
JS_CLEAR_UNUSED, defined in jsarena.h). This makes memory look tidy
in a debugger, but when the chunk size is very large, it's slow. In
release builds ("make BUILD_OPT=1 -f Makefile.ref"), the memset()
doesn't happen.
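The cost scales with chunk size, which a small simulation makes vivid (a sketch only: the function name is made up, it uses a Node Buffer in place of an arena chunk, and it fills with zero rather than whatever debug pattern jsarena actually writes):

```javascript
// Simulate JS_CLEAR_UNUSED: on free, a debug build memsets the entire
// unused tail of the arena chunk. With an 8 MB chunk that is roughly
// 1000x more memory traffic per free than with the default 8 KB chunk.
function clearUnused(chunkSize, usedBytes) {
  var chunk = Buffer.allocUnsafe(chunkSize);
  chunk.fill(0, usedBytes);        // stand-in for the debug memset
  return chunkSize - usedBytes;    // bytes actually cleared
}
console.log(clearUnused(8 * 1024, 256));        // 7936 bytes cleared
console.log(clearUnused(8 * 1024 * 1024, 256)); // 8388352 bytes cleared
```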
Cheers,
Jason Orendorff
Here is the link: http://developer.mozilla.org/en/docs/JS_NewContext
That one got me too.
8K versus 8Megs makes a BIG difference!
"Jason Orendorff" <jason.o...@gmail.com> wrote in message
news:1173335573.4...@p10g2000cwp.googlegroups.com...