Float."cos"
ParrotIO."open" # unimplemented
2) these methods can be called in various ways:
Px = cos Py # opcode syntax
cos Px, Py # same
Px = "cos"(Py) # function call
Px = Py."cos"() # method call
cl = getclass Float
Px = cl."cos"(y) # class method
m = getattribute cl, "cos"
Px = m(y) # bound class method
m = getattribute y, "cos"
Px = m() # bound object method
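The difference between these call forms can be sketched in Python (a hypothetical stand-in for the Float PMC, not Parrot code -- the names `Float`, `v` are made up for illustration):

```python
import math

class Float:
    """Hypothetical stand-in for Parrot's Float PMC (illustration only)."""
    def __init__(self, v):
        self.v = v
    def cos(self):
        return Float(math.cos(self.v))

y = Float(0.0)
px  = y.cos()        # method call:   Px = Py."cos"()
px2 = Float.cos(y)   # class method:  Px = cl."cos"(y)
m   = y.cos          # bound method:  m = getattribute y, "cos"
px3 = m()            #                Px = m()
assert px.v == px2.v == px3.v == 1.0
```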
3) the function PMC lookup is now done in two different ways:
- if the function name is in the current namespace:
set_p_pc # get Sub PMC from constant table
- else
find_name Px, Sy
find the name Sy in lexicals, globals, builtins in that order.
This is used for PIR function calls like:
foo()
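The find_name search order can be sketched like this (illustrative Python, not Parrot's actual C implementation; the dict-based scopes are an assumption for the sketch):

```python
import math

def find_name(name, lexicals, globals_, builtins):
    """Sketch of find_name's search order: lexicals, then globals,
    then builtins -- first hit wins."""
    for scope in (lexicals, globals_, builtins):
        if name in scope:
            return scope[name]
    raise NameError(name)

builtins = {"cos": math.cos}
# no lexical or global "cos": the builtin is found
assert find_name("cos", {}, {}, builtins)(0.0) == 1.0
# a lexical with the same name shadows the builtin
assert find_name("cos", {"cos": lambda x: "lexical"}, {}, builtins)(0.0) == "lexical"
```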
Related changes:
- PyNCI now has its own invoke, because NCI.invoke shifts arguments
down for class methods. (Py* methods pass the object in P5.)
- the sort tests didn't do a proper find_global
- one dumper test used the wrong namespace
- invokecc now clears REG_PMC(2), so that a Sub PMC can detect whether
it's called as a method or not
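The invokecc change can be illustrated with a small sketch (Python standing in for the register convention; `invocant` models the REG_PMC(2) slot, and the helper names are made up):

```python
def invoke_as_function(sub, *args):
    # invokecc-style call: the object register is cleared (None here)
    return sub(None, *args)

def invoke_as_method(obj, sub, *args):
    # method-style call: the invocant arrives in the object register
    return sub(obj, *args)

def my_sub(invocant, x):
    # the callee can now detect how it was invoked
    return ("method" if invocant is not None else "function", x)

assert invoke_as_function(my_sub, 1) == ("function", 1)
assert invoke_as_method(object(), my_sub, 2) == ("method", 2)
```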
Comments welcome,
leo
. . .
> 2) these methods can be called in various ways:
> Px = cos Py # opcode syntax
> cos Px, Py # same
> Px = "cos"(Py) # function call
> Px = Py."cos"() # method call
> cl = getclass Float
> Px = cl."cos"(y) # class method
> m = getattribute cl, "cos"
> Px = m(y) # bound class method
> m = getattribute y, "cos"
> Px = m() # bound object method
This doesn't seem to be working for me. I did a fresh CVS checkout just
now (so no "cvs update -d" issues -- ;-), and got the following test
results:
Failed Test Stat Wstat Total Fail Failed List of Failed
--------------------------------------------------------------------------------
t/pmc/nci.t 17 4352 56 17 30.36% 1-8 10 35 39 41 44-46 51-52
t/pmc/object-meths.t 1 256 28 1 3.57% 28
2 tests and 62 subtests skipped.
Failed 2/139 test scripts, 98.56% okay. 18/2259 subtests failed, 99.20% okay.
The failing t/pmc/object-meths.t test gets a segfault:
rogers@lap> ./parrot "/usr/src/parrot/t/pmc/object-meths_28.imc"
opcode 0.540302
Segmentation fault
rogers@lap>
A backtrace is appended. (I looked at a handful of failing t/pmc/nci.t
cases, and they all looked similar, i.e. also in Parrot_NCI_invoke.)
Is there anything else I should be looking at?
-- Bob Rogers
http://rgrjr.dyndns.org/
------------------------------------------------------------------------
(gdb) r /usr/src/parrot/t/pmc/object-meths_28.imc
Starting program: /usr/src/parrot/parrot /usr/src/parrot/t/pmc/object-meths_28.imc
[Thread debugging using libthread_db enabled]
[New Thread 1076589664 (LWP 17677)]
[New Thread 1087552432 (LWP 17680)]
[New Thread 1089653680 (LWP 17681)]
opcode 0.540302
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1076589664 (LWP 17677)]
0x081d8e52 in Parrot_NCI_invoke (interpreter=0x830b008, pmc=0x84d38b8,
next=0x854a688) at classes/nci.c:122
(gdb) bt
#0 0x081d8e52 in Parrot_NCI_invoke (interpreter=0x830b008, pmc=0x84d38b8,
next=0x854a688) at classes/nci.c:122
#1 0x080f8c93 in Parrot_invokecc (cur_opcode=0x854a684, interpreter=0x830b008)
at core.ops:441
#2 0x0816b03c in runops_slow_core (interpreter=0x830b008, pc=0x854a684)
at src/runops_cores.c:147
#3 0x08169468 in runops_int (interpreter=0x830b008, offset=0)
at src/interpreter.c:742
#4 0x0816a345 in runops (interpreter=0x830b008, offs=0) at src/inter_run.c:81
#5 0x080d7ec4 in Parrot_runcode (interpreter=0x830b008, argc=1,
argv=0xbffff738) at src/embed.c:768
#6 0x080d7d02 in Parrot_runcode (interpreter=0x830b008, argc=1,
argv=0xbffff738) at src/embed.c:700
#7 0x0809db0c in main (argc=1, argv=0xbffff738) at imcc/main.c:603
(gdb)
> 2) these methods can be called in various ways:
> This doesn't seem to be working for me.
Oops. I obviously made some changes in classes/nci.c instead of in the
pmc file.
Thanks for reporting - fixed,
leo
I'm way out of the loop, and this may have been dealt with in prior mail,
but are we doing real method calls for cos() and suchlike things?
That seems... sub-optimal for speed reasons. (open I can see; that
makes sense, though I'm not sure I'd want it for any other file
operations, again for speed reasons.)
I've been thinking it may be worth pulling out some groups of
semi-commonly used functions that should be fast but still
PMC-class-specific, and thus not methods, into sub-tables hanging off
the vtable. Most of the semi-high-level string functions (basically
everything that may be delegated to the charset/encoding layers)
would be a candidate for this. Possibly the standard trig functions
too.
This'd cost us a single pointer per sub-table per pmc class, and one
table of generic functions per sub-table, so it'd not be that
expensive, yet still allow classes to override the default functions
on a per-class basis without the overhead of full method dispatch.
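The sub-table idea can be sketched as follows (Python illustration of the proposed C layout; `DEFAULT_TRIG` and the field names are assumptions, not Parrot code):

```python
import math

# one shared table of generic functions per sub-table
DEFAULT_TRIG = {"cos": math.cos, "sin": math.sin}

class VTable:
    """Each class pays a single pointer for the trig sub-table;
    most classes simply share the default table."""
    def __init__(self, trig=None):
        self.trig = trig if trig is not None else DEFAULT_TRIG

float_vtable = VTable()  # default behaviour, no per-class table
# a class overriding just cos, without full method dispatch:
stub_vtable = VTable(trig={**DEFAULT_TRIG, "cos": lambda x: 0.0})

assert float_vtable.trig["cos"](0.0) == 1.0
assert stub_vtable.trig["cos"](0.0) == 0.0   # overridden entry
assert stub_vtable.trig["sin"](0.0) == 0.0   # untouched entry falls through
```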
--
Dan
--------------------------------------it's like this-------------------
Dan Sugalski even samurai
d...@sidhe.org have teddy bears and even
teddy bears get drunk
> I'm way out of the loop, and this may have been dealt with in prior mail,
> but are we doing real method calls for cos() and suchlike things?
> That seems... sub-optimal for speed reasons.
With the help of the PIC it'll translate to a one-time lookup per call
site, i.e. bytecode location. The lookup result is additionally cached in
the method cache, which is pretty fast.
The "method call" will translate eventually to an opcode like:
call_PP "cos", Px, Py
(we're going to have a fair number of methods with this signature, so a
specialized opcode should be OK)
The actual call then looks like:
pic_cache = $1                    # replaces the method name in bytecode
if ($3->vtable->type == pic_cache->type)
    $2 = (pic_cache->nci.function)(interp, $3)
else if (...)
    // consider ~3 more cache slots
else
    // lookup the method
The speed overhead compared to a dedicated vtable slot is minimal, as
that calls the C function directly - not the NCI wrapper.
A further optimization could inline the function call for e.g. Float.cos
if ($3->vtable->type == enum_class_Float) {
    FLOATVAL l, r = PMC_num_val($3);
    l = cos(r);
    ...
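The PIC dispatch above can be simulated with a one-slot (monomorphic) inline cache per call site (illustrative Python; `CallSite` and `MyFloat` are invented names, and a real PIC would have a few more cache slots plus the method cache behind it):

```python
import math

class CallSite:
    """One inline cache per call site, as the PIC sketch suggests."""
    def __init__(self, meth_name):
        self.meth_name = meth_name
        self.cached_type = None
        self.cached_fn = None

    def call(self, obj):
        t = type(obj)
        if t is self.cached_type:           # fast path: cached type matches
            return self.cached_fn(obj)
        fn = getattr(t, self.meth_name)     # slow path: full method lookup
        self.cached_type, self.cached_fn = t, fn
        return fn(obj)

class MyFloat(float):
    def cos(self):
        return math.cos(self)

site = CallSite("cos")
assert site.call(MyFloat(0.0)) == 1.0   # first call: lookup, fill the cache
assert site.call(MyFloat(0.0)) == 1.0   # second call: cache hit
```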
> I've been thinking it may be worth pulling out some groups of
> semi-commonly used functions that should be fast but still
> PMC-class-specific, and thus not methods, into sub-tables hanging off
> the vtable.
Well, yes, I'd like to have the vtable split into pieces. The vtable
structure is just too big, and far from all of its functionality is
needed by all PMCs:
sizeof(VTABLE) 604 # 32-bit machine
> ... Most of the semi-high-level string functions (basically
> everything that may be delegated to the charset/encoding layers)
> would be a candidate for this. Possibly the standard trig functions
> too.
I'd say: everything that may be used internally by the interpreter should
have a vtable slot. Trig functions and other stuff can simply be done
with a method call. One advantage is extensibility, e.g.
.namespace ["Complex"]
.sub cos
...
is all it takes to override (or create) a new method. During recompilation
the "call_PP" opcode can be replaced directly by another variant that calls
the PIR code from within the same run-loop, without any delegation PMC
or secondary run-loop.
And the second advantage is: having a vtable slot for all these functions
would need an opcode too. We have 1500 opcodes and are already hitting
compiler limits, e.g. in the switch core. We can't afford to add another
bunch of opcodes just to get PMC versions of all the native-type opcodes.
The
call_PP "meth", Px, Py
opcode covers all such methods with that signature. Float.cos might be a
suboptimal example, but we need a lot more library code, e.g. $(perldoc
POSIX) and whatnot. All the library stuff needs is a namespace and an
implementation, either as PIR, NCI, or in a PMC class file. The call
syntax from the PIR level looks like a method call or a plain opcode:
.sub main @MAIN
    .local pmc o, m, cl
    o = getstdout
    $I0 = o."puts"("ok 1\n")
    puts $I0, o, "ok 2\n"
    $I0 = "puts"(o, "ok 3\n")
    m = getattribute o, "puts"
    $I0 = m("ok 4\n")
    cl = getclass "ParrotIO"
    $I0 = cl."puts"(o, "ok 5\n")
.end
leo