The patches below add tests for:
print_i_i
print_i_n
print_i_s
print_i_p
ne_n
and improve the tests for the lexical ops.
Simon
--- t/op/hacks.t.old Tue Oct 8 13:46:31 2002
+++ t/op/hacks.t Tue Oct 8 13:52:44 2002
@@ -1,6 +1,6 @@
#! perl -w
-use Parrot::Test tests => 1;
+use Parrot::Test tests => 2;
# It would be very embarrassing if these didn't work...
open FOO, ">temp.file";
@@ -19,5 +19,32 @@
1
2
OUTPUT
+
+open FOO, ">temp.file"; # Clobber previous contents
+close FOO;
+
+output_is(<<'CODE', <<'OUTPUT', "open & print");
+ set I0, -12
+ set N0, 2.2
+ set S0, "Foo"
+ new P0, .PerlString
+ set P0, "Bar\n"
+
+ open I1, "temp.file"
+ print I1, I0
+ print I1, N0
+ print I1, S0
+ print I1, P0
+ close I1
+
+ open I2, "temp.file"
+ readline S1, I2
+ close I2
+
+ print S1
+ end
+CODE
+-122.200000FooBar
+OUTPUT
unlink("temp.file");
-1; # HONK
\ No newline at end of file
+1; # HONK
--- t/op/number.t.old Tue Oct 8 13:59:03 2002
+++ t/op/number.t Tue Oct 8 14:02:34 2002
@@ -1,6 +1,6 @@
#! perl -w
-use Parrot::Test tests => 33;
+use Parrot::Test tests => 34;
use Test::More;
output_is(<<CODE, <<OUTPUT, "set_n_nc");
@@ -491,6 +491,50 @@ ok 1
ok 2
OUTPUT
+output_is(<<'CODE', <<OUTPUT, "ne_n");
+ set N0, 1.234567
+ set N1, -1.234567
+
+ bsr BR1
+ print "ok 1\n"
+ bsr BR2
+ print "ok 2\n"
+ bsr BR3
+ print "ok 3\n"
+ bsr BR4
+ print "ok 4\n"
+ bsr BR5
+ print "Shouldn't get here\n"
+ end
+
+BR1: ne N0, N1
+ print "not ok 1\n"
+ ret
+
+BR2: ne 2.54, N0
+ print "not ok 2\n"
+ ret
+
+BR3: ne N0, 0.00
+ print "not ok 3\n"
+ ret
+
+BR4: ne 1.0, 2.0
+ print "not ok 4\n"
+ ret
+
+BR5: ne N0, N0
+ print "ok 5\n"
+ end
+
+CODE
+ok 1
+ok 2
+ok 3
+ok 4
+ok 5
+OUTPUT
+
output_is(<<CODE, <<OUTPUT, "lt_n_ic");
set N0, 1000.0
set N1, 500.0
--- t/op/lexicals.t.old Tue Oct 8 14:16:20 2002
+++ t/op/lexicals.t Tue Oct 8 14:21:25 2002
@@ -1,6 +1,6 @@
#! perl -w
-use Parrot::Test tests => 2;
+use Parrot::Test tests => 3;
output_is(<<CODE, <<OUTPUT, "simple store and fetch");
new_pad
@@ -17,6 +17,33 @@ CODE
12
OUTPUT
+output_is(<<CODE, <<OUTPUT, "Repeated stores with the same key");
+ new_pad
+ new P0, .PerlInt
+ new P1, .PerlInt
+ set I0, 0
+LOOP:
+ set P0, I0
+ store_lex "a", P0
+ find_lex P1, "a"
+ print P1
+ print "\\n"
+ inc I0
+ lt I0, 10, LOOP
+ end
+CODE
+0
+1
+2
+3
+4
+5
+6
+7
+8
+9
+OUTPUT
+
output_is(<<CODE, <<OUTPUT, "nested scopes");
new P0, .PerlInt
new P1, .PerlInt
@@ -43,6 +70,10 @@ output_is(<<CODE, <<OUTPUT, "nested scop
print "\\n"
pop_pad
+ find_lex P3, "a"
+ print P3
+ print "\\n"
+
pop_pad
find_lex P3, "a"
@@ -52,6 +83,7 @@ output_is(<<CODE, <<OUTPUT, "nested scop
CODE
0
2
+1
0
OUTPUT
Who came up with the idea of two-argument ne, anyway? That's kind of
bizarre. I'd much rather have it tested if it exists at all, but it
seems pretty obscure.
Definitely bizarre. I think I'd rather not have it, it doesn't make much sense.
--
Dan
--------------------------------------"it's like this"-------------------
Dan Sugalski even samurai
d...@sidhe.org have teddy bears and even
teddy bears get drunk
> At 7:42 PM -0700 10/8/02, Steve Fink wrote:
> >Thanks, applied.
> >
> >Who came up with the idea of two-argument ne, anyway? That's kind of
> >bizarre.
>
>
> Definitely bizarre. I think I'd rather not have it, it doesn't make much sense.
Easily done. Patch below removes the ops, plus the relevant tests from
integer.t and number.t
Simon
--- core.ops.old Thu Oct 10 11:57:08 2002
+++ core.ops Thu Oct 10 11:57:29 2002
@@ -902,14 +902,6 @@ op eq (in PMC, in PMC, inconst INT) {
########################################
-=item B<ne>(in INT, in INT)
-
-=item B<ne>(in NUM, in NUM)
-
-=item B<ne>(in STR, in STR)
-
-=item B<ne>(in PMC, in PMC)
-
=item B<ne>(in INT, in INT, inconst INT)
=item B<ne>(in NUM, in NUM, inconst INT)
@@ -920,38 +912,8 @@ op eq (in PMC, in PMC, inconst INT) {
Branch if $1 is not equal to $2.
-Return address is popped off the call stack if no address is supplied.
-
=cut
-inline op ne (in INT, in INT) {
- if ($1 != $2) {
- goto POP();
- }
- goto NEXT();
-}
-
-inline op ne (in NUM, in NUM) {
- if ($1 != $2) {
- goto POP();
- }
- goto NEXT();
-}
-
-op ne (in STR, in STR) {
- if (string_compare (interpreter, $1, $2) != 0) {
- goto POP();
- }
- goto NEXT();
-}
-
-op ne (in PMC, in PMC) {
- if (! $1->vtable->is_equal(interpreter, $1, $2)) {
- goto POP();
- }
- goto NEXT();
-}
-
inline op ne(in INT, in INT, inconst INT) {
if ($1 != $2) {
goto OFFSET($3);
--- t/op/integer.t.old Thu Oct 10 11:58:24 2002
+++ t/op/integer.t Thu Oct 10 12:00:56 2002
@@ -1,6 +1,6 @@
#! perl -w
-use Parrot::Test tests => 39;
+use Parrot::Test tests => 38;
output_is(<<CODE, <<OUTPUT, "set_i_ic");
# XXX: Need a test for writing outside the set of available
@@ -520,47 +520,6 @@ ok 1
ok 2
OUTPUT
-output_is(<<CODE, <<OUTPUT, "ne ic, i (pop label off stack)");
-
- set I0, 12
- set I1, 10
-
- print "start\\n"
- bsr BR1
- print "done 1\\n"
- bsr BR2
- print "done 2\\n"
- bsr BR3
- print "done 3\\n"
- bsr BR4
- print "Shouldn't get here\\n"
-
- end
-
-BR1: ne I0, 10
- print "bad "
- ret
-
-BR2: ne 10, 12
- print "10 is 12! "
- ret
-
-BR3: ne I0, I1
- print "10 is 12, even when in I reg "
- ret
-
-BR4: ne 12, 12
- print "done 4\\n"
- end
-
-CODE
-start
-done 1
-done 2
-done 3
-done 4
-OUTPUT
-
output_is(<<CODE, <<OUTPUT, "lt_i_ic");
set I0, 2147483647
set I1, -2147483648
--- t/op/number.t.old Thu Oct 10 11:58:35 2002
+++ t/op/number.t Thu Oct 10 12:01:45 2002
@@ -1,6 +1,6 @@
#! perl -w
-use Parrot::Test tests => 34;
+use Parrot::Test tests => 33;
use Test::More;
output_is(<<CODE, <<OUTPUT, "set_n_nc");
@@ -491,50 +491,6 @@ ok 1
ok 2
OUTPUT
-output_is(<<'CODE', <<OUTPUT, "ne_n");
- set N0, 1.234567
- set N1, -1.234567
-
- bsr BR1
- print "ok 1\n"
- bsr BR2
- print "ok 2\n"
- bsr BR3
- print "ok 3\n"
- bsr BR4
- print "ok 4\n"
- bsr BR5
- print "Shouldn't get here\n"
- end
-
-BR1: ne N0, N1
- print "not ok 1\n"
- ret
-
-BR2: ne 2.54, N0
- print "not ok 2\n"
- ret
-
-BR3: ne N0, 0.00
- print "not ok 3\n"
- ret
-
-BR4: ne 1.0, 2.0
- print "not ok 4\n"
- ret
-
-BR5: ne N0, N0
- print "ok 5\n"
- end
-
-CODE
-ok 1
-ok 2
-ok 3
-ok 4
-ok 5
-OUTPUT
-
It's not completely without precedent, on the Z-80:
RET CC Return from sub if CC is true
But reversing the sense of the test makes it doubly weird. :)
> On Wed, 9 Oct 2002, Dan Sugalski wrote:
>
>
>>At 7:42 PM -0700 10/8/02, Steve Fink wrote:
>>
>>>Thanks, applied.
>>>
>>>Who came up with the idea of two-argument ne, anyway? That's kind of
>>>bizarre.
>>>
>>
>>Definitely bizarre. I think I'd rather not have it, it doesn't make much sense.
>>
>
> Easily done. Patch below removes the ops, plus the relevant tests from
> integer.t and number.t
There are also 2 operand =item B<eq>(in INT, in INT) equivalents - toss
them, my 2 ¢
There are also two-operand math operations of dubious benefit:
5 add
2 sub
4 mul
1 div
2 mod
Each of them will be doubled for each RHS INT argument giving ~25 opcodes.
I would kill these too.
IMHO a smaller core will improve performance more than the above saving of
one operand in the byte code can achieve.
leo
Cool, thanks. They're gone, along with the two-arg eq ops. I had to
murder a total of 5 tests from integer.t, number.t, and string.t.
Those are all for the:
a op= b
form. There's a minor benefit to keeping them.
>I would kill these too.
>IMHO a smaller core will improve performance more than the above saving of
>one operand in the byte code can achieve.
Maybe, but I'm unconvinced. We are going to do a cull of the opcodes
before release, but I'm not inclined to do it now, and I'm not really
sure that there'll be any speed win at all from it. Size win, yes,
but still, not much. (And we do want the two-arg forms that use PMCs,
since they may well need them if Larry decides to allow overloading
+= differently from + and =)
> >Each of them will be doubled for each RHS INT argument giving ~25 opcodes.
>
> Those are all for the:
>
> a op= b
>
> form. There's a minor benefit to keeping them.
I would like to kill all generated variants of all the 3 argument opcodes
where both input arguments are constants. They truly are superfluous.
> >I would kill these too.
> >IMHO a smaller core will improve performance more than the above saving of
> >one operand in the byte code can achieve.
>
> Maybe, but I'm unconvinced. We are going to do a cull of the opcodes
> before release, but I'm not inclined to do it now, and I'm not really
> sure that there'll be any speed win at all from it. Size win, yes,
> but still, not much. (And we do want the two-arg forms that use PMCs,
It should make the computed goto core compile more rapidly.
It might also make the computed goto core run more efficiently, as by
being smaller it will bring more frequently used opcodes closer together.
(Although we can probably have a larger effect by specifying a sort order,
either by iteratively benchmarking the speed of parrot with different orders,
or by code coverage profiling to find the most common opcodes.)
Not much gain, but it might be worth it if we can automate the discovery
process.
Does anyone know a good way of doing permutations with a genetic algorithm?
I think that a good way to represent permutations such that similar
permutations have similar genes would be more effective than an efficient
way to convert a number to a permutation.
And I don't know of either.
(I don't know an efficient way to turn the number 105124 into the 105124th
permutation of 60 things. divide by 60, take remainder; divide by 59, take
remainder; etc. doesn't feel efficient at all. Is the correct answer to
use more 'Knuth'?)
Nicholas Clark
--
Even better than the real thing: http://nms-cgi.sourceforge.net/
Where both operands are ints or nums, I think it's a good idea. I'm
less sure in the case where there's a PMC or string involved, as
there may be some assumption of runtime behaviour (in the case of
constant PMCs that might have some methods overloaded) or strings
where the compiler is expecting runtime conversion to happen before
whatever gets done.
> > >I would kill these too.
>> >IMHO a smaller core will improve performance more than the above saving of
>> >one operand in the byte code can achieve.
>>
>> Maybe, but I'm unconvinced. We are going to do a cull of the opcodes
>> before release, but I'm not inclined to do it now, and I'm not really
>> sure that there'll be any speed win at all from it. Size win, yes,
>> but still, not much. (And we do want the two-arg forms that use PMCs,
>
>It should make the computed goto core compile more rapidly.
True, though I'm not hugely worried about this, as it happens only once.
>It might also make the computed goto core run more efficiently, as by
>being smaller it will bring more frequently used opcodes closer together.
>(Although we can probably have a larger effect by specifying a sort order,
>either by iteratively benchmarking the speed of parrot with different orders,
>or by code coverage profiling to find the most common opcodes.)
>Not much gain, but it might be worth it if we can automate the discovery
>process.
True. I think reordering is a bigger win, honestly. Lightly used
opcode functions won't get swapped in until they're really needed.
> >I would like to kill all generated variants of all the 3 argument opcodes
> >where both input arguments are constants. They truly are superfluous.
>
> Where both operands are ints or nums, I think it's a good idea. I'm
> less sure in the case where there's a PMC or string involved, as
> there may be some assumption of runtime behaviour (in the case of
> constant PMCs that might have some methods overloaded) or strings
> where the compiler is expecting runtime conversion to happen before
> whatever gets done.
I think I agree with this reasoning. I was really thinking of the ints
as being easiest to bump off, providing we can be sure that things will
behave consistently when bytecode is compiled on a 32-bit parrot but run
by a 64-bit parrot (or likewise for different-length floating point)
IIRC C99 states that the pre-processor must do all calculations in its
longest int type, and it's sort of related.
I think we'd need to state that constant folding will be done at compile
time, and will be done at the precision of the compiling parrot.
> >It should make the computed goto core compile more rapidly.
>
> True, though I'm not hugely worried about this, as it happens only once.
Per user who compiles parrot. The current computed goto code hurts my
desktop at work (128M RAM, x86 linux) and with more ops it will get worse.
It may turn out that gcc improves to the point that it can build
measurably better code for specific CPUs, but distributions/*BSDs require
a lowest common denominator build (typically 486 in the x86 family, isn't
it?)
In which case many people may gain quite a lot by building their own custom
parrot, and they're going to notice the compile time.
I admit this is low down any list of priorities, but it ought to be somewhere.
I find with my work code (the C, not perl related) that gcc3.2 with all the
stops out generates code that was about 5% faster than deadrat's (default
2.96 heresy non-)gcc. And at YAPC::EU someone reported that (IIRC) he'd seen
12% speedup from newer gcc and option tweaking.
And getting even 5% without changing your perl6 code or parrot's code is
nice.
However, the more interesting thing about getting compile times down is you
get more smoke in finite time. (And also developers get more done if they
spend less time waiting for the compiler. BUT EVERYONE SHOULD ALREADY BE
USING ccache AS THAT MAKES REBUILDS AND EDITING COMMENTS DAMN FAST
(unless they can think of good reason not to))
> True. I think reordering is a bigger win, honestly. Lightly used
> opcode functions won't get swapped in until they're really needed.
More free speedup. I had this crazier idea - experiment with splitting
every parrot function out into its own object file, and see what happens
with permuting the order of all of them.
But I think I need a lot of tuits, and a decent way of doing permutations
with genetic algorithms. (I've got access to a fast machine, but it will
have to stop smoking perl5 for the duration). Although potentially I'll end
up with an order optimised for x86 FreeBSD, which should keep the
Linux vs FreeBSD performance arms race going nicely.
> On Fri, Oct 11, 2002 at 05:01:49PM -0400, Dan Sugalski wrote:
>
>>At 9:02 PM +0100 10/11/02, Nicholas Clark wrote:
>>
>
>>>I would like to kill all generated variants of all the 3 argument opcodes
>>>where both input arguments are constants. They truly are superfluous.
>>Where both operands are ints or nums, I think it's a good idea. I'm
>>less sure in the case where there's a PMC or string involved, as
>>there may be some assumption of runtime behaviour (in the case of
>>constant PMCs that might have some methods overloaded) or strings
>>where the compiler is expecting runtime conversion to happen before
>>whatever gets done.
Ok, we need the 2- and 3-argument PMC ops because of overloading.
> I think I agree with this reasoning. I was really thinking of the ints
> as being easiest to bump off, providing we can be sure that things will
> behave consistently when bytecode is compiled on a 32-bit parrot but run
> by a 64-bit parrot (or likewise for different-length floating point)
By preprocessing ints and nums, we would move overflow/precision issues
from the running machine to the compiling machine. But the issue itself
would remain.
So we could end up with:
add_i_ic add_n_nc
add_i_i_i add_n_n_n
add_i_i_ic add_n_n_nc
add_i_ic_i add_n_nc_n
where the latter could be tossed when the assembler/imcc swaps
arguments. This saves 6 opcodes for each <add> and <mul> and ~2 opcodes
for each <sub>, <div> and <mod>. <sub> with constants could be rewritten
as <add>. Saving a total of ~20 ops.
Some two-operand bitwise operations are there and some are not - there is no
consistency - I would remove the two-operand bitwise integer ops.
> More free speedup. I had this crazier idea - experiment with splitting
> every parrot function out into its own object file, and see what happens
> with permuting the order of all of them.
I have a small shell script (attached) which generates opcode coverage
statistics for all PBC files in the parrot tree:
total ops 866
ops types 177
op usage stat
540 op-list-all
161 op-list-s
So only ~2/3 of the opcodes are currently used/tested ("ops types" and
"op-list-s" count ops by short name, without variants).
To run the script do:
make test
cd languages/perl6
make
perl6 --test
cd -
make disassemble
./op-stat
> But I think I need a lot of tuits, and a decent way of doing permutations
> with genetic algorithms. (I've got access to a fast machine, but it will
> have to stop smoking perl5 for the duration).
This would keep the machine busy for some time ;-) Interesting idea.
> Nicholas Clark
leo