Allison
PRuby is the project.
Suggestions of a better project name are welcome.
Current Source is at http://tewk.com/pruby.tgz
It is currently hosted in a private svn repo.
Chip said I can have a commit bit. So that's a great start.
I will put a signed copy of the CLA in the mail today.
I based the initial PGE grammar for PRuby off of
svn://rubyforge.org/var/svn/rubygrammar/grammars/antlr-v3/trunk/ruby.g
which is incomplete.
I'm looking for a BNF-style description of the Ruby grammar. Otherwise
I will have to dig into :pserver:anon...@cvs.ruby-lang.org:/src/parse.y.
I used to use
$P0 = find_global "", "_dumper"
$P0( $P1, "$P1")
inside TGE transformational rules to dump tree nodes, but that doesn't
work now.
I tried replacing find_global with the alternatives proposed by chip and
mdiep to no avail.
Basically
interpinfo $P99, .INTERPINFO_NAMESPACE_ROOT
$P99 = $P99['_dumper']
$P0 = new .ResizablePMCArray
$P0 = get_namespace $P0
$P0['_dumper']
null $S0
$P0 = get_namespace $S0
$P0['_dumper']
Haven't had time to get back to it today.
A short example of how to include/load_bytecode library/dumper.pir and
lookup the _dumper symbol inside a TGE rule would be helpful.
Kevin Tew
> Current Source is at http://tewk.com/pruby.tgz
> It is currently hosted in a private svn repo.
>
i've taken a look at this and promised to help kevin get it fit for checkin.
> Chip said I can have a commit bit. So that's a great start.
> I will put a signed copy of the CLA in the mail today.
>
sure is! i can't wait to welcome another HLL implementor to the growing team.
i'm almost done working with Chris Dolan to get his TAP parser checked
in, and then i'll be able to devote more time to getting ruby running
on parrot. yeehah!
~jerry
> That would be me!
>
> PRuby is the project.
> Suggestions of a better project name are welcome.
Possibly Cardinal? (A ruby-red bird.) The original Cardinal project was
started in 2002, but talking last night we decided it needed a complete
re-write in PGE/TGE (which is when you were mentioned). I suspect Phil
would be happy to donate the name to the new version, and even help out.
I'll connect you two.
Any chance you might make it out to Portland the last week of July?
There will be some hacking sessions at OSCON and it'd be great to get
together with you, Patrick, and the PDX.rb group.
> Current Source is at http://tewk.com/pruby.tgz
> It is currently hosted in a private svn repo.
>
> Chip said I can have a commit bit. So that's a great start.
> I will put a signed copy of the CLA in the mail today.
Awesome!
> I based the initial PGE grammar for PRuby off of
> svn://rubyforge.org/var/svn/rubygrammar/grammars/antlr-v3/trunk/ruby.g
> which is incomplete.
> I'm looking for a BNF-style description of the Ruby grammar. Otherwise
> I will have to dig into :pserver:anon...@cvs.ruby-lang.org:/src/parse.y.
No one last night knew of a BNF grammar for Ruby, but I've found that
translating from yacc to PGE isn't difficult (it's what I've done with
Punie).
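The translation tends to be fairly mechanical. As a sketch (hypothetical rule names, not Ruby's actual grammar), a yacc production and a rough PGE equivalent:

```
# yacc:
stmt : expr ';'
     | IF expr THEN stmt END
     ;

# PGE:
rule stmt {
      <expr> ';'
    | if <expr> then <stmt> end
}
```

Nonterminals become `<subrule>` calls, terminals become literals, and yacc's `|` alternation carries over unchanged.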
> I used to use
> $P0 = find_global "", "_dumper"
> $P0( $P1, "$P1")
> inside TGE transformational rules to dump tree nodes, but that doesn't
> work now.
PGE uses the Parrot version of Data::Dumper (which is what this code
does), but the TGE nodes don't (yet). Just call the 'dump' method on the
tree node.
node.'dump'()
Allison
I'll be glad to provide any help that I can in building a PGE
version of the grammar -- just let me know where I can help.
Pm
Ronie, or better Ronin if a decent backronym can be found.
Brad
--
Furthermore, when experiencing a rush of blood to the head, if one
puts spittle on the upper part of one's ear, it will soon go away.
-- Hagakure http://bereft.net/hagakure/
It parses my simple puts.rb example, but parse time is really slow: about
2 minutes.
I'm sure I've made some dumb grammar mistakes that are slowing it down.
Source available at http://tewk.com/pruby.tgz
Suggestions or debugging tips welcome.
Thanks,
Kevin Tew
Well, the first thing to note is that subrule calls can be comparatively
slow, so I think you might get a huge improvement by eliminating
the <sp> subrule from
token ws {[<sp>|<[\t]>]*}
resulting in
token ws { <[ \t]>* }
(Also, <sp> is a capturing subrule, so that means a separate Match
object is being created and stored for every space encountered
in the source program. In such cases <?sp> might be better.)
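PGE's Match objects aren't the same thing as Python's regex groups, but the same shape of cost shows up in Python's re engine; a rough analogy (Python, not PGE):

```python
import re
import timeit

# An alternation with capturing groups versus a single character class
# matching the same whitespace run. The grouped form is roughly analogous
# to [<sp>|<[\t]>]* (captures per repetition); the character class is
# roughly analogous to <[ \t]>* (no captures).
grouped   = re.compile(r"(?:( )|(\t))*x")
charclass = re.compile(r"[ \t]*x")

text = " \t" * 1000 + "x"

# Both accept the same input...
assert grouped.match(text).end() == len(text)
assert charclass.match(text).end() == len(text)

# ...but the character class avoids per-repetition group bookkeeping.
t_grouped   = timeit.timeit(lambda: grouped.match(text), number=200)
t_charclass = timeit.timeit(lambda: charclass.match(text), number=200)
print(f"grouped: {t_grouped:.4f}s  charclass: {t_charclass:.4f}s")
```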
Along a similar vein, I think that a rule such as
rule statement {
<ALIAS> <fitem> <fitem>
|<ALIAS> <global_variable> [<global_variable>|<back_reference>]
|<UNDEF> <undef_list>
|<statement2> [<IF> |<UNLESS> |<WHILE> |<UNTIL>] <expression_value>
|<statement2> <RESCUE> <statement>
|<BEGIN> \{ <compound_statement> \}
|<END> \{ <compound_statement> \}
|<command_call>
|<statement2>
}
may be quite a bit slower than the more direct
rule statement {
alias <fitem> <fitem>
|alias <global_variable> [<global_variable>|<back_reference>]
|undef <undef_list>
|<statement2> [if|unless|while|until] <expression_value>
|<statement2> rescue <statement>
|begin \{ <compound_statement> \}
|end \{ <compound_statement> \}
|<command_call>
|<statement2>
}
but I haven't tested this at all to know if the difference
in speed is significant. I do know that the regex engine will
have more optimization possibilities with the second form than
with the first. (If one stylistically prefers the keyword tokens
not appear as "barewords" in the rule, then <'alias'>, <'undef'>,
etc. work equally well for constant literals.)
It's also probably worthwhile to avoid backtracking and re-parsing
complex subrules such as <statement2> above. In the above, a plain
<statement2> w/o if/unless/while/until/rescue ends up being parsed
three separate times before the rule succeeds. Better might be:
rule statement {
|alias <fitem> <fitem>
|alias <global_variable> [<global_variable>|<back_reference>]
|undef <undef_list>
|begin \{ <compound_statement> \}
|end \{ <compound_statement> \}
|<statement2> [ [if|unless|while|until] <expression_value>
| rescue <statement>
]?
|<command_call>
}
(In fact, looking at the grammar I'm not sure that <command_call>
is really needed, since <statement2> already covers that. But I'm
not a Ruby expert.)
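The re-parsing cost described above can be made concrete with a toy recursive-descent sketch (Python, not PGE; all names such as parse_statement2 are illustrative, not Parrot APIs):

```python
# Toy recursive-descent parser showing why an unfactored rule re-parses
# <statement2> on every failed alternative, while the factored form
# parses it exactly once.

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0
        self.stmt2_calls = 0  # how many times <statement2> gets (re)parsed

    def parse_statement2(self):
        self.stmt2_calls += 1
        if self.pos < len(self.tokens) and self.tokens[self.pos] == "EXPR":
            self.pos += 1
            return "stmt2"
        raise SyntaxError("expected statement2")

    def attempt(self, alternative):
        saved = self.pos
        try:
            return alternative()
        except SyntaxError:
            self.pos = saved  # backtrack to where the alternative started
            return None

    # Unfactored, like the original rule: each alternative starts over.
    def statement_unfactored(self):
        for alt in (self._with_modifier, self._with_rescue, self._plain):
            result = self.attempt(alt)
            if result is not None:
                return result
        raise SyntaxError("no alternative matched")

    def _with_modifier(self):
        result = self.parse_statement2()
        if self.pos < len(self.tokens) and \
           self.tokens[self.pos] in ("if", "unless", "while", "until"):
            self.pos += 1
            return result + "+modifier"
        raise SyntaxError("no statement modifier")

    def _with_rescue(self):
        result = self.parse_statement2()
        if self.pos < len(self.tokens) and self.tokens[self.pos] == "rescue":
            self.pos += 1
            return result + "+rescue"
        raise SyntaxError("no rescue")

    def _plain(self):
        return self.parse_statement2()

    # Factored, like the rewritten rule: parse <statement2> once, then
    # optionally consume a trailing modifier.
    def statement_factored(self):
        result = self.parse_statement2()
        if self.pos < len(self.tokens) and \
           self.tokens[self.pos] in ("if", "unless", "while", "until", "rescue"):
            self.pos += 1
            return result + "+modifier"
        return result
```

For a plain statement like `["EXPR", ";"]`, statement_unfactored parses statement2 three times (stmt2_calls == 3) before the last alternative succeeds, while statement_factored parses it once.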
Anyway, let me know if any of the above suggestions make sense
or provide any form of improvement in parsing speed.
Thanks!
Pm
Nicolas