Well, aside from what is actually *in* Perl 6 currently, there are a
number of interesting side projects, which may or may not get
included in the final language design. Such as:
On Oct 18, 2005, at 3:40 AM, Uri Guttman wrote:
> the new OO design (stole the best from the rest and perlized it)
The current object model prototype (not yet totally approved) is a
self-bootstrapping meta-circular object model which is heavily
influenced by the CLOS Meta Object Protocol and Smalltalk. It
includes several features which are not found in any of your
traditional OO languages, such as Roles (which are a descendant of
Traits; I could cite some papers here if this would help).
You might also want to include Luke Palmer's current work on
Attribute Grammars and the Theory model. Both of these are very
Haskell-ish. I will let Luke speak about those.
Yuval Kogman's recent Exceptuation proposal (the marriage of
exceptions and continuations; these are exceptions you can *actually*
recover from) is very similar to the condition system of Common Lisp.
There has also been much work on a type inferencing system for Perl 6
(type *checking* is just old-skool, all the cool kids infer).
The fact Pugs is written in Haskell (a language only a professor
could love) might be a help.
There is also the new Perl 6 packaging system. It will be far more
complex than the Perl 5 one, since it will not only include the 3
part name (Foo-0.0.1-cpan:JRANDOM), but it will probably need to
support separate compilation units as well. This is borrowed from
everywhere, but specifically we have looked at Fortress (the new
scientific programming language from Sun) and the new .NET assembly
model.
That's about all I can think of now.
Stevan
1) You can write your program in any combination of programming styles
and languages, as you see fit. Thus, you can use your OO library
written in Ruby, that really fast C routine, and your Perl code, all
in one place.
2) There are a large number of operators that support list
manipulation, such as the zipper, the ==> and <== operators, reduce,
and others I can't remember in addition to P5's map, grep, and sort.
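For readers who don't follow the design documents, here are rough Python analogues of those operations (illustrative only; the Perl 6 spellings and semantics differ, and the feed operators pipe stages rather than nest calls):

```python
from functools import reduce

a = [1, 2, 3]
b = ['x', 'y', 'z']

# the "zipper" interleaves two lists pairwise
pairs = list(zip(a, b))                        # [(1, 'x'), (2, 'y'), (3, 'z')]

# reduce folds a list down to a single value
total = reduce(lambda acc, n: acc + n, a, 0)   # 6

# the ==> / <== feed operators chain a list through stages;
# in Python the stages are just nested calls
result = sorted(map(lambda n: n * 2, filter(lambda n: n % 2, a)))  # [2, 6]
```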
3) Macros. Nuff said.
4) More declarative syntax. This is more of a handwavy point, but the syntax
feels (to me) as if it's more declarative than before. For example,
for @x -> $x { ... }
for @x -> $x, $y { ... }
That reads like a math proof. "For all X, do such-and-such".
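Note that the second form binds two elements per iteration. A rough analogue in Python (illustrative only; the pointy block has no direct Python spelling):

```python
xs = [1, 2, 3, 4]

# for @x -> $x { ... }     : one element per iteration
singles = [x for x in xs]

# for @x -> $x, $y { ... } : two elements per iteration
it = iter(xs)
pairs = list(zip(it, it))   # both zip arms pull from the same iterator
print(pairs)                # [(1, 2), (3, 4)]
```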
Rob
Not quite. Lispish macros, that is, macros that let you look at what
you're expanding.
> 4) More declarative syntax. This is more of a handwavy, but the syntax
> feels (to me) as if it's more declarative than before. For example,
>
> for @x -> $x { ... }
> for @x -> $x, $y { ... }
>
> That reads like a math proof. "For all X, do such-and-such".
Uh huh. Sure it does.
(Were you referring to the fact that @x and $x are different things,
but really refer to the same thing: a collection and a particular
object in the collection?)
Luke
Which one? CS departments vary in what they consider cool. When I
talked to the attribute grammar guy here at CU, he snickered when he
found out we were writing Perl 6 in Haskell, dismissing Haskell as "a
mathematical exercise".
It would be wise to do some research and figure out what this CS
department considers cool before trying to techword them.
Luke
> On 10/18/05, Rob Kinyon <rob.k...@gmail.com> wrote:
>
>> 3) Macros. Nuff said.
>>
>
> Not quite. Lispish macros, that is, macros that let you look at what
> you're expanding.
To further expand on this, they will be AST-manipulating macros (LISP
style) rather than text-replacing macros (C style).
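To make the distinction concrete, here is the classic squaring example, sketched in Python only because its ast module makes the parse-tree side easy to show (the names here are inventions for the sketch, not Perl 6 API):

```python
import ast

# C-style text macro: "#define SQUARE(x) x*x" substitutes text blindly.
# SQUARE(1 + 2) becomes the text "1 + 2*1 + 2", which evaluates to 5,
# not 9 -- the expansion never saw the expression's structure.
text_expanded = "1 + 2*1 + 2"
assert eval(text_expanded) == 5

# Lisp-style AST macro: the macro receives a parse-tree node and returns
# a new one, so the argument stays a single unit and precedence is safe.
def square_macro(arg: ast.expr) -> ast.expr:
    return ast.BinOp(left=arg, op=ast.Mult(), right=arg)

arg = ast.parse("1 + 2", mode="eval").body          # a single BinOp node
tree = ast.Expression(body=square_macro(arg))
ast.fix_missing_locations(tree)
ast_expanded = eval(compile(tree, "<macro>", "eval"))
assert ast_expanded == 9                            # (1 + 2) * (1 + 2)
```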
Stevan
SL> On Oct 18, 2005, at 1:45 PM, Luke Palmer wrote:
>> On 10/18/05, Rob Kinyon <rob.k...@gmail.com> wrote:
>>
>>> 3) Macros. Nuff said.
>>>
>>
>> Not quite. Lispish macros, that is, macros that let you look at what
>> you're expanding.
SL> To further expand on this, they will be AST-manipulating macros (LISP
SL> style) rather than text-replacing macros (C style).
my impression is that both styles are supported, as you can return either
text or an AST (a parse tree) from a macro.
uri
--
Uri Guttman ------ u...@stemsystems.com -------- http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs ---------------------------- http://jobs.perl.org
That sounds really ... inefficient. For that to work, you'd have to
have seen the macro definition earlier in the parse cycle, then
encounter the call to the macro (thus consuming the token), unshift
the tokens of the macro into the parse queue (thus necessitating a
parse queue), then reparse the whole block because of potential
bracing issues.
Text-substitution macros would have to be handled in an earlier pass,
but the macro might be referencing items from BEGIN blocks already
seen ...
It's called a preprocessor in C for a reason.
Rob
Of course. Like:
sub foo(&) {...}
In Perl 5, where
foo { print "hello" }
only parses correctly if the compiler has seen the definition of the sub before the call.
> then encounter the call to the macro (thus consuming the token),
> unshift the tokens of the macro into the parse queue (thus
> necessitating a parse queue)
Uh, which we have.
> , then reparse the whole block because of potential bracing issues.
No, you wouldn't have to reparse the whole block. You can suspend the
parser while you insert the new text. If you fear lookahead problems
(where we only use lookahead at all in the operator-precedence
sandwich), you just do macro expansion whenever you shift the
lookahead. That is, you run the macro expander in parallel
(coroutineish parallel, not threadish parallel), and always keep it
one step ahead of the parser.
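A toy sketch of that "expander runs one step ahead of the parser" idea, using a Python generator for the coroutine-ish part (the macro table and token shapes are invented for illustration):

```python
# hypothetical macro table: a macro name maps to its replacement tokens
MACROS = {"HELLO": ["print", "(", '"hi"', ")"]}

def expand(tokens):
    """Yield tokens, splicing in a macro's expansion where its name appears.
    Because this is a generator, it runs interleaved with the consumer,
    always one token ahead of whatever shifts from it."""
    for tok in tokens:
        if tok in MACROS:
            yield from expand(MACROS[tok])   # expansions may contain macros
        else:
            yield tok

def parse(tokens):
    """Stand-in parser: by construction it only sees post-expansion tokens."""
    return list(tokens)

program = ["HELLO", ";"]
print(parse(expand(program)))   # ['print', '(', '"hi"', ')', ';']
```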
> Text-substitution macros would have to be handled in an earlier pass,
I still don't see evidence for this. Or maybe I do, but I don't see
any reason that the preprocessing pass must finish before the parsing
begins.
Luke
Mixing C and Perl ...
my $foo;
BEGIN { $foo = '}'; }
#define OPEN {
#define CLOSE $foo
void main (void)
OPEN
BEGIN { $foo = '{'; }
printf( "I don't work\n" );
CLOSE
How does that work out? The issue is that you can interrupt the
parsing process with executable code that can affect the parsing.
That's a good thing. It doesn't work so well with text-substitution,
though. Hence, I would argue it should be disallowed.
Rob
>> > Text-substitution macros would have to be handled in an earlier pass,
>>
>> I still don't see evidence for this. Or maybe I do, but I don't see
>> any reason that the preprocessing pass must finish before the parsing
>> begins.
RK> Mixing C and Perl ...
RK> my $foo;
RK> BEGIN { $foo = '}'; }
RK> #define OPEN {
RK> #define CLOSE $foo
RK> void main (void)
RK> OPEN
RK> BEGIN { $foo = '{'; }
RK> printf( "I don't work\n" );
RK> CLOSE
RK> How does that work out? The issue is that you can interrupt the
RK> parsing process with executable code that can affect the parsing.
RK> That's a good thing. It doesn't work so well with text-substitution,
RK> though. Hence, I would argue it should be disallowed.
from S06:
Macros (keyword: macro) are routines whose calls execute as soon
as they are parsed (i.e. at compile-time). Macros may return
another source code string or a parse-tree.
i see uses for text macros. sure they can trip you up but that is going
to be true about AST macros as well. macros are inherently trickier than
plain coding as you are dealing with another level at the same time. so
the author of your bad example should learn how to do that correctly and
not expect perfect DWIMMERY with an advanced technology.
and that excerpt also means that p6 macros are not done in a
preprocessing pass but at normal compile time as soon as the macro call
is fully parsed. so a text returning macro would run and the compiler
will replace the text of the parsed macro call and start reparsing with
the returned text. there may be some juggling of the main parse tree to
deal with this but it can be done without going too insane. :)
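that splice-and-reparse behavior can be sketched in a few lines of Python (the "name!()" call syntax and the macro itself are assumptions of the sketch, not S06 syntax):

```python
import re

# hypothetical compile-time macro that returns source text
macros = {"LINEMARK": lambda: 'say("line")'}

CALL = re.compile(r"(\w+)!\(\)")   # assumed macro-call syntax for the sketch

def expand_source(src: str) -> str:
    """Run each macro as soon as its call is scanned, splice the returned
    text over the call, and resume scanning from the splice point so the
    returned text is itself reparsed (and may contain further macro calls)."""
    pos = 0
    while (m := CALL.search(src, pos)):
        name = m.group(1)
        if name in macros:
            src = src[:m.start()] + macros[name]() + src[m.end():]
            pos = m.start()              # reparse starting at the spliced text
        else:
            pos = m.end()                # not a macro; keep scanning
    return src

print(expand_source('LINEMARK!(); other()'))   # say("line"); other()
```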