Plethora of operators


Rob Kinyon

May 4, 2005, 10:58:22 AM
to perl6-l...@perl.org
I just started following the list again after a few months (though I
have been skimming the bi-weekly summaries) and I'm a little alarmed
at what seems to be a trend towards operatorizing everything in sight
and putting those operators in the core.

My understanding of P6 after reading the AES was that the core was
supposed to be very small, robust, and reusable. The biggest feature
was supposed to be the PGE, from which you could define anything you
wanted as an add-on module. Kinda like Lisp, but Perlishly dwimming.

What happened to the idea of having modules that define syntax? Did I
miss a change in focus over the past few months?

Rob

Larry Wall

May 4, 2005, 12:55:10 PM
to perl6-l...@perl.org
On Wed, May 04, 2005 at 10:58:22AM -0400, Rob Kinyon wrote:
: I just started following the list again after a few months (though I
: have been skimming the bi-weekly summaries) and I'm a little alarmed
: at what seems to be a trend towards operatorizing everything in sight
: and putting those operators in the core.

I think your concern is overblown here. Yes, it's a slippery slope.
No, we are not sliding all the way down it. And it's just as easy
to slide up this slope as well as down, and end up with Lisp rather
than APL. Neither extreme is healthy.

Are there any particular other operators you're worried about?
I think the current design does a pretty good job of factoring out the
metaoperators so that the actual set of underlying basic operators *is*
relatively small. Yes, you can now say something like

$x = [»+^=«] @foo;

but the basic operator there is just ^, with a + modifier to indicate
numeric XOR, = to indicate an assignment operator, »« to indicate
explicit parallelism, and now [] to indicate reduction, all in a nice
visual pill so you can think of it as a single operator when you want
to. But I didn't even think about adding a reduction metaoperator till
I wanted it for something else in the design that had been bugging me
for a long, long time. Almost nothing in the design of Perl 6 is there
for a single purpose.
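Setting the hyper (»«) and assignment (=) modifiers aside, the reduction part of that pill can be sketched outside Perl; here is a minimal Python analogue, with functools.reduce standing in for the [] metaoperator (an illustration of the folding idea, not the Perl 6 implementation):

```python
from functools import reduce

# [+^] @values: fold numeric XOR across the list, left to right,
# just as the [] metaoperator folds its basic operator over a list
values = [0b1010, 0b0110, 0b0001]
folded = reduce(lambda a, b: a ^ b, values)
print(bin(folded))  # 0b1101
```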

: My understanding of P6 after reading the AES was that the core was
: supposed to be very small, robust, and reusable.

Eh, I don't think that was ever a major goal after the RFCs came out.
After the RFCs it soon became apparent that we had to figure out at
least one obvious way to do most of these things, or people would just
reinvent several incompatible ways. The main design goal is to find
an impedance match between the problem space and the solution space,
and that means some kind of middling approach to complexity.

: The biggest feature
: was supposed to be the PGE, from which you could define anything you
: wanted as an add-on module. Kinda like Lisp, but Perlishly dwimming.

Again, even with rules we're aiming at a combination of simplicity
and power. We have not hesitated to add notation where it clarifies.

: What happened to the idea of having modules that define syntax? Did I
: miss a change in focus over the past few months?

Nope. You can still warp syntax as much as you like. But we'd like
to discourage people from doing that by default merely because the
core neglects to provide a standard default solution.

That was the big problem with Perl 5's OO design. It was too minimal.
It didn't specify an obvious way to do it, so everybody rolled their own
in an incompatible fashion. We're not going so far as Python philosophy,
where if there's an obvious way to do it, you disallow any other solutions.
But if you oversimplify the core, you force the complexity elsewhere.
It's just the Waterbed Theory of linguistic complexity. Push down
here, it goes up there.

In short, it's still the very same old "Easy things should be easy,
and hard things should be possible." It's just that with Perl 6, we're
rethinking what should be easy, and what should be merely possible.
Most things are nailed down by now to one side or the other, but
now and then something flips over to the other side. And last night
I decided that reduce should flip to "easy", especially since it's a
really easy metaoperator to explain to a newbie. (Much easier than
the +, ~, and ? bitop prefixes that people nonetheless seem to like,
for instance.)

Larry

Rob Kinyon

May 4, 2005, 1:23:01 PM
to perl6-l...@perl.org
> Are there any particular other operators you're worried about?
> I think the current design does a pretty good job of factoring out the
> metaoperators so that the actual set of underlying basic operators *is*
> relatively small. Yes, you can now say something like
>
> $x = [»+^=«] @foo;
>
> but the basic operator there is just ^, with a + modifier to indicate
> numeric XOR, = to indicate an assignment operator, »« to indicate
> explicit parallelism, and now [] to indicate reduction, all in a nice
> visual pill so you can think of it as a single operator when you want
> to. But I didn't even think about adding a reduction metaoperator till
> I wanted it for something else in the design that had been bugging me
> for a long, long time. Almost nothing in the design of Perl 6 is there
> for a single purpose.

"The basic operator is ^." ..... I've been programming for a while,
following P6 pretty heavily, and I would not have been able to parse
that out of the 6 characters.

My basic concern is that [»+^=«] looks like line-noise. Yes, I can
parse it out, given time and understanding of the various operators,
but that's starting to smack of golf in production code, even though
it's not.

> : What happened to the idea of having modules that define syntax? Did I
> : miss a change in focus over the past few months?
>
> Nope. You can still warp syntax as much as you like. But we'd like
> to discourage people from doing that by default merely because the
> core neglects to provide a standard default solution.
>
> That was the big problem with Perl 5's OO design. It was too minimal.
> It didn't specify an obvious way to do it, so everybody rolled their own
> in an incompatible fashion.

No-one came up with an incompatible way to do CGI or to handle
filenames, yet neither is within the language. If p5p had provided a
Class::* module within the core, that would have been the standard.
Now, this wouldn't have prevented others from providing alternatives
(such as CGI::Simple for CGI), but there would have been something
people could reach for if they needed it that would be installed. (I
actually think this was a mistake p5p made.)

Operators like [] and >><< can be provided for in a standard way, yet
not be in the core language. I'm not arguing against the operator
itself - I like [] as a reduce() circumfix operator modifier and wish
I had a way of putting it into Perl5. But, I would love to see it as:

use operator::reduce;
use keyword::flarg;

That way, you have the ability to document the usage of some of the
"weirder" operators.

Here's the base concern - I program Perl for a living as a contractor.
Every site I go to, I'm told "Don't use those -weird- features". The
features they're referring to? map/grep, closures, CODErefs, symbol
table manipulation ... the standard basics.

If the feature was in a module, kinda like a source filter (but not as
sucky), then the feature is more palatable because everyone has a
chance to agree that it should be added. It's stupid, but it's easier
to get everyone to agree to add the use of a module than to use a
builtin feature. I don't understand why, but that's my experience
across 4 states. *shrugs*

*thinks for a minute*

[»+^=«] reminds me of a P5 regex that has a comment saying "This is
black magic. Don't touch!". --That's-- my complaint.

Rob

Adam Kennedy

May 14, 2005, 12:19:22 AM
to perl6-l...@perl.org
> [»+^=«] reminds me of a P5 regex that has a comment saying "This is
> black magic. Don't touch!". --That's-- my complaint.

Indeed. There's a time and a place for that sort of black magic, and
it's usually about once per 5,000 lines of code, and so deep and well
wrapped in comments and unit tests that nobody should have to touch it.
Ever.

If using something like [>>+^=<<] (and I'll bet a LOT of people are
going to have to type it the long way) is going to involve 5-6 lines of
comments just to explain what is going on, what's the point?

I look at...

>>but the basic operator there is just ^, with a + modifier to indicate
>>numeric XOR, = to indicate an assignment operator, »« to indicate
>>explicit parallelism, and now [] to indicate reduction

...and I just mind-wipe... so it's doing WHAT exactly? I've read it 5
times and I still have no idea. And reduction? I write 25,000+ lines of
Perl a year, and if you are talking about something like
List::Util::reduce, I think I've used it maybe twice?

That sort of "pill" is the sort of thing I'd assumed I'd start seeing
once I wrote:

use physics;

Which, by the way I'm completely positive about. Loading in special
grammars for particular classes of programmers is just an amazing idea.

But really, in what circumstances could someone possibly need reduction
so badly it needs to be in the core?

Adam K

Damian Conway

May 14, 2005, 8:56:29 AM
to perl6-l...@perl.org
Adam Kennedy wrote:

> And reduction? I write 25,000+ lines of
> Perl a year, and if you are talking about something like
> List::Util::reduce, I think I've used it maybe twice?

Which proves what? That you don't (yet) write the sort of code that benefits
from reductions? That you don't (yet) think in terms of reductions, so you
miss the many opportunities to use this handy metaoperation? Or perhaps just
that reductions aren't (yet) easy enough to use that they naturally recommend
themselves to you?


> But really, in what circumstances could someone possibly need reduction
> so badly it needs to be in the core?

Well, I find myself importing it from List::Util quite frequently.

But that's beside the point. It doesn't need to be in the core because
everyone will use it every day. It needs to be in the core because it needs to
be fast and efficient and accessible to the optimizer.

And it ought to be in the core because it's a fundamental computing operation;
one that many of us who write less code than you--but perhaps under a greater
variety of paradigms--*do* use on a regular basis.

Here are a few of the things I'll be using reductions for in Perl 6...

1. To add things up:

$sum = [+] @amounts;

2. To calculate the probability that I'll need to use a reduction today:

$final_prob = [*] @independent_probs;

3. To drill down a hierarchical data structure, following the path
specified by a list of keys:

$leaf_value = [.{}] %hash, @keys;

4. To compute RMS values:

$RMS = sqrt [+] @samples »** 2

5. To take the dot product of two vectors:

$dot_prod = [+] @vec1 »*« @vec2;

6. As a cleaner form of C<< join.assuming(:sep<>) >>:

$joined = [~] @strings;

7. For parity checking a bitset:

$parity = [?^] @bits;

8. To verify the monotonicity of a sequence:

$is_monotonic = [<] @numbers;

9. To retrieve the first defined value in a list:

$first_def = [//] @list;

10. To apply a series of properties to a value:

$propped = [but] $value, @properties;
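Several of these read naturally as ordinary folds; a Python sketch of items 1, 6, and 9, with functools.reduce standing in for [] (an analogue for illustration, not the Perl 6 semantics):

```python
from functools import reduce

# 1. [+] @amounts — add things up
amounts = [1.5, 2.25, 3.0]
total = reduce(lambda a, b: a + b, amounts)

# 6. [~] @strings — concatenate
strings = ["foo", "bar", "baz"]
joined = reduce(lambda a, b: a + b, strings)

# 9. [//] @list — first defined value
items = [None, None, 42, 7]
first_def = reduce(lambda a, b: a if a is not None else b, items)

print(total, joined, first_def)  # 6.75 foobarbaz 42
```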


Personally I think a metaoperator with that many uses is more than Swiss-Army
enough to be in the core of Perl 6.

Damian

Juerd

May 14, 2005, 9:46:40 AM
to Damian Conway, perl6-l...@perl.org
Damian Conway skribis 2005-05-14 22:56 (+1000):

> $leaf_value = [.{}] %hash, @keys;
> $propped = [but] $value, @properties;

With the precedence of [op] being that of a normal list op, the above
are a problem. Perhaps ; or multiple <== can solve this?


Juerd
--
http://convolution.nl/maak_juerd_blij.html
http://convolution.nl/make_juerd_happy.html
http://convolution.nl/gajigu_juerd_n.html

Herbert Snorrason

May 14, 2005, 9:18:22 AM
to Damian Conway, perl6-l...@perl.org
On 14/05/05, Damian Conway <dam...@conway.org> wrote:
> Here are a few of the things I'll be using reductions for in Perl 6...
>
> 1. To add things up:
>
> $sum = [+] @amounts;
>
> 2. To calculate the probability that I'll need to use a reduction today:
>
> $final_prob = [*] @independent_probs;
>
> 3. To drill down a hierarchical data structure, following the path
> specified by a list of keys:
>
> $leaf_value = [.{}] %hash, @keys;
>
> 4. To compute RMS values:
>
> $RMS = sqrt [+] @samples »** 2
>
> 5. To take the dot product of two vectors:
>
> $dot_prod = [+] @vec1 »*« @vec2;
>
> 6. As a cleaner form of C<< join.assuming(:sep<>) >>:
>
> $joined = [~] @strings;
>
> 7. For parity checking a bitset:
>
> $parity = [?^] @bits;
>
> 8. To verify the monotonicity of a sequence:
>
> $is_monotonic = [<] @numbers;
>
> 9. To retrieve the first defined value in a list:
>
> $first_def = [//] @list;
>
> 10. To apply a series of properties to a value:
>
> $propped = [but] $value, @properties;
And I note that each of these is a 'simple' operator inside the
reduce. I think the complex example earlier in this thread ([»+^=«],
warranted by the context, but still extreme) may give the wrong
picture of the 'new' metaoperator. The point is that although it opens
marvellous new ways of obfuscation, it's also extremely handy when
used properly. And that's something shared by many of Perl's existing
features.

> Personally I think a metaoperator with that many uses is more than Swiss-Army
> enough to be in the core of Perl 6.

So now it's going from Swiss-army chainsaw to Swiss-army atomic fusion bomb?

--
Schwäche zeigen heißt verlieren;
härte heißt regieren.
- "Glas und Tränen", Megaherz

Eirik Berg Hanssen

May 14, 2005, 10:22:07 AM
to Juerd, Damian Conway, perl6-l...@perl.org
Juerd <ju...@convolution.nl> writes:

> Damian Conway skribis 2005-05-14 22:56 (+1000):
>> $leaf_value = [.{}] %hash, @keys;
>> $propped = [but] $value, @properties;
>
> With the precedence of [op] being that of a normal list op, the above
> are a problem. Perhaps ; or multiple <== can solve this?

I suppose the first must just make sure not to flatten the %hash:

$leaf_value = [.{}] \%hash, @keys; # %hash.{$key1}.{$key2} ...

But what's wrong with the second?

$propped = [but] $value, @properties; # $value but $prop1 but $prop2 ...
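The drill-down reading is just a left fold of subscripting over the key list; in Python terms, with nested dicts standing in for the hash-of-hashes (a sketch of the intent, not of Perl 6's postcircumfix handling):

```python
from functools import reduce

# %hash .{$key1} .{$key2} ... as a left fold of subscripting
tree = {"a": {"b": {"c": 42}}}
keys = ["a", "b", "c"]
leaf = reduce(lambda node, key: node[key], keys, tree)
print(leaf)  # 42
```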


Eirik
--
Rudolph is at _least_ as real as a Cantor set or an untried recipe.
-- Joshua W. Burton <jbu...@nwu.edu>
(<6enjs1$k...@news.acns.nwu.edu>)

Autrijus Tang

May 14, 2005, 10:53:38 AM
to Damian Conway, perl6-l...@perl.org
On Sat, May 14, 2005 at 10:56:29PM +1000, Damian Conway wrote:
> 8. To verify the monotonicity of a sequence:
>
> $is_monotonic = [<] @numbers;

Hey. Does this mean that the [] metaoperator folds with the
associativity of the operator inside it?

That is, if the operator inside is right-associative, it functions as
foldr; if the operator is left-associative, it functions as a foldl; and
if the operator is chain-associative like <, it assumes the special,
short-circuiting chained-folding semantic?

[>] 1, 2, 3; # 1 > 2 > 3 # 3 is not evaluated
[**] 1, 2, 3; # 1 ** (2 ** 3)
[+] 1, 2, 3; # (1 + 2) + 3

If so, should I send patches to S03, or is it already in the works?
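The foldl/foldr distinction above can be sketched in Python (function names are hypothetical; the short-circuiting chained form for operators like < is omitted):

```python
from functools import reduce

def foldl(op, xs):
    # left-associative: ((x1 op x2) op x3) ...   e.g. [+] 1, 2, 3
    return reduce(op, xs)

def foldr(op, xs):
    # right-associative: x1 op (x2 op (x3 ...))  e.g. [**] 1, 2, 3
    return reduce(lambda acc, x: op(x, acc), reversed(xs))

print(foldl(lambda a, b: a - b, [1, 2, 3]))   # (1 - 2) - 3 = -4
print(foldr(lambda a, b: a ** b, [2, 2, 3]))  # 2 ** (2 ** 3) = 256
```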

Finally, what does this mean?

say [x];

Is it a repeating metaoperator on an empty list, or a single-element
array reference that contains the return value of calling &x()?

Thanks,
/Autrijus/

Juerd

May 14, 2005, 10:56:02 AM
to Eirik Berg Hanssen, Juerd, Damian Conway, perl6-l...@perl.org
Eirik Berg Hanssen skribis 2005-05-14 16:22 (+0200):

> > With the precedence of [op] being that of a normal list op, the above
> > are a problem. Perhaps ; or multiple <== can solve this?
> I suppose the first must just make sure not to flatten the %hash:
> $leaf_value = [.{}] \%hash, @keys; # %hash .{$key1} . {$key2} ...

That's really weird for a list op, I think.

> But what's wrong with the second?
> $propped = [but] $value, @properties; # $value but $prop1 but $prop2 ...

Nothing. I wasn't thinking clearly.

Uri Guttman

May 14, 2005, 10:59:14 AM
to Damian Conway, perl6-l...@perl.org
>>>>> "DC" == Damian Conway <dam...@conway.org> writes:

DC> Here are a few of the things I'll be using reductions for in Perl 6...

DC> 3. To drill down a hierarchical data structure, following the path
DC> specified by a list of keys:

DC> $leaf_value = [.{}] %hash, @keys;

so that would be expanded how? i read it as:

%hash{@keys[0]}{@keys[1]} ...

but how is . being used? how does the {} wrap each key instead of being
between each one? also can it be used as an lvalue (that would be useful
too in setting a hash from a list of keys)?

DC> 4. To compute RMS values:

DC> $RMS = sqrt [+] @samples »** 2

are those RMS values under the GPL? :)

DC> 6. As a cleaner form of C<< join.assuming(:sep<>) >>:

DC> $joined = [~] @strings;

i like that. can that be done in a "" string (without $() or whatever
the expression interpolator is now)?

DC> 9. To retrieve the first defined value in a list:

DC> $first_def = [//] @list;

that is a good one for initializing stuff. it should become an idiom
like ||= is now.

for those single ops it reads pretty well. if you choose to use it with
multiple levels of hyper/reduce, it should be well commented.

uri

--
Uri Guttman ------ u...@stemsystems.com -------- http://www.stemsystems.com
--Perl Consulting, Stem Development, Systems Architecture, Design and Coding-
Search or Offer Perl Jobs ---------------------------- http://jobs.perl.org

Jonathan Scott Duff

May 14, 2005, 10:49:34 AM
to Damian Conway, perl6-l...@perl.org
On Sat, May 14, 2005 at 10:56:29PM +1000, Damian Conway wrote:
> 3. To drill down a hierarchical data structure, following the path
> specified by a list of keys:
>
> $leaf_value = [.{}] %hash, @keys;

I think this one needs to be written as:

$leaf_value = [.{}] \%hash, @keys;

But, assuming the given syntax does the right thing, the description
reads as though this generates something akin to:

$leaf = %hash.{$k1}.{$k2}.{$k3}...{$kN}

Does this really work?

if $sum = [+] 1,2,3,4
is the same as $sum = 1 + 2 + 3 + 4

Then surely $leaf = [.{}] %hash, $k1, $k2, $k3
is the same as $leaf = %hash .{} $k1 .{} $k2 .{} $k3

And %hash .{} $key doesn't make sense to me. What am I missing? It seems
to me that would have to be written as

$leaf_value = [$^a.{$^b}] %hash, @keys;

in order to work properly (still assuming the %hash doesn't need the \
that I think it does)

> Personally I think a metaoperator with that many uses is more than
> Swiss-Army enough to be in the core of Perl 6.

Indeed! :-)

-Scott
--
Jonathan Scott Duff
du...@pobox.com

Larry Wall

May 14, 2005, 11:29:14 AM
to perl6-l...@perl.org
On Sat, May 14, 2005 at 10:53:38PM +0800, Autrijus Tang wrote:

: On Sat, May 14, 2005 at 10:56:29PM +1000, Damian Conway wrote:
: > 8. To verify the monotonicity of a sequence:
: >
: > $is_monotonic = [<] @numbers;
:
: Hey. Does this mean that the [] metaoperator folds with the
: associativity of the operator inside it?

Yes. It's as if there is a long cat, only without the cat.

: That is, if the operator inside is right-associative, it functions as
: foldr; if the operator is left-associative, it functions as a foldl; and
: if the operator is chain-associative like <, it assumes the special,
: short-circuiting chained-folding semantic?
:
: [>] 1, 2, 3; # 1 > 2 > 3 # 3 is not evaluated
: [**] 1, 2, 3; # 1 ** (2 ** 3)
: [+] 1, 2, 3; # (1 + 2) + 3
:
: If so, should I send patches to S03, or is it already in the works?

Feel free, unless someone else volunteers.

: Finally, what does this mean?
:
: say [x];
:
: Is it a repeating metaoperator on an empty list, or a single-element
: array reference that contains the return value of calling &x()?

Always the first. [x] doesn't have to do lookahead.

Larry

Juerd

May 14, 2005, 11:26:53 AM
to perl6-l...@perl.org
Juerd skribis 2005-05-14 17:23 (+0200):
> Markus Laire skribis 2005-05-14 18:07 (+0300):
> > [>>+^=<<] (@a, @b, @c)
> These arrays flatten first (otherwise [+] @foo could never calculate the
> sum of the elements), so imagine that you have

$foo, $bar, $baz, $quux, $xyzzy

to let >>+^=<< operate on.

Juerd

May 14, 2005, 11:23:37 AM
to Markus Laire, perl6-l...@perl.org
Markus Laire skribis 2005-05-14 18:07 (+0300):
> [>>+^=<<] (@a, @b, @c)

These arrays flatten first (otherwise [+] @foo could never calculate the
sum of the elements), so imagine that you have

Markus Laire

May 14, 2005, 11:07:09 AM
to perl6-l...@perl.org, Adam Kennedy
Adam Kennedy kirjoitti:

>> [»+^=«] reminds me of a P5 regex that has a comment saying "This is
>> black magic. Don't touch!". --That's-- my complaint.
>
> I look at...
>
> >>but the basic operator there is just ^, with a + modifier to indicate
> >>numeric XOR, = to indicate an assignment operator, »« to indicate
> >>explicit parallelism, and now [] to indicate reduction
>
> ...and I just mind-wipe... so it's doing WHAT exactly? I've read it 5
> times and I still have no idea. And reduction? I write 25,000+ lines of
> Perl a year, and if you are talking about something like
> List::Util::reduce, I think I've used it maybe twice?

Just trying to guess:
(Here @a, @b, @c all have same length for simplicity)

[>>+^=<<] (@a, @b, @c)

# reduction: place the op between each item in the given list

@a >>+^=<< @b >>+^=<< @c

# hyper-op: apply op in parallel for items in lists

for @a Y @b Y @c -> $a is rw, $b is rw, $c {
    $a +^= $b +^= $c
}

# and finally (+^= is right-associative)

for @a Y @b Y @c -> $a is rw, $b is rw, $c {
    $b = $b +^ $c;
    $a = $a +^ $b;
}


I'm not too familiar with xor, so here's an easier example with plain +=

my @a = (1,2,3);
my @b = (10,20,30);
my @c = (100,200,300);

[>>+=<<] (@a, @b, @c);

# i.e. @a >>+=<< @b >>+=<< @c

# now @c = (100, 200, 300)
# @b = (110, 220, 330)
# @a = (111, 222, 333)
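Markus's arithmetic can be checked directly; a Python sketch of the right-associative hyper-assignment expansion (list comprehensions standing in for the elementwise »+«):

```python
a = [1, 2, 3]
b = [10, 20, 30]
c = [100, 200, 300]

# @a >>+=<< @b >>+=<< @c, right-associative:
# first @b gets @b >>+<< @c, then @a gets @a >>+<< @b
b = [x + y for x, y in zip(b, c)]
a = [x + y for x, y in zip(a, b)]

print(c)  # [100, 200, 300]
print(b)  # [110, 220, 330]
print(a)  # [111, 222, 333]
```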


--
Markus Laire
<Jam. 1:5-6>

Juerd

May 14, 2005, 11:05:10 AM
to Jonathan Scott Duff, perl6-l...@perl.org
Jonathan Scott Duff skribis 2005-05-14 9:49 (-0500):

> Then surely $leaf = [.{}] %hash, $k1, $k2, $k3
> is the same as $leaf = %hash .{} $k1 .{} $k2 .{} $k3

Then perhaps the easy way out is to make .{} $key and .[] $index valid
syntax.

Or perhaps [] can play the role of infix list operator, to support
postcircumfixes:

my $leaf = %hash [.{}] @keys

> $leaf_value = [$^a.{$^b}] %hash, @keys;

Once arbitrary expressions are valid in [], its purpose is lost as a
meta-operator. You can write the above with "normal" reduce:

my $leaf = reduce { $^a.{$^b} }, \%hash, @keys;

Since writing that line, I understand how adding a \ can make Damian's
example work (provided that postcircumfix operators are supported in
that way). I wasn't realising that %hash.{$k1} would of course be
another hashref. Somehow I was thinking about a string.

Jonathan Scott Duff

May 14, 2005, 11:55:43 AM
to Juerd, perl6-l...@perl.org
On Sat, May 14, 2005 at 05:05:10PM +0200, Juerd wrote:
> Jonathan Scott Duff skribis 2005-05-14 9:49 (-0500):
> > Then surely $leaf = [.{}] %hash, $k1, $k2, $k3
> > is the same as $leaf = %hash .{} $k1 .{} $k2 .{} $k3
>
> Then perhaps the easy way out is to make .{} $key and .[] $index valid
> syntax.

Not easy on my eyes or brain :)

> > $leaf_value = [$^a.{$^b}] %hash, @keys;
>
> Once arbitrary expressions are valid in [], its purpose is lost as a
> meta-operator. You can write the above with "normal" reduce:

Oh I agree. I was just trying to make sense of [.{}] in understandable
terms because I currently just don't understand it :-)

But perhaps the reduce operator is some of that sufficiently advanced
technology that "knows" how the operator it wraps is slotted and does
something appropriate.

Also, does the reduction operator have the same magic as its alphabetic
twin such that it can pull N things at a time from the list for
operators that require N operands?

Jonathan Scott Duff

May 14, 2005, 12:33:15 PM
to perl6-l...@perl.org, Juerd
On Sat, May 14, 2005 at 09:20:21AM -0700, Larry Wall wrote:
> On Sat, May 14, 2005 at 10:55:43AM -0500, Jonathan Scott Duff wrote:
> : But perhaps the reduce operator is some of that sufficiently advanced
> : technology that "knows" how the operator it wraps is slotted and does
> : something appropriate.
>
> Possibly. Or we just define infix .{}. and .[]. variants, or some such.

It's funny what a big difference that extra character makes. As much
as I disliked Juerd's idea to make infix .{} work, I wouldn't mind an
infix .{}. operator.

Juerd

May 14, 2005, 12:41:35 PM
to Larry Wall, perl6-l...@perl.org
Larry Wall skribis 2005-05-14 9:20 (-0700):

> Possibly. Or we just define infix .{}. and .[]. variants, or some such.

The problem is that we already have @foo[] meaning the same as @foo, and
an always allowed . that also allows you to put whitespace around it.
This means that %foo.{}.$kv should really just be %foo.kv, if $kv eq
'kv'. I think this won't work well with two dots surrounding {}.
%foo.{}$kv OTOH is currently invalid syntax, so available for
assimilation. Ugly, yes, but I would never suggest actually using this
operator literally -- it's fine to just have it for reduce.

OTOH, reduce probably just needs to be smart enough to understand
postcircumfix. Perhaps whitespace helps, [{ }], in parallel with
&postcircumfix:<{ }>, to avoid a conflict with an infix {}.

Juerd

May 14, 2005, 12:19:01 PM
to perl6-l...@perl.org
Larry Wall skribis 2005-05-14 8:29 (-0700):

> : say [x];
> : Is it a repeating metaoperator on an empty list, or a single-element
> : array reference that contains the return value of calling &x()?
> Always the first. [x] doesn't have to do lookahead.

Does this mean that [x] is just an interesting way to create an empty
list? And undef in scalar context?

Very neat for obfuscation: return [%]; and let people wonder why the
heck that [%] is there :)

Larry Wall

May 14, 2005, 12:20:21 PM
to perl6-l...@perl.org, Juerd
On Sat, May 14, 2005 at 10:55:43AM -0500, Jonathan Scott Duff wrote:
: But perhaps the reduce operator is some of that sufficiently advanced
: technology that "knows" how the operator it wraps is slotted and does
: something appropriate.

Possibly. Or we just define infix .{}. and .[]. variants, or some such.

: Also, does the reduction operator have the same magic as its alphabetic
: twin such that it can pull N things at a time from the list for
: operators that require N operands?

No. [x] tokens are simple string matching. They have no internal
structure, or we'll not be able to tell them from array composers.

Larry

Larry Wall

May 14, 2005, 12:45:43 PM
to perl6-l...@perl.org
On Sat, May 14, 2005 at 06:41:35PM +0200, Juerd wrote:
: Larry Wall skribis 2005-05-14 9:20 (-0700):

: > Possibly. Or we just define infix .{}. and .[]. variants, or some such.
:
: The problem is that we already have @foo[] meaning the same as @foo, and
: an always allowed . that also allows you to put whitespace around it.
: This means that %foo.{}.$kv should really just be %foo.kv, if $kv eq
: 'kv'. I think this won't work well with two dots surrounding {}.
: %foo.{}$kv OTOH is currently invalid syntax, so available for
: assimilation. Ugly, yes, but I would never suggest actually using this
: operator literally -- it's fine to just have it for reduce.

Good thing I said "or some such". :-)

: OTOH, reduce probably just needs to be smart enough to understand
: postcircumfix. Perhaps whitespace helps, [{ }], in parallel with
: &postcircumfix:<{ }>, to avoid a conflict with an infix {}.

Erm, I don't like tokens with spaces in the middle.

Actually, I think Damian's original formulation is sufficiently clear.

Larry

Rod Adams

May 14, 2005, 1:51:32 PM
to perl6-l...@perl.org
Larry Wall wrote:

I read it all, and thought that it should be written:

$leaf = %hash{[;] @keys};

Which is taken from what Larry was talking about when he first brought
up the [] metaop.

Unless, of course, there is some subtle difference between a 3-d hash
and a hash of hashes of hashes that invalidates this.

-- Rod Adams

Rod Adams

May 14, 2005, 2:36:22 PM
to perl6-l...@perl.org
Larry Wall wrote:

>On Sat, May 14, 2005 at 12:51:32PM -0500, Rod Adams wrote:
>
>: Unless, of course, there is some subtle difference between a 3-d hash
>: and a hash of hashes of hashes that invalidates this.
>
>No difference, I hope. The multidimensional notation is meant
>to extend to HoH and AoA transparently (as well as HoA and AoH).
>Since a variable's dimensionality is always declared (and a container
>object's dimensionality generated at "new" time), we shouldn't have
>to worry about whether to add dimensions or autovivify a reference.
>Either there's storage already allocated, or we autovivify.
>
>
Hmm. So if I say:

@a = [ { a => 1, b => 2}, { a => 3, b => 4 } ];

Can I then say:

$x = @a[1;'b'];

And get $x = 4?
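As a sanity check on that reading, the analogous structure in Python, with a list of dicts standing in for the array of hashes (a sketch of the intended AoH lookup, not of the ; subscript syntax):

```python
# @a = [ { a => 1, b => 2 }, { a => 3, b => 4 } ] as a list of dicts
a = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]

# @a[1;'b'] read as "index 1, then key 'b'"
x = a[1]["b"]
print(x)  # 4
```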

And that gets me wondering what the real difference between .[] and .{}
is. Best I can tell, it's only how they autovivify dimensions not yet
existent. Which would make the following an error:

@a[2;'a'] = 5;

Since it would attempt to autovivify @a[2] as an array, and 'a' is an
invalid array subscript (unless you do some voodoo to arrays). However,

@a{2;'a'} = 5;

Should work fine, since the first dim already has storage as an array,
and the second gets autovivified as a hash.

All of this changes if you explicitly declared the shape of @a somewhere.


Of course, it's entirely possible I've completely misconstrued your
above statement.

-- Rod Adams

Larry Wall

May 14, 2005, 2:09:40 PM
to perl6-l...@perl.org
On Sat, May 14, 2005 at 12:51:32PM -0500, Rod Adams wrote:

Yes, that should work.

: Unless, of course, there is some subtle difference between a 3-d hash
: and a hash of hashes of hashes that invalidates this.

No difference, I hope. The multidimensional notation is meant
to extend to HoH and AoA transparently (as well as HoA and AoH).
Since a variable's dimensionality is always declared (and a container
object's dimensionality generated at "new" time), we shouldn't have
to worry about whether to add dimensions or autovivify a reference.
Either there's storage already allocated, or we autovivify.

I guess the interesting question is to what extent we allow binding
of a hash or array container with the "wrong" dimensionality to a
variable with a declared shape. I can argue that one both ways,
and there are probably situations in which either approach is the
"right" one. A strict interpretation would allow the optimizer to
make more assumptions, while a loose interpretation would allow more
generic code. Pragma time? Or do we need an "is exact/inexact"
sort of thing? Of course, I suspect the PDL folks will want both
things to be true simultaneously... :-)

Larry

Larry Wall

May 14, 2005, 2:51:46 PM
to perl6-l...@perl.org
On Sat, May 14, 2005 at 01:36:22PM -0500, Rod Adams wrote:
: Larry Wall wrote:
:
: >On Sat, May 14, 2005 at 12:51:32PM -0500, Rod Adams wrote:
: >
: >: Unless, of course, there is some subtle difference between a 3-d hash
: >: and a hash of hashes of hashes that invalidates this.
: >
: >No difference, I hope. The multidimensional notation is meant
: >to extend to HoH and AoA transparently (as well as HoA and AoH).
: >Since a variable's dimensionality is always declared (and a container
: >object's dimensionality generated at "new" time), we shouldn't have
: >to worry about whether to add dimensions or autovivify a reference.
: >Either there's storage already allocated, or we autovivify.
: >
: >
: Hmm. So if I say:
:
: @a = [ { a => 1, b => 2}, { a => 3, b => 4 } ];
:
: Can I then say:
:
: $x = @a[1;'b'];
:
: And get $x = 4?

Probably not, but @a{1;'b'} might. I think what we've said before is
that .[] allows the optimizer to assume numeric subscripting only,
while .{} is the more general form.

Larry
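
[A Python sketch of the distinction: lists accept only numeric subscripts,
dicts accept general keys, and a generic multi-level subscript walks
whichever container it finds at each level, loosely analogous to the more
general @a{1;'b'} form. The helper name `subscript` is invented for
illustration:

```python
a = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]

def subscript(container, *keys):
    # Walk one level per key, indexing lists numerically
    # and dicts by key.
    for k in keys:
        container = container[k]
    return container

print(subscript(a, 1, 'b'))
```

This is the lookup Rod asked about: index 1 into the array, then key 'b'
into the hash found there.]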

Rob Kinyon

May 14, 2005, 3:38:41 PM
to perl6-l...@perl.org

So, does this mean that I can do something like:

@a = [ 1 .. 4 ];
$x = @a{2};

and have $x == 3? If so, is there any reason (other than clarity) to
use the @a[] notation? The @ already indicates you have an array vs.
the % which indicates hash. Is there a reason to have the subscripting
notation also be different?

I understand why it was different in P5, given that you needed to
differentiate $x->[2] and $x->{2} and allow the reader to know whether
$x was an arrayref or hashref. But, that need is gone from P6. (Isn't
it?)

Rob

Rod Adams

May 14, 2005, 3:57:24 PM
to perl6-l...@perl.org
Rob Kinyon wrote:

>So, does this mean that I can do something like:
>
> @a = [ 1 .. 4 ];
> $x = @a{2};
>
>and have $x == 3? If so, is there any reason (other than clarity) to
>use the @a[] notation? The @ already indicates you have an array vs.
>the % which indicates hash. Is there a reason to have the subscripting
>notation also be different?
>
>

There are optimizations to be had if the compiler knows you're using
numerics.

If I understand things, Arrays are just really optimized hashes, with a
few constraints on the keys.

>I understand why it was different in P5, given that you needed to
>differentiate $x->[2] and $x->{2} and allow the reader to know whether
>$x was an arrayref or hashref. But, that need is gone from P6. (Isn't
>it?)
>
>

As long as you're calling your arrays @x. There's still the distinct and
likely possibility of storing your array in $x, in which case it's not
at all obvious that $x{2} is calling an array.

-- Rod Adams

Brent 'Dax' Royal-Gordon

May 14, 2005, 4:17:47 PM
to Damian Conway, perl6-l...@perl.org
Damian Conway <dam...@conway.org> wrote:
> 3. To drill down a hierarchical data structure, following the path
> specified by a list of keys:
>
> $leaf_value = [.{}] %hash, @keys;

When I saw this, the following happened.

*pause for a second*
"Wow."
*a few more seconds*
"Holy /f---/."

I think that means this should be in core.

--
Brent 'Dax' Royal-Gordon <br...@brentdax.com>
Perl and Parrot hacker
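
[The drill-down reduction Damian shows is a fold of the subscript operator
over a key list, which has a direct analogue in any language with a fold.
A Python sketch using functools.reduce; the variable names here are
invented for illustration:

```python
from functools import reduce

tree = {'config': {'db': {'host': 'localhost'}}}
path = ['config', 'db', 'host']

# Fold subscripting over the key list, one level per key,
# analogous to [.{}] %hash, @keys.
leaf_value = reduce(lambda node, key: node[key], path, tree)
print(leaf_value)
```

The reduction visits one nesting level per key, so the whole descent is a
single expression.]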

Juerd

May 14, 2005, 5:02:54 PM
to perl6-l...@perl.org
Larry Wall skribis 2005-05-14 9:45 (-0700):

> : OTOH, reduce probably just needs to be smart enough to understand
> : postcircumfix. Perhaps whitespace helps, [{ }], in parallel with
> : &postcircumfix:<{ }>, to avoid a conflict with an infix {}.
> Erm, I don't like tokens with spaces in the middle.

Neither do I...

> Actually, I think Damian's original formulation is sufficiently clear.

Why is "{}" clear enough for [op], but not when declaring a
postcircumfix operator?

How does [!@#] know the difference between &postcircumfix:<!@ #> and
&postcircumfix:<! @#>?

Aaron Sherman

May 14, 2005, 6:43:14 PM
to Eirik Berg Hanssen, Perl6 Language List
On Sat, 2005-05-14 at 16:22 +0200, Eirik Berg Hanssen wrote:

> I suppose the first must just make sure not to flatten the %hash:
>
> $leaf_value = [.{}] \%hash, @keys; # %hash .{$key1} . {$key2} ...

Side point on the whole topic: I just LOVE \ as an explosive
list-context flattening preventer. Given its escaping function in strings
and rules, this is a perfectly mnemonic behavior that I'd never realized
fell out of not auto-expanding references.


Damian Conway

May 14, 2005, 6:54:49 PM
to perl6-l...@perl.org
Larry wrote:

> Actually, I think Damian's original formulation is sufficiently clear.

<aol>Me too!</aol> ;-)

I think that a standard [.<infix marker><postfix marker>] abbreviation for all
postcircumfix operators within [op] reductions would be a useful bit of dwimmery.

Damian

Stuart Cook

May 14, 2005, 10:55:47 PM
to Juerd, perl6-l...@perl.org
On 5/15/05, Juerd <ju...@convolution.nl> wrote:
> How does [!@#] know the difference between &postcircumfix:<!@ #> and
> &postcircumfix:<! @#>?

Perhaps it checks how many different variations are actually
defined--if it finds only one, it can DWIM, and if it finds more than
one it can barf with an error message complaining about an "ambiguous
postcircumfix operator" or somesuch. In that case, you may have to
write it out longhand.

If we combine this with Damian's [.!@#] instead of [!@#], this should
cover most cases fairly cleanly. If your postcircumfix operators are
that ambiguous, you should probably be writing things out longhand
anyway.


Stuart

Markus Laire

May 16, 2005, 3:18:47 AM
to perl6-l...@perl.org
Juerd wrote:
> Juerd skribis 2005-05-14 17:23 (+0200):
>
>>Markus Laire skribis 2005-05-14 18:07 (+0300):
>>
>>> [>>+^=<<] (@a, @b, @c)
>>
>>These arrays flatten first (otherwise [+] @foo could never calculate the
>>sum of the elements), so imagine that you have
>
> $foo, $bar, $baz, $quux, $xyzzy
>
> to let >>+^=<< operate on.

Is this then ok?

[>>+^=<<] (@a ; @b ; @c)

or

[>>+^=<<] (\@a, \@b, \@c)

As S09 says that:

At the statement level, a semicolon terminates the current
expression. Within any kind of bracketing construct, semicolon
notionally produces a list of lists, the interpretation of which
depends on the context. Such a semicolon list always provides list
context to each of its sublists. The following two constructs are
structurally indistinguishable:

(0..10; 1,2,4; 3)
    ([0..10], [1,2,4], [3])


If not, how then would I use hyper-reduction ops like [>>+^=<<] with
several arrays?

i.e. How do I write

@a >>+^=<< @b >>+^=<< @c

using the [>>+^=<<] op?
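
[For concreteness, here is a Python sketch of what the chained hyper-op
computes: folding an elementwise numeric XOR across several arrays, using
reduce plus a zip-based hyper helper. The names are invented for
illustration and this is an analogue, not the Perl 6 assignment form:

```python
from functools import reduce

def hyper_xor(xs, ys):
    # Elementwise numeric XOR, loosely analogous to @xs >>+^<< @ys.
    return [x ^ y for x, y in zip(xs, ys)]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]
result = reduce(hyper_xor, [a, b, c])
print(result)   # each element is a[i] ^ b[i] ^ c[i]
```

The open question in the thread is purely syntactic: how to hand the
reduction a list of arrays rather than one flattened list.]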

Larry Wall

May 16, 2005, 11:00:44 AM5/16/05