I suggest a signature prototype that all multis defined in or exported
to the current namespace must match (they match if the proto would allow
the same argument list as the multi, though the multi may be more
specific). Prototypes are exportable. Documentation tie-ins are also
suggested, ultimately allowing for documentation-only interface modules
which collect and re-export the interfaces of implementation modules
while providing high-level documentation and constraints.
Details:
Larry has said that programming by contract is one of the many paradigms
that he'd like Perl 6 to handle. To that end, I'd like to suggest a way
to assert that "there will be multi subs defined that match the
following signature criteria" in order to better manage and document the
assumptions of the language now that methods can export themselves as
multi wrappers. Let me explain why.
In the continuing evolution of the API documents and S29, we are moving
away from documentation like:
our Scalar multi max(Array @list) {...}
our Scalar multi method Array::max(Array @array:) {...}
toward exported methods:
our Scalar multi method Array::max(Array @array:)
is export {...}
"is export" forces this to be exported as a function that operates on
its invocant, wrapping the method call. OK, that's fine, but Array isn't
the only place that will happen, and the various exported max functions
should probably have some unifying interface declared. I'm thinking of
something like:
our proto max(@array, *%adverbs) {...}
This suggests that any "max" subroutine defined as multi in--or exported
to--this scope that does not conform to this prototype is invalid. Perl
will throw an error at compile-time if it sees this subsequently:
our Any multi method Array::max(Array @array: $x)
is export {...}
However, this would be fine:
our Any multi method Array::max(Array @array: :$x)
is export {...}
because the prototype allows for any number of named parameters.
The default behavior would be to assume a prototype of:
our proto max(*@posargs, *%namedargs) {...}
which allows for any signature.
Any types used will constrain multis to explicitly matching those types
or compatible types, so:
our Int proto max(Seq @seq, *%adverbs) {...}
would not allow for a max multi that returned a string (probably not a
good idea).
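To make the "shape conformance" check concrete, here is a rough Python analogy of what a compiler might do at the moment a multi is declared. All the names here (proto_shape, check_multi, proto_max, and so on) are invented for illustration; this is a sketch of the idea, not how Perl 6 would actually implement it:

```python
import inspect

def proto_shape(fn):
    """Crude 'shape' of a signature: (count of positional parameters,
    takes *args?, takes **kwargs?)."""
    sig = inspect.signature(fn)
    pos, var_pos, var_kw = 0, False, False
    for p in sig.parameters.values():
        if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD):
            pos += 1
        elif p.kind is p.VAR_POSITIONAL:
            var_pos = True
        elif p.kind is p.VAR_KEYWORD:
            var_kw = True
    return pos, var_pos, var_kw

def check_multi(proto, multi):
    """Definition-time check: reject a multi that demands more required
    positional arguments than the proto allows."""
    p_pos, p_var, _ = proto_shape(proto)
    required = sum(
        1 for q in inspect.signature(multi).parameters.values()
        if q.kind in (q.POSITIONAL_ONLY, q.POSITIONAL_OR_KEYWORD)
        and q.default is q.empty)
    if required > p_pos and not p_var:
        raise TypeError(f"multi {multi.__name__} does not conform to proto")

def proto_max(array, **adverbs): ...          # our proto max(@array, *%adverbs)
def bad_max(array, x): ...                    # extra required positional
def good_max(array, x=None, **adverbs): ...   # optional/named param is fine

check_multi(proto_max, good_max)              # passes silently
# check_multi(proto_max, bad_max) would raise TypeError at "compile time"
```

The key property is that the check cares only about argument-list shape, not behavior: a variant with an extra *required* positional is rejected, while one that adds only optional or named parameters still conforms.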
The goal, here, is to allow us to centrally assert that "Perl provides
this subroutine" without defining its types or behavior just yet.
Documentation/code could be written for the prototype:
=item max
=inline our proto max(@array, *%adverbs) is export {...}
C<max> takes an input sequence or array (C<@array>) and
returns the maximum value from the sequence.
Specific implementations of max may be defined which
allow comparators or other adverbs (C<%adverbs>) to
be defined.
=cut
I've invented the "=inline" POD keyword here as an arm-wave to
programming by contract (both Perl and POD read its argument). If it's
not liked, the proto could be duplicated both inside and outside of the
documentation as we do now. Kwid, when it comes to pass, could provide
similar mechanisms. Given this, an entire "interface-only" module could
exist as POD/Kwid-only, which isn't a bad thing given that pre-processed
bytecode will be what most people are loading anyway, and thus not
parsing the POD every time as in Perl 5.
There's also another interesting thing that we might or might not decide
to tack onto protos, which is that the "is export" tag on one could
cause the exporter mechanism to automatically export any "is export"
tagged subroutines from the current namespace that match this prototype,
even if they came from a different namespace. Essentially defining one
proto allows you to re-export any multis that you imported by that name.
This seems to me to be a better mechanism than a simple :REEXPORT tag or
the like on the "use", as it more explicitly targets the interfaces that
your module defines its own prototype for.
This produces a generic set of documentation for a module that might
only act as a re-exporter for other modules. e.g. the above might appear
in a module called CORE which is "use"d by the runtime automatically,
and uses various other modules like Math::Basic and List without any
explicit export tags, thus providing the minimal interfaces that Perl
promises. S29 could eventually be adapted as the documentation for the
prototypes in that module without having to actually document the
individual APIs of the rest of the Perl runtime.
In Perl 6, therefore, "perldoc perlfunc" would become "perldoc CORE" or
whatever we call that module.
This is only a first step to programming by contract, which has many
more elements than simply blending signatures into documentation
(assertions and other elements are also part of it), but I consider it
an important step in the process to becoming more PbC-aware.
Any thoughts?
I'm still thinking about the practical implications of this... but what
immediately occurs to me:
The point of multiple, as opposed to single, dispatch (well, one of the
points, and the only point that matters when we're talking about multis of
a single invocant) is that arguments are not bound to a single type. So at
first gloss, having a single prototype in the core for all same-named
multis as in your proposal seems to defeat that use, because it does
constrain arguments to a single type.
I would hate for Perl 6 to start using C<Any> or C<Whatever> in the sort
of ways that many languages abuse "Object" to get around the restrictions
of their type systems. I think that, as a rule, any prototype
encompassing all variants of a multi should not only specify types big
enough to include all possible arguments, but also specify types small
enough to exclude impossible arguments.
In other words, to use your proposal, "our proto moose (Moose $x:)" should
assert not just that all calls to the multi moose will have an invocant
that does Moose, but also that all objects of type Moose will work with a
call to the multi moose. That may have been implicit in your proposal,
but I wanted to make it explicit.
In practice, the ability to use junctive types, subsets, and roles like
any other type makes the concept of "single type" a much less restrictive
one in Perl 6 than in most languages. For example, if you wanted C<max>
to work on both arrays and hashes, you could have
our proto max (Array|Hash $container)
Or you could define an C<Indexed> role that both Array and Hash do and
have:
our proto max (Indexed $container)
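For what it's worth, this is roughly what structural typing gives you in other languages today. A Python sketch, with typing.Protocol standing in for an Indexed role (Indexed, Bag, and max_of are all invented names for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Indexed(Protocol):
    """Rough analogue of an Indexed role: anything offering .values()."""
    def values(self): ...

class Bag:
    """A hypothetical user-defined container that does the role."""
    def __init__(self, *items):
        self._items = list(items)
    def values(self):
        return self._items

def max_of(container: Indexed):
    return max(container.values())

assert isinstance(Bag(1), Indexed)      # structural check, no inheritance
assert max_of(Bag(3, 1, 4)) == 4
assert max_of({'a': 2, 'b': 7}) == 7    # dict happens to do the role too
```

Note that Bag never declares "does Indexed"; it satisfies the role purely by shape, which is one answer to the question of how novel types like Bags or Herds could participate.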
So maybe this is a reasonable constraint. But it seems odd to me that
Perl might then not allow me to write a C<max> that takes, say, Bags or
Herds or whatever. And as I said before, I think a prototype of
our proto max (Whatever $container)
is incorrect too. What I really want is for max to be callable on
anything that can do max, and not on anything that can't. Following that
observation to its logical conclusion, at some point we get to the core
containing prototypes like:
our proto max(Maxable $container)
our proto sort(Sortable $container)
our proto keys(Keyable $container)
which (I think) results in no better support for contracts, but merely
requires gratuitous typing (in both senses of the word): where before we
could just write our routine "multi max...", now we need to write both
"multi max..." and remember to add "does Maxable" so Perl will let us
compile it.
My apologies if I'm attacking a strawman here; perhaps there's a saner way
to allow the flexibility for users to define novel implementations of
global multis while still having the prototypes well-typed.
All that said, the globalness of multis does concern me because of the
possibility of name collision, especially in big systems involving multis
from many sources. Your proposal would at least make an attempt to define
a multi not type-conformant with a core prototype throw a compile-time
error, rather than mysterious behavior at runtime when an unexpected multi
gets dispatched.
Trey
> Any types used will constrain multis to explicitly matching those types
> or compatible types, so:
>
> our Int proto max(Seq @seq, *%adverbs) {...}
>
> Would not allow for a max multi that returned a string (probably not a
> good idea).
IIRC, Perl 6 doesn't pay attention to the leading Int here except when
dealing with the actual code block attached to this - that is, "Int"
isn't part of the signature. If you want Int to be part of the
signature, say:
our proto max(Seq @seq, *%adverbs --> Int) {...}
More to the point, I _could_ see the use of type parameters here
(apologies in advance if I get the syntax wrong; I'm going by memory):
our proto max(Seq of ::T *@seq, *%adverbs --> ::T) {...}
This would restrict you to methods where the return type matches the
list item type.
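This is what parametric type variables do in languages that have them. A Python sketch, with typing.TypeVar standing in for ::T (max_of is an invented name; the assumption is only that elements are mutually comparable):

```python
from typing import Sequence, TypeVar

T = TypeVar("T")

def max_of(seq: Sequence[T]) -> T:
    """Return type tied to the element type, analogous to
    `proto max(Seq of ::T *@seq, *%adverbs --> ::T)`."""
    best = seq[0]
    for item in seq[1:]:
        if item > best:
            best = item
    return best

assert max_of([3, 1, 4]) == 4          # T inferred as int
assert max_of(["a", "c", "b"]) == "c"  # T inferred as str
```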
--
Jonathan "Dataweaver" Lang
I certainly hope not, as I agree with you! That's not the goal at all,
and in fact if that were a side effect, I would not want this to be
implemented. The idea of having types AT ALL for protos was something
that I threw in because it seemed to make sense at the end. The really
interesting thing is to match signature shapes, not types. That is, max
doesn't take two positional arguments, and a max that does is probably
doing something that users of max will be shocked by. To this end, a
programmer of a library *can* issue an assertion: all implementations of
max will take one (no type specified) positional parameter and any
number of adverbial named parameters (again, no type specified).
Notice that I keep saying "no type specified" when, in reality, we Perl
6 programmers know that parameters default to type Any (it is Any now,
right?). I don't see value in protos taking this into account. If there
is value, then I'll bow to superior Perl 6 mojo on the part of whoever
can point it out.
Remember that this is NOT part of the MMD system. Once a multi is
declared, and passes any existing protos, the proto no longer has any
relevance, and is never consulted for any MMD dispatch. It is forgotten
(unless a new multi is defined later).
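To illustrate that split between definition time and dispatch time, here is a toy Python sketch (MultiSub and its crude arity check are invented simplifications, not a real MMD implementation): the conformance test runs once when a variant is added, and dispatch afterwards consults only the variants themselves, never the proto.

```python
import inspect

class MultiSub:
    """Sketch: the proto is checked once, when a multi is added;
    dispatch afterwards looks only at the registered variants."""
    def __init__(self, proto):
        self.proto = proto
        self.variants = []

    def add(self, fn):
        # Definition-time check: same parameter count as the proto
        # (a crude stand-in for real signature conformance).
        want = len(inspect.signature(self.proto).parameters)
        got = len(inspect.signature(fn).parameters)
        if got != want:
            raise TypeError(f"{fn.__name__} does not match the proto")
        self.variants.append(fn)
        return fn

    def __call__(self, arg):
        # Dispatch: first variant whose annotation matches the argument;
        # the proto plays no part here.
        for fn in self.variants:
            param = next(iter(inspect.signature(fn).parameters.values()))
            if isinstance(arg, param.annotation):
                return fn(arg)
        raise TypeError("no matching variant")

def proto_double(x): ...
double = MultiSub(proto_double)

@double.add
def double_int(x: int): return x * 2

@double.add
def double_str(x: str): return x + x

assert double(21) == 42
assert double("ab") == "abab"
```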
Does that help to remove any concerns? Adding in types is fine, and I
have no problem with it, but adding in types should probably not be
something done in core modules without heavy thought.
> In other words, to use your proposal, "our proto moose (Moose $x:)"
> should assert not just that all calls to the multi moose will have an
> invocant that does Moose, but also that all objects of type Moose will
> work with a call to the multi moose. That may have been implicit in
> your proposal, but I wanted to make it explicit.
If you specify such types, OK, that seems fair. Side point: "the multi
moose" is a pretty darned funny turn of phrase ;)
> All that said, the globalness of multis does concern me because of the
> possibility of name collision, especially in big systems involving
> multis from many sources. Your proposal would at least make an
> attempt to define a multi not type-conformant with a core prototype
> throw a compile-time error, rather than mysterious behavior at runtime
> when an unexpected multi gets dispatched.
Say "signature-conformant" there, and I'm in full agreement.
What bugs me is a possible duplication of functionality. I believe that
declarative requirements should go on roles. And then packages could do
them, like this:
package Foo does FooMultiPrototypes {
    ...
}
Of course, I hadn't quite thought this through - as packages aren't
classes, there would probably have to be heavy constraints on what
FooMultiPrototypes may declare.
This would also allow you to reuse multi prototype sets.
Also, I don't think you answered Trey's concern, that your mechanism
allows you to declare what classes may export but not what they *have*
to export, which I would also view as more important - your mechanism
seems to only serve to ban extensions and convenience methods, but
doesn't give any extra promises on the class behaviour. To extend your
example, if a library developer provides a three-argument max, it won't
get in the way and won't break any existing contracts. But if the same
developer doesn't also provide 2-argument max on that same class, it may
very well break any code that works with that class.
Miroslav Silovic wrote:
> What bugs me is a possible duplication of functionality. I believe that
> declarative requirements should go on roles. And then packages could do
> them, like this:
>
> package Foo does FooMultiPrototypes {
> ...
> }
I like this idea because it makes roles the central bearer of type
information. I think there has to be a way for a role to define
constraints onto the thing it is going to be composed into---see my
unreplied mail 'class interface of roles'. The type requirements
of a role could be mostly inferred from the role definition, especially
if we have something like a super keyword that in a role generically
refers to the thing it's composed into before the role is composed.
Note that self refers to the object *after* composition and instance
creation.
> Of course, I hadn't quite thought this through - as packages aren't
> classes, there would probably have to be heavy constraints on what
> FooMultiPrototypes may declare.
Basically instance data declarations and methods require a class
for composition. Everything else can also go into packages and modules.
IIRC, additional methods can come as package qualified as in
role Foo
{
    method Array::blahh (...) {...}   # goes into Array class
}
and as such require the package to make an Array class available.
Regards,
--
Trey Harris wrote:
> I would hate for Perl 6 to start using C<Any> or C<Whatever> in the
> sort of ways that many languages abuse "Object" to get around the
> restrictions of their type systems. I think that, as a rule, any
> prototype encompassing all variants of a multi should not only
> specify types big enough to include all possible arguments, but also
> specify types small enough to exclude impossible arguments.
As Miroslav proposed to handle the specification of the interface with
role composition we get another thing as well: implicit type parameter
expansion. That is, a role can be defined in terms of the self type and
e.g. use that as parameter type, return type and type constraints. All
of which nicely expand to the module or package type the role is
composed into!
Regards, TSa.
--
Type information is secondary to the proposal, but I'll run with what
you said.
This (the example, above) is a promise made by a class to meet its own
specification.
In the RFC, I was trying to develop a method by which a module could
assert a stricture (consider this part of "use strict" in Perl 6 if you
will) that would constrain the CALLER of that module (as well as the
module itself, of course) to a particular signature template for a
multi. This allows us to centrally document a multi that might be
defined in many places, and have that documentation actively constrain
the multi to match. In this way, the user doesn't have to figure out
that max is a method on Array in order to find its documentation, and a
module that uses Array gets
Constraining a class to use the multis that it declares isn't really a
constraint. It's more of a second definition, and there isn't much need
for that in Perl 6.
I'm starting to think that "proto" was the wrong word, as it immediately
makes people think about C/C++ "prototypes", which are not at all the
same beast.
Of course, if you want to have a role that uses prototypes to constrain
a class, that's certainly doable:
role StrictMax { our proto max(Array @array) { ... } }
class MyClass does StrictMax { ... }
Sure, that works, but an unexported, type-specific proto is rather a
weak version of what I was suggesting.
Again, here's the example that I gave at the end:
Actually, it's a promise made by a package (not a class) to meet the
specification given by a role (which can, and in this case probably
does, reside in a separate file - quite likely one heavily laced with
POD). Specifically, the role states which subroutines the package
must define, including the signatures that they have to be able to
support. IOW, it defines what the package is required to do, as
opposed to what the package is forbidden from doing (as your proposal
does). If I'm understanding the idea correctly, you write the package
role as a file, hand it to the programmer, and tell him to produce a
package that does the role.
--
Jonathan "Dataweaver" Lang
That's a fine thing to want to do. Not something that I was thinking of
initially, and only tangentially related, but a good idea. I think you
get this for free by embedding a proto (or perhaps a "sigform") inside
of a role:
role Foo { sigform bar($baz) { ... } }
Notice the lack of export which forces this to only apply to the class
or module to which the role is applied via composition, not to a module
which imports that class or module.
Whereas:
package CORE;
use Array;
use List;
...
=item max
=inline sigform max(@items, *%adverbs) is export {...}
...docs...
would not only impose those constraints on this use of Array and List,
but on the caller of CORE (in this case any typical Perl invocation).
Both work equally well, which is sort of nice, given that I didn't think
about the first form beforehand. I think that goes a long way to
demonstrate the flexibility of Perl 6's package/module/class/role system.
What would be the difference between this and
role Foo { sub bar($baz) { ... } }
? IOW, what's the difference between a 'sigform' declaration and a
"to be defined later" subroutine declaration?
> Notice the lack of export which forces this to only apply to the class
> or module to which the role is applied via composition, not to a module
> which imports that class or module.
True enough. That said, it wouldn't be hard to change this. Consider
the possibility of an "exported" trait, which causes whatever it's
applied to to be exported whenever a module imports its package.
Thus, you could say something like:
<start of file>
role Foo;
sub bar($baz) is exported { ... }
At which point anything that imports a module that composes Foo will
import bar as well.
And I'm making the (probably erroneous) assumption that Perl 6 doesn't
have a robust, intuitive means of marking package components for
export already. I'm sure that a few moments with the appropriate
Synopsis would correct said error.
--
Jonathan "Dataweaver" Lang
This would define a non-multi subroutine with a yadda body. Later
declarations of a multi sub by the same name would be resolved as per
the rules in, I think, A12, but don't quote me on that, as I'm probably
wrong about the location. Multi and single dispatch have rules of
engagement, but essentially don't interact much until something gets
invoked.
Still, that has little or nothing to do with constraining the definition
of multi subs by signature, which is what I was proposing.
>> Notice the lack of export which forces this to only apply to the class
>> or module to which the role is applied via composition, not to a module
>> which imports that class or module.
>
> True enough. That said, it wouldn't be hard to change this. Consider
> the possibility of an "exported" trait, which causes whatever it's
> applied to to be exported whenever a module imports its package.
The original example was an exported version, so there's no change to
make. I only gave the non-exported version as an example in this recent
mail to demonstrate that the rather unrelated concept that you brought
up (defining a role that restricts multi method definitions WITHIN a
class/module/package) just happened to be easy to do given the RFC as I
proposed it, even though I had not thought about that use before.
Please, feel free to re-read the RFC. I'm getting the impression that it
wasn't as clear as I had intended, and you might be able to propose some
clarification....
One thing that occurs to me: following this "contract" or "promise"
analogy, what does C<...> mean in a role or class?
Unless I've missed somewhere in the Synopses that explicates C<...>
differently in this context, yada-yada-yada is just code that "complains
bitterly (by calling C<fail>) if it is ever executed". So that's fine for
an abstract routine at runtime--code calls it, and if it hasn't been
reimplemented, it fails.
But unless something else is going on with C<...>, as far as the language
is concerned, a routine with body C<{ ... }> *is* implemented, as surely
as a routine with body C<{ fail }> is implemented. So the routine is only
"abstract" insofar as you'll need to reimplement it to do anything useful
with it.
Roles are types, and we've been talking about doing a role as making a
promise. But "does" just mixes in the role, implementations and all. If
your yada'ed routine ever gets *called*, you fail, so it would behoove you
to implement any role you do, unless you're writing an abstract class on
purpose.
But as we have no "abstract" adverb presently, it seems to me that the
compiler has no way of inferring whether a given class doing a role
actually has implemented what it's supposed to in order to meet the role's
specification or not. We only find out if the contract was not met or the
promise was not kept at runtime.
Is my inference correct?
Trey
-snip-
> Is my inference correct?
I hope not. My understanding is that '{ ... }' is supposed to
represent the notion of abstract routines: if you compose a role that
has such routines into a class or package, I'd expect the package to
complain bitterly if any such routines are left with yada-yadas as
their codeblocks, on the basis that while roles can be abstract,
classes and packages should not be.
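For comparison, that reading matches how Python's abc module behaves: an abstract method left unimplemented makes the *class* uninstantiable, rather than failing later when the method is finally called. (Maxable, Good, and Bad are invented names; this is an analogy, not a claim about Perl 6 semantics.)

```python
from abc import ABC, abstractmethod

class Maxable(ABC):
    """A role-like base whose yada-yada method is marked abstract."""
    @abstractmethod
    def max(self): ...

class Good(Maxable):
    def max(self):
        return 42

class Bad(Maxable):
    pass  # composes the role but never implements max

assert Good().max() == 42
try:
    Bad()   # complains at instantiation, not when .max is called
    assert False, "expected TypeError"
except TypeError:
    pass
```

The design choice this encodes is exactly the one under discussion: abstractness is a compile/instantiation-time property, so the failure surfaces as early as possible instead of at some arbitrary call site.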
--
Jonathan "Dataweaver" Lang
Really? I think I need to let that sink in and percolate a bit.
I'm rather fond of creating abstract superclasses to factor out common
object-management code. I have many classes in Perl 5 with code like
this:
sub new {
    my $proto = shift;
    my $class = ref($proto) || $proto;
    if ($class eq __PACKAGE__) {
        croak "Attempt to instantiate abstract class $class";
    }
    # Proceed with common object instantiation
}
and then the concrete subclasses either don't define &new at all or do
their own subclass-specific initializations and call NEXT to get the
common initializations. I've been thinking that a role is not the proper
place for this sort of code, as it is tightly coupled to implementation,
and my understanding of the gestalt of roles is that they should be
loosely coupled to their users' implementations.
But if only a role can be abstract, I guess I'd have to use a role (or
fall back to the Perl 5-ism above and check my .HOW and throw a runtime
exception).
If you define a BUILD in a role, will it be called when an object of a
class that does the role is instantiated, as part of the .new BUILD walk?
Trey
First the high-level point: I'm dropping the RFC, because, as TimToady
pointed out on IRC, we're not quite far enough down the line to see the
breadth or certainty of the need yet.
That said, the point above is a different one, and I think it's an easy
one to take on.
role A { method x() {...} }
class B { does A; }
does generate an error per "If a role merely declares methods without
defining them, it degenerates to an interface:" from S12.
However, that's not to say that a class can't be abstract, just that a
class that does an interface (a role with nothing but abstract methods)
must implement the role's interface.
Nothing is said of what happens when you compose a role that has both
defined and undefined methods, but IMHO, that's at most a warning if
they remain undefined after composition, since you might want to use
that class as a trait at runtime where target objects will define that
method.
Certainly this:
class A { method x() {...} }
is very explicit, and should be allowed, given that it is a promise that
"sometime before x is called, it will be defined." It's a runtime error
if that promise is not kept.
Yes, but I don't think the conversation should stop. These are important
semantics of the object model and we should be at least roughly on the
same page so that when someone gets the tuits to work on it, it's clear
what the direction is.
> That said, this is a different point, above and I think it's an easy one to
> take on.
>
> role A { method x() {...} }
> class B { does A; }
>
> does generate an error per "If a role merely declares methods without
> defining them, it degenerates to an interface:" from S12.
>
> However, that's not to say that a class can't be abstract, just that a class
> that does an interface (a role with nothing but abstract methods) must
> implement the role's interface.
So why would it generate an error? Why wouldn't it merely result in B
being abstract too, assuming that contra my prior mail, classes can be
abstract? Do you have to be explicit about it and say
role A { method x() { ... } }
class B { does A; method x() { ... } }
? That seems un-Perlishly verbose to me; we had DRY before Ruby ever did.
> Nothing is said of what happens when you compose a role that has both
> defined and undefined methods, but IMHO, that's at most a warning if
> they remain undefined after composition, since you might want to use
> that class as a trait at runtime where target objects will define that
> method.
As I said on IRC, I don't read the part of S12 you quote as being the
definition of interface; I merely read it as being an analogy to the
concept of interface found in other languages. I'd find it absurd if
merely changing
role Existential { method is { ... }; method isnt { ... } }
to
role Existential { method is { ... }; method isn't { ! .is } }
resulted in the role changing its instantiability.
> Certainly this:
>
> class A { method x() {...} }
>
> is very explicit, and should be allowed, given that it is a promise that
> "sometime before x is called, it will be defined." It's a runtime error if
> that promise is not kept.
Did you mean to have "class B" and "does A" there, as in my un-Perlish
example above? If so, then see my response above. If not... I'm a little
surprised. Since roles and classes share the package namespace, I
wouldn't expect you to be able to declare a class with the same name as a
preexisting role, even if you implemented all the role's interfaces within
that class....
One more point for the hypothetical future object-model
designer/implementer with the even more hypothetical tuits, so that it
doesn't get lost: note that in a DBC context, { ... } is insufficient for
an abstract routine. PRE and POST blocks would ordinarily be included as
well. A routine missing a PRE or POST would be considered to have:
method is {
    PRE  { True }
    POST { True }
    ...
}
which under DBC rules would be the same as having no POST at all... but
would unfortunately cause any future PREs to be ignored!
(Under design-by-contract rules, POSTs are and-ed from least-derived to
most-derived, but PREs are or-ed. In fact, in Eiffel, PRE is called
"require" and POST is called "ensure", but only in base classes; in
derived classes, you must type "require else" and "ensure then" to make
explicit that your assertions are being or-ed or and-ed with assertions
elsewhere in code.)
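A toy Python sketch of those combination rules, including the trap just described (run_with_contract is an invented helper; real DBC systems wire this into method dispatch rather than passing lists around):

```python
def run_with_contract(pres, posts, body, *args):
    """Eiffel-style combination: PREs down the inheritance chain are
    or-ed (any one may admit the call); POSTs are and-ed (all must hold)."""
    if not any(pre(*args) for pre in pres):
        raise AssertionError("precondition violated")
    result = body(*args)
    if not all(post(*args, result) for post in posts):
        raise AssertionError("postcondition violated")
    return result

# A base class requires x > 0; a subclass weakens that to accept x < 0.
pres = [lambda x: x > 0, lambda x: x < 0]
# The base ensures a non-negative result; the subclass adds r == |x|.
posts = [lambda x, r: r >= 0, lambda x, r: r == abs(x)]

assert run_with_contract(pres, posts, abs, -5) == 5

# The trap: a defaulted `PRE { True }` anywhere in the chain makes the
# or-ed combination trivially true, silently disabling every other PRE.
assert any(pre(0) for pre in pres + [lambda x: True])
```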
Trey
Aaron Sherman wrote:
> Details:
>
> Larry has said that programming by contract is one of the many paradigms
> that he'd like Perl 6 to handle. To that end, I'd like to suggest a way
> to assert that "there will be multi subs defined that match the
> following signature criteria" in order to better manage and document the
> assumptions of the language now that methods can export themselves as
> multi wrappers. Let me explain why.
OK. My understanding of "programming by contract" as a paradigm is
that one programmer figures out what tools he's going to need for the
application that he's working on, and then farms out the actual
creation of those tools to another programmer.
Second, when you mention 'signature criteria', what immediately comes
to mind is the notion of the signature, which applies restrictions on
the various parts of an argument list:
:(@array, *%adverbs)
This applies two restrictions: there can be only one positional
parameter, and it must do the things that a list can do. Change the
comma to a colon, and you have a signature that says that there must
be a list-like invocant, and that there can be no positional
parameters.
The only aspect of the signature that is not concerned with argument
types is the part that determines how many of a particular kind of
parameter (invocant, positional, or named) you are required or
permitted to have: even the @-sigil in the first positional parameter
(or the invocant, in the method-based signature) is specifying type
information, as it's placing the requirement that that parameter needs
to behave like a list.
In effect, I could see thinking of a signature as being a regex-like
entity, but specialized for matching against parameter lists (i.e.,
capture objects) instead of strings.
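Python's inspect module happens to offer exactly this regex-like "does it match?" operation on signatures, which makes the analogy easy to sketch (matches is an invented helper):

```python
import inspect

def matches(sig, *args, **kwargs):
    """Match an argument list against a signature the way a regex
    matches a string: the bind either succeeds or it doesn't."""
    try:
        sig.bind(*args, **kwargs)
        return True
    except TypeError:
        return False

# A rough stand-in for :(@array, *%adverbs): one positional, any nameds.
sig = inspect.signature(lambda array, **adverbs: None)

assert matches(sig, [1, 2, 3])              # one positional: ok
assert matches(sig, [1, 2, 3], by="value")  # plus named args: ok
assert not matches(sig, [1, 2], [3, 4])     # two positionals: no match
```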
> In the continuing evolution of the API documents and S29, we are moving
> away from documentation like:
>
> our Scalar multi max(Array @list) {...}
> our Scalar multi method Array::max(Array @array:) {...}
>
> toward exported methods:
>
> our Scalar multi method Array::max(Array @array:)
> is export {...}
>
> "is export" forces this to be exported as a function that operates on
> its invocant, wrapping the method call. OK, that's fine, but Array isn't
> the only place that will happen, and the various exported max functions
> should probably have some unifying interface declared.
This would seem to be a case for changing the above to something along
the lines of:
our Scalar multi submethod max(@array:)
is export {...}
This removes all references to Array from the signature, and leaves it
up to the @-sigil to identify that the invocant is supposed to be some
sort of list-like entity. The change above from 'method' to
'submethod' is predicated on the idea that methods have to be defined
within a class or role, much like attributes have to be; if this is
incorrect, then it could be left as 'method'.
> I'm thinking of
> something like:
>
> our proto max(@array, *%adverbs) {...}
The Synopses already define a 'proto' keyword for use with routines;
it's listed right alongside 'multi'. Were you intending to refer to
this existing keyword, or did you have something else in mind?
> This suggests that any "max" subroutine defined as multi in--or exported
> to--this scope that does not conform to this prototype is invalid. Perl
> will throw an error at compile-time if it sees this subsequently:
In short, you want to define a signature that every routine with the
given name must conform to, whether that routine is a sub or submethod
defined in the package directly, or if it is a method defined in a
class or role that is in turn defined in the package.
While 'role Foo { our method max(@array:) { ... } }' specifies that
whatever composes the role in question must include a method called
max that takes a list-like object as an invocant, you want to be able
to say that any method, sub, submethod, or other routine defined in a
given package that is called 'max' must match the signature ':(@array,
*%adverbs)'. This would seem to bear some resemblance to Perl 6's
notion of 'subtypes', which add matching criteria to objects, and
throw exceptions whenever you try to assign a value to the object that
doesn't meet the criteria.
> The goal, here, is to allow us to centrally assert that "Perl provides
> this subroutine" without defining its types or behavior just yet.
Here's the thing: the above doesn't seem to require that any such
subroutine be defined. That is, the coder could forego defining _any_
'max' routines when he implements your documentation, and the first
indication you'd have of this oversight would be when your application
complains that the function doesn't exist. That is, you're not saying
'this module provides this subroutine'; you're saying 'if this module
provides this subroutine, it will look something like this.' I'm not
sure how useful that would be. Composing a role, OTOH, says 'this
package provides this routine' without (neccessarily) defining its
types or behaviors just yet.
> I've invented the "=inline" POD keyword here as an arm-wave to
> programming by contract (both Perl and POD read its argument). If it's
> not liked, the proto could be duplicated both inside and outside of the
> documentation as we do now.
Personally, I don't like the idea of embedding code in documentation.
> There's also another interesting thing that we might or might not decide
> to tack onto protos, which is that the "is export" tag on one could
> cause the exporter mechanism to automatically export any "is export"
> tagged subroutines from the current namespace that match this prototype,
> even if they came from a different namespace.
More generally, this would be a mechanism which lets you apply traits
to every routine that matches the defined pattern:
> our proto max(@array, *%adverbs) is export {...}
would apply the 'export' trait to every routine named 'max' that's
visible in the current module.
> This is only a first step to programming by contract, which has many
> more elements than simply blending signatures into documentation
> (assertions and other elements are also part of it), but I consider it
> an important step in the process to becoming more PbC-aware.
I think that a more useful tool might be a more general assertion
mechanism which includes signature-matching as one of its options. I
could see benefit to being able to write a package which is a mixture
of documentation (telling the programmer what you want) and assertions
(telling him when his code isn't what you want). The trick is to keep
the assertion syntax simple enough that you don't do more coding when
writing the assertions than the programmer does when writing the
actual code. This is a very real danger, as conceptually this notion
of assertions is akin to the concept of schema for XML.
--
Jonathan "Dataweaver" Lang
Um, so if I get this right, you want to restrict the users of the module
from *EVER* extending that particular part of the module's functionality?
I would be strongly opposed to the existence of this feature. Firstly,
what you propose is not DBC. Design by contract is about requiring
minimal functionality from the parties in the contract, not about
banning them from going above the requirements. Secondly, what happens
when you use two modules with two different prototypes for the same
multi? Without this declaration, and assuming the modules don't try to
dispatch on the same argument lists, everything just works. But with
this stricture, you simply aren't allowed to do this, and I don't see
any justification for it. Frankly, sometimes things -will- be named the
same and yes, sometimes you need to use grep to find the docs. Not sure
why this is a problem, though.
Miro
Yes. That is mentioned in A12, even if S12 didn't make it explicit.
At least S12:531 implies that roles have BUILD submethods, and the previous
paragraph indicates that BUILDALL calls all appropriate BUILD methods.
What A12 makes clear that S12 doesn't is that if you write your own BUILD
submethod in your class, it doesn't need to worry about the roles'
BUILDs. Those are called automatically anyway.
Larry
Other way around. "package" is Perl 5, because that's the P5 keyword,
and seeing a "package" declaration is an indicator to Perl6 that the
file it's processing is written in P5. In P6, there are both
"module"s and "class"es, but no "package"s other than those inherited
from P5 code.
--
Mark J. Reed <mark...@mail.com>
Right. Thank you; I'm not sure how I got those flipped.
--
Jonathan "Dataweaver" Lang
That is ever so slightly overstated. We still have packages as a
native notion in P6. The "'package' indicates P5" thing is just if
the first thing in the file is a package declaration, but elsewhere
in Perl 6 you're allowed to say:
package Foo { our $bar = 3 }
and such. Our bare unvarnished namespaces are still called "packages",
but in terms of roles, Module does Package just as Class does Module
(and by implication, Package). Same for roles and subsets and enums.
Basically, all types do Package whenever they need an associated
namespace. And most of the Package role is simply:
method postfix:<::> () { return %.HOW.packagehash }
or some such, so "$type.::" returns the symbol table hash associated
with the type, if any. It's mostly just a convention that the Foo
prototype and the Foo:: package are considered interchangeable for
most purposes.
Larry
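For readers more familiar with Python, the "type with an associated namespace hash" idea has a direct analogue: every module and class carries a symbol-table dict, so `package Foo { our $bar = 3 }` corresponds roughly to the sketch below (the correspondence is illustrative only).

```python
import types

# package Foo { our $bar = 3 }
mod = types.ModuleType("Foo")
mod.bar = 3

# "$type.::" returning the symbol-table hash is like vars(mod)
assert vars(mod)["bar"] == 3
```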
And here I thought you were a responsible, law-abiding citizen... :P
--
Jonathan "Dataweaver" Lang
Urque.
Actually, that'd have to be %($.HOW.packagehash) or $.HOW.packagehash.{},
since what I wrote there would mean %(self.HOW).packagehash instead, which
might work accidentally, but only if self.HOW does Hash.
Larry
And so it begins.
I daresay there will be no shortage of jokes among P6ers about "does Hash" ...
> So why would it generate an error? Why wouldn't it merely result in B
> being abstract too, assuming that contra my prior mail, classes can be
> abstract?
What use is an interface if it doesn't give you a guarantee? If I say,
"all dogs can bark," and you define a dog that can't bark, that's not
"abstract", that's a failure to meet the interface requirements of a dog.
Now, I *could* see a class being explicitly abstract. That is, defining
its own incomplete method. At that point, I have met the interface
requirement, but explicitly stated that my interface is incomplete. I'm
not sure that it has any particular standing in the language, though.
That is, you could instantiate such a class and invoke the method in
question. Only at runtime would the invocation result in an error, and
perhaps you did all of this because, at runtime, you will mix in a class
or role that delivers the promised functionality:
role ripens { method ripen() {...} }
role vine_ripened { method ripen($self:) { $self.organic //= 1 } }
role fruit { does ripens; method ripen() {...} }
my fruit $orange .= new();
$orange.ripen; # error
$orange does vine_ripened;
$orange.ripen; # Now we can, though "organic" needs defining
The idea of abstract objects is certainly compelling, but I don't think
it's something we'll want to do without substantial explicitness.
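The fruit/ripen sketch above can be mimicked in Python by rebasing an instance's class at runtime, which plays the part of `$orange does vine_ripened`. This is only an analogy for the promised-but-undefined method being supplied later; it makes no claim about how Perl 6's `does` actually works.

```python
class Fruit:
    organic = None
    def ripen(self):                       # method ripen() {...}
        raise NotImplementedError("ripen promised but not yet defined")

class VineRipened:
    def ripen(self):                       # $self.organic //= 1
        if self.organic is None:
            self.organic = 1

orange = Fruit()
try:
    orange.ripen()                         # error: promise not yet kept
except NotImplementedError:
    pass

# "mix in" the role at runtime by rebasing the instance's class
orange.__class__ = type("FruitVR", (VineRipened, Fruit), {})
orange.ripen()                             # now it works
```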
> role A { method x() { ... } }
> class B { does A; method x() { ... } }
And here you do just that.
>> Certainly this:
>>
>> class A { method x() {...} }
>>
>> is very explicit, and should be allowed, given that it is a promise
>> that "sometime before x is called, it will be defined." It's a
>> runtime error if that promise is not kept.
>
> Did you mean to have "class B" and "does A" there
You seem to be parsing multiple examples statefully. I recommend against
that.
On your last point, I think you are confusing an incomplete PRE (one
which invokes yadda) and an undefined PRE slot in a method. The latter
must be detectable as distinct for exactly the reasons you state.
No--I wouldn't define a dog that can't bark, I'd define a dog that *can*
bark, but I wouldn't say *how* it could bark (conjecturing an "is
abstract" class trait I give a sample implementation of below):
class Dog is abstract { method bark () { ... } #[ ... ] }
class Pug is Dog { method bark () { self.vocalize($.barkNoise) } }
my Dog $fido .= new; # exception, instantiating abstract class
my Dog $fido = Pug.new; # good, $fido can bark
Seems like there would be three ways to achieve the guarantee: an explicit
"is abstract" marker like above to prevent instantiation; some way to
infer concreteness (this gets into my earlier question of whether
yada-yada is enough to do so) so you can prevent instantiation; or simply
disallow abstract classes entirely.
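Of the three options, the explicit "is abstract" marker is exactly what Python's `abc` module provides, so the Dog/Pug example can be made concrete there: instantiating a class with unimplemented abstract methods fails, while a concrete subclass is fine. This is offered as an existing-practice illustration, not as a proposal for Perl 6 syntax.

```python
from abc import ABC, abstractmethod

class Dog(ABC):                  # class Dog is abstract { ... }
    @abstractmethod
    def bark(self): ...          # promised, not defined

class Pug(Dog):                  # class Pug is Dog { ... }
    def bark(self):
        return "yip"

try:
    fido = Dog()                 # exception: instantiating abstract class
except TypeError:
    fido = Pug()                 # good, fido can bark
```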
It sounds like the assumption thus far has been that the existence of
roles implies that abstract classes are disallowed, so you'd write:
role Dog { method bark { ... } #[ ... ] }
class Pug does Dog { method bark { .vocalize($.barkNoise) } }
S12 says: "Classes are primarily for instance management, not code reuse.
Consider using C<roles> when you simply want to factor out
common code."
But if I want to write code that handles the instance management for
several related classes but cannot be instantiated itself, that must be a
role and not a class. But instance management, as the S12 quote says, is
the purpose of classes, not roles. Is it a deep requirement of the
MOP/MRP that only one of classes and roles can be abstract?
I've looked at my uses of uninstantiable classes in Perl 5 for the past
few years, and I think that almost all of them could be done as roles
(though for some of them the cognitive dissonance of calling a set of
object-management methods a "role" still bothers me). But there's one
case where I can't figure out how to do it except for throwing a runtime
exception in a class.
For a system monitoring application, I have a class hierarchy like the
following (bare names indicate concrete instantiable classes, brackets
indicate uninstantiable abstract ones):
+ [SystemMonitor]
- CPUMonitor
- DiskMonitor
+ ScriptedMonitor
+ [HardwareMonitor]
- FanMonitor
- TempMonitor
- PowerSupplyMonitor
Here, SystemMonitor is abstract and sets up the data collection and
storage routines. Its concrete subclasses implement how to actually get
the data and any munging the data requires, but otherwise inherit their
behavior from SystemMonitor. ScriptedMonitor is a concrete class that
gets a script attribute which it runs and a closure attribute it uses to
munge the data the script generates.
It turns out that there are many HardwareMonitors that all run the same
suite of hardware monitoring scripts and perform the same munging on them,
and that have almost the same behavior as ScriptedMonitor. So I handled that by
subclassing it with a new abstract class, HardwareMonitor, which factored
out the new behavior all the hardware monitors shared. I then subclassed
*that* with concrete classes implementing the last little unfactorable
bits. So Abstract <- Concrete <- Abstract <- Concrete.
new(), for instance, was defined only in SystemMonitor (but threw an
exception if you tried to call it on SystemMonitor, thus making the class
abstract); gatherData() is called in SystemMonitor but is defined only in
the direct subclasses of SystemMonitor, and is overridden in
HardwareMonitor with a call to the superclass method
(ScriptedMonitor::gatherData). HardwareMonitor's subclasses just define
some munging methods that HardwareMonitor's processData() method calls.
In this way, I never repeat myself, I use polymorphism so that I never
write any conditionals involving the type of anything (except in the case
of new() throwing an exception to prevent instantiation of an abstract
class), and related code goes together. This is a good thing.
In Perl 6, the abstract SystemMonitor could be a role, and a concrete
ScriptedMonitor could be a class that does SystemMonitor, but it's not at
all clear to me what HardwareMonitor would be, since classes can't be
abstract and roles can't inherit from classes. I guess it would be a
role, but then we'd have something like:
- role SystemMonitor
- class CPUMonitor does SystemMonitor
- class DiskMonitor does SystemMonitor
- class ScriptedMonitor does SystemMonitor
- role HardwareMonitor does SystemMonitor
- class FanMonitor does HardwareMonitor
- class TempMonitor does HardwareMonitor
- class PowerSupplyMonitor does HardwareMonitor
and I'd have to repeat the non-overridden parts of ScriptedMonitor in
HardwareMonitor.
I don't see where this is a win for me--it looks very much like a loss, as
my clear class hierarchy with no repetition has now become a flat set of
classes with tightly-coupled mixins requiring a repetition.
I could factor out the repetition with a third role that both the
ScriptedMonitor class and the HardwareMonitor role do, but now I have
one more package than I had in the Perl 5 program, and I'm not sure why.
What have I gained by having it?
And the fact that I can't easily figure out a name that this role would
have apart from ScriptedMonitor also alarms me--I generally consider an
inability to see an obvious name for a software component to be a good
sign that my software composition is faulty.
So let me just ask the question bluntly: why can't we have classes that
can't be instantiated (short of checking .WHAT and explicitly throwing a
runtime exception)? Heck, to hack it in as a runtime exception, all we
need is something like:
role abstract {
    has @!classes;
    multi sub trait_auxiliary:is(abstract $trait, Class $container:) {
        # $container might be a concrete class, so we use ::?CLASS
        # to get at the lexical class 'is abstract' was written on;
        # we keep a list because there might be multiple abstract
        # classes in our instance's composition
        @!classes.push(::?CLASS);
    }
    submethod BUILD {
        if (self.WHAT == any(@!classes)) {
            croak "Attempt to instantiate abstract class {self.WHAT}";
        }
    }
}
Trey
To me, "instance management" means "the package can create, build, and
destroy objects" - not "the package initializes and cleans up
attributes". A 'class' that is forbidden from creating, building, and
destroying objects isn't a class; it's a role. In fact, you can think
of 'role' as being shorthand for 'abstract class' - after all, the
only difference between a concrete class and an abstract class is that
the former must implement everything and can manage instances, while
the latter cannot manage instances but doesn't have to implement
everything.
-snip-
> In Perl 6, the abstract SystemMonitor could be a role, and a concrete
> ScriptedMonitor could be a class that does SystemMonitor, but it's not at
> all clear to me what HardwareMonitor would be, since classes can't be
> abstract and roles can't inherit from classes.
S12 says:
*> A role is allowed to declare an additional inheritance for its
*> class when that is considered an implementation detail:
*>
*> role Pet {
*> is Friend;
*> }
So:
role SystemMonitor { ... }
class CPUMonitor does SystemMonitor { ... }
class DiskMonitor does SystemMonitor { ... }
class ScriptedMonitor does SystemMonitor { ... }
role HardwareMonitor is ScriptedMonitor { ... }
class FanMonitor does HardwareMonitor { ... }
class TempMonitor does HardwareMonitor { ... }
class PowerSupplyMonitor does HardwareMonitor { ... }
# and so on
is perfectly valid, and is shorthand for
role SystemMonitor { ... }
role HardwareMonitor { ... }
class CPUMonitor does SystemMonitor { ... }
class DiskMonitor does SystemMonitor { ... }
class ScriptedMonitor does SystemMonitor { ... }
class FanMonitor is ScriptedMonitor does HardwareMonitor { ... }
class TempMonitor is ScriptedMonitor does HardwareMonitor { ... }
class PowerSupplyMonitor is ScriptedMonitor does HardwareMonitor { ... }
# and so on
ISTR that it's also possible to treat a class as if it were a role
(e.g., "does classname" is valid, both as a statement in another role
or class and as an expression used to test the truth of the claim),
although I can't seem to find documentation for this at the moment.
--
Jonathan "Dataweaver" Lang
Thanks. This is what I was missing. I read the above, together with "A
role may not inherit from a class, but may be composed of other roles," as
specifying that any class doing Pet must already be a Friend, not that
doing Pet caused you to inherit from Friend.
I now see that what it meant was "a role may not *inherit* from a class,
because inheritance is a concept that only applies to instantiated
objects, but a role can *cause* its class to inherit from a class, which
works out to pretty much the same thing..."
So long as .post_data would work on a TempMonitor object below:
role SystemMonitor {
    method post_data ($value, :$units, :$timestamp) {
        # complete implementation here
    }
}

class ScriptedMonitor is SystemMonitor {
    # post_data never mentioned
}

role HardwareMonitor is ScriptedMonitor {
    method post_data { next METHOD }
}

class TempMonitor is HardwareMonitor {
    method post_data ($value, *%_) { call($value, :units(Celsius), |%_) }
}
I'm happy. It sounds like it should.
Trey
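The dispatch chain in Trey's example has a direct Python rendering, with `super()` standing in for `next METHOD` / `call`: TempMonitor adds a default unit and defers upward, HardwareMonitor simply defers, and the complete implementation lives at the top. The class names follow the example; nothing here asserts how Perl 6 would actually resolve the chain.

```python
class SystemMonitor:
    def post_data(self, value, units=None, timestamp=None):
        # complete implementation here
        return {"value": value, "units": units, "timestamp": timestamp}

class ScriptedMonitor(SystemMonitor):
    pass                                      # post_data never mentioned

class HardwareMonitor(ScriptedMonitor):
    def post_data(self, value, **kw):
        return super().post_data(value, **kw)  # next METHOD

class TempMonitor(HardwareMonitor):
    def post_data(self, value, **kw):
        kw.setdefault("units", "Celsius")      # :units(Celsius)
        return super().post_data(value, **kw)  # call($value, ..., |%_)

result = TempMonitor().post_data(21)
```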
Larry Wall wrote:
> Basically, all types do Package whenever they need an associated
> namespace.
Great! This is how I imagined things to be. And the reason why
the :: sigil is also the separator of namespaces.
> And most of the Package role is simply:
>
> method postfix:<::> () { return %.HOW.packagehash }
>
> or some such, so "$type.::" returns the symbol table hash associated
> with the type, if any. It's mostly just a convention that the Foo
> prototype and the Foo:: package are considered interchangeable for
> most purposes.
Do these namespaces also give a structural type definition in the
sense of record subtyping? That is a package is a subtype of another
package if the set of labels is a superset and the types of the slots
available through the common labels are in a subtype relation?
Regards,
--
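The structural-subtyping question at the end maps onto what Python later standardized as `typing.Protocol`: a type conforms if it supplies the required labels with compatible members, regardless of declared ancestry. Whether Perl 6 packages support this is exactly the open question above; the sketch only illustrates the concept being asked about.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Barker(Protocol):
    def bark(self) -> str: ...

class Pug:                        # never mentions Barker
    def bark(self) -> str:
        return "yip"

# Pug is a structural subtype: its label set is a superset of Barker's.
# (runtime_checkable isinstance only checks member presence, not types.)
conforms = isinstance(Pug(), Barker)
```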