Great. I am sure I would use it.
Is there a way to put conditions in patterns, like

    @pattern f(0) = 42
    @pattern f(x) = x   # if x > 0
    @pattern f(x) = -x  # if x < 0

or, more generally, a bool-valued function that needs to evaluate to true on the match in order for the rule to apply? That would be powerful and nice to have, I think.

A common programming pattern in Mathematica is something like this:

    f[0] = 1;
    f[n_] := f[n] = n*f[n-1];

Putting aside the question of whether one should do something like this, is it possible with your pattern approach?
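For reference, the Mathematica idiom above (a rule that rewrites itself to cache its result) can be approximated in plain Julia today with an explicit cache; this is just a sketch of the equivalent behavior, not anything the pattern macro provides:

```julia
# Plain-Julia sketch of the Mathematica memoization idiom:
# f[n_] := f[n] = n*f[n-1], with an explicit Dict as the cache.
const fact_cache = Dict{Int,Int}(0 => 1)  # base case f[0] = 1

function fact(n::Int)
    # Return a cached value if we have one; otherwise compute,
    # store, and return (assignment evaluates to the stored value).
    haskey(fact_cache, n) && return fact_cache[n]
    fact_cache[n] = n * fact(n - 1)
end
```

Subsequent calls with the same argument are then a single dictionary lookup.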
Very cool. You're right that the best of all worlds is to define
patterns for basically anything you can imagine, then group the
patterns into types, and within that generate efficient matching code
to resolve the more dynamic patterns (e.g. specific numeric values).
Of course in theory the whole compiler could work that way and you
wouldn't need a "two level" system, but it's probably useful to have a
clear notion of what the compiler can figure out (types) and what it
can't (everything else). You can also look at it from the
specialization perspective. Obviously there are very few functions
we'd want specialized for every integer (though they do exist), so we
can use types to define what is specialized on by default.
Here is a colon-free syntax suggestion for specifying conditions in patterns:
f( x | (x == 0) ) = 42
f( x | (x >= 0) ) = x
f( x | (x < 0) ) = -x
This would be really interesting if it is practical to include in the language.
It would make it really easy to pick different algorithms for different problem sizes. Currently we have inline code for multiplying 2x2 matrices, but it has to be thrown into a conditional, and adding more cases just gets ugly.
If constraints could be supported, one could even have different algorithms for different types of inputs.
And to dream - this would also open the door for a more general ATLAS style tuning of different codes on different architectures.
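To make the current situation concrete, here is a minimal sketch of the conditional-based size dispatch described above (the kernel names are hypothetical stand-ins, not the actual inline code):

```julia
# Hypothetical sketch: selecting a multiply algorithm by problem
# size with an explicit conditional -- the thing that guards in
# patterns would let us express as separate method definitions.
mul2x2(A, B) = A * B      # stand-in for a hand-inlined 2x2 kernel
mulgeneric(A, B) = A * B  # stand-in for the general algorithm

function matmul(A::Matrix, B::Matrix)
    if size(A) == (2, 2) && size(B) == (2, 2)
        mul2x2(A, B)      # small special case
    else
        mulgeneric(A, B)  # everything else
    end
end
```

Every new special case grows this one conditional, whereas guarded patterns would let each case live in its own definition.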
The type system is precisely that object model which the compiler knows everything about. If julia knew what x>0 meant, it would suddenly be a rather different language. If julia continues not to know what this means, then the functionality is not really part of the language, regardless of where or how it is implemented.
One might also wish to avoid having both declarative and imperative syntax for everything.
Not to discourage all this of course; scheme has its various case-lambdas and whatnot.
BTW, the name Type{} is inconsistent with the convention of writing
*Kind for the types of types. You should either change it to Kind{},
or (better IMHO) change CompositeKind to CompositeType. After all, if
the type of all integers is called Integer, the type of all composite
types should be called CompositeType, I think.
Yes, it is a kind but I didn't want to be too pedantic.
Actually it is hard to say, since julia doesn't strictly separate types and kinds the way a "real" type system would. The type of a value is also a value, so the type of a type is also a value, a type, and a kind.
[Consolidated reply]
On Thu, May 24, 2012 at 4:06 PM, Patrick O'Leary
<patrick...@gmail.com> wrote:
> Type{T} is not the type of T, it is the instance of the type Type
> parameterized by T.
Fair enough. But I still think singleton types should be generalized
to all objects, precisely because it neatly allows the special
handling of specific values using the existing type machinery.
> So Type{T} is a type just like Array{T} is a type,
> rather than a kind.
Parameterized or not, Type is a type of types, which is how "kind" is
defined elsewhere in the language. That only counts if the decision
is to keep the *Kind names, which I am against anyway.
> All that the pattern dispatch stuff really amounts to is syntactic sugar,
> (but potentially very useful sugar, I think!) maybe you can say the
> decoupling of pattern methods goes a little beyond that.
I think that pattern dispatch (which is really dispatching on types
whose position in the type lattice is not known until run time)
is only really useful if you can define the cases in separate places and
have Julia merge them into single LLVM functions. That's what Pure
does.
Since the compiler can't figure out the "proper" order of matches for
arbitrary predicates, you end up having to lay down an arbitrary rule
like "first written, first processed", which is what Pure uses, though
I often wish for "first written, *last* processed" at the REPL, which
would allow me to add special cases after writing down the general
case.
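The "first written, first processed" rule can be modeled as a linear scan over (guard, body) pairs kept in definition order; a sketch with hypothetical names, not how Pure or any Julia package actually implements it:

```julia
# Sketch of first-match-wins dispatch over guarded rules,
# tried in the order they were written (Pure's semantics).
const rules = Tuple{Function,Function}[]  # (guard, body) pairs

addrule!(guard, body) = push!(rules, (guard, body))

function apply(x)
    for (guard, body) in rules
        guard(x) && return body(x)  # first guard that holds wins
    end
    error("no rule matched")
end

addrule!(x -> x == 0, x -> 42)
addrule!(x -> x > 0,  x -> x)
addrule!(x -> x < 0,  x -> -x)
```

The "first written, *last* processed" variant wished for above would simply use pushfirst! instead of push!, so later special cases shadow the general case.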
I would be slow to provide pattern methods until the advantages and
limitations are fully hashed out.
(typing out loud)
The next examples use 'when' rather than 'where'; there may be two conceptually distinguished, operationally distinct, mutually reinforcing neighborhoods you are touching: one situated, one of duration/transition.
ok, how about
f( when x == 0 ) = 42
f( when x > 0 ) = x
f( when x < 0 ) = -x
zero is a special case; it might have its own conditional tokens
f( ~eql0 ) = 42 f( ~eql(42) ) = Inf
f( ~gtn0 ) = x f( ~gtn(42) ) = x
f( ~ltn0 ) = -x f( ~ltn(-42) ) = -x
f( ~ ) = NaN
f( ~ ) := f( otherwise )
what of partially external conditions in patterns (guards)?
f( where !hot(temp(motherboard)) when x > 1e7 ) = f( bignum(x) )
I note that all the examples given have just one conditioned argument. My understanding is that one of the intricate (although not that complex) things in, e.g., a Haskell compiler is factoring out implicit common elements of guards so they aren't evaluated more than once.
E.g., in
data Tree a = Lf | Nd a
f Lf Lf = 0
f (Nd _) Lf = 1
f Lf (Nd _) = 2
f (Nd _) (Nd _) = 3
there's only one test against the first argument being Lf, one for the second being Lf, and then the dispatching uses those values. (Actually, it's a bit more complicated due to laziness, but ignore that.) So in researching any possible design for Julia I'd definitely consider multi-argument matching when figuring out the issues to solve.
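The factored form of that Haskell dispatch can be sketched in Julia: each argument is tested against Lf exactly once, and the two results select the branch, rather than re-testing per clause (the Tree/Lf/Nd types here are just illustrative translations of the Haskell data declaration):

```julia
# Sketch of factored multi-argument matching for the example above:
# one test per argument, then a branch on the combined results.
abstract type Tree end
struct Lf <: Tree end
struct Nd <: Tree
    val
end

function f(a::Tree, b::Tree)
    aleaf = a isa Lf  # single test on the first argument
    bleaf = b isa Lf  # single test on the second argument
    if aleaf
        bleaf ? 0 : 2  # f Lf Lf = 0,     f Lf (Nd _) = 2
    else
        bleaf ? 1 : 3  # f (Nd _) Lf = 1, f (Nd _) (Nd _) = 3
    end
end
```

With four clauses the saving is trivial, but with many clauses over many arguments the naive per-clause testing grows quickly, which is why compilers factor it.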
This looks very nice. Just one thought. Would it be possible to make it multi-line, so that one could write something like
@patterns (
f (n where n == 0) = 1
,
f(n where n>0 ) = n + f(n-1)
,
g (x) = 2x+1
,
h (x) = 3x-2
)
That is, pass a tuple or list of assignment expressions to
the macro? (I haven't written any macros yet, so I'm uncertain how
they work).
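For what it's worth, a Julia macro can receive a whole begin/end block as a single expression, which would avoid the tuple-with-commas syntax; a minimal sketch (this toy macro just splices the definitions through unchanged, whereas a real @patterns would walk block.args and rewrite each clause):

```julia
# Minimal sketch: a macro taking a begin/end block of definitions.
# A real pattern macro would transform each `f(pat) = rhs` in
# block.args; this one only checks the shape and passes it through.
macro defs(block)
    block.head === :block || error("expected a begin/end block")
    esc(block)  # evaluate the definitions in the caller's scope
end

@defs begin
    g(x) = 2x + 1
    h(x) = 3x - 2
end
```

So a multi-line form like `@patterns begin ... end` is syntactically unproblematic; the work is all in what the macro does with the clauses.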
That is very beautiful!
Can you describe a little bit what is Pure's approach?