--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
* macro annotations (useful for pickling, slick, enums,...?)
* untyped macros (https://github.com/aztek/scala-workflow)

In paradise (Eugene's fork): not supported (by descending likelihood of making it to master):

* multi-parametric type class materialization aka fundep materialization (Iso's, extracting the essence of whitebox macros)
* introduceTopLevel -- currently in master, but I'd like to get it out (can always cast)
* type macros
Ugghh ... macros without type macros? Isn't that pretty pointless?
From my POV, it's better to remove macros altogether, then. The features which would be shipping are "neat", but I think they are not that useful in practice, and they are certainly not worth the "perceived complexity" if they are so severely limited that one can't build things which are truly helpful to end-users (see F# type providers).
On Thursday, May 30, 2013 5:11:28 PM UTC-4, Simon Ochsenreither wrote:

> Ugghh ... macros without type macros? Isn't that pretty pointless?
>
> From my POV, it's better to remove macros altogether, then. The features which would be shipping are "neat", but I think they are not that useful in practice and they are certainly not worth the "perceived complexity" if they are so severely limited that one can't build things which are truly helpful to end-users (see F# type providers).
So, I definitely sympathize with disappointment around the reluctance to release type macros, but I don't share this sentiment at all.

Macros are already an amazing tool for producing faster Scala code. This is arguably a bit of a niche interest, but it's one I have, so I wouldn't want to see them gone.
P.S. I would have posted from my normal email address but recently my emails have not been getting through to the mailing list.
--
You received this message because you are subscribed to a topic in the Google Groups "scala-internals" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/scala-internals/91W0-PxMQ9Q/unsubscribe?hl=en.
To unsubscribe from this group and all its topics, send an email to scala-interna...@googlegroups.com.
(1) type macros are not going into 2.11 because they are not ready yet; and (2) type macros are not going in because we are having second thoughts about whether this was a good idea in the first place.
--
> The features which would be shipping are "neat", but I think they are not that useful in practice and they are certainly not worth the "perceived complexity" if they are so severely limited that one can't build things which are truly helpful to end-users (see F# type providers).
--
Anyone who uses shapeless probably considers computation at the type level crucial! :)
--
I think that's a bit of an over-generalization. I can think of at least two ways of implementing type providers with the macro features planned for 2.11.
--
I think the classic test is FreeBase. FreeBase has thousands of types, which need to be lazily instantiated if you don't want to pull down the complete database schema for some simple type completion.
How would macro annotations help here? Having macro annotations which in turn create members with even more macro annotations?
Anyway, my experience with macros in general is that it is too early to tell which parts and which functionality will be needed and which won't. Example: the tools needed to build decent enums are all in the third category (and some are not even there yet).
In terms of non-obviousness of how to do it best/well/right, this feature ranks pretty high.
P.S. I would have posted from my normal email address but recently my emails have not been getting through to the mailing list.
--
It's a shame to see macros deemed not useful for type-level computations because of some implementation warts, such as the separate compilation requirement.

I think it'd be best to support both: an interpreted style for lightweight macros, and a mixed or solely compiled style for big macros that require the best possible performance.
Indeed ... I should have some exciting new stuff to show by Scala Days :-)
It's a problem with complex dependencies.
Adriaan, could you please flesh out the definitions a bit more?
--
Speaking of Scala types + pattern matching on types, why abandon the abstraction mechanisms provided by Scala and reduce them to just functions? What if I would like to modularize my type functions in OO-style? How do I do that?
Macros are first-class members of the language; one can import/unimport them, use mixin composition, etc. What would type functions offer the programmers in this regard?
--
FTR, I'm largely in agreement with the above (although I wouldn't be
quite so disparaging about Prolog ;-)
My uses of macros in shapeless are largely workarounds for features
missing in the language and its type system, and I would much rather
see the problems dealt with at source.
Miles
On Fri, May 31, 2013 at 1:03 PM, Eugene Burmako <eugene....@epfl.ch> wrote:

> The domain of discourse of macros is outside of the language, it's the implementation of the language -- that's my point!
> Speaking of Scala types + pattern matching on types, why abandon the abstraction mechanisms provided by Scala and reduce them to just functions? What if I would like to modularize my type functions in OO-style? How do I do that?
I never said to abandon the existing abstraction mechanisms -- quite the opposite: type functions would be members of classes just like type aliases are, and can partake in the whole abstraction game.

> Macros are first-class members of the language, one can import/unimport them, use mixin composition, etc. What would type functions offer the programmers in this regard?

You're "scripting the compiler". It may be acceptable to construct trees for values that way, but it's much too low-level when it comes to expressing types. Type functions would be part of the object level (i.e., strictly reasoning inside the type system), whereas macros are meta: they reason about and manipulate the type system.
I don't want to derail the (def) macro work with my dreaming of type functions. We're not ready for them yet. More research is needed, and I unfortunately can't afford to work on that right now. More importantly, we have enough on our plates stabilizing the macro features we want to support. Let's get the value level right first before we turn to types; we already have implicits and type inference to battle there. I'm also strongly convinced that a lot of type-level hacking can be done with the features as they are planned. Necessity is the mother of invention. I was also happy to hear positive feedback from Miles.
--
This seems to be a debate between two alternatives, one of which is
implemented (in early form), and one of which isn't. Unimplemented
software invariably has fewer warts and more potential capabilities,
than implemented software. I worry that if type macros get sidelined,
we'll be waiting ~3 years until the alternative solution is ready for
applied use. "Perfect is the enemy of good" etc..
The prime reason for turning to macros when implementing Typed Channels was error reporting,
I guess what this boils down to is introducing non-termination explicitly into your type functions.
--
On Sun, Jun 2, 2013 at 10:31 PM, Roland Kuhn <goo...@rkuhn.info> wrote:
> The prime reason for turning to macros when implementing Typed Channels was error reporting,

I'd love to understand the approach and how it could be implemented using (whitebox) def macros.
> I guess what this boils down to is introducing non-termination explicitly into your type functions.

I don't see why that is. A type function must terminate (because it must be decidable), but it can do so with an error.
--
3a) Speaking of being low-level. Would you name this type macro low-level? Why?
class Set[T] {
  def map[U](f: T => U): Result[U]

  type Result[NewEl] = NewEl match {
    case tq"Boolean" => tq"BitSet"
    case _           => tq"Set[$NewEl]"
  }
}
3b) I've been looking through the Omega manual trying to find another good example to discuss, and then I stumbled upon this type function:

plus :: Nat ~> Nat ~> Nat
{plus Z y} = y
{plus (S x) y} = S {plus x y}

An interesting thing here is that we need extended kinds to support this kind of signature. If we do a proposed DSL for type-level programming, we would need to come up with a special syntax for that, and then we'd have to thread support for such kinds through the type checker. With macros, we can reuse a pre-existing language mechanism:
macro type plus[M: Nat, N: Nat] = tq"$M" match {
  case tq"Z"     => tq"$N"
  case tq"S[$m]" => tq"S[plus[$m, $N]]"
}
The reason I'm asking for more detail is that def macros can also change the result type of the expression that they generate. You only need type macros if you need to change the meaning of types that are unrelated to expressions (e.g., to change the parents of a class, to compute a self-type or the bound of a type parameter). In many cases, type macros can be replaced by whitebox macros that synthesize type class instances; I've included an example below. As soon as expressions are involved (e.g., a method call), you can use a whitebox def macro to influence its result type, give error messages, and so on. So, if you can express your design in terms of "magic" method calls that reason about, influence, and potentially "complain" about their types, you don't need type macros.
Whitebox macros refer to the kind of def macros that are see-through (like a white box): they can change their signature. Blackbox macros can't. Both kinds of macros are implemented, though we will promote blackbox over whitebox.
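For readers unfamiliar with the distinction, here is a plain-Scala analogy (my own, not macro code; the Parser/IntParser names are made up): a blackbox abstraction commits to its declared signature, while a whitebox-like one lets the precise type flow through to the call site.

```scala
object WhiteboxAnalogy {
  // An abstract type member plays the role of the "result type" a macro computes.
  trait Parser { type Out; def parse(s: String): Out }

  // "Whitebox-like": through the singleton type, callers see the precise Out = Int.
  object IntParser extends Parser {
    type Out = Int
    def parse(s: String): Int = s.toInt
  }

  // "Blackbox-like": upcasting to Parser seals the declared signature; Out is opaque.
  val sealedParser: Parser = IntParser

  val n: Int = IntParser.parse("42")      // precise type flows through
  // val m: Int = sealedParser.parse("42") // would not compile: sealedParser.Out is abstract
}
```

The analogy is loose (macros refine signatures at expansion time, not via subtyping), but it shows why tools struggle with whitebox behavior: the declared signature is not the whole story.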
> So, if you can express your design in terms of "magic" method calls that reason about, influence, and potentially "complain" about their types, you don't need type macros.

Yes, I am aware of all this, since I am using those whitebox def macros already. The reason I chimed in is that I'd rather replace those whitebox macros (which make life a lot harder for IDEs and probably don't work in IntelliJ anyway) with the type calculations you talked about; I'm with Miles on this one. But in order to do that, we would need a solution for the error reporting problem.
> Whitebox macros refer to the kind of def macros that are see-through (like a white box): they can change their signature. Blackbox macros can't. Both kinds of macros are implemented, though we will promote blackbox over whitebox.

Did I understand correctly that whitebox macros will be part of 2.11 for sure?
@Roland. Could you elaborate on IDE problems caused by your macros?
@Adriaan. What if some extra payload that identifies an error needs to be carried over? E.g. in Roland's case of ping-pong analysis that would be not only the name of the unsupported message, but also a sequence of messages that led to that message.
1) I rather meant to ask how one would write complex type functions. What abstraction mechanisms would it be possible to use, e.g., to remove duplication from two similar type functions?

2) Being meta- or macro- has its benefits. We can use familiar Scala code (probably restricted in a way) and familiar abstractions (the reflection API) to solve a whole range of problems, whereas tools given to a DSL programmer will most likely be inapplicable in other domains.

3a) Speaking of being low-level, would you name this type macro low-level? Why?
class Set[T] {
  def map[U](f: T => U): Result[U]

  type Result[NewEl] = NewEl match {
    case tq"Boolean" => tq"BitSet"
    case _           => tq"Set[$NewEl]"
  }
}
3b) I've been looking through the Omega manual trying to find another good example to discuss, and then I stumbled upon this type function:

plus :: Nat ~> Nat ~> Nat
{plus Z y} = y
{plus (S x) y} = S {plus x y}

An interesting thing here is that we need extended kinds to support this kind of signature. If we do a proposed DSL for type-level programming, we would need to come up with a special syntax for that, and then we'd have to thread support for such kinds through the type checker. With macros, we can reuse a pre-existing language mechanism:
macro type plus[M: Nat, N: Nat] = tq"$M" match {
  case tq"Z"     => tq"$N"
  case tq"S[$m]" => tq"S[plus[$m, $N]]"
}
4) Also, let's not jump to conclusions based solely on ideology. It sounds nice to claim that macros are low-level, are too rough to touch such sophisticated beings as types, don't deal with the problems at the root, etc., but let's examine the situation concretely, using examples. If macros are indeed ill-suited for the tasks at hand, you know me, I'll be the most fervent zealot (though only after first trying to make macros better), but let's not get ahead of ourselves...
To show some problems where type macros seem unsuitable, I'll show that often the typechecker needs not only to run type functions, but also to do equational reasoning with them.

The examples in this thread are simple type transformers; they surely have their applications, but they are too simple for my purposes. I'll take an example which actually uses type-level addition, show a reasonable way of supporting it (the approach used in Agda, as far as I understand), and this might give you an idea of the requirements which a type-level sum must satisfy.

Take an encoding of type-level numerals in Scala. Here's a simple encoding, translated from some variant of C#:

trait Nat
class Zero extends Nat
class Succ[T <: Nat] extends Nat

The standard application, in the literature, is to use those to build sized lists -- I'd bet that's also doable in Omega, since the stated goal is to support similar examples. In the following, L is the list length.

trait List[+A, L] {
  def head[K <: Nat](implicit p: L =::= Succ[K]): A
  def tail[K <: Nat](implicit p: L =::= Succ[K]): List[A, K]
  def map[B](f: A => B): List[B, L]
  def zip[B](that: List[B, L]): List[(A, B), L]
}

The rest is at https://gist.github.com/Blaisorblade/5726830#file-typesasvalues-scala-L44.
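For what it's worth, the head part of this encoding can be made concrete in plain Scala without any macros. A minimal runnable sketch (SList, SCons, and SNil are my own names; I implement only head, whose signature makes head-of-empty a compile-time error):

```scala
object SizedLists {
  sealed trait Nat
  sealed trait Zero extends Nat
  sealed trait Succ[N <: Nat] extends Nat

  // L tracks the list length at the type level, as in the thread's List[+A, L].
  sealed trait SList[+A, L <: Nat]
  case object SNil extends SList[Nothing, Zero]
  final case class SCons[+A, K <: Nat](hd: A, tl: SList[A, K]) extends SList[A, Succ[K]]

  // head accepts only lists whose length is statically Succ[K]:
  def head[A, K <: Nat](xs: SList[A, Succ[K]]): A = xs match {
    case SCons(h, _) => h
  }

  val xs = SCons(1, SCons(2, SNil))
  val first = head(xs) // fine: xs has length Succ[Succ[Zero]]
  // head(SNil)        // does not compile: Zero is not Succ[K]
}
```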
Suppose you also have a type-level sum, defined as the equivalent of the standard definition on Peano numerals:

Sum Z n = n
Sum (S m) n = S (Sum m n)

Now you can write the type for list concatenation:

trait List[+A, L] {
  ...
  def append[K <: Nat](l: List[A, K]): List[A, Sum[L, K]]
}

Now, consider typechecking `Nil zip append(v1 :: l1, l2)`, where l1 has length M and l2 has length N, both unknown.

To typecheck the call to zip, the typechecker then needs to solve Z = Sum (S M) N, and it must prove that unification is impossible (either directly or with programmer help). Even though M and N are unknown, normalization can apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.
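For comparison, in today's Scala this kind of normalization can already be simulated with implicit search, where each defining equation of Sum becomes an implicit rule (a sketch under my own naming; Sum/Aux here are illustrative, not a library API). The typechecker "normalizes" Sum[_1, _2] to _3 by chaining the rules, and an impossible equation like the one above simply fails to resolve:

```scala
object TypeLevelSum {
  sealed trait Nat
  sealed trait Zero extends Nat
  sealed trait Succ[N <: Nat] extends Nat

  // Sum[A, B] { type Out } witnesses A + B = Out.
  trait Sum[A <: Nat, B <: Nat] { type Out <: Nat }
  object Sum {
    type Aux[A <: Nat, B <: Nat, O <: Nat] = Sum[A, B] { type Out = O }

    // Sum Z n = n
    implicit def base[N <: Nat]: Aux[Zero, N, N] =
      new Sum[Zero, N] { type Out = N }

    // Sum (S m) n = S (Sum m n)
    implicit def step[M <: Nat, N <: Nat](implicit rest: Sum[M, N]): Aux[Succ[M], N, Succ[rest.Out]] =
      new Sum[Succ[M], N] { type Out = Succ[rest.Out] }
  }

  type _1 = Succ[Zero]
  type _2 = Succ[_1]
  type _3 = Succ[_2]

  // Compiles only because implicit search normalizes Sum[_1, _2] to _3:
  val proof: Sum.Aux[_1, _2, _3] = implicitly[Sum.Aux[_1, _2, _3]]
  // implicitly[Sum.Aux[_1, _2, Zero]] // does not compile: no rule produces Zero from Succ

  val ok = proof != null
}
```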
But I think it's worth a great deal of thought if one is going to try to go meaningfully beyond type macros.
I'm a little wary of aiming for less than this because it seems almost clear to me that this ought to be the eventual goal, and having type macros, equational reasoning with limited syntax, and first-class types all at the same time is a recipe for... well... I'm not sure it's even a recipe. If you need reasoning, provide a general reasoning library and let people apply it to type-values (or whatever else they might want to reason with).
Why do you want to make your compiler understand natural numbers constructively? This is an example to me not of the defects of type macros, but of the defects of having anything less than a Turing-complete language for type-level reasoning -- and while you're at it, why not use the same language that you're writing everything else in?
Why do you want to make your compiler understand natural numbers constructively?
"Arbitrary" means the language available is the same as for values, and is almost Turing-complete; you can write any function which you can prove total, and a lot of recursion patterns are supported. And of course you don't have side effects in type computation - in Agda, side effects are controlled through monads.
On Thu, Jun 6, 2013 at 10:20 PM, Rex Kerr <ich...@gmail.com> wrote:
Why do you want to make your compiler understand natural numbers constructively?
On the off-chance someone tries to connect the text I quoted and my response to it, I add that I DON'T think we should have to make our compiler understand natural numbers constructively. We can easily teach the compiler about the properties of natural numbers in such a way that they are among the foundational elements to which I referred. I have long wished to do this.
On Friday, June 7, 2013 10:20:28 PM UTC+2, Paul Phillips wrote:
On Fri, Jun 7, 2013 at 1:18 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:

> "Arbitrary" means the language available is the same as for values, and is almost Turing-complete; you can write any function which you can prove total, and a lot of recursion patterns are supported. And of course you don't have side effects in type computation - in Agda, side effects are controlled through monads.

Okay, but this is not how we usually define "arbitrary".
filterTypes nil empty = nil
filterTypes (cons v vs) (dropFirst contain) = filterTypes vs contain
filterTypes (cons v vs) (keepFirst contain) = cons v (filterTypes vs contain)
> I'm a little wary of aiming for less than this because it seems almost-clear to me that this ought to be the eventual goal, and having type macros, equational reasoning with limited syntax, and first class types all at the same time is a recipe for...well...I'm not sure it's even a recipe. If you need reasoning, provide a general reasoning-library and let people apply it to type-values (or whatever else they might want to reason with).

There's no "equational reasoning with limited syntax", nor does the "type system prove things by induction". Probably my explanation confused you, and in multiple ways. To understand such systems, you should maybe try programming in one, or read a good paper about one (maybe "Why dependent types matter" is a good choice).
What I will do is write in Agda a version of your `gather` in a few lines, using pattern matching.
Instead of:

> Even though M and N are unknown, normalization can then apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.

I should have written that "Sum (S m) n -> S (Sum m n)" works as a reduction rule. In other words, Sum just does pattern matching as it would in Haskell; the computational model is still the lambda calculus, like in every (typical) functional programming language. The difference, however, is that the typechecker needs to normalize open terms -- that is, during typechecking, expressions are evaluated not using call-by-value (or even call-by-need), but using full beta reduction.
In your example, you seem to be trying to use standard numbers at the type level and have them work as first-class entities. How should the typechecker compare Sum (S m) n and S (Sum m n) when they are expressed as (m + 1) + n and (m + n) + 1? Answer: if you use builtin addition and don't add equational reasoning tools to your typechecker (somehow), you can't.
`Nil zip append(v1 :: l1, l2)` wouldn't typecheck. That's why I defined naturals constructively, because that makes things **easier** from this point of view — some proofs will just be done by computing values at compile-time and comparing them, which is what you want.
However, in practice you want to automate reasoning over naturals. The most common strategy is to not have first-class types, and use a specialized reasoning tool (like an SMT solver) for typechecking programs like your list there. I think that's what they use to typecheck JavaScript using dependent types (Google "dependent types for javascript"). But there, as far as I understand, types are very much non-first-class.
It's also possible to combine some automated reasoning with first-class types. But then, to typecheck programs using lists of types, the typechecker would need to do reasoning similar to what I've shown.
On Fri, Jun 7, 2013 at 3:54 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
>> I'm a little wary of aiming for less than this because it seems almost-clear to me that this ought to be the eventual goal, and having type macros, equational reasoning with limited syntax, and first class types all at the same time is a recipe for...well...I'm not sure it's even a recipe. If you need reasoning, provide a general reasoning-library and let people apply it to type-values (or whatever else they might want to reason with).

> There's no "equational reasoning with limited syntax", neither does the "type system prove things by induction". Probably my explanation confused you, and in multiple ways. To understand such systems, you should maybe try programming in one, or reading some good paper about one (maybe "Why dependent types matter" is a good choice).

It's certainly true that my knowledge of languages with proof-assistant capability lags far behind my knowledge of either how to do normal (mathematical) proofs or more traditional languages.
> What I will do is write in Agda a version of your `gather` in a few lines, using pattern matching.
>
> I should have written that "Sum (S m) n -> S (Sum m n)" works as a reduction rule. In other words, Sum just does pattern matching as it would do in Haskell; the computational model is still lambda-calculus, like in every (typical) functional programming language.
>
> Even though M and N are unknown, normalization can then apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.
Doesn't this, unless your compiler knows how to prove things via induction, give the compiler O(n+m) work to do -- with a very large constant factor compared to a bulk memory copy -- every time the user concats two fixed-size arrays?
That is, if n+m = a+b but n, m, a, and b are all distinct, your type checker ought to know that Sum n m is Sum a b, and if the only means of computing that is to expand to S(S(S(....S(Zero)...))), it's not exactly going to be fast.
I suppose you could argue that you don't need to worry about such things as much with lists, where the numeric S operation and the list cons are naturally related (not sure I want to say isomorphic). But one can have big lists also.
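A quick back-of-the-envelope check of the cost argument above (my own arithmetic, not anyone's benchmark): with unary naturals, normalizing Sum n m applies the step rule once per S in n, so the work is linear in n, while a binary encoding of the same number needs only logarithmically many constructors.

```scala
object UnaryCost {
  // Steps to normalize Sum n m with Peano naturals: one application of
  // "Sum (S m) n -> S (Sum m n)" per S in n, plus the base case "Sum Z n = n".
  def rewriteSteps(n: Int): Int = n + 1

  // Constructors needed for the same number in a binary encoding:
  def binaryDigits(n: Int): Int =
    if (n == 0) 1 else 32 - Integer.numberOfLeadingZeros(n)

  val unary  = rewriteSteps(1000)  // 1001 rewrite steps for Sum 1000 m
  val binary = binaryDigits(1000)  // 10 binary digits
}
```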