Scala - a Roadmap


martin odersky

Mar 20, 2012, 7:29:05 AM
to scala-l...@googlegroups.com
There is much controversy surrounding SIP 18. I believe at least part
of it is because people don't have a clear idea where Scala is going.
So people are making up crazy reasons, such as that the primary
purpose of SIP 18 is to pander to naysayers.

Let me first say something about that:

First, the criticism of Scala that's most vexing for me is when people
say it's like C++, or that it's a huge language that includes everything
including the kitchen sink. I'm hurt by this criticism because I
believe it describes Scala as the exact opposite of what I planned it
to be. I have always tried to make Scala a very powerful but at the
same time beautifully simple language, by trying to find unifications
of formerly disparate concepts: class hierarchies = algebraic types,
functions = objects, components = objects, and so on.

So I believe the criticisms are overall very unfair. But they do
contain a grain of truth, and that makes them even more vexing. While
Scala has a simple and consistent core, some of its more specialized
features are not yet as unified with the rest as they could be. My
ambition for the next 2-4 years is that we can find further
simplifications and unifications and arrive at a stage where Scala is
so obviously compact in its design that any accusations of it being a
complex language would be met with incredulity. That will be the best
counter-argument to the naysayers. But much more importantly, it will
be a big help for the people writing advanced software systems in
Scala. Their job will be easier because they will work with fewer but
more powerful concepts.

If we manage to achieve these simplifications, that would be a good
basis for a Scala 3. Right now, all of this is very tentative. Scala 3
does not have an arrival date, and it is not even certain that it will
ever arrive. But to give you an idea of what I will be working on, here
are some potential simplifications.

- The easiest win is probably XML literals. They seemed a great idea at
the time; now they stick out like a sore thumb. I believe that with the
new string interpolation scheme we will be able to move all of XML
processing into the libraries, which should be a big win. It also means
we could provide swappable alternatives to the current XML system, such
as Anti-XML or others.
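
To make this concrete, here is a minimal sketch of what a library-level
replacement could look like, assuming the 2.10-style string
interpolation; the "xml" interpolator and its one-line implementation
are illustrative assumptions, not a committed design:

import scala.xml.{Elem, XML}

// Route an interpolated string through the ordinary XML parser.
implicit class XmlInterpolator(sc: StringContext) {
  def xml(args: Any*): Elem = XML.loadString(sc.s(args: _*))
}

val title = "Roadmap"
val entry = xml"""<entry><title>$title</title></entry>"""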

- The type system. Ideally, Scala's types will be built from just
traits, mixin composition, refinements, and paths, and nothing else.
That's the true core of Scala as is captured in our dependent object
types formalism. We'll throw in classes for Java compatibility. We
still have to make this into a practical programming language
compatible with what Scala currently is. The potential breakthrough
idea here is to unify type parameters and abstract type members. The
idea would be to treat the following two types as equivalent

trait Seq[type Elem] and trait Seq { type Elem }

The definition

trait Seq[Elem]

could still be kept and be interpreted as a type that has an
inaccessible member Elem, similar to the distinction between

class C(x: T) and class C(val x: T)

that we have now. The big simplifications are then:

(1) Any type can be written without its parameters.
(2) Any type that has abstract type members can be retroactively parameterized.
(3) Type parameters can be unified by name.

To illustrate (3), right now to create a synchronized hashmap, one has to write:

new HashMap[String, List[Int]] with SynchronizedMap[String, List[Int]]

That's one of the thankfully few aspects where current Scala violates
the DRY principle. In the future you'd be able to write

new HashMap[String, List[Int]] with SynchronizedMap

because SynchronizedMap could refer to the Key and Value type
parameters of Map. Or you could write

new HashMap[Key = String, Value = List[Int]] with SynchronizedMap

to make it even clearer. The two formulations would be equivalent.
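
As a point of reference, the named-parameter form can already be
approximated in current Scala with abstract type members and a
refinement; a minimal sketch (the MyMap/MyHashMap/MySynchronizedMap
names are made up for illustration):

trait MyMap { type Key; type Value }
trait MyHashMap extends MyMap
trait MySynchronizedMap extends MyMap

// The refinement fixes Key and Value exactly once; both mixins see them.
val m = new MyHashMap with MySynchronizedMap {
  type Key = String
  type Value = List[Int]
}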


Now if we do that then we have suddenly gained the essential
functionality of higher-kinded types and existential types for free! A
higher-kinded type is simply a type where some parameters are left
uninstantiated. And an existential type is the same! Now clearly,
something must get lost in a scheme that unifies higher-kinded and
existential types by eliminating both. The main thing that does get
lost is early checking of kind-correctness. Nobody will complain that
you have left out type parameters of a type, because the result will
be legal. At the latest, you will get an error when you try to
instantiate a value of the problematic type. So type-checking will be
delayed. Everything will still be done at compile-time. But some of
the checks that used to raise errors at the declaration site will now
raise errors at the use site. In my mind that could be a price worth
paying for the great overall simplification and gain in expressive
power. The other thing that gets lost are the more complicated forms
of existential types that cannot be expressed as a (composition of)
types with uninstantiated type members.
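
For reference, here is how the two constructs are written in current
Scala; under the proposal, both would reduce to types with
uninstantiated members:

// Existential: a Seq whose element type is unknown at this point.
def size(xs: Seq[_]): Int = xs.size

// Higher-kinded: a type constructor F that is left abstract.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}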

One particularly amusing twist is that this could in one fell swoop
eliminate what I consider the worst part of the Scala compiler. It
turns out that the internal representation of higher-kinded types in
the Scala compiler is the same as the internal representation of raw
types in Java (there are good reasons for both representation
choices). But raw types should map to existentials, not to
higher-kinded types. We therefore need to map Java raw types to Scala
existential types. The code that does this is probably the most
fragile and intricate part of the Scala compiler. There's basically no
good way to do it without either forgetting some transformations or
accidentally tripping off cyclic reference errors. But with the
projected simplifications we would get

raw types = existentials = higher-kinded types = types with
uninstantiated parameters

so the whole issue would vanish in a puff of smoke.

To summarize: If we do this simplification,

- we could eliminate two classes of types in Scala, so that only a
simple core remains,
- we could gain expressive power through unification of concepts,
- we could avoid unnecessary repetition of parameters in mixin
compositions and extends clauses
- we could strengthen the analogies between type parameters and value
parameters.

Other, more narrowly scoped ideas for the type system are to introduce
least upper bounds and greatest lower bounds of types as type
constructors. This would avoid the explosion of computed lub types
that we sometimes see in codebases today. And there are some ideas to
make type inference more powerful by making it constraint-based.
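
To illustrate the kind of lub explosion meant here (the exact inferred
type varies by compiler version, so take the comment as a sketch):

val xs = List(List(1), Vector(2))
// scalac must compute the least upper bound of List[Int] and
// Vector[Int]; today that is a large intersection of all their shared
// parents, spelled out in full, rather than a compact bound that the
// programmer could name or that a lub type constructor could denote.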

One big unknown right now is how to ensure a high degree of backwards
compatibility or alternatively provide migration strategies. It's
clear that we have to do this if we want this to fly. It will require
an implementation and a lot of experimentation. Therefore, I don't
expect any of these things to materialize in less than 2-4 years.

So, what does this have to do with SIP 18? Two things:

First, while we might be able to remove complexities in the definition
of the Scala language, it's not so clear that we can remove
complexities in the code that people write. The curse of a very
powerful and regular language is that it provides no barriers against
over-abstraction. And this is a big problem for people working in
teams where not everyone is an expert Scala programmer. Hence the idea
to put in an import concept that does not prevent anything but forces
people to be explicit about some of the more powerful tools that they
use. I am certain there is no way we can let macros and dynamic types
into the language without such a provision.
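
For readers who have not followed the SIP, the mechanism is an ordinary
import that flags the use of a feature; a sketch of how it would look
(the exact feature names were still being settled at the time):

import language.existentials
import language.higherKinds

// Without the imports above, SIP 18 would issue a warning or error at
// these two declarations:
val xs: Seq[T] forSome { type T } = Seq(1)
trait Collection[C[_]]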

Second, the discussion here shows that complex existentials might
actually be something we want to remove from a Scala 3. And
higher-kinded types might undergo some (hopefully smallish) changes to
syntax and typing rules. So I think it is prudent to make people flag
these two constructs now with explicit imports, because, unlike for
the rest of the language we do not want to project that these two
concepts will be maintained as they are forever. If you are willing to
keep your code up to date, no reason to shy away from them. But if you
want a codebase that will run unchanged 5 years from now, maybe you
should think before using complex existentials or higher kinded types.
Of course the docs for these two feature flags will contain a
discussion of these aspects, so people can make an informed choice for
themselves.

I know that despite these explanations SIP 18 will still be
contentious. But let's keep the discussion of SIP 18 on scala-sips. Of
course I'd be happy to see responses to all other parts of this mail
in this thread.

Cheers

- Martin

Chris Marshall

Mar 20, 2012, 8:03:48 AM
to scala-l...@googlegroups.com
I still don't get SIP 18. If I start typing... 

   macro def

...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you are writing a macro - want me to import language.macroDefs?" adding anything? Why is it being any more explicit about whether macros are being used in the program than the program, um, containing macros? 

Having said all that; very interesting update on the direction of scala

Chris

On Tue, Mar 20, 2012 at 11:29 AM, martin odersky <martin....@epfl.ch> wrote:
I am certain there is no way we can let macros and dynamic types
into the language without such a provision.

Cheers

 - Martin

rkuhn

Mar 20, 2012, 8:46:54 AM
to scala-l...@googlegroups.com
Hi Martin,

I really like this proposed unification; the way you present it suggests that working with generic types will become much easier to think about and nicer to write down. Simplifying the compiler is more than just sugar on top: I take it as an omen that this idea is a very powerful one. And if you are right that most current code will remain valid, this means that the current system (while a bit convoluted about it) is actually not that far from being ideal ;-)

Thanks for this vision!

Regards,

Roland

Oleg Galako

Mar 20, 2012, 9:08:39 AM
to scala-l...@googlegroups.com
Looks very promising!

And one thing, probably a little off topic: in my experience, most communities formed around something have more people who just use/enjoy the thing and don't discuss it much in public. It works, it's fun to use, I'm teaching new people in my company to use it, and it's all OK! It's always easier to make it look like a language or technology is responsible for your failures, but we don't do that.

Scala has some problems that are quite usual for a growing language, but it is really very strong and beautiful, and the roadmap looks very much in Scala's spirit.

Keep doing your great job and may the force be with you :)

Alex Repain

Mar 20, 2012, 10:16:08 AM
to scala-l...@googlegroups.com


2012/3/20 Chris Marshall <oxbow...@gmail.com>

I still don't get SIP 18. If I start typing... 

   macro def

...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you are writing a macro - want me to import language.macroDefs?" adding anything? Why is it being any more explicit about whether macros are being used in the program than the program, um, containing macros? 


Try turning the chessboard around: suppose you are a non-advanced user of Scala and you have heard scary stories about macros, and about Scala in general. You might want to start learning the language without having to care about advanced features like macros. In that perspective, reducing the minimal core of the language is interesting. Sure, if you already master current Scala, adding a new feature is exciting for you, and you will avoid most of the dangerous orthogonal feature alchemies, but Scala was probably meant to be safe (typesafe, and generally safe to use for projects). It would be a big loss for the language if it were to stay safe only for the already advanced users.

What would you call that philosophy? Incremental design?
 

Daniel Spiewak

Mar 20, 2012, 10:38:27 AM
to scala-l...@googlegroups.com
- The type system. Ideally, Scala's types will be built from just
traits, mixin composition, refinements, and paths, and nothing else.
That's the true core of Scala as is captured in our dependent object
types formalism. We'll throw in classes for Java compatibility. We
still have to make this into a practical programming language
compatible with what Scala currently is. The potential breakthrough
idea here is to unify type parameters and abstract type members. The
idea would be to treat the following two types as equivalent

 trait Seq[type Elem]     and   trait Seq { type Elem }

I don't see how this works out from a theoretical standpoint (and thus, from a practical standpoint).  Abstract type members are a generalization of existential types, as you point out.  You're proposing to unify existential types with universal types.  What you're literally saying is that universal quantification is just existential quantification in disguise.  Unless I missed a huge chunk of predicate calculus, I don't think this is true.

Existential types may unify with let-bound polymorphism in the way you suggest, but not all of Scala's universal quantification is let-bound according to the mapping of let given in your example (a class/trait type declaration).  Thinking along these lines, one practical impact of this theoretical discord comes to light immediately:

def identity[A](a: A): A = a

So…is this a universal type?  There aren't any classes here onto which you could inject type members (maybe Function1?), so I don't see a way to carry your unification strategy through every case.  In other words, attempting to implement this strategy would make the language less consistent, rather than more, since it would then have two distinct ways of encoding universal types, and that's assuming the encoding works at all at the class level!

There just seem to be a lot of holes here for which I can't see a resolution, generally stemming from the duality (not isomorphism) of existential and universal quantification.

Am I missing something?

Daniel

martin odersky

Mar 20, 2012, 10:54:31 AM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 3:38 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
>
>> - The type system. Ideally, Scala's types will be built from just
>> traits, mixin composition, refinements, and paths, and nothing else.
>> That's the true core of Scala as is captured in our dependent object
>> types formalism. We'll throw in classes for Java compatibility. We
>> still have to make this into a practical programming language
>> compatible with what Scala currently is. The potential breakthrough
>> idea here is to unify type parameters and abstract type members. The
>> idea would be to treat the following two types as equivalent
>>
>>  trait Seq[type Elem]     and   trait Seq { type Elem }
>
>
> I don't see how this works out from a theoretical standpoint (and thus, from
> a practical standpoint).  Abstract type members are a generalization of
> existential types, as you point out.

They are considerably more powerful, since they can be used as input
types as well as output types. What you are arguing is that the basis
of any good type system should be F_omega_sub, or something close to
it. I want to explore something completely different. But let me
finish the research before going into details.

> def identity[A](a: A): A = a
>
> So…is this a universal type?  There aren't any classes here onto which you
> could inject type members (maybe Function1?), so I don't see a way to carry
> your unification strategy through every case.

Nothing new here. Polymorphic types for methods already exist and will
be maintained. In a calculus we could model identity as a member of a
parameterized class, but in a practical programming language things
will stay as they are.

Cheers

- Martin

Simon Ochsenreither

Mar 20, 2012, 11:02:35 AM
to scala-l...@googlegroups.com
Hi Martin,

Having some cleaned-up Scala 3 would be huge, BUT:

I think these changes are hard to sell to people and will put adoption at risk: "we will have a new major release in 2-4 years", when people are already scared off by point releases? People won't use Scala _now_ if they are told that a huge new version with probably breaking changes lies ahead.

A value proposition which would make many people more willing to move to Scala 3:
  • The changes you mentioned

PLUS:

  • Simplifications to reduce the complexity of signatures in the collection space.
  • Better/reified Generics. Seeing Oracle slides mentioning the possibility of reified Generics in a future version of the JVM/Java I think Scala can push this forward. The main thing in Generics I care about is not reflection stuff, but the whole overloading/overriding/subclassing problem. Other platforms don't use the erasure scheme either.
  • Having better default/named arguments so that overloading can be put to rest completely. (Makes reflection much simpler.)
  • No nullable reference types by default.
  • Further unification of AnyVal/AnyRef.
  • No any2StringAdd.
  • No unsafe implicit conversions for primitive types.

I think having a look at the stuff Kotlin (reified generics) or Ceylon (union types instead of nullable reference types) are trying to do makes sense.

I don't think the feature flags are the right way to pull off a migration to a future version. Imho it makes more sense to not annoy people about existing stuff but to offer them a way to opt in to "future implementations" on a non-global basis, e.g. the thing Adriaan mentioned about virtpatmat.

I still think import is the wrong way and I also dislike using "language" for it. We don't have "utilities" but "util", so I think "lang" is more consistent and looks more familiar to Java people.

Imho the crucial thing is to have an existing implementation before starting to make any warning noises.

In the end I think Scala 3 makes sense, but it should be done in a more continuous way, e.g. folding these changes into the next 2.x releases as they become ready and declaring Scala 3 when all the changes are already in. E.g. "This is what we think is worth calling 3.0, because it is stable, mature, and all features are well-tested. There are no compatibility issues." I.e. don't repeat Python 2 -> Python 3. And don't keep people waiting forever like Scala 2.7 -> Scala 2.8.

I think there is a scheme where I could agree to having some language pragma:
 - Offer a future implementation behind a pragma in version 2.X
 - Make the pragma the default in 2.X+1, but allow people to revert to the old implementation
 - Drop the pragma and the possibility of using the old implementation in 2.X+2

Thanks and bye!

Simon

Alex Repain

Mar 20, 2012, 11:24:19 AM
to scala-l...@googlegroups.com


2012/3/20 Daniel Spiewak <djsp...@gmail.com>

I'm going all-in with the following view, please call my bluff:

trait Seq { type Elem } is existential quantification over {X | X <: Any}: there exists a type X <: Any such that Elem = X and P(X), where P(X) are the requirements the trait imposes on the Elem type. But for as long as we don't identify an X that satisfies the condition implied by the trait, it is a universal quantification: for all X such that X <: Any and P(X), Seq { type Elem = X } can exist. When instantiating Seq, we must provide an X satisfying P, therefore satisfying both the universal and the existential quantification previously mentioned.

Then ... you can define the existential quantification as equivalent to universal quantification when applied to a singleton. That is:

trait Seq { type Elem = A } hints either at (for all X in {A}, P(X)), for the universal quantification, or at (there exists X such that P(X) and X = A), for the existential version.

I don't really know what theoretical value this has, since I deliberately used different sets over which to work the quantifications, but a compiler would most probably be able to handle that, wouldn't it?




Daniel Spiewak

Mar 20, 2012, 11:27:31 AM
to scala-l...@googlegroups.com
They are considerably more powerful since they can be used as input
types as well as output types. What you are arguing is that the basis
of any good type systems should be F_omega_sub, or something close. I
want to explore something completely different. But let me finish the
research before going into details.

Sounds interesting!  I wish you luck, but I find it hard to imagine how you'll be able to escape the lambda cube.

Type members are more powerful than existentials (as I said, they are a generalization).  That certainly makes them sufficiently powerful to simulate universally quantified types under certain circumstances (e.g. probably let bindings), but I'm not convinced they can represent universals in all cases.
 
Nothing new here. Polymorphic types for methods already exist and will
be maintained. In a calculus we could model identity as a member of a
parameterized class, but in a practical programming language things
will stay as they are.

I think I'm going to withhold judgement until I see the results of your research.  As I said, it sounds very interesting, but I can't quite see it from where I'm standing.

Daniel

Alois Cochard

Mar 20, 2012, 11:32:21 AM
to scala-l...@googlegroups.com
Hi Martin, All,

I just wanted to share another idea:
- As of today most coders are used to these advanced features, and I personally use some of them very often.

It could be possible to have the "-language:all" flag enabled by default (so to speak) in version 2.X, and when switching to 3.X the flag would not be added by default. This could keep code compatible without too much hassle for people using these features today in 2.X.

Of course, I suppose it should also be possible for the compiler to add the given import when necessary, by analysing the source code and checking whether these features are used; doing so would ease the hypothetical migration and lower the frustration of advanced users.

Anyway, I think it could be worth having Martin finish his experimentation, and be sure this path is solid and feasible, before adding a verbose import system to enable these features.

Just my 2 cents,

Kind regards,

nuttycom

Mar 20, 2012, 11:41:09 AM
to scala-language


On Mar 20, 9:02 am, Simon Ochsenreither
<simon.ochsenreit...@googlemail.com> wrote:
> Hi Martin,
>
> Having some cleaned-up Scala 3 would be huge, BUT:
>
> I think these changes are hard to sell to people and will put adoption at
> risk: "we will have a new major release in 2-4 years" when people are
> already scared from point releases? People won't use Scala _now_ if they
> get told that there will be a huge new version with probably breaking
> changes ahead.
>
> A value proposition which would make many people more willing to move to
> Scala 3:
>
>    - The changes you mentioned
>
> PLUS:
>
>    - Simplifications to reduce the complexity of signatures in the
>    collection space.
>    - Better/reified Generics. Seeing Oracle slides mentioning the
>    possibility of reified Generics in a future version of the JVM/Java I think
>    Scala can push this forward. The main thing in Generics I care about is not
>    reflection stuff, but the whole overloading/overriding/subclassing problem.

Sorry, I know this is off topic, but I just have to jump in here.

Reified types are EVIL, and I dearly hope that they never become part
of Scala or the JVM. The reason reified types are evil is that type
reification has only two uses: first, reflectively inspecting
parameterized type values, which you shouldn't be doing anyway; and
second, overloading on parameterized types, which is unnecessary,
since overloading is a purely syntactic concern that can be avoided by
simply using different names. The only condition under which
reification would actually be helpful is at serialization boundaries -
but there you *actually* gain nothing from reification, because,
first, you're already forced to perform a cast, and second, if you've
got type information reified into the serialization format, you can
just go ahead and dispatch on that.

Please, please stop spreading the incorrect notion that reified types
are an appropriate solution to any important problem. Writing
programs in the presence of erasure *forces* you to avoid excessive
coupling to runtime type knowledge, and avoiding that coupling is
required if you actually want to write reusable code.

>    Other platforms don't use the erasure scheme either.
>    - Having better default/named arguments so that overloading can be put
>    to rest completely. (Makes reflection much more simpler)
>    - No nullable reference types by default.
>    - Further unification of AnyVal/AnyRef.
>    - No any2StringAdd.
>    - No unsafe implicit conversions for primitive types.

Daniel Spiewak

Mar 20, 2012, 12:26:58 PM
to scala-l...@googlegroups.com
I was finally able to puzzle through what it is you lose if you treat type members as universals: higher-rank types.  Let-bound polymorphism is satisfied, and this is trivially easy to show due to the fact that type members can be given definite instantiations.  When instantiations are given, the existential type is given a definite binding and the behavior is equivalent to when a similar universal type is bound.  This is why cases like Seq[A] work out just fine under this regime.

However, you have no way to encode the type forall a . a -> a.  Naively, one might want to try something like this:

type Id = { type A; def apply(a: A): A }

Unfortunately, that falls over immediately:

def foo(id: Id) = id(42)         // error!!

The reason this falls over is that A is not any type, it is some type.  We can't just mash Int into apply and expect it to work.  This is where the fundamental duality between existential and universal types rears its head.  You could fix this by playing Oleg's double-negation trick, but that always ends up being extremely ugly.

Given that Scala theoretically shouldn't have higher-rank types at all, maybe this isn't a serious issue.  Still, it bugs me.

Daniel

martin odersky

Mar 20, 2012, 12:56:35 PM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 4:02 PM, Simon Ochsenreither
<simon.och...@googlemail.com> wrote:
> Hi Martin,
>
> Having some cleaned-up Scala 3 would be huge, BUT:
>
> I think these changes are hard to sell to people and will put adoption at
> risk: "we will have a new major release in 2-4 years" when people are
> already scared from point releases? People won't use Scala _now_ if they get
> told that there will be a huge new version with probably breaking changes
> ahead.

I am a bit more optimistic than you. Java has published a roadmap and
some of the changes might well be breaking (I do not see how else one
could introduce reified types). Now, we all know that roadmaps like
that are tentative, and if the breakage is too serious one won't be
able to do it. But I believe it's better to be clear about the
directions of _work on the language_ without already being explicit
about releases.

> A value proposition which would make many people more willing to move to
> Scala 3:
>
> The changes you mentioned
>
> PLUS:
>
> Simplifications to reduce the complexity of signatures in the collection
> space.

I do not see how you will be able to do this without greatly
complicating the use of collections. If you want to convince me
otherwise, write an alternative implementation and convince more than
10% of Scala programmers to use it. Then, and only then, will I give
it a serious look.

> Better/reified Generics. Seeing Oracle slides mentioning the possibility of
> reified Generics in a future version of the JVM/Java I think Scala can push
> this forward. The main thing in Generics I care about is not reflection
> stuff, but the whole overloading/overriding/subclassing problem. Other
> platforms don't use the erasure scheme either.

Our answer to that is manifests / type tags. I am convinced we can
make this work well enough that no reified types are needed. Regarding
other platforms: the only platform that uses reified types is .NET,
and it's by no means accepted in their core developer team that it was
a good idea. Haskell, SML and OCaml use erased types just like Scala
and Java. C++ templates do not count, IMO, because they are a
compile-time expansion mechanism.
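
For illustration, the manifest approach already works in current Scala;
a minimal sketch:

// The context bound asks the compiler to pass a Manifest[A] implicitly;
// it carries the class information that erasure would otherwise lose.
def newArray[A: Manifest](n: Int): Array[A] = new Array[A](n)

newArray[String](3)  // ok: a Manifest[String] is supplied at the call site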

> Having better default/named arguments so that overloading can be put to rest
> completely. (Makes reflection much more simpler)

I am very sympathetic to avoiding overloading but do not see how we
can do that and maintain Java compatibility.

> No nullable reference types by default.

We'd need to add non-null types. Not convinced it is worth it because
Option fulfills this role. I'm sitting on the fence on this one, but
my gut feeling is it's better to improve Option.

> Further unification of AnyVal/AnyRef.

Will hopefully happen in 2.10. See value classes SIP.

> No any2StringAdd.

I believe that once we have string interpolation in 2.10 (it needs a
vote, but I believe this one will be accepted), we can deprecate
any2StringAdd afterwards. Maybe even deprecate it in 2.10.1 if we want
to go fast, otherwise in 2.11.

> No unsafe implicit conversions for primitive types.

Well, it was a design decision of Scala to keep Java expressions
as-is, and I believe it was a good one. Maybe at some point in the
future we want to revise that. But right now I prefer we keep it.

>
> I don't think the feature flags are the right way to pull off a migration to
> a future version. Imho it makes more sense to not annoy people about
> existing stuff but offer them a way to opt-in to "future implementations" on
> a non-global base. E. g. the thing Adrian mentioned about virtpatmat.
>

We can do it for the pattern matcher. There is simply no way to do it
for the core type system, without maintaining two different compilers
at the same time.

Cheers

- Martin

Luke Vilnis

Mar 20, 2012, 12:57:01 PM
to Daniel Spiewak, scala-l...@googlegroups.com
I think it still works because you have to translate the argument list to "apply" into a module as well... So:
type Id = { def apply[A](a: A): A }
becomes
type Id = { def apply(a: { type A; def value: A }): a.A }
And then
def foo(id: Id) = id(42)
works as long as you have a mechanism to automatically translate argument lists into modules, which was I think the gist of Martin's original idea.

So higher rank types translate into modules containing functions that take modules.
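
A runnable approximation of this module encoding in current Scala,
using dependent method types (on by default in 2.10, behind
-Ydependent-method-types in 2.9); the Box name is illustrative:

trait Box { type A; def value: A }

// The "argument module" version of identity: the result type depends
// on the module that was passed in.
def identityM(b: Box): b.A = b.value

val boxed = new Box { type A = Int; def value = 42 }
val n: Int = identityM(boxed)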

On Tue, Mar 20, 2012 at 12:34 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
So, your email put me on to the precise issue that arises when you try to do this, which I have now posted to the list.  Basically, type members are existentials, it's just that the pack/unpack is hidden by the language (as it is with most languages that support existential quantification).  It's pretty easy to see this existentiality though if you look:


type Id = { type A; def apply(a: A): A }

def foo(id: Id) = id(42)          // error!

Let-bound polymorphism works just fine, since there's no difference between an instantiated universal and an instantiated existential.  Higher-rank types (true universal polymorphism) do not work at all, and that's where the weakness of this approach shows up.  It's possible that this may be resolved by leveraging the fact that module members are late-bound in the resolution (basically, the same trick we currently use to wrestle higher-rank types out of what is fundamentally let-bound polymorphism), but I'm not sure.

Higher-kinded types seem like the most dubious part of the proposal.  I'm not even sure how it would all work out, and the theory here is generally untested waters.  I'm still thinking about it though, and I look forward to seeing what Martin comes up with!

Daniel


On Tue, Mar 20, 2012 at 10:05 AM, Luke Vilnis <lvi...@gmail.com> wrote:
Hi Daniel,
It's probably off topic for this thread, but I couldn't help but start thinking about the problem you and Martin were discussing. (FYI, I'm just an amateur who enjoys functional programming and has read TAPL/ATTAPL, so take this for what it's worth). I've enjoyed reading some of your blog posts (and your data structures talk), so I was wondering if you would give me your perspective on this interpretation:

I think if you think of Scala objects as first-class modules, then the ability to get universal quantification out of type members makes sense. Type members are far from regular existentials because they don't require pack/unpack (I think that's what TAPL called it) , which can only be done inside the scope of the function, where results of the existential type can't be returned. So you could imagine a function that takes a module and returns a modified copy of that module, while treating the input module's type member in a generic (a.k.a universally quantified way). I have to admit this is complete hand-waving (and very much out of my depth) but this is my intuition of how you get back F-sub type behavior from first-class modules (still not sure how to get the omega part).

So your identity example would be like:

type ValueWithType = {
  type T
  def value: T
}

And the function would then just be ValueWithType => ValueWithType#T

I think what Martin is saying is that you can turn the argument list of a function into a module, and then the type parameters of the function become abstract type members of the module. Not sure how higher kinded types works into there. Any thoughts?

Best,
Luke

Daniel Spiewak

Mar 20, 2012, 1:05:54 PM
to Scala Language
I think it still works because you have to translate the argument list to "apply" into a module as well... So:
type Id = { def apply[A](a:A): A } 
becomes
type Id = { def apply(a: { type A; def value: A }): a.A }
And then  
def foo(id: Id) = id(42)    
works as long as you have a mechanism to automatically translate argument lists into modules, which was I think the gist of Martin's original idea.

It's not really translating, but wrapping.  You're taking advantage of the following tautology:

forall a . exists b . a ⟷ b

Another way to handle this encoding is to do something like the following:


type Id = { def apply[A](a: A): A }

// becomes

type Id = {
  def apply: {
    type A
    def apply(a: A): A
  }
}

This would be more consistent with what we have in Scala today.  Basically, Martin's proposal is to replace instantiated universal types (traditional let-bound polymorphism) with instantiated existential types.  Scala's higher-rank types arise at the intersection between first-class modules and let-bound polymorphism (due to the fact that the let-binding is on the method, and therefore free within the module itself).  This trick to achieve higher-rank types with instantiated universal types is just as applicable to instantiated existential types, as my above snippet shows.  This holds because an instantiated universal is trivially equivalent to an instantiated existential.

Still makes me itch.  :-)  In any case, I'm still really looking forward to what Martin is cooking up in this area.

Daniel

Alex Kravets

Mar 20, 2012, 1:27:38 PM
to scala-l...@googlegroups.com
Hi Martin,

Page 32 of the Fortress Language Specification specifies blank non-space characters that are not allowed in source (except in comments).

If one looks at most files in the java.lang or java.util packages, they present a jumble of space-based and tab-based indentation.

It's a very small thing, but simply restricting valid spacing to only the space character would, IMHO, be very beneficial. 

It would end all the space-vs-tabs-vs-mix wars and make the source indentation much more regular.

Is there any chance that tabs can be prohibited from source (outside of comment blocks) in any future version of Scala?

Cheers...
--
Alex Kravets       def redPill = 'Scala
[[ brutal honesty is the best policy ]]

Paul Phillips

Mar 20, 2012, 1:41:56 PM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 10:27 AM, Alex Kravets <kra...@gmail.com> wrote:
> It's a very small thing, but simply restricting valid spacing to only the
> space character would, IMHO, be very beneficial.
>
> It would end all the space-vs-tabs-vs-mix wars and make the source
> indentation much more regular.

One is tempted to observe that this is also how people propose
periodically to end real wars (the complete extermination of the
Other) with generally unpleasant results. I'm not a big fan of tabs
either, but it seems unwise to martyr the tabbies.

Robert Kirkpatrick

Mar 20, 2012, 1:41:51 PM
to scala-l...@googlegroups.com

On 20/03/2012 18:27, Alex Kravets wrote:
> Hi Martin,
>
> Page 32 of the Fortress Language Specification
> <http://labs.oracle.com/projects/plrg/Publications/fortress.1.0.pdf> specifies blank non-space
> characters that are /not allowed/ in source (except in comments).

>
> If one looks at most files in the java.lang or java.util packages, they present a jumble of
> space-based and tab-based indentation.
>
> It's a very small thing, but simply restricting valid spacing to only the space character
> would, IMHO, be very beneficial.

What a nightmare!! The result would be massive space filling (by IDEs or editors) to produce indentation!


>
> It would end all the space-vs-tabs-vs-mix wars and make the source indentation much more regular.

To me whitespace is perfectly defined by combinations of space + tab + nl, like in regexps.


>
> Is there any chance that tabs can be prohibited from source (outside of comment blocks) in any
> future version of Scala?
>
> Cheers...
>

Kr,
Robert.

Alex Cruise

Mar 20, 2012, 1:49:59 PM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 9:56 AM, martin odersky <martin....@epfl.ch> wrote:
On Tue, Mar 20, 2012 at 4:02 PM, Simon Ochsenreither
<simon.och...@googlemail.com> wrote:
> No nullable reference types by default.

We'd need to add non-null types. Not convinced it is worth it because
Option fulfills this role. I'm sitting on the fence on this one, but
my gut feeling is it's better to improve Option.

Just to hijack this thread somewhat... Given 'extends AnyVal', is it any more feasible today to revisit the old alchemists' dream of transmuting Some(x) to x, and None to null?  (i.e. an unboxed Option)
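
For the record, a sketch of the shape such a thing could take with the
proposed 2.10 value classes; the OptVal name is hypothetical, and note
that it cannot represent Some(null) or nested options, which is exactly
the problem raised below:

class OptVal[A <: AnyRef](val ref: A) extends AnyVal {
  def isEmpty: Boolean = ref eq null              // None is encoded as null
  def get: A = ref
  def getOrElse(default: A): A = if (isEmpty) default else ref
}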

-0xe1a

Paul Phillips

Mar 20, 2012, 2:20:44 PM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 10:49 AM, Alex Cruise <al...@cluonflux.com> wrote:
> Just to hijack this thread somewhat... Given 'extends AnyVal', is it any
> more feasible today to revisit the old alchemists' dream of transmuting
> Some(x) to x, and None to null?  (i.e. an unboxed Option)

We'd need a new Option type, one which explicitly disallowed null.
Anything you used it with would have to disallow null as well. Maybe
if we had a fully working NotNull there could be an Option[T <:
NotNull].

I don't think there's any way to do it as things stand.

Alex Cruise

Mar 20, 2012, 2:34:54 PM
to scala-l...@googlegroups.com
OK, thanks!  Just out of curiosity, what would it take for NotNull to work fully?  Is it a language change or "merely" a compiler change?

-0xe1a 

Grzegorz Kossakowski

Mar 20, 2012, 3:07:23 PM
to scala-l...@googlegroups.com
Or have a runtime assertion that disallows construction of Some(null).

I've been pondering the idea of specializing Option[T] into T (with transparent treatment of None as null), but I got stuck completely on the case of Option[Option[T]] and distinguishing between Some(None) and None.
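
Concretely (the types are today's, the erasure mapping is the
hypothetical part):

val a: Option[Option[Int]] = Some(None)
val b: Option[Option[Int]] = None
// If Some(x) erased to x and None erased to null, both a and b would
// erase to null, and the distinction would be lost at runtime.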

If somebody comes up with a sensible idea of how to deal with nested options, then I might give it a whirl and hack up an experimental compiler phase that would specialize Option by leveraging what's been implemented for value classes.

I'd be extremely curious to see how much performance gains we could get from such a specialization.

--
Grzegorz Kossakowski

Grzegorz Kossakowski

Mar 20, 2012, 3:10:15 PM
to scala-l...@googlegroups.com
On 20 March 2012 12:29, martin odersky <martin....@epfl.ch> wrote:

Second, the discussion here shows that complex existentials might
actually be something we want to remove from a Scala 3. And
higher-kinded types might undergo some (hopefully smallish) changes to
syntax and typing rules. So I think it is prudent to make people flag
these two constructs now with explicit imports, because, unlike for
the rest of the language we do not want to project that these two
concepts will be maintained as they are forever. If you are willing to
keep your code up to date, no reason to shy away from them. But if you
want a codebase that will run unchanged 5 years from now, maybe you
should think before using complex existentials or higher kinded types.
Of course the docs for these two feature flags will contain a
discussion of these aspects, so people can make an informed choice for
themselves.

What about the types computed for mix-in composition? I thought DOT/Scala 3 would result in very different typing for mix-in composition. Isn't that the case?

--
Grzegorz Kossakowski

Mike Mintz

Mar 20, 2012, 3:20:34 PM
to scala-l...@googlegroups.com

Is this feasible?

Some(Some(...(x))) --> x
Some(Some(...(None))) --> NoneN (e.g. None1, None2, depending on the
number of nested Somes)
None --> null (not necessary?)

Option[Option[...[T]]] would have to be stored underneath as AnyRef
and not T, since it would need to be able to refer to T and NoneN, but
that doesn't seem like a blocker. Maybe it slows things down with more
polymorphism, but it still reduces allocation of Some instances.

Adriaan Moors

Mar 20, 2012, 4:53:31 PM
to scala-l...@googlegroups.com
for one approach to all this, please have a look at chapter 4 of my thesis: https://lirias.kuleuven.be/bitstream/1979/2642/5/thesis_adriaan_moors_archive.pdf

Alex Kravets

Mar 20, 2012, 5:06:22 PM
to scala-l...@googlegroups.com

Using spaces vs. tabs is not entirely a matter of taste or personal preference; there's actually a valid reason, IMHO, to avoid tabs: tab rendering is not standardized across OSs, IDEs, etc.

Therefore tabs are sometimes rendered as 2, 4, 6 or 8 times the width of a space, and sometimes even at a fractional multiple; this causes misalignment of indentation when there is a mix of spaces and tabs (and I've never seen a tab-only source file in my 16+ years of professional experience).

If you want to observe the effect of this, just click through into the source of the Java libraries and you'll observe a complete jumble of indentation.

Cheers...

Daniel Spiewak

Mar 20, 2012, 5:06:45 PM
to scala-l...@googlegroups.com
This looks like the unfolding I've been assuming in my comments.  I'll need to read further to see how you handle higher-rank universals, but it looks like kinds are just encoded directly.  Did you prove the soundness of using this encoding for let-bound polymorphism, or is that assumed?

Daniel

Daniel Sobral

Mar 20, 2012, 6:29:10 PM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 09:03, Chris Marshall <oxbow...@gmail.com> wrote:
> I still don't get SIP 18. If I start typing...
>
>    macro def

Technically,

def ident[T](arg: T): T = macro ...

I found it confusing at first, but the distinction is interesting and
I'm very much in agreement with it. The definition is exactly the same
as every other in Scala: it takes some parameters and produces a
result, all according to whatever types you specify. A macro does not
change the definition: if you are taking two strings and returning a
boolean, you are taking two strings and returning a boolean, period.
It is the *implementation* of said definition that is *produced* by a
macro.

Off topic, I know, but I like nipping misconceptions in the bud (all
sorts of sp?).

>
> ...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you
> are writing a macro - want me to import language.macroDefs?" adding
> anything? Why is it being any more explicit about whether macros are being
> used in the program than the program, um, containing macros?
>

> Having said all that; very interesting update on the direction of scala
>
> Chris
>

> On Tue, Mar 20, 2012 at 11:29 AM, martin odersky <martin....@epfl.ch>
> wrote:
>>
>> I am certain there is no way we can let macros and dynamic types
>> into the language without such a provision.
>>

>> Cheers
>>
>>  - Martin
>
>

--
Daniel C. Sobral

I travel to the future all the time.

Simon Ochsenreither

Mar 20, 2012, 6:38:06 PM
to scala-l...@googlegroups.com
Hi,

first of all, thanks to the whole team for all the hard work. Sometimes I read my own stuff again and realize that I sound completely negative, when I'm in fact excited and thankful for all the work you put into Scala. I appreciate it very much, even if I only mention the parts I don't like.


    I am a bit more optimistic than you. Java has published a roadmap and
    some of the changes might well be breaking (I would not see how else
    to introduce reified types). Now we all know that roadmaps like that
    are tentative and if the breakage is too serious one won't be able to
    do it. But I believe it's better to be clear about the directions of
    _work on the language_ without already being explicit about releases.

Ok, that's great. Sometimes I think it should be possible to take more control of Scala's fate on the JVM, like Charles Nutter does for JRuby quite effectively (at least it looks that way). But maybe most of it happens behind the scenes, and Charles Nutter's public comments just build up some expectations towards the JVM team.


    I do not see how you will be able to do this without complicating
    greatly the use of collections. If you want to convince me otherwise,
    write an alternative implementation and convince more then 10% of
    Scala programmers to use it. Then, and only then, I will give it a
    serious look.

You're right, it makes no sense to have a discussion without a working proposal. Btw, is there any decision yet about the Traversable/Iterable merge?

 

    I am very sympathetic to avoid overloading but do not see how we can
    do that and maintain Java compatibility.

import lang.overloading? But yes, people on the Ceylon list talk about how hard that decision (leaving out overloading) is to implement and figure out.

 

    > No nullable reference types by default.

    We'd need to add non-null types. Not convinced it is worth it because
    Option fulfills this role. I'm sitting on the fence on this one, but
    my gut feeling is it's better to improve Option.

What about making something like "Foo with Null" work? Or union types, which Adriaan mentioned in some other scenario? This would be especially nice for enums/pattern matching/exhaustiveness checks, e.g. having something like type Foo = Bar|Baz|Bad, where Bar/Baz/Bad are not necessarily in some subtyping relationship.

 

    > Further unification of AnyVal/AnyRef.

    Will hopefully happen in 2.10. See value classes SIP.

Yes, although on an unrelated note, I'm very concerned about the feature overlap between implicits and value classes. The whole topic of implicit conversions to value types will get very interesting.

 

    > No any2StringAdd.

    I believe once we have string interpolation in 2.10 (needs a vote, but
    I believe this one will be accepted),
    we can deprecate any2StringAdd afterwards. Maybe even deprecate in
    2.10.1 if we want to go fast, otherwise 2.11.

Yes, looking forward to that, although my main concern about the current situation is the + and the clashes it produces. Using something different, maybe even . or .., might be enough.

 

    > No unsafe implicit conversions for primitive types.

    Well, it was a design decision of Scala to keep Java expressions
    as-is, and I believe it was a good one. Maybe at some point in the
    future we want to revise that. But right now I prefer we keep it.

Octal numbers and fp literals like "123." are Java legacy, too, but they are now going away.


    We can do it for the pattern matcher. There is simply no way to do it
    for the core type system, without maintaining two different compilers
    at the same time.

My idea was basically to introduce the heavy stuff first, so that when we arrive at 3 there are no big compatibility issues to expect.
But then of course, if that won't work, it doesn't sound too great :-/ Although maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either, right?


Thanks and bye,


Simon

martin odersky

Mar 20, 2012, 6:43:29 PM
to scala-l...@googlegroups.com
No, I think that would got dropped. We have to consider it again
before freezing for 2.10.

Yes, but I guess they are used much more rarely. One thing I do not
understand: why outlaw conversions from Long to Float? I mean, we know
Float is a lossy approximation no matter what you do, so why is bit
loss in the conversion a problem?

Cheers

- Martin

>
>     We can do it for the pattern matcher. There is simply no way to do it
>     for the core type system, without maintaining two different compilers
>     at the same time.
>
> My idea was basically to introduce the heavy stuff first, so that when we
> arrive at 3 there are no big compatibility issues to expect.
> But then of course if it won't work it doesn't sound too great :-/ Although
> maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either,
> right?
>
>
> Thanks and bye,
>
>
> Simon

--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

Daniel Sobral

Mar 20, 2012, 6:45:09 PM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 18:06, Alex Kravets <kra...@gmail.com> wrote:
>
> Using spaces vs. tabs is not entirely a matter of taste or personal
> preference, there's actually a valid, IMHO, reason to avoid tabs: Tab
> rendering is not standardized across all OS's, IDE's etc.

Well, and spaces indent poorly with fonts of non-fixed width.
But, by all means, do bring it up on scala-debate, and once consensus
is formed, bring it back to scala-language. Until then, _please_ do
not inject discussions about tabs vs spaces into threads about the
evolution of Scala's type system. Let me end this with a quote by
James Iry: "1940s - Various "computers" are "programmed" using direct
wiring and switches. Engineers do this in order to avoid the tabs vs
spaces debate."

>
> Therefore sometimes tabs are rendered as 2, 4, 6 or 8 times the width of a
> space and sometimes even a fractional number, this causes misalignment
> of indentation when there is a mix of spaces and tabs (and I've never seen a
> tab-only source file in my 16+ years of professional experience).
>
> If you want to observe the effect of this, just click through into the
> source on Java libraries and you'll observe a complete jumble of
> indentation.
>
> Cheers...
>
>
>
> On Tue, Mar 20, 2012 at 10:41 AM, Paul Phillips <pa...@improving.org> wrote:
>>
>> On Tue, Mar 20, 2012 at 10:27 AM, Alex Kravets <kra...@gmail.com> wrote:
>> > It's a very small thing, but simply restricting valid spacing to only
>> > the
>> > space character would, IMHO, be very beneficial.
>> >
>> > It would end all the space-vs-tabs-vs-mix wars and make the source
>> > indentation much more regular.
>>
>> One is tempted to observe that this is also how people propose
>> periodically to end real wars (the complete extermination of the
>> Other) with generally unpleasant results.  I'm not a big fan of tabs
>> either, but it seems unwise to martyr the tabbies.
>
>
>
>
> --
> Alex Kravets       def redPill = 'Scala
> [[ brutal honesty is the best policy ]]
>

--

Erik Osheim

Mar 20, 2012, 6:54:23 PM
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 11:43:29PM +0100, martin odersky wrote:
> Yes, but I guess they are used much more rarely. One thing I do not
> understand. Why outlaw conversions from Long to Float? I mean, we know
> Float is a lossy approximation no matter what you do, so why is bit
> loss in the conversion a problem?

I can't recall the exact problem I ran into when working on Spire [1],
but the unsafe conversions did give me trouble, and I would also like
to be rid of them.

The galling thing is that Int/Long are precise, and the user should
have to be explicit about an action that moves to an approximate type
(unless the operation can only be done with an approximate type).
There are valid reasons to prefer pow(Double) to pow(Long) in some
cases (it's a bit faster), but it's easy for a user to get this wrong,
and often you really do want pow(Long), which Scala doesn't provide.

In general I would like it if Scala supported more arithmetic
operations on all the numeric types (rather than relying on implicit
conversions).
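
For instance, an integral pow is only a few lines (a sketch, assuming a
non-negative exponent; this is not something the standard library
provides):

def pow(base: Long, exp: Long): Long = {
  // Exponentiation by squaring; invariant: acc * b^e is the result.
  @annotation.tailrec
  def loop(b: Long, e: Long, acc: Long): Long =
    if (e == 0) acc
    else loop(b * b, e / 2, if ((e & 1) == 1) acc * b else acc)
  loop(base, exp, 1L)
}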

-- Erik

[1] https://github.com/non/spire

Daniel Sobral

Mar 20, 2012, 6:54:11 PM
to scala-l...@googlegroups.com

It's hardly about precision, it's about losing type safety. That
numbers all support about the same operations doesn't make type
mistakes right, it just makes them more difficult to catch. At least,
those are my own reasons.

>
> Cheers
>
>  - Martin
>
>>
>>     We can do it for the pattern matcher. There is simply no way to do it
>>     for the core type system, without maintaining two different compilers
>>     at the same time.
>>
>> My idea was basically to introduce the heavy stuff first, so that when we
>> arrive at 3 there are no big compatibility issues to expect.
>> But then of course if it won't work it doesn't sound too great :-/ Although
>> maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either,
>> right?
>>
>>
>> Thanks and bye,
>>
>>
>> Simon
>
>
>
> --
> Martin Odersky
> Prof., EPFL and Chairman, Typesafe
> PSED, 1015 Lausanne, Switzerland
> Tel. EPFL: +41 21 693 6863
> Tel. Typesafe: +41 21 691 4967

--

Simon Ochsenreither

Mar 20, 2012, 7:05:14 PM
to scala-l...@googlegroups.com
Hi,


> You're right, it makes no sense to have an discussion without a working
> proposal. Btw, is there any decision yet about the Traversable/Iterable
> merge?
No, I think that would got dropped. We have to consider it again
before freezing for 2.10.
Sorry, I'm stupid. What is being dropped? The decision, the merge, the differentiation?

>     > No unsafe implicit conversions for primitive types.
 

One thing I do not understand. Why outlaw conversions from Long to Float?

I mean, we know Float is a lossy approximation no matter what you do, so
why is bit loss in the conversion a problem?

Because it is often not visible, e.g. when integer types are used as arguments to a method accepting floating point values.
Another, probably more severe, example is

scala> (123456789).round
res0: Int = 123456792

When I learned about implicits, the big rule was that implicits shouldn't be used for unsafe operations, but only for things that are certain not to go wrong for any input.
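
The Long case looks the same in the REPL; the printed value is the
nearest representable Float:

scala> val f: Float = 123456789L
f: Float = 1.23456792E8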


Thanks and bye,


Simon

martin odersky

Mar 20, 2012, 7:08:01 PM
to scala-l...@googlegroups.com
On Wed, Mar 21, 2012 at 12:05 AM, Simon Ochsenreither
<simon.och...@googlemail.com> wrote:
> Hi,
>
>
>> > You're right, it makes no sense to have an discussion without a working
>> > proposal. Btw, is there any decision yet about the Traversable/Iterable
>> > merge?
>> No, I think that would got dropped. We have to consider it again
>> before freezing for 2.10.
>
> Sorry, I'm stupid. What is being dropped? The decision, the merge, the
> differentiation?

Fingers typed too fast to keep meaning. I meant, the issue got dropped.

- Martin


>
>> >     > No unsafe implicit conversions for primitive types.
>
>
>>
>> One thing I do not understand. Why outlaw conversions from Long to Float?
>>
>> I mean, we know Float is a lossy approximation no matter what you do, so
>> why is bit loss in the conversion a problem?
>
> Because it is often not visible. E.g. when integer types are used in
> arguments to a method accepting floating point values.
> Another probably more severe example is
> scala> (123456789).round
> res0: Int = 123456792
>
> When I learned about implicits the big rule was that implicits shouldn't be
> used for unsafe operations, but only for stuff where it is sure that it
> won't go wrong for all inputs.
>
>
> Thanks and bye,
>
>
> Simon

--

Lars Hupel

unread,
Mar 20, 2012, 7:34:47 PM3/20/12
to scala-l...@googlegroups.com
> It's a very small thing, but simply restricting valid spacing to only the
> space character would, IMHO, be very beneficial.
>
> It would end all the space-vs-tabs-vs-mix wars and make the source
> indentation much more regular.
>
> Is there any chance that tabs can be prohibited from source (outside of
> comment blocks) in any future version of Scala ?

I didn't expect that I'd have to start a post to a Scala mailing list
like that again, but here goes:

I sincerely hope this proposal is a joke. Formatting is certainly *not*
a compiler issue. Furthermore, declaring your opinion as the end of such
an issue is presumptuous to say the least.

Paul Phillips

unread,
Mar 20, 2012, 10:37:31 PM3/20/12
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 4:08 PM, martin odersky <martin....@epfl.ch> wrote:
> Fingers typed too fast to keep meaning. I meant, the issue got dropped.

Not entirely. I did a lot of the Iterable/Traversable blend, but it
was tedious and I could tell it was going to be squandered to drift if
I didn't merge it immediately. Also, I ran into this.

http://www.scala-lang.org/node/11957

As best I recall, I saw no way to do it in a backward compatible way.
As detailed at the above:

In Traversable, foreach is abstract
in Iterable, foreach is concrete, iterator is abstract
Iterable extends Traversable

So here are two lines which could exist in the wild now:

new Traversable[Int] { def foreach[T](f: Int => Unit): Unit = ??? }
new Iterable[Int] { def iterator = ??? ; override def foreach[T](f: Int => Unit): Unit = super.foreach(f) }

There's just no way not to break one. If foreach is concrete, the
first breaks. If it is abstract, the second breaks.

This is probably resolvable by introducing one or more new types, but
again, it's unappealing to touch it again unless I intend to merge it
the minute it works. So we have to agree on everything up front, in
contrast to my usual "write all the code and only then think about
what I'm writing" approach.
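
To illustrate the flavor of such a fix (hypothetical names, not a concrete proposal): split the two abstract members into separate base types, so code extending either old type keeps its abstract/concrete expectations:

  trait TraversableCore[+A] { def foreach[U](f: A => U): Unit }   // foreach stays abstract here
  trait IterableCore[+A] extends TraversableCore[A] {
    def iterator: Iterator[A]
    def foreach[U](f: A => U): Unit = iterator.foreach(f)         // and becomes concrete here
  }

The hard part, as noted above, is retrofitting this under the existing names without breaking one of the two inheritance patterns in the wild.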

Dan Rosen

unread,
Mar 20, 2012, 11:20:30 PM3/20/12
to scala-l...@googlegroups.com
On Tuesday, March 20, 2012 4:29:05 AM UTC-7, martin wrote:

My ambition for the next 2-4 years is that we can find further
simplifications and unifications and arrive at a stage where Scala is
so obviously compact in its design that any accusations of it being a
complex language would be met with incredulity. That will be the best counter-argument to the naysayers

I don't personally believe it's a particularly complex language.  There are a few surface-level things I find clumsy, like methods whose terminal symbol is ':' being right-biased when used infix, or the '_*' type ascription to keep varargs as varargs, but all in all I find it much more uniform in its design discipline than most other languages I've worked with.  I like Scala, I like the change it's helping bring to our industry, I like the fact that it evolves over time in reaction to how people use it, and I frankly believe you're overly concerned about these "naysayers."

Regardless of how internally consistent and small the core language becomes, there will always be the fearful few who perceive Scala and functional programming in general as overkill.  No problem; those people will always have Java, which will never evolve at a rate faster than the rate they're willing to learn new things.  Remember POPL '97 and OOPSLA '98?  Now remember Java 1.5 (2004)?  Yeah, that was a long time...

Anyway, I appreciate that you're hoping to drive adoption by reducing the perceived complexity you find so vexing, but two comments on that:

  - First, "perceived" is the key word there.  One thing I've learned in this business is that marketing dictates truth.  I believe we have a wealth of good marketing available to us as a community that, if we choose to use it, will drown out the occasional whining with an overwhelming chorus of "look at this amazing thing we were able to build, how fast and scalable it is, how few lines of code, how well covered by tests," and so on...

  - Second, and maybe more importantly, is perceived complexity the principal barrier to adoption?  My gut feeling is: no, that dubious honor goes to tool support.  I'm really looking forward to Scala debuggers in the IDE being awesome.
 

So, what does this have to do with SIP 18? Two things:

First, while we might be able to remove complexities in the definition
of the Scala language, it's not so clear that we can remove
complexities in the code that people write. The curse of a very
powerful and regular language is that it provides no barriers against
over-abstraction.

At the same time, providing a powerful language seems to be the goal!  I certainly recognize that nothing about SIP-18, or anything else you've proposed recently, would reduce Scala's expressive power.  And quoting earlier in your original post here:

it will be a big help for the people writing advanced software systems in
Scala. Their job will be easier because they will work with fewer but
more powerful concepts.

So it seems you propose to offer a very powerful tool for modularizing software, while at the same time warning emphatically against its use!  I've said it before, and I'll say it again: telling novices that certain aspects of Scala are too advanced or dangerous for them is self-fulfilling.  Better simply to offer an awesome language and teach people how to use it well.

So on the topic of "offering an awesome language":

The potential breakthrough idea here is to unify type parameters and abstract type members.

Neat!  Are there any papers or preliminary things published about this idea?

Now if we do that then we have suddenly gained the essential
functionality of higher-kinded types and existential types for free! A
higher-kinded type is simply a type where some parameters are left
uninstantiated.

Also note that the "type lambda trick" for partially applying type constructors to type parameters becomes unnecessary.  Your proposed syntax for constructing types with named type parameters (analogous to the syntax for invoking methods with named value parameters) is nice.
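
For readers who haven't met it, the trick in question looks like this today, using a structural type to stand in for a partially applied type constructor (here Either with its left parameter fixed); the named-argument form at the end is speculative syntax from this thread, not anything that compiles today:

  trait Functor[F[_]] { def map[A, B](f: A => B)(fa: F[A]): F[B] }

  implicit def eitherFunctor[L]: Functor[({ type λ[R] = Either[L, R] })#λ] =
    new Functor[({ type λ[R] = Either[L, R] })#λ] {
      def map[A, B](f: A => B)(fa: Either[L, A]): Either[L, B] = fa.right.map(f)
    }

  // hypothetically, under the proposal: new Functor[Either[T1 = L]] { ... }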

Now clearly, something must get lost in a scheme that unifies higher-kinded and
existential types by eliminating both. The main thing that does get
lost is early checking of kind-correctness. Nobody will complain that
you have left out type parameters of a type, because the result will
be legal. At the latest, you will get an error when you try to
instantiate a value of the problematic type.

Yes, and this seems to somewhat parallel the tradeoffs with declaration-site vs. use-site variance annotations.  I'm specifically worried about what the compiler error messages might look like with this new scheme...  For example, let's take everybody's second-favorite typeclass:

trait Functor[F[_]] {
  def map[A, B](f: A => B)(fa: F[A]): F[B]
}

In the new kind-free ("unkind" ???) world, that looks instead like "trait Functor[F]", right?  So I can then declare something like:

implicit val tuple2FirstFunctor = new Functor[Tuple2] {
  def map[A, B](f: A => B)(fa: Tuple2[A]): Tuple2[B] = f(fa._1) -> fa._2
}

Should this code produce an error?  Would the error have to do with not knowing the type of fa._2?  Would I need to replace it with:

implicit val tuple2FirstFunctor = new Functor[Tuple2] {
  def map[A, B, X](f: A => B)(fa: Tuple2[A, X]): Tuple2[B, X] = f(fa._1) -> fa._2
}

Would that even be considered a valid override?  Or should it typecheck without the additional 'X' parameter and then fail elsewhere (e.g. in type inference at map()'s call sites)?  In either case, how would the compiler tell me I screwed up?  That's my only concern about this idea, which otherwise strikes me as very elegant.

Best,
dr

Dan Rosen

unread,
Mar 20, 2012, 11:32:29 PM3/20/12
to scala-l...@googlegroups.com
On Tuesday, March 20, 2012 8:20:30 PM UTC-7, Dan Rosen wrote:
implicit val tuple2FirstFunctor = new Functor[Tuple2] {
  def map[A, B](f: A => B)(fa: Tuple2[A]): Tuple2[B] = f(fa._1) -> fa._2
}

Actually, I guess it'd have to be:

implicit def tuple2FirstFunctor[X] = new Functor[Tuple2[T2 = X]] {
  def map[A, B](f: A => B)(fa: Tuple2[A, X]): Tuple2[B, X] = f(fa._1) -> fa._2
}

or something like that?

dr

martin odersky

unread,
Mar 21, 2012, 4:22:10 AM3/21/12
to scala-l...@googlegroups.com
Dan,

I might not have stated it clearly enough. The motivation stated in
the roadmap has nothing to do with _perceived_ complexity. If that were
all, it would probably be better not to talk about it at all and do
some marketing fluff that papers over it.

It's rather that, when it comes to complexity, I want to set the bar
very high. I want to develop Scala into a language that's truly
simple, not to placate or convince the naysayers but because I think
it will improve the language.

Cheers

- Martin

--


Miles Sabin

unread,
Mar 21, 2012, 6:58:12 AM3/21/12
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 4:26 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
> However, you have no way to encode the type forall a . a -> a.  Naively, one
> might want to try something like this:
>
> type Id = { type A; def apply(a: A): A }
>
> Unfortunately, that falls over immediately:
>
> def foo(id: Id) = id(42)         // error!!
>
> The reason this falls over is A is not any type, it is some type.

I'm sure Adriaan will correct me if I've got this wrong, but I think
the idea is to add a concept of type "un-members" which precisely
capture the universally quantified aspect that you're missing.

Cheers,


Miles

--
Miles Sabin
tel: +44 7813 944 528
gtalk: mi...@milessabin.com
skype: milessabin
g+: http://www.milessabin.com
http://twitter.com/milessabin
http://underscoreconsulting.com
http://www.chuusai.com

Adriaan Moors

unread,
Mar 21, 2012, 8:54:12 AM3/21/12
to scala-l...@googlegroups.com
yes, this is roughly what Ch. 4 of my thesis is about (scalina)

in a nutshell, there are three ways to "configure" an abstraction:
  1. what is the domain of the abstract variable? (a type or a value)
  2. where does the variable occur? (in a type or in a value)
  3. what can we do to the variable (universally/existentially quantified?)

as examples of filling in dimensions 1&2, you can have:
- a value that abstracts over a value (a method)
- a value that abstracts over a type (a polymorphic method)
- a type that abstracts over a type (a polymorphic class)
- a type that abstracts over a value (a class with an abstract value member)
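
Concretely, in today's Scala those four combinations look something like:

  def twice(x: Int): Int = x * 2     // a value abstracting over a value (a method)
  def id[A](a: A): A = a             // a value abstracting over a type (a polymorphic method)
  class Box[A](val a: A)             // a type abstracting over a type (a polymorphic class)
  trait Named { val name: String }   // a type abstracting over a value (abstract value member)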

for universal quantification, the abstraction must allow the client of the abstraction to supply any concrete "value" for the abstract variable
(this can be a value or a type, depending on the domain of the variable, and of course we can further impose bounds on the variable)
(imagine the abstraction is some sort of case class with a mutable slot -- a "var" -- for the abstracted variable, and a val for the thing that abstracts over the variable
 -- or of course the type-level counterparts of a var/val --> the type-level version of a var is called a "type un-member" in scalina)

for existential quantification, packing the existential means the "value" for the variable is "stored" into the mutable slot, but you cannot read it back concretely;
all you can do is open the existential (you skolemize it), which does not give you the real "value" of the variable, but an opaque handle for it (the skolem)

Niels

unread,
Mar 21, 2012, 9:13:45 AM3/21/12
to scala-language
Will this proposal have an impact on compilation time when options are
restricted? I think it would be really pleasant if Scala could in
principle be restricted to a language subset that compiles really fast,
for code emission at runtime.

martin odersky

unread,
Mar 21, 2012, 9:28:57 AM3/21/12
to scala-l...@googlegroups.com
On Wed, Mar 21, 2012 at 3:37 AM, Paul Phillips <pa...@improving.org> wrote:
> On Tue, Mar 20, 2012 at 4:08 PM, martin odersky <martin....@epfl.ch> wrote:
>> Fingers typed too fast to keep meaning. I meant, the issue got dropped.
>
> Not entirely.  I did a lot of the Iterable/Traversable blend, but it
> was tedious and I could tell it was going to be squandered to drift if
> I didn't merge it immediately.  Also, I ran into this.
>
>  http://www.scala-lang.org/node/11957
>
> As best I recall, I saw no way to do it in a backward compatible way.
> As detailed at the above:
>
>  In Traversable, foreach is abstract
>  in Iterable, foreach is concrete, iterator is abstract
>  Iterable extends Traversable
>
> So here are two lines which could exist in the wild now:
>
>  new Traversable[Int] { def foreach[T](f: Int => Unit): Unit = ??? }
>  new Iterable[Int] { def iterator = ??? ; override def foreach[T](f: Int => Unit): Unit = super.foreach(f) }
>
> There's just no way not to break one.  If foreach is concrete, the
> first breaks.  If it is abstract, the second breaks.

Oh yes, it's all coming back to me now. Thanks for paging it back in.
I don't have a solution for it either, I'm afraid.

- Martin


>
> This is probably resolvable by introducing one or more new types, but
> again, it's unappealing to touch it again unless I intend to merge it
> the minute it works.  So we have to agree on everything up front, in
> contrast to my usual "write all the code and only then think about
> what I'm writing" approach.

--

Justin du coeur

unread,
Mar 21, 2012, 9:56:21 AM3/21/12
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 11:20 PM, Dan Rosen <dan....@gmail.com> wrote:
Anyway, I appreciate that you're hoping to drive adoption by reducing the perceived complexity you find so vexing, but two comments on that:

  - First, "perceived" is the key word there.  One thing I've learned in this business is that marketing dictates truth.

Hear, hear.  My favorite example of this is Ada.  Ada '95 was actually a pretty good language.  (Disclaimer: I sat two offices away from one of the leads, and I wrote the first-ever IDE for the language.  Still, I found it fairly elegant and powerful.)  But nobody outside of government circles picked it up because it was "too complicated": it never overcame the reputation of the too-far-ahead-of-its-time Ada '83.  So I was amused, a few years later, to realize that the language spec for C++ was *longer* than that of Ada, and for good reason: there were a lot more nooks and crannies to it.  C++ was in many ways a good deal more complicated than Ada '95, but reputation won out.

The thing I love about Scala is that it is genuinely intuitive, in the sense that I often say, "it seems like this should work" and it actually *does*.  I rarely find the compiler preventing me from doing things that seem logical.  That's more unusual than people tend to credit, and a testimony in favor of the driving philosophy of being as consistent as is feasible.  It means that, while the language *spec* is long and complex, you can pick up concepts and *use* them consistently, as opposed to having to learn all the inconsistencies of many languages.

(Indeed, this whole conversation reminds me of the people who insist on bespoke scripting languages because they are "less complicated", and quietly ignore the fact that those languages sometimes have 900-page reference manuals describing all of their little idiosyncrasies...)

Rex Kerr

unread,
Mar 21, 2012, 1:22:31 PM3/21/12
to scala-l...@googlegroups.com
On Tue, Mar 20, 2012 at 6:43 PM, martin odersky <martin....@epfl.ch> wrote:
Yes, but I guess they are used much more rarely. One thing I do not
understand. Why outlaw conversions from Long to Float? I mean, we know
Float is a lossy approximation no matter what you do, so why is bit
loss in the conversion a problem?

No, it's not lossy, not for a while:

  scala> Iterator.from(1).map(i => i -> (i == math.round(i.toFloat))).dropWhile(_._2).next
  res0: (Int, Boolean) = (8388609,false)

So you get up to 8M before you run into problems with Float.

In contrast, if you have a Long instead of an Int, you are wrong _as soon as it matters that you have a Long not an Int_ (with the exception of 2^31 which Float can of course represent exactly):

  scala> (Int.MaxValue.toLong + 2) == (Int.MaxValue.toLong + 2).toFloat.toLong
  res1: Boolean = false

However, with Double, you can use Long for a while until you run out of bits for an exact representation:

  scala> ((1L << 52) + 1) == ((1L << 52) + 1).toDouble.toLong
  res2: Boolean = true

  scala> ((1L << 53) + 1) == ((1L << 53) + 1).toDouble.toLong
  res3: Boolean = false

so there's at least an argument that Long -> Double automatic conversion is sometimes the right thing to do, whereas with Float it's _never_ the right thing to do unless you didn't mean to have a Long in the first place.

  --Rex

Runar Bjarnason

unread,
Mar 21, 2012, 2:20:08 PM3/21/12
to scala-l...@googlegroups.com


On Tuesday, March 20, 2012 7:29:05 AM UTC-4, martin wrote:

Their job will be easier because they will work with fewer but
more powerful concepts.

 
I believe that's exactly the right vision and you should pursue this consistently and relentlessly. The language ought to give you very few and powerful abstractions, and moreover those abstractions should be highly integrated and form a unified whole.

E.g. higher-kinded types are not at all complex, but their implementation in Scala is a little bit of a second-class citizen. They're not fully integrated into the language and so they feel tacked on, which in turn feels complicated because there is a disconnect between the model of kinds in your head and the model of kinds in Scala.

Another example is from the library, where there are interfaces that provide "map" and "flatMap" methods but sometimes you really want to reach for the highly general and powerful abstraction "Monad". Some people perceive such abstractions as complex, but they actually greatly simplify development and make our job easier because we have a concept under which we can unify a great number of different data types.

 

The potential breakthrough
idea here is to unify type parameters and abstract type members. The
idea would be to treat the following two types as equivalent

  trait Seq[type Elem]     and   trait Seq { type Elem }


This is very interesting. It does seem like it would be possible if somewhat strange. Are the abstract type members ordered? If you partially apply, how do you know which one you're applying? I am also afraid it would make it difficult to work with polymorphic functions of rank 2 or higher. I'm curious how the following would work. Right now in Scala we can model a universally quantified type as follows:

trait Forall[F[_]] {
  def apply[A]: F[A]
}

Then a type like
∀x. F(x)

is encoded as
Forall[({type λ[x] = F[x]})#λ]

An existentially quantified type is currently modeled this way:

trait Exists[F[_]] {
  type A
  def apply: F[A]
}


Then the type
∃x. F(x)

becomes
Exists[({type λ[x] = F[x]})#λ]

How would this be modeled in the new scheme? Something like this?

trait Forall {
  type F
  type A
  def apply: F[A]
}

and then Exists[F = F] ?

It seems likely that F would need a kind annotation here in order for things to remain sane.

  new HashMap[String, List[Int]] with SynchronizedMap

I can see how that would work if the type names agree, but what if they don't? Seems like you would need type-level operators to rename, project, basically all the usual tuple calculus suspects. Let me suggest this as an alternative:

new (HashMap with SynchronizedMap)[String, List[Int]]


The main thing that does get
lost is early checking of kind-correctness. Nobody will complain that
you have left out type parameters of a type, because the result will
be legal. At the latest, you will get an error when you try to

instantiate a value of the problematic type. So type-checking will be
delayed. Everything will still be done at compile-time. But some of
the checks that used to raise errors at the declaration site will now
raise errors at the use site.


Delaying kind checking until the typer is very much like delaying type checking until runtime. It basically makes the type-level language untyped (or, to borrow from "dynamic" language parlance, it would be dynamically kinded). That is, you could construct all kinds of crazy type-level things that make no sense whatsoever, and you would never know they don't make sense until you try to instantiate a value of an unsound type. Basically every poorly kinded type would just be uninhabited, i.e. equivalent to Nothing.

I think that this might be a price too high to pay. I would rather see a step in the other direction, introducing an actual kind system complete with polymorphic kinds. This would greatly simplify library development.


eliminate what I consider the worst part of the Scala compiler. It
turns out that the internal representation of higher-kinded types in
the Scala compiler is the same as the internal representation of raw
types in Java (there are good reasons for both representation
choices).

 
Yeah, this is definitely a problem. But maybe the solution is not dynamically kinded types, but a proper polymorphic kind system. The proposed simplification is not necessarily incompatible with that.


nuttycom

unread,
Mar 21, 2012, 5:49:24 PM3/21/12
to scala-language
Since we're discussing complexity, and a roadmap for Scala 3, there's
something else I'd like to throw into the mix - this seems like an
opportunity to get rid of a bit of badness that has deviled me ever
since my first week using Scala.

Can we please change the language such that PartialFunction[A, B] no
longer subclasses Function1[A, B], and perhaps provide higher-arity
PartialFunction instances? The inheritance hierarchy as it presently
stands is completely upside-down, with surprising results; for
example:

object Test {
  val f: PartialFunction[Int, String] = {
    case 1 => "hi"
    case 2 => "there"
  }

  val g: PartialFunction[String, Int] = {
    case "hi" => 1
  }

  def main(argv: Array[String]) = {
    println((f andThen g)(1))
    println((f andThen g).isDefinedAt(2))
  }
}

// vim: set ts=4 sw=4 et:
[nuttycom@yggdrasil tmp]$ scalac Test.scala
[nuttycom@yggdrasil tmp]$ scala Test
1
true
[nuttycom@yggdrasil tmp]$

It would be much better if PartialFunction[A, B] and Function1[A, B]
did not share a subtyping relationship at all, but in a pinch it would
be acceptable for FunctionN[...] to extend PartialFunctionN[...], with
an optional (because it is of course unsafe) implicit conversion that
could be imported to allow promotion of PartialFunctionN instances to
FunctionN.
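
For comparison, a definedness-aware composition can be written today as a library helper (a sketch, not the standard andThen; note that it may evaluate f twice, so it assumes f is pure and cheap):

  def composePF[A, B, C](f: PartialFunction[A, B],
                         g: PartialFunction[B, C]): PartialFunction[A, C] =
    new PartialFunction[A, C] {
      def isDefinedAt(a: A) = f.isDefinedAt(a) && g.isDefinedAt(f(a))
      def apply(a: A) = g(f(a))
    }

With this helper, composePF(f, g).isDefinedAt(2) in the example above would report false instead of true.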

Thanks,

Kris

martin odersky

unread,
Mar 21, 2012, 5:56:11 PM3/21/12
to scala-l...@googlegroups.com
This was considered a long time ago but rejected.
The point is, a partial function is a subtype of Function1 because it
has more capabilities: it also supports the isDefinedAt method. The
confusion probably comes from the name. One thinks that a Function1
would then be a total function. But that, of course, is wrong.
Function1 can be undefined for some arguments just as PartialFunction
can. It's just that it won't let you ask about it.

Cheers

- Martin

nuttycom

unread,
Mar 21, 2012, 6:18:26 PM3/21/12
to scala-language
On Mar 21, 3:56 pm, martin odersky <martin.oder...@epfl.ch> wrote:
> This was considered a long time ago but rejected.
> The point is, a partial function is a subtype of Function1 because it
> has more capabilities: It also supports the isDefinedAt method. The
> confusion comes probably from the name. One thinks that a Function1
> would then be a total function. But that, of course, is wrong.
> Function1 can be undefined for some arguments just as PartialFunction
> can. It's just that it won't let you ask about it.

The problem is not the naming. The problem is that the relationship
between the types implies that every PartialFunction is total. There
should actually be no subtyping relationship between them at all.
However, in the case where Function1 might extend PartialFunction, the
implementation of isDefinedAt is simply true.

The fact that Function1 may not be total seems irrelevant to me; you
can get nontermination anywhere. That's no reason for the types to
lie.
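
In sketch form (hypothetical names, not a concrete proposal), the inverted hierarchy would be:

  trait PartialFun[-A, +B] {
    def isDefinedAt(a: A): Boolean
    def apply(a: A): B
  }
  trait TotalFun[-A, +B] extends PartialFun[A, B] {
    final def isDefinedAt(a: A): Boolean = true
  }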

Kris

megagurka

unread,
Mar 21, 2012, 7:09:27 PM3/21/12
to scala-language
trait PartialFunction[A, B] extends (A => LazyOption[B])

/Jesper Nordenberg

nuttycom

unread,
Mar 21, 2012, 7:25:53 PM3/21/12
to scala-language
Way too much overhead for high-performance code.

martin odersky

unread,
Mar 22, 2012, 4:51:22 AM3/22/12
to scala-l...@googlegroups.com
On Wed, Mar 21, 2012 at 7:20 PM, Runar Bjarnason <runar...@gmail.com> wrote:
>
>
> On Tuesday, March 20, 2012 7:29:05 AM UTC-4, martin wrote:
>>
>> Their job will be easier because they will work with fewer but
>> more powerful concepts.
>
>
> I believe that's exactly the right vision and you should pursue this
> consistently and relentlessly. The language ought to give you very few and
> powerful abstractions, and moreover those abstraction should be highly
> integrated and form a unified whole. E.g. higher-kinded types are not at all
> complex, but their implementation in Scala is a little bit of a second-class
> citizen. They're not fully integrated into the language and so they feel
> tacked on, which in turn feels complicated because there is a disconnect
> between the model of kinds in your head and the model of kinds in Scala.
> Another example is from the library, where there are interfaces that provide
> "map" and "flatMap" methods but sometimes you really want to reach for the
> highly general and powerful abstraction "Monad". Some people perceive such
> abstractions as complex, but they actually greatly simplify development and
> make our job easier because we have a concept under which we can unify a
> great number of different data types.
>
>>
>> The potential breakthrough
>> idea here is to unify type parameters and abstract type members. The
>> idea would be to treat the following two types as equivalent
>>
>>   trait Seq[type Elem]     and   trait Seq { type Elem }
>
>
> This is very interesting. It does seem like it would be possible if somewhat
> strange. Are the abstract type members ordered?

We'd have to assume an ordering. Not sure yet exactly which one to
choose. One possibility is that they would be ordered if defined with
parameter notation, but not if defined as members.

Given the current state of research I don't have definite answers to
these. But they are good use cases to keep in mind!

> I can see how that would work if the type names agree, but what if they
> don't? Seems like you would need type-level operators to rename, project,
> basically all the usual tuple calculus suspects. Let me suggest this as an
> alternative:
>
> new (HashMap with SynchronizedMap)[String, List[Int]]

That's interesting!


>
>
>> The main thing that does get
>> lost is early checking of kind-correctness. Nobody will complain that
>> you have left out type parameters of a type, because the result will
>> be legal. At the latest, you will get an error when you try to
>> instantiate a value of the problematic type. So type-checking will be
>> delayed. Everything will still be done at compile-time. But some of
>> the checks that used to raise errors at the declaration site will now
>> raise errors at the use site.
>
>
> Delaying kind checking until the typer is very much like delaying type
> checking until runtime. It basically makes the type-level language untyped
> (or, to borrow from "dynamic" language parlance, it would be dynamically
> kinded). That is, you could construct all kinds of crazy type-level things
> that make no sense whatsoever, and you would never know they don't make
> sense until you try to instantiate a value of an unsound type. Basically
> every poorly kinded type would just be uninhabited, i.e. equivalent to
> Nothing.
>
> I think that this might be a price too high to pay. I would rather see a
> step in the other direction, introducing an actual kind system complete with
> polymorphic kinds. This would greatly simplify library development.
>

I agree it's a tradeoff. There are some thoughts from Adriaan's side
to regain kind checking by distinguishing input and output member
types. I see that as similar in spirit to the progression from Prolog
to Mercury, say.

Cheers

- Martin

nuttycom

unread,
Mar 23, 2012, 6:54:25 PM3/23/12
to scala-l...@googlegroups.com
Another one for the wishlist: please fix the interaction of implicit resolution and contravariance.
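
Presumably this refers to the long-standing surprise (SI-2509) where contravariance inverts specificity during implicit search, so the most generic instance wins; a minimal sketch:

  trait Show[-A] { def show(a: A): String }
  implicit val showAny: Show[Any] = new Show[Any] { def show(a: Any) = "<any>" }
  implicit val showInt: Show[Int] = new Show[Int] { def show(a: Int) = "Int: " + a }

  def render[A](a: A)(implicit s: Show[A]) = s.show(a)

  render(42)   // resolves to showAny: with Show contravariant, Show[Any] <: Show[Int],
               // so the Any instance counts as "more specific" than the Int one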

martin odersky

unread,
Mar 24, 2012, 6:23:29 AM3/24/12
to scala-l...@googlegroups.com
On Fri, Mar 23, 2012 at 11:54 PM, nuttycom <kris.nu...@gmail.com> wrote:
> Another one for the wishlist: please fix the interaction of implicit
> resolution and contravariance.
>

From what we know today, that would require a considerable
complication of the type system, so it's less likely to happen.

Cheers

- Martin


Jesper Nordenberg

unread,
Mar 21, 2012, 7:04:04 PM3/21/12
to scala-l...@googlegroups.com, nuttycom
trait PartialFunction[A, B] extends (A => LazyOption[B])

/Jesper Nordenberg

Qihui Sun

unread,
Mar 27, 2012, 12:05:39 PM3/27/12
to scala-l...@googlegroups.com, martin odersky
Simplifying Scala is welcome.
Many languages have been created after Scala, such as Kotlin (JetBrains), Xtend (Eclipse), Ceylon (Red Hat), etc. This shows that the programmer community and industry aspire to a simpler yet still powerful language.

2012/3/22 Jesper Nordenberg <mega...@yahoo.com>



--
Solomon
Google+: Qihui Sun



Paul Phillips

unread,
Mar 27, 2012, 12:15:50 PM3/27/12
to scala-l...@googlegroups.com
2012/3/27 Qihui Sun <qihu...@gmail.com>:

> Many languages have been created after Scala, such as Kotlin (JetBrains),
> Xtend (Eclipse), Ceylon (Red Hat), etc. This shows that the programmer
> community and industry aspire to a simpler yet still powerful language.

Assumes facts not in evidence. It shows that humans in general and
programmers in particular are imbued with infinite confidence that
they can do it better than the other guy. (Which is great, because
sometimes they're right.) And that starting something is easy, and
that everything made by decent programmers starts out high on elegance
and low on tradeoffs. But the woods are lovely, dark and deep, and
they have miles to go before they sleep.

Not that I disagree that people want "simpler", or think they do. Of
course simple plus powerful implies many degrees of freedom, another
thing which people appear not to want (at least when it comes to their
co-workers.) Eventually the time comes to pick something and make it
your own.

Stan Campbell

unread,
Mar 27, 2012, 12:23:19 PM3/27/12
to scala-l...@googlegroups.com
Ok, the Frost reference (very nice btw) just makes me add my 2 centimes...

Qihui is absolutely right, IMHO, that innovations and refinements in programming languages over the last couple of decades show that we're just not satisfied with the ease of expressive power we're able to achieve with current languages.  Before or after Scala, regardless, the adoption of features such as generics, type inference, etc. in languages like Java, Scala, and C# can make it more natural to say what we mean.

However, glomming features onto a language can make it hard for new adepts to be effective quickly.  SIP-18, after thinking about this for a few days, IMHO gives an orderly way of approaching features which may or may not have immediate applicability to the teams, the projects, and the state of the particular implementation.

+1

√iktor Ҡlang

unread,
Mar 27, 2012, 5:31:24 PM3/27/12
to scala-l...@googlegroups.com, martin odersky


2012/3/27 Qihui Sun <qihu...@gmail.com>

Simplifying Scala is welcome.
Many languages have been created after Scala, such as Kotlin (JetBrains), Xtend (Eclipse), Ceylon (Red Hat), etc. This shows that the programmer community and industry aspire to a simpler yet still powerful language.


I have yet to see the same power but simpler.



--
Viktor Klang

Akka Tech Lead
Typesafe - The software stack for applications that scale

Twitter: @viktorklang

Denis Podluzhny

unread,
Mar 28, 2012, 5:42:35 AM3/28/12
to scala-language
> Qihui is absolutely right, IMHO, that innovations and refinements in
> programming languages over the last couple of decades show that we're just
> not satisfied with the ease of expressive power we're able to achieve with
> current languages.  Before or after irregardless, the adoption of features
> such as generics, type inference, etc. in languages like Java, Scala, C#,
> etc. can make it more natural to say what we mean.

All of that is good, and expressive power is one of the essential things
that led to the decision to move our internal development to Scala. And of
course refinement of the language core is a good move, but what about
control? From a practical standpoint, knowing the price of using a concrete
feature is helpful, sometimes required. Performance, instantiation, memory
footprint, control flow - for most use cases these are hollow words;
everything works just well enough. Not when you have to justify changes
to the code of some class with 5*10^11 live instances across a cluster. And
you know - we too want expressive power, we too want to naturally code
what we mean - with the additional feat of being able to know what
exactly we mean =)

I have no language-building expertise to insist on concrete things, but I
keep hoping to see Scala evolve with code transparency and better
avoidance of unnecessary pessimization in mind.

--
Denis Podluzhny,
INTENIUM GmbH.

Dave

unread,
Mar 28, 2012, 8:31:50 AM3/28/12
to scala-language
On 27 mrt, 18:05, Qihui Sun <qihui....@gmail.com> wrote:
> Simplifying Scala is welcome.
> Many languages have been created after Scala, such as Kotlin (JetBrains),
> Xtend (Eclipse), Ceylon (Red Hat), etc. This shows that the programmer
> community and industry aspire to a simpler yet still powerful language.

I agree, but usually these new languages have only one language
feature and/or paradigm in which they are excellent (i.e. the feature
the language designers were frustrated about), while the others are less
developed or not available, which is too bad. And also, nothing is
known about performance (e.g. http://shootout.alioth.debian.org/ ).
Scala is well balanced between its paradigms (imperative, functional-
style, object-oriented, meta-programming, language-oriented/DSL and
sequential vs parallel), combined with strong static type checking,
type-inferred, succinct, DRY-as-possible source code, and high-
performance bytecode. It is also proven in the enterprise.
So as a general-purpose multi-paradigm programming language targeting
different platforms, Scala is the best around.
I aspire to simplicity by automation, not by removing choices.




Scott Carey

unread,
Apr 30, 2012, 7:57:32 PM4/30/12
to scala-l...@googlegroups.com


On Tuesday, March 20, 2012 10:49:59 AM UTC-7, Alex Cruise wrote:

Just to hijack this thread somewhat... Given 'extends AnyVal', is it any more feasible today to revisit the old alchemists' dream of transmuting Some(x) to x, and None to null?  (i.e. an unboxed Option)


Every time I see a performance issue due to option boxing, I think to myself  "Down with Option!  Long live Option!".  Option needs to die, in the sense that it is an instantiated object in the JVM.  It must live because null checks in user code are evil.  Maybe as long as Some(null) is disallowed (or hidden from the user, allowing only an Option(val: T) where a None is produced if val is null), Option could live in the compiler and not in the runtime.  I am probably wrong.

Perhaps even interop with Java can be satisfied with such a thing, making nulls disappear, replaced with None.  The nested cases may still require an instantiated object however, e.g. Some[Some[T]] and Some[None].   Again, I am probably wrong, having not dug very deep here.
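
A rough sketch of the null-encoded idea using a 2.10-style value class (names hypothetical; Some(null) is ruled out by construction, and exactly as noted above, nesting breaks down because the encodings of None and Some(None) would coincide):

  final class UOption[+A](val raw: Any) extends AnyVal {
    def isEmpty: Boolean = raw == null
    def get: A =
      if (isEmpty) throw new NoSuchElementException("UOption.none.get")
      else raw.asInstanceOf[A]
    def map[B](f: A => B): UOption[B] =
      if (isEmpty) UOption.none else new UOption(f(get))
  }
  object UOption {
    def apply[A](a: A): UOption[A] = new UOption(a)   // callers must not pass null
    val none: UOption[Nothing] = new UOption(null)
  }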

Philippe Lhoste

unread,
May 23, 2012, 6:56:46 AM5/23/12
to scala-l...@googlegroups.com
On 20/03/2012 22:06, Alex Kravets wrote:
> Using spaces vs. tabs is not entirely a matter of taste or personal preference, there's
> actually a valid, IMHO, reason to avoid tabs: Tab rendering is not standardized across all
> OS's, IDE's etc.

That's what is nice with tabs: just change a setting, and you have indentation fitting
your preferences, from 1 to 8 (or more!) units, without even changing the source code.

> Therefore sometimes tabs are rendered as 2, 4, 6 or 8 times the width of a space and
> sometimes even a fractional number, this causes misalignment of indentation when there is
> a mix of spaces and tabs (and I've never seen a tab-only source file in my 16+ years of
> professional experience).

You won't see mixed spaces and tabs in my sources, they are pure tabs, and I am happy with
this... I am OK with space-only indentation, that's what we use at work (with three
spaces...), but I still find it less convenient.
Both ways are OK, as long as they are consistently used; the real evil is indeed in mixing
spaces and tabs.

> If you want to observe the effect of this, just click through into the source on Java
> libraries and you'll observe a complete jumble of indentation.

Sigh, yes, it is nightmarish, I admit it. But don't put the blame on tabs, put the blame
on the lack of tools to enforce whatever policy they could have chosen. Hey, you can open a file
with 3-space indentation, have your editor remain at its default of 4 spaces (if that's your
default) and introduce inconsistent indentation in the file without noticing. In general,
not in the same function (might be too visible) but perhaps in a new function, a new
class, etc.

To please everybody, you could make the compiler reject such mixes of spaces and tabs
instead...

Note: I am also a fan of aligned braces
{
}
again, a question of taste. I was first shocked to see that the Go language actually
forces you to use the K&R style. But, well, somehow it is a good way to enforce a policy,
instead of the Java _conventions_...

But such work belongs more in a tool like Checkstyle, actually. Or perhaps a compiler
plug-in, for those not using an IDE. Or a VCS plug-in... Or an Ant/Maven/Gradle/SBT/<you
name your favorite build tool> plug-in.

Mmm, we should fork this discussion to Scala-Debate, I feel...

--
Philippe Lhoste
-- (near) Paris -- France
-- http://Phi.Lho.free.fr
-- -- -- -- -- -- -- -- -- -- -- -- -- --

Shelby

unread,
Jun 14, 2015, 11:08:29 PM6/14/15
to scala-l...@googlegroups.com
How would the unification of higher-kinded and abstract types express the following semantics?

trait Functor[T, Subtype[T] <: Functor[T, Subtype[T]]] {
  def map[R] : (T => R) => Subtype[R]
}

Am I correct to assume the following?

trait Functor {
  type T
  type Subtype <: Functor
  def map[R] : (T => R) => Subtype
}

martin odersky

unread,
Jun 15, 2015, 3:25:08 AM6/15/15
to scala-l...@googlegroups.com
That would lose information. I believe you have to write it like this:

trait Functor { functor =>
  type T
  type Subtype <: Functor { type T = functor.T }
  def map[R] : (T => R) => Subtype { type T = R }
}

To avoid misunderstanding (because I found many people do
misunderstand this point): That would be a compiler expansion; you
will usually still write the original parameterized version:

> trait Functor[T, Subtype[T] <: Functor[T, Subtype[T]]] {
> def map[R] : (T => R) => Subtype[R]
> }
>

Cheers

- Martin




--
Martin Odersky
EPFL

Shelby

unread,
Jun 16, 2015, 3:00:27 AM6/16/15
to scala-l...@googlegroups.com
Thank you. I do not understand what the syntax "functor =>" means in that context?

Also how can abstract type members express Liskov Substitution Principle variance?

Seth Tisue

unread,
Jun 17, 2015, 12:20:49 PM6/17/15
to scala-l...@googlegroups.com
On Tue, Jun 16, 2015 at 3:00 AM, Shelby <she...@coolpage.com> wrote:
> Thank you. I do not understand what the syntax "functor =>" means in that
> context?

it's a "self type". see SLS 5.1, and/or just google it

Seth
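
In Martin's snippet above, "functor =>" introduces an alias for this, so the refinement type T = functor.T can refer to the outer trait's member even where T is shadowed. A minimal illustration of the aliasing use:

  trait Outer { outer =>
    val name = "outer"
    trait Inner {
      val name = "inner"
      def both = (name, outer.name)   // outer refers to the enclosing Outer instance
    }
  }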

Shelby

unread,
Jul 25, 2015, 3:37:09 PM7/25/15
to scala-language, se...@tisue.net
That didn't take long to dig.

It appears to me that all counterexamples to preservation (which prevent a simple proof that DOT is sound) derive from the generative essence that abstract (a.k.a. virtual) types are allowed to be carried around at runtime. It is not the choice of representation as parametric or abstract types which causes the problem (since, given both the ability to encode the self type, they are rationally equivalent); rather that hidden abstract types can be refined at runtime. All the syntactic sugar that prefers one over the other is apparently irrelevant to the issue, which is allowing refinement to not be checked at compile-time, e.g. val x : Animal instead of val x : Animal { type Food = Grass | Meat }, equivalent to some parameterized representation val x : Animal[Food] instead of val x : Animal[Grass | Meat]


Thus, as expected (see my quote below), the problem with DOT as currently formulated is premature optimization of abstraction at compile-time. Rather than allowing inversion-of-control and injection of the abstraction at compile-time, it optimizes by pushing abstraction to run-time.

The argument against this inversion-of-control was stated by Martin Odersky, "You could parameterize class Animal with the kind of food it eats. But in practice, when you do that with many different things, it leads to an explosion of parameters, and usually, what's more, in bounds of parameters. At the 1998 ECOOP, Kim Bruce, Phil Wadler, and I had a paper where we showed that as you increase the number of things you don't know, the typical program will grow quadratically.":


My response to Martin is that is why we must invert the control at every method per the quotes below, so that quadric explosion is shifted inside-out.

I therefore posit that DOT is broken and needs to be reformulated in a new holistic model per my quotes below and the examples in the other thread.

P.S. this idea about inversion-of-control at the call site with automated assistance from the compiler is one I had in my head for a couple of years now since I was brainstorming one day with the author of Kotlin. He went with some very simple form of solution which didn't embody what I was driving for.


From the thread "Re: [scala-language] The cake’s problem, dotty design and the approach to modularity.":
 
On Sunday, July 26, 2015 at 12:28:53 AM UTC+8, Shelby wrote:
This is yet another example of premature optimization (declaring the data structure in the self type) and my idea for a solution being an inversion-of-control, where the mixin injects a method into the constructor instead of prematurely declaring itself as a constructor.

I am starting to get the strong intuition that this concept of inversion-of-control needs to be proliferated throughout Scala 3 if we want to make a huge paradigm shift win on modularity. I am studying now the DOT calculus in detail and I am hoping I can apply such concepts so that type preservation can be recovered.


On Saturday, July 25, 2015 at 1:13:13 PM UTC+8, Shelby wrote:
I believe perhaps the ideas I have presented for injection of interface (relying on DOT) are a complete solution (and more generalized) to the reasons given for needing to represent family polymorphism by tracking types in the instance (which appears to be a less general form of dependency injection):
 
http://www.cs.au.dk/~eernst/tool11/papers/ecoop01-ernst.pdf#page=8


On Wednesday, July 22, 2015 at 11:47:04 PM UTC+8, Shelby wrote:
If a set of types share a set of methods (perhaps implemented as typeclass rather than virtual inheritance so the dictionary can be injected with an object), then the disjunction of those types is the conjunction (and the conjunction of those types is the disjunction) of the implementations of that interface. But note that A ∧ A = A ∨ A, so thus both disjunction and conjunction can be operated upon if they share an interface A.

That was the point of my prior post.


On Saturday, July 18, 2015 at 9:51:18 PM UTC+8, Shelby wrote:
I believe I show herein the fundamental importance of objects (as in "OOP"), that subclassing (but not subtyping) is fundamentally an anti-pattern, and that the new DOT calculus is essential.

For the goal of completely solving the Expression Problem, I believe the requirement for a "global vtable" which I pondered upthread, is implicitly fulfilled by the injection of inversion-of-control I had proposed.

Objects are passed around as the vtable, which I believe is a form of the extensible modularity
...
Perhaps the Dotty compiler could automatically generate the implicit object `Drawable[Line ∨ Box]`. Thus we retain subtyping (i.e. `Line` and `Box` are subtypes of `Line ∨ Box`) while eliminating subclassing (i.e. there is no nominal type which is the supertype of `Line ∨ Box` or at least `Any` should only occur with a cast since I've shown it discards extensible static typing).
...

Another benefit of deprecating subsumption via subclassing in favor of subtyped disjunction, is distinct invariantly parametrized types can be added to the same List:

class TaggedLine[TAG](a: Point, b: Point, tag: TAG)...

draw(List(TaggedLine(Point(0,0), Point(1,1), 1), TaggedLine(Point(0,0), Point(1,1), "1"))) // Error: cannot subsume to List[TaggedLine[Any]] because TAG is invariant

I assume the new DOT calculus will instead implicitly subsume `TAG` to `Int ∨ String` instead of `Any`?

Somewhat OT, I am pondering how will DOT deal with the following?

trait Invertible[T <: Invertible[T, A, B], A, B] {
  def to(a: A): B
  def from(b: B): A
}

object AB extends Invertible[AB, A, B] {
  def to(a: A): B...
  def from(b: B): A...
}

object CD extends Invertible[CD, C, D] {
  def to(c: C): D...
  def from(d: D): C...
}

def to[T, A, B](invertible: Invertible[T, A, B], a: A): B = invertible.to(a)

to(AB ∧ CD, new A)
to(AB ∧ CD, new C)

So the Dotty compiler has to automatically supply:

object `AB ∧ CD` extends Invertible[`AB ∧ CD`, A ∨ C, B ∨ D] {
  def to(a: A): B...
  def to(a: C): D...
  def from(b: B): A...
  def from(d: D): C...
}

On Sunday, July 5, 2015 at 11:50:07 PM UTC+8, Shelby wrote:
Coming back to this ... and sorry no time to construct a blog too rushed ... 

To summarize ideas against premature specialization... 