Let me first say something about that:
First, the criticism of Scala that's most vexing for me is when people
say it's like C++, or it's a huge language that includes everything
including the kitchen sink. I'm hurt by the criticism because I
believe it describes Scala as the exact opposite of what I planned it
to be. I have always tried to make Scala a very powerful but at the
same time beautifully simple language, by trying to find unifications of
formerly disparate concepts. Class hierarchies = algebraic types,
functions = objects, components = objects, and so on.
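[Two of these unifications can be made concrete with a small sketch in current Scala; the names below are illustrative, not from the mail itself.]

```scala
// "functions = objects": a function literal is just sugar for an object
// with an apply method.
val incAsObject = new Function1[Int, Int] { def apply(x: Int): Int = x + 1 }
val incAsLiteral = (x: Int) => x + 1
assert(incAsObject(5) == incAsLiteral(5))

// "class hierarchies = algebraic types": a sealed trait with case classes
// plays the role of an algebraic data type, pattern matching and all.
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  case Square(side) => side * side
}
assert(area(Square(3)) == 9.0)
```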
So I believe the criticisms are overall very unfair. But they do
contain a grain of truth, and that makes them even more vexing. While
Scala has a simple and consistent core, some of its more specialized
features are not yet as unified with the rest as they could be. My
ambition for the next 2-4 years is that we can find further
simplifications and unifications and arrive at a stage where Scala is
so obviously compact in its design that any accusations of it being a
complex language would be met with incredulity. That will be the best
counter-argument to the naysayers. But much more importantly, it will
be a big help for the people writing advanced software systems in
Scala. Their job will be easier because they will work with fewer but
more powerful concepts.
If we manage to do these simplifications, that would be a good basis for
a Scala 3. Right now, all of this is very tentative. Scala 3 does not
have an arrival date, and it is not even certain that it will ever arrive.
But to give you an idea of what I will be working on, here are some
potential simplifications.
- The easiest win is probably XML literals. They seemed a great idea at the
time; now they stick out like a sore thumb. I believe with the new
string interpolation scheme we will be able to put all of XML
processing in the libraries, which should be a big win. It also means
we could provide swappable alternatives to the current XML system such
as Anti XML or others.
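[A sketch of what "XML in the libraries" could look like via string interpolation. The `xml` interpolator name is hypothetical; it returns a plain String here where a real library would build a node tree.]

```scala
// Hypothetical `xml` interpolator: a real library would parse the parts
// and produce a node tree; here we just splice strings to show the surface syntax.
implicit class XmlInterpolator(private val sc: StringContext) extends AnyVal {
  def xml(args: Any*): String = sc.s(args: _*)
}

val name = "world"
val greeting = xml"<greeting>$name</greeting>"
assert(greeting == "<greeting>world</greeting>")
```

Since interpolators are ordinary library methods on StringContext, this is exactly the mechanism that would let Anti-XML or any other library plug in its own literal syntax.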
- The type system. Ideally, Scala's types will be built from just
traits, mixin composition, refinements, and paths, and nothing else.
That's the true core of Scala as is captured in our dependent object
types formalism. We'll throw in classes for Java compatibility. We
still have to make this into a practical programming language
compatible with what Scala currently is. The potential breakthrough
idea here is to unify type parameters and abstract type members. The
idea would be to treat the following two types as equivalent
trait Seq[type Elem] and trait Seq { type Elem }
The definition
trait Seq[Elem]
could still be kept and be interpreted as a type that has an
inaccessible member Elem, similar to the distinction between
class C(x: T) and class C(val x: T)
that we have now. The big simplifications are then:
(1) Any type can be written without its parameters.
(2) Any type that has abstract type members can be retroactively parameterized.
(3) Type parameters can be unified by name.
To illustrate (3), right now to create a synchronized hashmap, one has to write:
new HashMap[String, List[Int]] with SynchronizedMap[String, List[Int]]
That's one of the thankfully few aspects where current Scala violates
the DRY principle. In the future you'd be able to write
new HashMap[String, List[Int]] with SynchronizedMap
because SynchronizedMap could refer to the type Key and Value
parameters in Map. Or you could write
new HashMap[Key = String, Value = List[Int]] with SynchronizedMap
to make it even clearer. The two formulations would be equivalent.
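[The `[Key = String]` syntax above is hypothetical Scala 3, but abstract type members already let current Scala approximate "unification by name" today. A sketch with made-up names standing in for Map/HashMap/SynchronizedMap:]

```scala
trait MapSig {
  type Key
  type Value
  def get(k: Key): Option[Value]
}

// The mixin refers to Key and Value by name -- no type arguments repeated.
trait Synchronized extends MapSig {
  abstract override def get(k: Key): Option[Value] = synchronized(super.get(k))
}

class SimpleMap[K, V] extends MapSig {
  type Key = K
  type Value = V
  private var store = Map.empty[K, V]
  def put(k: K, v: V): Unit = store += (k -> v)
  def get(k: K): Option[V] = store.get(k)
}

// No repetition of the type arguments in the mixin composition:
val m = new SimpleMap[String, List[Int]] with Synchronized
m.put("xs", List(1, 2, 3))
assert(m.get("xs") == Some(List(1, 2, 3)))
```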
Now if we do that then we have suddenly gained the essential
functionality of higher-kinded types and existential types for free! A
higher-kinded type is simply a type where some parameters are left
uninstantiated. And an existential type is the same! Now clearly,
something must get lost in a scheme that unifies higher-kinded and
existential types by eliminating both. The main thing that does get
lost is early checking of kind-correctness. Nobody will complain that
you have left out type parameters of a type, because the result will
be legal. At the latest, you will get an error when you try to
instantiate a value of the problematic type. So type-checking will be
delayed. Everything will still be done at compile-time. But some of
the checks that used to raise errors at the declaration site will now
raise errors at the use site. In my mind that could be a price worth
paying for the great overall simplification and gain in expressive
power. The other thing that gets lost are the more complicated forms
of existential types that cannot be expressed as a (composition of)
types with uninstantiated type members.
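[The "existential = type with an uninstantiated member" reading can be simulated in current Scala: a type-member-based Seq used without pinning down Elem behaves like today's Seq[_]. A minimal sketch, with an illustrative MySeq standing in:]

```scala
trait MySeq {
  type Elem
  def head: Elem
  def size: Int
}

// No mention of Elem at all: this is the "existential" use --
// a MySeq whose element type is left uninstantiated.
def describe(s: MySeq): String = s"a sequence of ${s.size} elements"

val ints = new MySeq { type Elem = Int;    def head = 1;   def size = 3 }
val strs = new MySeq { type Elem = String; def head = "a"; def size = 2 }
assert(describe(ints) == "a sequence of 3 elements")
assert(describe(strs) == "a sequence of 2 elements")
```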
One particularly amusing twist is that this could in one fell swoop
eliminate what I consider the worst part of the Scala compiler. It
turns out that the internal representation of higher-kinded types in
the Scala compiler is the same as the internal representation of raw
types in Java (there are good reasons for both representation
choices). But raw types should map to existentials, not to
higher-kinded types. We therefore need to map Java raw types to Scala
existential types. The code that does this is probably the most
fragile and intricate part of the Scala compiler. There's basically no
good way to do it without either forgetting some transformations or
accidentally triggering cyclic reference errors. But with the
projected simplifications we would get
raw types = existentials = higher-kinded types = types with
uninstantiated parameters
so the whole issue would vanish in a puff of smoke.
To summarize: If we do this simplification,
- we could eliminate two classes of types in Scala, so that only a
simple core remains,
- we could gain expressive power through unification of concepts,
- we could avoid unnecessary repetition of parameters in mixin
compositions and extends clauses,
- we could strengthen the analogies between type parameters and value
parameters.
Other more narrowly scoped ideas for the type system are to introduce
least upper bounds and greatest lower bounds of types as type
constructors. This would avoid the explosion of computed lub types
that we sometimes see in codebases today. And there are some ideas to
make type inference more powerful by making it constraint based.
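[A demonstration of the computed-lub explosion in today's Scala; the exact inferred type varies by compiler version.]

```scala
// The element type inferred for this list is the computed least upper bound
// of Left[Int, Nothing] and Right[Nothing, String] -- in 2.x something like
//   Product with Serializable with Either[Int, String]
// Declared lub/glb type constructors would let this be named directly
// instead of being computed structurally.
val xs = List(Left(1), Right("a"))

val strings = xs.map {
  case Left(i)  => i.toString
  case Right(s) => s
}
assert(strings == List("1", "a"))
```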
One big unknown right now is how to ensure a high degree of backwards
compatibility or alternatively provide migration strategies. It's
clear that we have to do this if we want this to fly. It will require
an implementation and lots of experimentation. Therefore, I don't
expect any of these things to materialize before a timeframe of 2-4
years.
So, what does this have to do with SIP 18? Two things:
First, while we might be able to remove complexities in the definition
of the Scala language, it's not so clear that we can remove
complexities in the code that people write. The curse of a very
powerful and regular language is that it provides no barriers against
over-abstraction. And this is a big problem for people working in
teams where not everyone is an expert Scala programmer. Hence the idea
to put in an import concept that does not prevent anything but forces
people to be explicit about some of the more powerful tools that they
use. I am certain there is no way we can let macros and dynamic types
into the language without such a provision.
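[For reference, here is the shape this opt-in took when it landed in 2.10: dynamic member selection is fenced off behind an explicit language import. The Config class is a made-up example.]

```scala
// Without this import, subclassing scala.Dynamic is refused by the compiler.
import scala.language.dynamics

class Config extends Dynamic {
  private val settings = Map("host" -> "localhost", "port" -> "8080")
  // `config.host` is rewritten by the compiler to `config.selectDynamic("host")`
  def selectDynamic(name: String): String =
    settings.getOrElse(name, s"<no setting: $name>")
}

val config = new Config
assert(config.host == "localhost")
assert(config.timeout == "<no setting: timeout>")
```

The import prevents nothing; it only makes the use of the powerful feature visible at the top of the file, which is exactly the point of SIP 18.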
Second, the discussion here shows that complex existentials might
actually be something we want to remove from a Scala 3. And
higher-kinded types might undergo some (hopefully smallish) changes to
syntax and typing rules. So I think it is prudent to make people flag
these two constructs now with explicit imports, because, unlike for
the rest of the language we do not want to project that these two
concepts will be maintained as they are forever. If you are willing to
keep your code up to date, no reason to shy away from them. But if you
want a codebase that will run unchanged 5 years from now, maybe you
should think before using complex existentials or higher kinded types.
Of course the docs for these two feature flags will contain a
discussion of these aspects, so people can make an informed choice for
themselves.
I know that despite these explanations SIP 18 will still be
contentious. But let's keep the discussion of SIP 18 on scala-sips. Of
course I'd be happy to see responses to all other parts of this mail
in this thread.
Cheers
- Martin
I am certain there is no way we can let macros and dynamic types
into the language without such a provision.
Cheers
- Martin
I still don't get SIP 18. If I start typing...macro def
...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you are writing a macro - want me to import language.macroDefs?" adding anything? Why is it being any more explicit about whether macros are being used in the program than the program, um, containing macros?
- The type system. Ideally, Scala's types will be built from just
traits, mixin composition, refinements, and paths, and nothing else.
That's the true core of Scala as is captured in our dependent object
types formalism. We'll throw in classes for Java compatibility. We
still have to make this into a practical programming language
compatible with what Scala currently is. The potential breakthrough
idea here is to unify type parameters and abstract type members. The
idea would be to treat the following two types as equivalent
trait Seq[type Elem] and trait Seq { type Elem }
They are considerably more powerful since they can be used as input
types as well as output types. What you are arguing is that the basis
of any good type system should be F_omega_sub, or something close. I
want to explore something completely different. But let me finish the
research before going into details.
> def identity[A](a: A): A = a
>
> So…is this a universal type? There aren't any classes here onto which you
> could inject type members (maybe Function1?), so I don't see a way to carry
> your unification strategy through every case.
Nothing new here. Polymorphic types for methods already exist and will
be maintained. In a calculus we could model identity as a member of a
parameterized class, but in a practical programming language things
will stay as they are.
Cheers
- Martin
PLUS:
I think having a look at the stuff Kotlin (reified generics) or Ceylon (union types instead of nullable reference types) are trying to do makes sense.
I don't think the feature flags are the right way to pull off a migration to a future version. Imho it makes more sense to not annoy people about existing stuff but offer them a way to opt-in to "future implementations" on a non-global base. E. g. the thing Adrian mentioned about virtpatmat.
I still think import is the wrong way and I also dislike using "language" for it. We don't have "utilities" but "util", so I think "lang" is more consistent and looks more familiar to Java people.
Imho the crucial thing is to have an existing implementation before starting to make any warning noises.
In the end I think Scala 3 makes sense, but it should be done in a more continuous way, e. g. folding these changes into the next 2.x release when they become ready and declare Scala 3 when all the changes are already in. E. g. "This is what we think is worth calling 3.0, because it is stable, mature and all features are well-tested. There are no compatibility issues." E. g. don't repeat Python 2 -> Python 3. And don't keep people waiting forever like in Scala 2.7 -> Scala 2.8.

Thanks and bye!

Simon
I am a bit more optimistic than you. Java has published a roadmap and
some of the changes might well be breaking (I would not see how else
to introduce reified types). Now we all know that roadmaps like that
are tentative and if the breakage is too serious one won't be able to
do it. But I believe it's better to be clear about the directions of
_work on the language_ without already being explicit about releases.
> A value proposition which would make many people more willing to move to
> Scala 3:
>
> The changes you mentioned
>
> PLUS:
>
> Simplifications to reduce the complexity of signatures in the collection
> space.
I do not see how you will be able to do this without greatly
complicating the use of collections. If you want to convince me otherwise,
write an alternative implementation and convince more than 10% of
Scala programmers to use it. Then, and only then, will I give it a
serious look.
> Better/reified Generics. Seeing Oracle slides mentioning the possibility of
> reified Generics in a future version of the JVM/Java I think Scala can push
> this forward. The main thing in Generics I care about is not reflection
> stuff, but the whole overloading/overriding/subclassing problem. Other
> platforms don't use the erasure scheme either.
Our answer to that is manifests / type tags. I am convinced we can
make this work well so that no reified types are needed. Regarding
other platforms: The only platform that uses reified types is .NET and
it's by no means accepted in their core developer team that it was a
good idea. Haskell, SML, OCaml use erased types just like Scala and
Java. C++ templates do not count, IMO, because that's a compile-time
expansion mechanism.
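[The manifests / type tags answer, in its (then-new) 2.10 form: a context bound recovers the erased class at the use site, so arrays and similar can be created without JVM-level reification.]

```scala
import scala.reflect.ClassTag

// Without the ClassTag context bound, `Array.fill` could not be compiled:
// T is erased at runtime, so the element class must be passed along implicitly.
def fill[T: ClassTag](n: Int)(elem: T): Array[T] = Array.fill(n)(elem)

val a = fill(3)("x")
assert(a.sameElements(Array("x", "x", "x")))
assert(a.getClass.getComponentType == classOf[String])
```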
> Having better default/named arguments so that overloading can be put to rest
> completely. (Makes reflection much simpler)
I am very sympathetic to avoiding overloading but do not see how we can
do that and maintain Java compatibility.
> No nullable reference types by default.
We'd need to add non-null types. Not convinced it is worth it because
Option fulfills this role. I'm sitting on the fence on this one, but
my gut feeling is it's better to improve Option.
> Further unification of AnyVal/AnyRef.
Will hopefully happen in 2.10. See value classes SIP.
> No any2StringAdd.
I believe once we have string interpolation in 2.10 (needs a vote, but
I believe this one will be accepted),
we can deprecate any2StringAdd afterwards. Maybe even deprecate in
2.10.1 if we want to go fast, otherwise 2.11.
> No unsafe implicit conversions for primitive types.
Well, it was a design decision of Scala to keep Java expressions
as-is, and I believe it was a good one. Maybe at some point in the
future we want to revise that. But right now I prefer we keep it.
>
> I don't think the feature flags are the right way to pull off a migration to
> a future version. Imho it makes more sense to not annoy people about
> existing stuff but offer them a way to opt-in to "future implementations" on
> a non-global base. E. g. the thing Adrian mentioned about virtpatmat.
>
We can do it for the pattern matcher. There is simply no way to do it
for the core type system, without maintaining two different compilers
at the same time.
Cheers
- Martin
So, your email put me on to the precise issue that arises when you try to do this, which I have now posted to the list. Basically, type members are existentials, it's just that the pack/unpack is hidden by the language (as it is with most languages that support existential quantification). It's pretty easy to see this existentiality though if you look:
type Id = { type A; def apply(a: A): A }
def foo(id: Id) = id(42) // error!
Let-bound polymorphism works just fine, since there's no difference between an instantiated universal and an instantiated existential. Higher-rank types (true universal polymorphism) do not work at all, and that's where the weakness of this approach shows up. It's possible that this may be resolved by leveraging the fact that module members are late-bound in the resolution (basically, the same trick we currently use to wrestle higher-rank types out of what is fundamentally let-bound polymorphism), but I'm not sure.
Higher-kinded types seem like the most dubious part of the proposal. I'm not even sure how it would all work out, and the theory here is generally untested waters. I'm still thinking about it though, and I look forward to seeing what Martin comes up with!
Daniel

On Tue, Mar 20, 2012 at 10:05 AM, Luke Vilnis <lvi...@gmail.com> wrote:
Hi Daniel,

It's probably off topic for this thread, but I couldn't help but start thinking about the problem you and Martin were discussing. (FYI, I'm just an amateur who enjoys functional programming and has read TAPL/ATTAPL, so take this for what it's worth.) I've enjoyed reading some of your blog posts (and your data structures talk), so I was wondering if you would give me your perspective on this interpretation:

I think if you think of Scala objects as first-class modules, then the ability to get universal quantification out of type members makes sense. Type members are far from regular existentials because they don't require pack/unpack (I think that's what TAPL called it), which can only be done inside the scope of the function, where results of the existential type can't be returned. So you could imagine a function that takes a module and returns a modified copy of that module, while treating the input module's type member in a generic (a.k.a. universally quantified) way. I have to admit this is complete hand-waving (and very much out of my depth) but this is my intuition of how you get back F-sub type behavior from first-class modules (still not sure how to get the omega part).
So your identity example would be like:
type ValueWithType = { type T; def value: T }

And the function would then just be ValueWithType => ValueWithType#T
I think what Martin is saying is that you can turn the argument list of a function into a module, and then the type parameters of the function become abstract type members of the module. Not sure how higher kinded types works into there. Any thoughts?
Best,
Luke
I think it still works because you have to translate the argument list to "apply" into a module as well... So:

type Id = { def apply[A](a: A): A }

becomes

type Id = { def apply(a: { type A; def value: A }): a#A }

And then

def foo(id: Id) = id(42)

works as long as you have a mechanism to automatically translate argument lists into modules, which was I think the gist of Martin's original idea.
One is tempted to observe that this is also how people propose
periodically to end real wars (the complete extermination of the
Other) with generally unpleasant results. I'm not a big fan of tabs
either, but it seems unwise to martyr the tabbies.
On 20/03/2012 18:27, Alex Kravets wrote:
> Hi Martin,
>
> Page 32 of Fortress Language Specification
> <http://labs.oracle.com/projects/plrg/Publications/fortress.1.0.pdf> specifies blank non-space
> characters that are /not allowed /in source (except in comments).
>
> If one looks at most files in java.lang or java.util. packages they represent a jumble of
> space-based and tab-based indentation.
>
> It's a very small thing, but simply restricting valid spacing to only the space character
> would, IMHO, be very beneficial.
What a nightmare!! The result would be massive space filling (by IDEs or editors) to produce
indentation !
>
> It would end all the space-vs-tabs-vs-mix wars and make the source indentation much more regular.
To me whitespace is perfectly defined by combinations of space + tab + nl, like in regexp's.
>
> Is there any chance that tabs can be prohibited from source (outside of comment blocks) in any
> future version of Scala ?
>
> Cheers...
>
Kr,
Robert.
On Tue, Mar 20, 2012 at 4:02 PM, Simon Ochsenreither
<simon.och...@googlemail.com> wrote:
> No nullable reference types by default.
We'd need to add non-null types. Not convinced it is worth it because
Option fulfills this role. I'm sitting on the fence on this one, but
my gut feeling is it's better to improve Option.
We'd need a new Option type, one which explicitly disallowed null.
Anything you used it with would have to disallow null as well. Maybe
if we had a fully working NotNull there could be an Option[T <:
NotNull].
I don't think there's any way to do it as things stand.
Second, the discussion here shows that complex existentials might
actually be something we want to remove from a Scala 3. And
higher-kinded types might undergo some (hopefully smallish) changes to
syntax and typing rules. So I think it is prudent to make people flag
these two constructs now with explicit imports, because, unlike for
the rest of the language we do not want to project that these two
concepts will be maintained as they are forever. If you are willing to
keep your code up to date, no reason to shy away from them. But if you
want a codebase that will run unchanged 5 years from now, maybe you
should think before using complex existentials or higher kinded types.
Of course the docs for these two feature flags will contain a
discussion of these aspects, so people can make an informed choice for
themselves.
Is this feasible?
Some(Some(...(x))) --> x
Some(Some(...(None))) --> NoneN (e.g. None1, None2, depending on
number of nested Somes)
None -> null (not necessary?)
Option[Option[...[T]]] would have to be stored underneath as AnyRef
and not T, since it would need to be able to refer to T and NoneN, but
that doesn't seem like a blocker. Maybe it slows things down with more
polymorphism, but it still reduces allocation of Some instances.
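[A runnable sketch of the flattened encoding described above. The names NoneN/some/none are made up for illustration; a real implementation would live in the compiler's representation of Option, not in user code.]

```scala
// None at nesting depth d is a marker carrying d; every other value
// represents itself, with all the Some wrappers erased.
final case class NoneN(depth: Int)

val none: Any = NoneN(0)  // plain None (the post suggests this could even be null)

// The flattened "Some": wrapping a NoneN bumps its depth, wrapping
// anything else is the identity -- so no Some instance is ever allocated.
def some(x: Any): Any = x match {
  case NoneN(d) => NoneN(d + 1)
  case v        => v
}

assert(some(some(42)) == 42)          // Some(Some(42))   --> 42
assert(some(none) == NoneN(1))        // Some(None)       --> None1
assert(some(some(none)) == NoneN(2))  // Some(Some(None)) --> None2
```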
Technically,
def ident[T](args: T): T = macro ...
I found it confusing at first, but the distinction is interesting and
I'm very much in agreement with it. The definition is exactly the same as
every other in Scala: it takes some parameters and produces a result,
all according to whatever types you specify. A macro does not change
the definition: if you are taking two strings and returning a boolean,
you are taking two strings and returning a boolean, period. It is the
*implementation* of said definition that is *produced* by the macro.
Off topic, I know, but I like nipping misconceptions in the bud.
>
> ...I'm writing a macro. Why is my IDE/paperclip saying "it looks like you
> are writing a macro - want me to import language.macroDefs?" adding
> anything? Why is it being any more explicit about whether macros are being
> used in the program than the program, um, containing macros?
>
> Having said all that; very interesting update on the direction of scala
>
> Chris
>
> On Tue, Mar 20, 2012 at 11:29 AM, martin odersky <martin....@epfl.ch>
> wrote:
>>
>> I am certain there is no way we can let macros and dynamic types
>> into the language without such a provision.
>>
>> Cheers
>>
>> - Martin
>
>
--
Daniel C. Sobral
I travel to the future all the time.
Yes, but I guess they are used much more rarely. One thing I do not
understand: why outlaw conversions from Long to Float? I mean, we know
Float is a lossy approximation no matter what you do, so why is bit
loss in the conversion a problem?
Cheers
- Martin
>
> We can do it for the pattern matcher. There is simply no way to do it
> for the core type system, without maintaining two different compilers
> at the same time.
>
> My idea was basically to introduce the heavy stuff first, so that when we
> arrive at 3 there are no big compatibility issues to expect.
> But then of course if it won't work it doesn't sound too great :-/ Although
> maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either,
> right?
>
>
> Thanks and bye,
>
>
> Simon
--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967
Well, and spaces have poor indentation on fonts of non-fixed size.
But, by all means, do bring it up on scala-debate, and once consensus
is formed, bring it back to scala-language. Until then, _please_ do
not inject discussions about tabs vs spaces on threads about the
evolution of Scala's type system. Let me end this with a quote by
James Iry: "1940s - Various "computers" are "programmed" using direct
wiring and switches. Engineers do this in order to avoid the tabs vs
spaces debate."
>
> Therefore sometimes tabs are rendered as 2, 4, 6 or 8 times the width of a
> space and sometimes even a fractional number, this causes misalignment
> of indentation when there is a mix of spaces and tabs (and I've never seen a
> tab-only source file in my 16+ years of professional experience).
>
> If you want to observe the effect of this, just click through into the
> source on Java libraries and you'll observe a complete jumble of
> indentation.
>
> Cheers...
>
>
>
> On Tue, Mar 20, 2012 at 10:41 AM, Paul Phillips <pa...@improving.org> wrote:
>>
>> On Tue, Mar 20, 2012 at 10:27 AM, Alex Kravets <kra...@gmail.com> wrote:
>> > It's a very small thing, but simply restricting valid spacing to only
>> > the
>> > space character would, IMHO, be very beneficial.
>> >
>> > It would end all the space-vs-tabs-vs-mix wars and make the source
>> > indentation much more regular.
>>
>> One is tempted to observe that this is also how people propose
>> periodically to end real wars (the complete extermination of the
>> Other) with generally unpleasant results. I'm not a big fan of tabs
>> either, but it seems unwise to martyr the tabbies.
>
>
>
>
> --
> Alex Kravets def redPill = 'Scala
> [[ brutal honesty is the best policy ]]
>
--
I can't think of the exact problem I ran into when working on Spire [1]
but the unsafe conversions did give me problems, and I would also like
to be rid of them.
The thing that is galling is that Int/Long are precise, and the user
should need to be explicit about an action that will move to an
approximate type (unless the operation could only be done with an
approximate type). There are valid reasons to prefer pow(Double) to
pow(Long) in some cases (it's a bit faster) but it's easy for a user to
get this wrong, and often you really do want pow(Long), which Scala
doesn't provide.
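[The pow(Long) that Erik notes Scala doesn't provide is straightforward to supply by repeated squaring; a sketch of the kind of operation he'd like in the standard library. The name powL is made up.]

```scala
// Exact integer power by repeated squaring. Overflow is the usual Long
// wrap-around -- but no silent rounding through Double on the way.
def powL(base: Long, exp: Long): Long = {
  require(exp >= 0, "negative exponent")
  var result = 1L
  var b = base
  var e = exp
  while (e > 0) {
    if ((e & 1) == 1) result *= b  // multiply in this bit's contribution
    b *= b                          // square for the next bit
    e >>= 1
  }
  result
}

assert(powL(2, 10) == 1024L)
assert(powL(3, 0) == 1L)
assert(powL(10, 18) == 1000000000000000000L)  // exact, unlike math.pow
```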
In general I would like it if Scala supported more arithmetic operations
on all the numeric types (rather than relying on implicit conversions).
-- Erik
It's hardly about precision, it's about losing type safety. That
numbers all support about the same operations doesn't make type mistakes
right, it just makes them more difficult to catch. At least, those are my own
reasons.
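[The silent loss both are pointing at is easy to demonstrate; these conversions compile without complaint in current Scala.]

```scala
// Int => Float is an implicit widening, but Float has only a 24-bit mantissa:
val f: Float = 123456789        // the value from Simon's round example
assert(f.toInt == 123456792)    // not 123456789 -- rounded to the nearest Float

// Long => Float, the case Martin asks about: same silent rounding.
val g: Float = 16777217L        // 2^24 + 1
assert(g == 16777216f)          // indistinguishable from 2^24
```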
>
> Cheers
>
> - Martin
>
>>
>> We can do it for the pattern matcher. There is simply no way to do it
>> for the core type system, without maintaining two different compilers
>> at the same time.
>>
>> My idea was basically to introduce the heavy stuff first, so that when we
>> arrive at 3 there are no big compatibility issues to expect.
>> But then of course if it won't work it doesn't sound too great :-/ Although
>> maintaining Scala 2 _and_ Scala 3 cannot be avoided for some time either,
>> right?
>>
>>
>> Thanks and bye,
>>
>>
>> Simon
>
>
>
> --
> Martin Odersky
> Prof., EPFL and Chairman, Typesafe
> PSED, 1015 Lausanne, Switzerland
> Tel. EPFL: +41 21 693 6863
> Tel. Typesafe: +41 21 691 4967
--
> You're right, it makes no sense to have a discussion without a working
> proposal. Btw, is there any decision yet about the Traversable/Iterable
> merge?
No, I think that would got dropped. We have to consider it again
before freezing for 2.10.
> > No unsafe implicit conversions for primitive types.
One thing I do not understand. Why outlaw conversions from Long to Float?
I mean, we know Float is a lossy approximation no matter what you do, so
why is bit loss in the conversion a problem?
Fingers typed too fast to keep meaning. I meant, the issue got dropped.
- Martin
>
>> > > No unsafe implicit conversions for primitive types.
>
>
>>
>> One thing I do not understand. Why outlaw conversions from Long to Float?
>>
>> I mean, we know Float is a lossy approximation no matter what you do, so
>> why is bit loss in the conversion a problem?
>
> Because it is often not visible. E.g. when integer types are used in
> arguments to a method accepting floating point values.
> Another probably more severe example is
> scala> (123456789).round
> res0: Int = 123456792
>
> When I learned about implicits the big rule was that implicits shouldn't be
> used for unsafe operations, but only for stuff where it is sure that it
> won't go wrong for all inputs.
>
>
> Thanks and bye,
>
>
> Simon
--
I didn't expect that I'd have to start a post to a Scala mailing list
like that again, but here it goes:
I sincerely hope this proposal is a joke. Formatting is certainly *not*
a compiler issue. Furthermore, declaring your opinion as the end of such
an issue is presumptuous to say the least.
Not entirely. I did a lot of the Iterable/Traversable blend, but it
was tedious and I could tell it was going to be lost to drift if
I didn't merge it immediately. Also, I ran into this:
http://www.scala-lang.org/node/11957
As best I recall, I saw no way to do it in a backward compatible way.
As detailed at the above:
In Traversable, foreach is abstract
in Iterable, foreach is concrete, iterator is abstract
Iterable extends Traversable
So here are two lines which could exist in the wild now:
new Traversable[Int] { def foreach[T](f: Int => Unit): Unit = ??? }
new Iterable[Int] { def iterator = ??? ; override def foreach[T](f: Int => Unit): Unit = super.foreach(f) }
There's just no way not to break one. If foreach is concrete, the
first breaks. If it is abstract, the second breaks.
This is probably resolvable by introducing one or more new types, but
again, it's unappealing to touch it again unless I intend to merge it
the minute it works. So we have to agree on everything up front, in
contrast to my usual "write all the code and only then think about
what I'm writing" approach.
My ambition for the next 2-4 years is that we can find further
simplifications and unifications and arrive at a stage where Scala is
so obviously compact in its design that any accusations of it being a
complex language would be met with incredulity. That will be the best counter-argument to the naysayers
So, what does this have to do with SIP 18? Two things:
First, while we might be able to remove complexities in the definition
of the Scala language, it's not so clear that we can remove
complexities in the code that people write. The curse of a very
powerful and regular language is that it provides no barriers against
over-abstraction.
it will be a big help for the people writing advanced software systems in
Scala. Their job will be easier because they will work with fewer but
more powerful concepts.
The potential breakthrough idea here is to unify type parameters and abstract type members.
Now if we do that then we have suddenly gained the essential
functionality of higher-kinded types and existential types for free! A
higher-kinded type is simply a type where some parameters are left
uninstantiated.
Now clearly, something must get lost in a scheme that unifies higher-kinded and
existential types by eliminating both. The main thing that does get
lost is early checking of kind-correctness. Nobody will complain that
you have left out type parameters of a type, because the result will
be legal. At the latest, you will get an error when you try to
instantiate a value of the problematic type.
implicit val tuple2FirstFunctor = new Functor[Tuple2] {
  def map[A, B](f: A => B)(fa: Tuple2[A]): Tuple2[B] = f(fa._1) -> fa._2
}
or something like that?
I might not have stated it clearly enough. The motivation stated in
the roadmap has nothing to do with _perceived_ complexity. If that was
all, then it would probably be better not to talk about it at all and do
some marketing fluff that papers over it.
It's rather that, when it comes to complexity, I want to set the bar
very high. I want to develop Scala into a language that's truly
simple, not to placate or convince the naysayers but because I think
it will improve the language.
Cheers
- Martin
--
I'm sure Adriaan will correct me if I've got this wrong, but I think
the idea is to add a concept of type "un-members" which precisely
capture the universally quantified aspect that you're missing.
Cheers,
Miles
--
Miles Sabin
tel: +44 7813 944 528
gtalk: mi...@milessabin.com
skype: milessabin
g+: http://www.milessabin.com
http://twitter.com/milessabin
http://underscoreconsulting.com
http://www.chuusai.com
Oh yes, it's all coming back to me now. Thanks for paging it back in.
I don't have a solution for it either I'm afraid.
- Martin
>
> This is probably resolvable by introducing one or more new types, but
> again, it's unappealing to touch it again unless I intend to merge it
> the minute it works. So we have to agree on everything up front, in
> contrast to my usual "write all the code and only then think about
> what I'm writing" approach.
--
Anyway, I appreciate that you're hoping to drive adoption by reducing the
perceived complexity you find so vexing, but two comments on that:
First, "perceived" is the key word there. One thing I've learned in this
business is that marketing dictates truth.
Yes, but I guess they are used much more rarely.
One thing I do not understand. Why outlaw conversions from Long to Float?
I mean, we know Float is a lossy approximation no matter what you do, so
why is bit loss in the conversion a problem?
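The bit loss in question is easy to demonstrate: Float carries a 24-bit significand, so Long values above 2^24 need not round-trip (a small illustrative sketch, not from the original thread):

```scala
// Float has a 24-bit significand, so this Long cannot be represented exactly.
val big: Long = (1L << 24) + 1   // 16777217
val f: Float = big.toFloat       // rounds to 16777216.0f
assert(f.toLong != big)          // the conversion lost the low bit
```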
Their job will be easier because they will work with fewer but
more powerful concepts.
The potential breakthrough
idea here is to unify type parameters and abstract type members. The
idea would be to treat the following two types as equivalent:

  trait Seq[type Elem]
  trait Seq { type Elem }

Then, for instance, one could mix in a trait without repeating its type
parameters:

  new HashMap[String, List[Int]] with SynchronizedMap
The main thing that does get
lost is early checking of kind-correctness. Nobody will complain that
you have left out type parameters of a type, because the result will
be legal. At the latest, you will get an error when you try to
instantiate a value of the problematic type. So type-checking will be
delayed. Everything will still be done at compile-time. But some of
the checks that used to raise errors at the declaration site will now
raise errors at the use site.
This would also eliminate what I consider the worst part of the Scala compiler. It
turns out that the internal representation of higher-kinded types in
the Scala compiler is the same as the internal representation of raw
types in Java (there are good reasons for both representation
choices).
Cheers
- Martin
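For reference, the `HashMap with SynchronizedMap` shorthand above is not legal today: the mixin must repeat the type arguments. A sketch of that duplication in current Scala, using a hypothetical `Logged` stackable trait in place of the since-removed `SynchronizedMap`:

```scala
import scala.collection.mutable

// Hypothetical stackable trait: note it must re-declare the K, V parameters.
trait Logged[K, V] extends mutable.Map[K, V] {
  abstract override def put(k: K, v: V): Option[V] = {
    println(s"put($k, $v)")   // log, then delegate to the underlying map
    super.put(k, v)
  }
}

// The arguments [String, List[Int]] have to be written twice; the proposed
// unification would let the mixin pick them up from the parent.
val m = new mutable.HashMap[String, List[Int]]
          with Logged[String, List[Int]]
```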
We'd have to assume an ordering. Not sure yet exactly which one to
choose. One possibility is that they would be ordered if defined with
parameter notation, but not if defined as members.
Given the current state of research I don't have definite answers to
these. But they are good use cases to keep in mind!
> I can see how that would work if the type names agree, but what if they
> don't? Seems like you would need type-level operators to rename, project,
> basically all the usual tuple calculus suspects. Let me suggest this as an
> alternative:
>
> new (HashMap with SynchronizedMap)[String, List[Int]]
That's interesting!
>
>
>> The main thing that does get
>> lost is early checking of kind-correctness. Nobody will complain that
>> you have left out type parameters of a type, because the result will
>> be legal. At the latest, you will get an error when you try to
>> instantiate a value of the problematic type. So type-checking will be
>> delayed. Everything will still be done at compile-time. But some of
>> the checks that used to raise errors at the declaration site will now
>> raise errors at the use site.
>
>
> Delaying kind checking until the typer is very much like delaying type
> checking until runtime. It basically makes the type-level language untyped
> (or, to borrow from "dynamic" language parlance, it would be dynamically
> kinded). That is, you could construct all kinds of crazy type-level things
> that make no sense whatsoever, and you would never know they don't make
> sense until you try to instantiate a value of an unsound type. Basically
> every poorly kinded type would just be uninhabited, i.e. equivalent to
> Nothing.
>
> I think that this might be a price too high to pay. I would rather see a
> step in the other direction, introducing an actual kind system complete with
> polymorphic kinds. This would greatly simplify library development.
>
I agree it's a tradeoff. There are some thoughts from Adriaan's side
to regain kind checking by distinguishing input and output member
types. I see that as similar in spirit to the progression from Prolog
to Mercury, say.
Cheers
- Martin
From what we know today, that would require a considerable
complication of the type system, so it's less likely to happen.
Cheers
- Martin
/Jesper Nordenberg
Assumes facts not in evidence. It shows that humans in general and
programmers in particular are imbued with infinite confidence that
they can do it better than the other guy. (Which is great, because
sometimes they're right.) And that starting something is easy, and
that everything made by decent programmers starts out high on elegance
and low on tradeoffs. But the woods are lovely dark and deep, and
they have miles to go before they sleep.
Not that I disagree that people want "simpler", or think they do. Of
course simple plus powerful implies many degrees of freedom, another
thing which people appear not to want (at least when it comes to their
co-workers.) Eventually the time comes to pick something and make it
your own.
I welcome the effort to simplify Scala. Many languages have been created
after Scala, such as Kotlin (JetBrains), Xtend (Eclipse), Ceylon (Red Hat),
etc. It shows that the programmer community and industry aspire to a
simpler yet still powerful language.
Just to hijack this thread somewhat... Given 'extends AnyVal', is it any more feasible today to revisit the old alchemists' dream of transmuting Some(x) to x, and None to null? (i.e. an unboxed Option)
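For what it's worth, a value class can sketch part of that dream today, with the known caveats that it re-boxes in generic positions and conflates `None` with `null` (all names below are hypothetical):

```scala
// Sketch of an "unboxed" Option as a value class: Some(x) ~ x, None ~ null.
// Caveats: it cannot hold a legitimate null, and it boxes again when used
// generically (e.g. stored in a List[UOpt[A]]).
final class UOpt[+A](val repr: Any) extends AnyVal {
  def isEmpty: Boolean = repr == null
  def get: A = repr.asInstanceOf[A]
  def map[B](f: A => B): UOpt[B] = if (isEmpty) UOpt.none else UOpt(f(get))
  def getOrElse[B >: A](default: => B): B = if (isEmpty) default else get
}

object UOpt {
  def apply[A](a: A): UOpt[A] = new UOpt(a)
  def none: UOpt[Nothing] = new UOpt(null)
}
```

For example, `UOpt(3).map((n: Int) => n + 1).get` yields `4`, and `UOpt.none.isEmpty` is `true`.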
This is yet another example of premature optimization (declaring the data structure in the self type), and my idea for a solution is an inversion of control, where the mixin injects a method into the constructor instead of prematurely declaring itself as a constructor.
I am starting to get the strong intuition that this concept of inversion-of-control needs to be proliferated throughout Scala 3 if we want to make a huge paradigm shift win on modularity. I am studying now the DOT calculus in detail and I am hoping I can apply such concepts so that type preservation can be recovered.
I believe perhaps the ideas I have presented for injection of interface (relying on DOT) are a complete solution (and more generalized) to the reasons given for needing to represent family polymorphism by tracking types in the instance (which appears to be a less general form of dependency injection):
http://www.cs.au.dk/~eernst/tool11/papers/ecoop01-ernst.pdf#page=8
If a set of types share a set of methods (perhaps implemented as a typeclass rather than by virtual inheritance, so the dictionary can be injected with an object), then the disjunction of those types is the conjunction (and the conjunction of those types is the disjunction) of the implementations of that interface. But note that A ∧ A = A ∨ A, so both disjunction and conjunction can be operated upon if they share an interface A. That was the point of my prior post.
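Current Scala has no first-class `∨`, but the idea can be approximated with `Either` standing in for the disjunction: if both sides have a dictionary for the shared interface, a combined dictionary is derivable (all names here are illustrative, not from the thread):

```scala
case class Line(len: Double)
case class Box(w: Double, h: Double)

// The shared interface, as a type class whose dictionary is an injected object.
trait Drawable[T] { def draw(t: T): String }

implicit val drawLine: Drawable[Line] = (l: Line) => s"line(${l.len})"
implicit val drawBox: Drawable[Box]   = (b: Box)  => s"box(${b.w}x${b.h})"

// Either[A, B] stands in for A ∨ B: combine the two dictionaries into one,
// playing the role of the automatically generated Drawable[Line ∨ Box].
implicit def drawEither[A, B](implicit da: Drawable[A],
                              db: Drawable[B]): Drawable[Either[A, B]] =
  new Drawable[Either[A, B]] {
    def draw(e: Either[A, B]): String = e match {
      case Left(a)  => da.draw(a)
      case Right(b) => db.draw(b)
    }
  }

def draw[T](t: T)(implicit d: Drawable[T]): String = d.draw(t)
```

For example, `draw(Left(Line(2.0)): Either[Line, Box])` yields `"line(2.0)"`, dispatching through the combined dictionary rather than through a nominal supertype.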
I believe I show herein the fundamental importance of objects (as in "OOP"), that subclassing (but not subtyping) is fundamentally an anti-pattern, and that the new DOT calculus is essential.

For the goal of completely solving the Expression Problem, I believe the requirement for a "global vtable" which I pondered upthread is implicitly fulfilled by the injection of inversion-of-control I had proposed. Objects are passed around as the vtable, which I believe is a form of the extensible modularity.

Perhaps the Dotty compiler could automatically generate the implicit object `Drawable[Line ∨ Box]`. Thus we retain subtyping (i.e. `Line` and `Box` are subtypes of `Line ∨ Box`) while eliminating subclassing (i.e. there is no nominal type which is the supertype of `Line ∨ Box`, or at least `Any` should only occur with a cast, since I've shown it discards extensible static typing).

Another benefit of deprecating subsumption via subclassing in favor of subtyped disjunction is that distinct invariantly parametrized types can be added to the same List:

  class TaggedLine[TAG](a: Point, b: Point, tag: TAG)
  ...
  draw(List(TaggedLine(Point(0,0), Point(1,1), 1),
            TaggedLine(Point(0,0), Point(1,1), "1")))
  // Error: can not subsume to List[TaggedLine[Any]] because TAG is invariant

I assume the new DOT calculus will instead implicitly subsume `TAG` to `Int ∨ String` instead of `Any`?

Somewhat OT, I am pondering how DOT will deal with the following?

  trait Invertible[T <: Invertible[T, A, B], A, B] {
    def to(a: A): B
    def from(b: B): A
  }
  object AB extends Invertible[AB, A, B] {
    def to(a: A): B ...
    def from(b: B): A ...
  }
  object CD extends Invertible[CD, C, D] {
    def to(c: C): D ...
    def from(d: D): C ...
  }
  def to[T, A, B](invertible: Invertible[T, A, B], a: A): B = invertible.to(a)
  f(AB ∧ CD, new A)
  f(AB ∧ CD, new C)

So the Dotty compiler has to automatically supply:

  object `AB ∧ CD` extends Invertible[`AB ∧ CD`, A ∨ C, B ∨ D] {
    def to(a: A): B ...
    def to(a: C): D ...
    def from(b: B): A ...
    def from(d: D): C ...
  }
On Sunday, July 5, 2015 at 11:50:07 PM UTC+8, Shelby wrote:
Coming back to this ... and sorry no time to construct a blog, too rushed ...
To summarize ideas against premature specialization...