Actual complexity in Scala that warrants simplification

Rex Kerr

Nov 15, 2011, 12:28:32 PM
to scala-debate
After yet another "Scala is too complex" discussion, it occurred to me that, although these discussions very often mistake novelty for complexity, or mistake the complexity of the problem for the complexity of the solution, there _are_ still a number of areas where Scala could benefit from simplification (i.e. the resulting code would be simpler, even if Scala as a language were more complex, since it would allow a simple solution to be used where a complex one now must be).

I'm ignoring the obvious style choice ones like "make Scala like Haskell".

I was curious what areas other people have identified, and what, if any, solutions they've identified?  Or, perhaps people will take issue with some of mine?

My list includes:

(1) Specialization.  The basic concept isn't too bad, but the implementation is so full of bugs and limitations that employing it in anything but the most straightforward cases is painful at best.  I really don't know how to solve this except for serious manpower to be devoted to solving the problems, or to avoid the "feature".  Conceptual problems include enforced independence of types leading to combinatorial explosion of classes, inability to specify solutions for particular types or type combinations, inability to inherit, and more.
Solution: ?
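For readers unfamiliar with the feature, a minimal sketch of @specialized and of why the class count explodes (the emitted names below follow the compiler's mangling scheme and are illustrative, not exhaustive):

  // Each specialized type parameter forks the class per primitive, and two
  // independent parameters fork multiplicatively: here 3 x 3 specialized
  // variants alongside the generic class.
  class Pair[@specialized(Int, Long, Double) A,
             @specialized(Int, Long, Double) B](val a: A, val b: B)
  // The compiler emits Pair plus variants with names like Pair$mcII$sp,
  // Pair$mcID$sp, ..., with no way to hand-tune just one combination.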

(2) No MyType.  The intent behind
  abstract class A {
    def clone(): MyType
  }
  class B extends A {
    def clone() = new B
  }
is amazingly obvious, yet Scala provides no simple way to achieve this.
Solution: Yes, there are issues with contravariance (which causes issues with taking MyType as a parameter of a method), but at least the compiler could just bail out with an error instead of not even having the feature.
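For concreteness, the two usual workarounds, sketched (with `duplicate` in place of `clone`, to sidestep AnyRef's own clone): `this.type` works only when a method literally returns `this`, and an F-bounded type parameter approximates MyType at the cost of boilerplate in every subclass:

  abstract class A0 { def touch: this.type = this }    // fine, but must return `this`
  abstract class A1[Self <: A1[Self]] { def duplicate: Self }
  class B1 extends A1[B1] { def duplicate = new B1 }   // each subclass re-ties the knot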

(3) Basic features (mostly FP) are missing from parts of the library, causing unnecessarily verbose and hard-to-follow code.
  (a) It seems like "re-invent |>" is the answer to a StackOverflow question about once a week.
  (b) Using `copy` to transform data is amazingly awkward: x.copy(_1 = x._1+1).  How about x.modify1(_ + 1)?
  (c) Option has no fold
  (d) There is no default container for a single mutable variable.
  ... (this could go on for a while; I hope these are some of the most important)
Solution: just add these things to the library!
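For the record, these are each only a few lines to supply; a minimal sketch of (a), (c), and (d), assuming SIP-13-style implicit classes (still only a proposal at the time) and illustrative names:

  implicit class Pipe[A](val x: A) {
    def |>[B](f: A => B): B = f(x)              // (a): x |> f  ==  f(x)
  }
  implicit class OptionFold[A](val o: Option[A]) {
    def fold[B](ifEmpty: => B)(f: A => B): B =  // (c)
      o.map(f).getOrElse(ifEmpty)
  }
  final class Cell[A](var value: A) {           // (d): one mutable slot
    def modify(f: A => A): Unit = value = f(value)
  }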

(4) Collections library has too many methods, and the ones it has are not consistently present by function:
  (a) collect, collectFirst but no collectLast
  (b) drop, dropRight, dropWhile, but no dropRightWhile
  (c) filter/filterNot but no findNot, fornone, doesnotexist, etc.
  (d) headOption but no lastOption
  (e) reverseMap but no reverseFilter, reverseFlatMap, etc.
  (f) minBy, maxBy but no sumBy, productBy
  ...(various others)
This leads to a large memorization burden.
Solution: more views, less duplication.  Option view wraps any single-element return in an option.  Reverse (or right) view does things from other end.
Open problem: how can this have acceptable performance?
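To make the view idea concrete, a hypothetical sketch (eager, hence the open performance question; a real view would need lazy reversed indexing rather than an actual reverse):

  // The missing right-hand variants fall out of a single "work from the
  // other end" transformation:
  def dropRightWhile[A](xs: List[A])(p: A => Boolean): List[A] =
    xs.reverse.dropWhile(p).reverse
  def reverseFilter[A](xs: List[A])(p: A => Boolean): List[A] =
    xs.reverse.filter(p)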

(5) Generic numeric code is awkward and low-performance.
Solution: specialize.
Problem: see (1).

(6) "Enrich" my library pattern is awkward to write and often does a lot more than is needed (e.g. creates a class where an extension method would provide superior performance).
Solution: SIP for implicit classes may be enough.  This requires escape analysis to work flawlessly, or for the compiler to provide inlining, or it only solves half the problem--but at least it (mostly) solves the complexity half, which is what we're after for the purposes of this discussion.
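A sketch of the contrast, with illustrative names (the SIP syntax was not final at the time):

  // Today: a named wrapper class plus an implicit conversion, and an
  // allocation at every call site.
  class RichParity(val i: Int) { def isEven = (i & 1) == 0 }
  implicit def enrichParity(i: Int): RichParity = new RichParity(i)

  // With the SIP: a single declaration. The allocation remains unless
  // escape analysis or inlining removes it -- the other half of the problem.
  implicit class RichParity2(val i: Int) { def isOdd = (i & 1) == 1 }

  4.isEven   // both forms enable calls like this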

(7) Overloading method names by type is a feature that acts as a second-class citizen.  The most notable offender is the restriction on default values in overloaded method names even when the two are still completely unambiguous.  If I am to understand Martin's StackOverflow answer, this is because the _language specification_ would become more complex.  But this is pushing the complexity from the language specification onto the programmer in the form of an extra rule ("only one overloaded method can have default parameters") instead of the obvious "ambiguity is not allowed".  Also, since ambiguity is presently allowed: class Z { def foo(s: String, a: Int = 42) = a; def foo(t: String) = t }, one has yet more to remember regarding how ties are actually resolved (shorter form is used; if return type is different, type inference can be used to get longer form; if parameters are named differently, referring to a parameter by name can be used to switch; otherwise if the default parameter is hidden there's no way to get it back).
Solution: allow anything unambiguous.  Disallow anything ambiguous.
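For concreteness, a sketch of the restriction in question: this pair is rejected today (the error reads roughly "multiple overloaded alternatives of method foo define default arguments"), even though no call to foo could be ambiguous:

  class W {
    def foo(s: String, a: Int = 42) = a
    def foo(i: Int, b: Boolean = true) = i   // distinct first-parameter type
  }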

(8) Default parameter values and implicits don't free you from remembering that you actually have parameters there when you use functions.  In particular,
  def dup(s: String, n: Int = 2) = s*n
  dup("fish")   // "fishfish"
  List("fish","cow").map(dup)  // Error
which makes a feature that should reduce your conceptual burden actually increase it.  You're better off with
  def dup(s: String, n: Int) = s*n
  def dup(s: String) = dup(s,2)
except now it complains about recursive types, argh (back to (7)--second class citizen!--as there's no possible way dup(s,2) could actually be recursive).  If we fix the types, then the map will work just fine.  But why?  It's conceptually identical to the default parameter case.  Now we're (horrifyingly) tempted to
  def dup(s: String)(implicit n: Int = 5)
as a hack; this does work, unless you throw in an implicit int somewhere without realizing it.
Solution: shorter forms of methods with defaults should go through type inference.  Implicits are trickier, but the interaction between all of these things should perhaps be considered a little more.  (In particular, it seems like a conversion from f(x: X)(implicit y: Y): A to Y => X => A would be an obvious thing to have.)
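Two workarounds, sketched: an explicit placeholder restores the default that eta-expansion loses, and the two-method form compiles once the delegating overload states its result type (renamed dup2 here only to keep the snippet self-contained):

  def dup(s: String, n: Int = 2) = s * n
  List("fish", "cow").map(dup(_))            // ok: expands to x => dup(x), so n defaults to 2

  def dup2(s: String, n: Int): String = s * n
  def dup2(s: String): String = dup2(s, 2)   // explicit result type, no "recursive" complaint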

  --Rex

Erik Osheim

Nov 15, 2011, 1:43:13 PM
to scala-debate, Rex Kerr
On Tue, Nov 15, 2011 at 12:28:32PM -0500, Rex Kerr wrote:
> (1) Specialization. The basic concept isn't too bad, but the
> implementation is so full of bugs and limitations that employing it in
> anything but the most straightforward cases is painful at best. I really
> don't know how to solve this except for serious manpower to be devoted to
> solving the problems, or to avoid the "feature". Conceptual problems
> include enforced independence of types leading to combinatorial explosion
> of classes, inability to specify solutions for particular types or type
> combinations, inability to inherit, and more.
> Solution: ?

I agree that specialization right now has some issues. But some of the
problems you list are worse than others. For instance, it would be nice
to be able to specify solutions for particular types, but the inability
to inherit from specialized classes is a much bigger issue.

Also, as a library author you leave out the biggest issue that I've
faced with specialization: if I expose an interface that has a
specialized type parameter, I have to document this fact to my users,
who in turn have to remember to "specialize through" to the place where
the type param is provided. It's easy to mess this up, and there's
nothing (other than poor performance) to clue you in to what happened.

In my particular case, this has made working with a specialized Numeric
library much less attractive... because I'm forced to choose between
beautiful and fast code.



> (5) Generic numeric code is awkward and low-performance.
> Solution: specialize.
> Problem: see (1).

Have you looked at https://github.com/azavea/numeric? After a bunch of
testing and profiling it gets good performance (e.g. comparable to
direct code) under Scala 2.9. I'd be interested in your comments or
critiques.

Tom Switzer and I are working on a series of proposals dealing with
Numeric (as well as numbers and math in Scala more generally).
Hopefully we'll be able to share that soon.

-- Erik

Rex Kerr

Nov 15, 2011, 2:17:04 PM
to Erik Osheim, scala-debate
On Tue, Nov 15, 2011 at 1:43 PM, Erik Osheim <er...@plastic-idolatry.com> wrote:
On Tue, Nov 15, 2011 at 12:28:32PM -0500, Rex Kerr wrote:
> (1) Specialization.  The basic concept isn't too bad, but the
> implementation is so full of bugs and limitations that employing it in
> anything but the most straightforward cases is painful at best.  I really
> don't know how to solve this except for serious manpower to be devoted to
> solving the problems, or to avoid the "feature".  Conceptual problems
> include enforced independence of types leading to combinatorial explosion
> of classes, inability to specify solutions for particular types or type
> combinations, inability to inherit, and more.
> Solution: ?

I agree that specialization right now has some issues. But some of the
problems you list are worse than others. For instance, it would be nice
to be able to specify solutions for particular types, but the inability
to inherit from specialized classes is a much bigger issue.

Depends what you're doing.

I tried writing some mathematics specialization, and the first one killed the project.  I tried writing some specialized collections, and the second killed the project.  I tried writing a specialized tuple of sorts, and was narrowly able to dodge the second, but the first resulted in a library that is ~4 MB in order to cover the most basic use cases.  (Although the second caused the size to double.)
 
Also, as a library author you leave out the biggest issue that I've
faced with specialization: if I expose an interface that has a
specialized type parameter, I have to document this fact to my users,
who in turn have to remember to "specialize through" to the place where
the type param is provided. It's easy to mess this up, and there's
nothing (other than poor performance) to clue you in to what happened.

I had already resigned myself to writing library code that is meant only to be used, not extended (not in the general case, anyway).  I agree that this is a major issue.
 
In my particular case, this has made working with a specialized Numeric
library much less attractive... because I'm forced to choose between
beautiful and fast code.

> (5) Generic numeric code is awkward and low-performance.
> Solution: specialize.
> Problem: see (1).

Have you looked at https://github.com/azavea/numeric? After a bunch of
testing and profiling it gets good performance (e.g. comparable to
direct code) under Scala 2.9. I'd be interested in your comments or
critiques.

Tom Switzer and I are working on a series of proposals dealing with
Numeric (as well as numbers and math in Scala more generally).
Hopefully we'll be able to share that soon.

-- Erik

I haven't tried it, but it looks promising.  This could be a solution to some common use-cases.

The extra operators are not so great, however.

Bad:
  <=> looks way too much like the Pascal <> "not equal" operator and the scalaz <===> "distance" operator.
  ** looks like a collection operator--multiplication on matrices or somesuch (although I understand the historical attraction).  Use something that doesn't conflict with the "doubling means over a collection" meme.  Also, this has the same precedence as *, which yields surprising results.  Use something with higher precedence like \^ or ~^.
 
Not sure:
  !== does not match the scalaz /== (and I have also seen =/=), but as long as it is typesafe, that's fine.  I think scalaz made the wrong (i.e. inconsistent) choice on that one; as long as people coordinate on !== vs. =/= for "typesafe not-equals" I think it's fine.

Good:
  === should be defined everywhere.  Typesafe equals is great.  (I think this is typesafe...if not, this is the worst possible choice of symbol.)

The lack of extensibility is a bit problematic, but can't one just define the appropriate implicits?

  --Rex


Erik Osheim

Nov 15, 2011, 2:44:31 PM
to Rex Kerr, scala-debate
Rex,

Thanks for the feedback! The library as it stands is intended as a
jumping off point and a proof of what is possible... I certainly don't
think the names are necessarily final.

On Tue, Nov 15, 2011 at 02:17:04PM -0500, Rex Kerr wrote:
> Bad:
> <=> looks way too much like the Pascal <> "not equal" operator and the
> scalaz <===> "distance" operator.

I forget which language I used this compare operator in, but it could
easily be removed. Are there any other good/canonical operators for
cmp(lhs, rhs): Int?

> ** looks like a collection operator--multiplication on matrices or
> somesuch (although I understand the historical attraction). Use something
> that doesn't conflict with the "doubling means over a collection" meme.
> Also, this has the same precedence as *, which yields surprising results.
> Use something with higher precedence like \^ or ~^.

Good suggestion. I would probably use something like ~^ just because \^
might suggest division.



> Not sure:
> !== does not match the scalaz /== (and I have also seen =/=), but as long
> as it is typesafe, that's fine. I think scalaz made the wrong (i.e.
> inconsistent) choice on that one; as long as people coordinate on !== vs.
> =/= for "typesafe not-equals" I think it's fine.

I got this from things like Javascript (where === and !== are used in
preference to the "normal" == and !=). I'm amenable to changing these
if necessary (that goes for all the operator choices), although I agree
that /== seems less used than either !== or =/=.



> Good:
> === should be defined everywhere. Typesafe equals is great. (I think
> this is typesafe...if not, this is the worst possible choice of symbol.)

Yes, this (and !==) are type safe, with the caveat that the library
obviously does some Numeric conversions to support literals. So you can
say things like:

def isZero[T:Numeric](t:T) = t === 0

Which is only type safe due to conversions from Int => T.
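For readers following along, a minimal sketch of how such a typesafe === can be encoded as a type class (illustrative only; not azavea/numeric's actual encoding):

  trait Eq[A] { def eqv(x: A, y: A): Boolean }
  implicit def numericEq[A: Numeric]: Eq[A] =
    new Eq[A] { def eqv(x: A, y: A) = x == y }
  implicit class EqOps[A](lhs: A) {
    def ===(rhs: A)(implicit ev: Eq[A]): Boolean = ev.eqv(lhs, rhs)
    def !==(rhs: A)(implicit ev: Eq[A]): Boolean = !ev.eqv(lhs, rhs)
  }
  // 3 === 3 compiles; 3 === "three" does not, since both sides must share A.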



> The lack of extensibility is a bit problematic, but can't one just define
> the appropriate implicits?

I think that due to the way I built the ConvertableFrom/ConvertableTo
stuff things might not always work as expected. This would mostly mean
that things like:

def addRationalOne[T:Numeric](t:T) = t + Rational(1, 1)

...would not work (for some user-defined Rational class). So maybe it's
not a big deal.

-- Erik

Jason Zaugg

Nov 15, 2011, 3:00:46 PM
to Erik Osheim, Rex Kerr, scala-debate
On Tue, Nov 15, 2011 at 8:44 PM, Erik Osheim <er...@plastic-idolatry.com> wrote:
Rex,

Thanks for the feedback! The library as it stands is intended as a
jumping off point and a proof of what is possible... I certainly don't
think the names are necessarily final.

On Tue, Nov 15, 2011 at 02:17:04PM -0500, Rex Kerr wrote:
> Bad:
>   <=> looks way too much like the Pascal <> "not equal" operator and the
> scalaz <===> "distance" operator.

I forget which language I used this compare operator in, but it could
easily be removed. Are there any other good/canonical operators for
cmp(lhs, rhs): Int?

That's the spaceship operator from Ruby :) 

One note from the Scalaz side: we're working on a version that lets you selectively import syntax (read, um, enriched methods), rather than it being all or nothing. In this case, `import scalaz.syntax.equal._` will provide `===`. We're ensuring that if you choose not to import the syntax, all the functionality in the library is still available to you.

So while it's nice to think of us when picking your method names, it will no longer be quite as annoying to resolve the clashes when they occur.

-jason

martin odersky

Nov 15, 2011, 3:20:23 PM
to Rex Kerr, scala-debate
On Tue, Nov 15, 2011 at 6:28 PM, Rex Kerr <ich...@gmail.com> wrote:
After yet another "Scala is too complex" discussion, it occurred to me that, although these discussions very often mistake novelty for complexity, or mistake the complexity of the problem for the complexity of the solution, there _are_ still a number of areas where Scala could benefit from simplification (i.e. the resulting code would be simpler, even if Scala as a language were more complex, since it would allow a simple solution to be used where a complex one now must be).

I'm ignoring the obvious style choice ones like "make Scala like Haskell".

I was curious what areas other people have identified, and what, if any, solutions they've identified?  Or, perhaps people will take issue with some of mine?

My list includes:

(1) Specialization.  The basic concept isn't too bad, but the implementation is so full of bugs and limitations that employing it in anything but the most straightforward cases is painful at best.  I really don't know how to solve this except for serious manpower to be devoted to solving the problems, or to avoid the "feature".  Conceptual problems include enforced independence of types leading to combinatorial explosion of classes, inability to specify solutions for particular types or type combinations, inability to inherit, and more.
Solution: ?

Specialization will indeed take time to get right and push further. My advice for now would be: Treat it as a systems feature. Don't use @specialized in your code, but simply profit from the improved performance of specialized functions and tuples.


(2) No MyType.  The intent behind
  abstract class A {
    def clone(): MyType
  }
  class B extends A {
    def clone() = new B
  }
is amazingly obvious, yet Scala provides no simple way to achieve this.
Solution: Yes, there are issues with contravariance (which causes issues with taking MyType as a parameter of a method), but at least the compiler could just bail out with an error instead of not even having the feature.

MyType has a lot of hidden complexities. All formal treatments of MyType have included "precise types", which are anathema to OOP.


(3) Basic features (mostly FP) are missing from parts of the library, causing unnecessarily verbose and hard-to-follow code.
  (a) It seems like "re-invent |>" is the answer to a StackOverflow question about once a week.

That's intentional. I believe |> is not idiomatic Scala style; and I would tend to use method chaining instead. Generally, trying to do point-free style in Scala is a dead end.
 
  (b) Using `copy` to transform data is amazingly awkward: x.copy(_1 = x._1+1).  How about x.modify1(_ + 1)?
  (c) Option has no fold
  (d) There is no default container for a single mutable variable.
  ... (this could go on for a while; I hope these are some of the most important)
Solution: just add these things to the library!

 
Maybe. It amounts to having a larger library. Not saying this is bad, but it is hard to argue that this will reduce complexity.
  
(4) Collections library has too many methods, and the ones it has are not consistently present by function:
  (a) collect, collectFirst but no collectLast
  (b) drop, dropRight, dropWhile, but no dropRightWhile
  (c) filter/filterNot but no findNot, fornone, doesnotexist, etc.
  (d) headOption but no lastOption
  (e) reverseMap but no reverseFilter, reverseFlatMap, etc.
  (f) minBy, maxBy but no sumBy, productBy
  ...(various others)
This leads to a large memorization burden.
Solution: more views, less duplication.  Option view wraps any single-element return in an option.  Reverse (or right) view does things from other end.
Open problem: how can this have acceptable performance? 

And code size. Yes. I'd be interested in seeing solutions to this one. 
Or maybe decide to drop some methods from collections instead of adding new ones? 

(5) Generic numeric code is awkward and low-performance.
Solution: specialize.
Problem: see (1).

(6) "Enrich" my library pattern is awkward to write and often does a lot more than is needed (e.g. creates a class where an extension method would provide superior performance).
Solution: SIP for implicit classes may be enough.  This requires escape analysis to work flawlessly, or for the compiler to provide inlining, or it only solves half the problem--but at least it (mostly) solves the complexity half, which is what we're after for the purposes of this discussion.

(7) Overloading method names by type is a feature that acts as a second-class citizen.  The most notable offender is the restriction on default values in overloaded method names even when the two are still completely unambiguous.  If I am to understand Martin's StackOverflow answer, this is because the _language specification_ would become more complex.  But this is pushing the complexity from the language specification onto the programmer in the form of an extra rule ("only one overloaded method can have default parameters") instead of the obvious "ambiguity is not allowed". 
Also, since ambiguity is presently allowed: class Z { def foo(s: String, a: Int = 42) = a; def foo(t: String) = t }, one has yet more to remember regarding how ties are actually resolved (shorter form is used; if return type is different, type inference can be used to get longer form; if parameters are named differently, referring to a parameter by name can be used to switch; otherwise if the default parameter is hidden there's no way to get it back).
Solution: allow anything unambiguous.  Disallow anything ambiguous.

That looks like a play on words to me. You have to specify (including all corner cases!) what ambiguous means. That specification is hard enough for plain overloading. Adding default arguments into the mix risks making it totally incomprehensible.

Cheers

 -- Martin

Daniel Spiewak

Nov 15, 2011, 4:15:37 PM
to scala-...@googlegroups.com, Rex Kerr

That's intentional. I believe |> is not idiomatic Scala style; and I would tend to use method chaining instead. Generally, trying to do point-free style in Scala is a dead end. 
 
To elaborate on this just a little bit, the inability to effectively encode point-free style (in general!) is a fundamental limitation of object-oriented languages.  Effective point-free style absolutely requires that function parameters be ordered from least to most specific.  For example:

Prelude> :t foldl
foldl :: (a -> b -> a) -> a -> [b] -> a

This ordering is required for point-free because it makes it possible to apply a function, specifying a subset of the parameters, receiving a function that is progressively more specific (and less general).  If we invert the arguments, then the most specific parameter must come *first*, meaning that the function has lost all generality before any of the other parameters have been specified!

Unfortunately, most-specific-first is precisely the ordering imposed by any object-oriented language.  Think about this: what is the most specific parameter to the fold function?  Answer: the list!  In object-oriented languages, dispatch is always defined on the most-specific parameter in an expression.  This is another way of saying that we define the foldLeft function on List, *not* on Function2.

Now, it is possible to extend an object-oriented language with reified messages to achieve an effective point-free dispatch mechanism (basically, by making it possible to construct a method dispatch prior to having the dispatch receiver), but Scala does not do this.  In any case, such a mechanism imposes a lot of other requirements, such as a rich structural type system (much richer than Scala's anonymous interfaces).

There are certainly special cases where point-free does work in Scala, but by and large, you're not going to be able to use it effectively.  Everything in the language is conspiring against you, from the syntax to the type system to the fundamental object-oriented nature itself!
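Restating the ordering point in Scala (a sketch; names are illustrative): with the collection last, partial application keeps the function general, while the OO method needs the most specific value, the receiver, before anything can happen:

  // Collection-last, Haskell-style: specialize the general parts first.
  def foldl[A, B](f: (A, B) => A)(z: A)(xs: List[B]): A = xs.foldLeft(z)(f)
  val sum: List[Int] => Int = foldl[Int, Int](_ + _)(0)   // still awaits its list

  // Receiver-first: dispatch is on the list, so it must exist already.
  List(1, 2, 3).foldLeft(0)(_ + _)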

Daniel

Paul Phillips

Nov 15, 2011, 4:16:26 PM
to Rex Kerr, scala-debate
On Tue, Nov 15, 2011 at 9:28 AM, Rex Kerr <ich...@gmail.com> wrote:
>   (b) Using `copy` to transform data is amazingly awkward: x.copy(_1 =
> x._1+1).  How about x.modify1(_ + 1)?

Is that the subject of an open ticket? I have a note somewhere that

foo.copy(x += 1)

should work, but I don't know if it's in the database.
(You're not getting "modify1" though, that is hardly
the road to lowering complexity.)

> (4) Collections library has too many methods, and the ones it has are not
> consistently present by function:
>   (a) collect, collectFirst but no collectLast

No collectLast, really, this troubles you? "First" and "last" are not
exactly symmetric concepts. (What's the last element of Stream from 1?)

>   (d) headOption but no lastOption

scala> Nil.lastOption
res0: Option[Nothing] = None

Tony Morris

Nov 15, 2011, 4:17:05 PM
to scala-...@googlegroups.com
The problem with your proposal is that although I agree that there is
room for simplification without penalty, these very (simpler) solutions
provoke the Complex Complainant Crowd into a frenzy. In other words, the
simpler it is with respect to the problem being solved, the more
irrational complaints of complexity you must endure. Are you prepared to
weather that storm? Honestly, I yearn for more interesting matters of
discussion and contention and I take these very real social disorders
into account when assessing how to approach a problem (am I going to be
distracted with nonsense? to what extent?).

Such is the nature of masochism. *shrug* But be aware that there is a
social penalty to simplification. It's not free and you cannot ignore it.


--
Tony Morris
http://tmorris.net/


Lukas Rytz

Nov 15, 2011, 4:22:35 PM
to Rex Kerr, scala-debate
No, the reason is that there's no easy way to implement it while keeping compilation stable; see here.

 
  But this is pushing the complexity from the language specification onto the programmer in the form of an extra rule ("only one overloaded method can have default parameters") instead of the obvious "ambiguity is not allowed".  Also, since ambiguity is presently allowed: class Z { def foo(s: String, a: Int = 42) = a; def foo(t: String) = t }, one has yet more to remember regarding how ties are actually resolved (shorter form is used; if return type is different, type inference can be used to get longer form; if parameters are named differently, referring to a parameter by name can be used to switch; otherwise if the default parameter is hidden there's no way to get it back).
Solution: allow anything unambiguous.  Disallow anything ambiguous.

(8) Default parameter values and implicits don't free you from remembering that you actually have parameters there when you use functions.  In particular,
  def dup(s: String, n: Int = 2) = s*n
  dup("fish")   // "fishfish"
  List("fish","cow").map(dup)  // Error

This could maybe be fixed

Rex Kerr

Nov 15, 2011, 4:29:36 PM
to martin odersky, scala-debate
On Tue, Nov 15, 2011 at 3:20 PM, martin odersky <martin....@epfl.ch> wrote:
On Tue, Nov 15, 2011 at 6:28 PM, Rex Kerr <ich...@gmail.com> wrote:
After yet another "Scala is too complex" discussion, it occurred to me that, although these discussions very often mistake novelty for complexity, or mistake the complexity of the problem for the complexity of the solution, there _are_ still a number of areas where Scala could benefit from simplification (i.e. the resulting code would be simpler, even if Scala as a language were more complex, since it would allow a simple solution to be used where a complex one now must be).


I'm ignoring the obvious style choice ones like "make Scala like Haskell".

I was curious what areas other people have identified, and what, if any, solutions they've identified?  Or, perhaps people will take issue with some of mine?

My list includes:

(1) Specialization.  The basic concept isn't too bad, but the implementation is so full of bugs and limitations that employing it in anything but the most straightforward cases is painful at best.  I really don't know how to solve this except for serious manpower to be devoted to solving the problems, or to avoid the "feature".  Conceptual problems include enforced independence of types leading to combinatorial explosion of classes, inability to specify solutions for particular types or type combinations, inability to inherit, and more.
Solution: ?

Specialization will indeed take time to get right and push further. My advice for now would be: Treat it as a systems feature. Don't use @specialized in your code, but simply profit from the improved performance of specialized functions and tuples.

For those of us trying to create high-performance code now, this is of small comfort.  But, granted, newcomers to Scala need pay no attention to the man behind the curtain.

 

(2) No MyType.  The intent behind
  abstract class A {
    def clone(): MyType
  }
  class B extends A {
    def clone() = new B
  }
is amazingly obvious, yet Scala provides no simple way to achieve this.
Solution: Yes, there are issues with contravariance (which causes issues with taking MyType as a parameter of a method), but at least the compiler could just bail out with an error instead of not even having the feature.

MyType has a lot of hidden complexities. All formal treatments of MyType have included "precise types", which are anathema to OOP.

If MyType cannot be in contravariant position, and generation of MyType is acyclic terminating in a method that is declared in the current class, then I don't think there can be a problem.  Detecting this will require a bit of tooling, but I don't think it's conceptually very tricky to get something MyTypelike (i.e. that will solve the currently difficult typing problems with something simple that does what you want 90% of the time).

 

(3) Basic features (mostly FP) are missing from parts of the library, causing unnecessarily verbose and hard-to-follow code.
  (a) It seems like "re-invent |>" is the answer to a StackOverflow question about once a week.

That's intentional. I believe |> is not idiomatic Scala style; and I would tend to use method chaining instead. Generally, trying to do point-free style in Scala is a dead end.

I don't use |> for point-free style; I use it when I need to refer to the same thing multiple times.

Example:
  { val temp = list.map(_ < 5).filter(f); temp ::: temp.reverse }
  list.map(_ < 5).filter(f) |> { xs => xs ::: xs.reverse }

Maybe this would make Scala seem even more "complex" to people with a Java background, but when something is re-invented over and over and over, there's a case for library inclusion.

Also, method chaining only works when the methods return something appropriate.  Very often they don't.  So I use a side-effecting variant (which I call "effect") all the time, even though I almost never use point-free style. 

Example:
  xs.map(_ < 5).filter(f)./*argh I need to print this out for debugging */.flatMap(1 to _)
  xs.map(_ < 5).filter(f).effect(println).flatMap(1 to _)
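For reference, `effect` is a one-liner to supply; a sketch (Rex's own version may differ), again assuming implicit-class syntax:

  implicit class Effect[A](val x: A) {
    def effect(f: A => Unit): A = { f(x); x }   // run the side effect, return x unchanged
  }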
 
 
  (b) Using `copy` to transform data is amazingly awkward: x.copy(_1 = x._1+1).  How about x.modify1(_ + 1)?
  (c) Option has no fold
  (d) There is no default container for a single mutable variable.
  ... (this could go on for a while; I hope these are some of the most important)
Solution: just add these things to the library!

 
Maybe. It amounts to having a larger library. Not saying this is bad, but it is hard to argue that this will reduce complexity.

Well, it reduces actual complexity, but perhaps not apparent-complexity-to-newbies.  For example:
  m.copy(
    index = m.index.copy(ci = -1),
    size = m.size.copy(tiles = oneone),
    tiles = Vector(m.tiles(m.v2i(v)))
  )
is a hideous mess compared to
  m.indexed(_.copy(ci = -1)).sized(_.tiled(_ => oneone)).tiled(t => t(v))
which would be the alternative if there were single-element copiers.

Paul's suggestion may get most of the way there without the awkward naming problem.
 
  
(4) Collections library has too many methods, and the ones it has are not consistently present by function:

  (a) collect, collectFirst but no collectLast
  (b) drop, dropRight, dropWhile, but no dropRightWhile
  (c) filter/filterNot but no findNot, fornone, doesnotexist, etc.
  (d) headOption but no lastOption
  (e) reverseMap but no reverseFilter, reverseFlatMap, etc.
  (f) minBy, maxBy but no sumBy, productBy
  ...(various others)
This leads to a large memorization burden.
Solution: more views, less duplication.  Option view wraps any single-element return in an option.  Reverse (or right) view does things from other end.
Open problem: how can this have acceptable performance? 

And code size. Yes. I'd be interested in seeing solutions to this one. 
Or maybe decide to drop some methods from collections instead of adding new ones?

I don't know how to address this one by dropping methods.  Almost all of the methods either have good performance reasons to be there or are beloved by many (even if not by me).

(Paul pointed out a few errors of mine: yes, lastOption exists.  tailOption does not.  Neither maxOption nor maxOrElse do.  This just highlights that I can't keep them straight.)

I can imagine complex language features that would make this work, but I'm not sure I even want to propose them.  (E.g. allowing method names to indicate a choice of implicit parameter.)

 
 

(5) Generic numeric code is awkward and low-performance.
Solution: specialize.
Problem: see (1).

(6) "Enrich" my library pattern is awkward to write and often does a lot more than is needed (e.g. creates a class where an extension method would provide superior performance).
Solution: SIP for implicit classes may be enough.  This requires escape analysis to work flawlessly, or for the compiler to provide inlining, or it only solves half the problem--but at least it (mostly) solves the complexity half, which is what we're after for the purposes of this discussion.

(7) Overloading method names by type is a feature that acts as a second-class citizen.  The most notable offender is the restriction on default values in overloaded method names even when the two are still completely unambiguous.  If I am to understand Martin's StackOverflow answer, this is because the _language specification_ would become more complex.  But this is pushing the complexity from the language specification onto the programmer in the form of an extra rule ("only one overloaded method can have default parameters") instead of the obvious "ambiguity is not allowed". 
Also, since ambiguity is presently allowed: class Z { def foo(s: String, a: Int = 42) = a; def foo(t: String) = t }, one has yet more to remember regarding how ties are actually resolved (shorter form is used; if return type is different, type inference can be used to get longer form; if parameters are named differently, referring to a parameter by name can be used to switch; otherwise if the default parameter is hidden there's no way to get it back).
Solution: allow anything unambiguous.  Disallow anything ambiguous.

That looks like a play on words to me. You have to specify (including all corner cases!) what ambiguous means. That specification is hard enough for plain overloading. Adding default arguments into the mix risks making it totally incomprehensible.

Named arguments are harder than default arguments, and they're already not checked at definition time, leading to stuff like:

object Oops {
  def f(i: Int, j: String) = j*i
  def f(j: String, i: Int) = i.toString*j.length
}

scala> Oops.f(i = 5, j = "Hi")
<console>:10: error: ambiguous reference to overloaded definition,
both method f in object Oops of type (j: String, i: Int)String
and  method f in object Oops of type (i: Int, j: String)String
match argument types (i: Int,j: java.lang.String)
              Oops.f(i=5, j="Hi")

If you want to solve the named argument problem, that's tricky but relatively doable.

If you want to ignore named arguments and just catch that when used, it's easier.

Let Ts = Es ::: Ds be the list of all types in a method, where Es are the explicit parameters and Ds are those with defaults.  The valid abbreviations of the defaults are (pseudocode)
  Abr = Ds.reverse.tails.toList
And the apparent signatures are then
  Sig = Abr.map(Es ::: _)
Methods with types Ts and Tz and corresponding apparent signatures Sig and Zig are _ambiguous_ if and only if Sig /\ Zig is nonempty (where /\ is intersection).  A *-ed parameter is part of the explicit parameters, but will match any number of explicit or unnamed default items of the same type (so you need a special comparator for equality for intersection).

Modification for N parameter lists: have to consider Es1 ::: Ds1, Es2 ::: Ds2, ..., EsN ::: DsN _and_ make sure that the full erased signatures are different.

Unspecified but easy to add in: return parameter considered distinguishing?  Implicit treated normally or specially (e.g. as an elidable parameter list)?

If multiple methods apply to a particular invocation (due to subclassing), the most derived type that works is chosen, filling in from left to right (i.e. the usual rules).

I'm not going to argue that this is trivial, but I also don't think it's a monument of bewildering complexity.
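A toy executable model of that check, representing parameter types as strings (illustrative; it ignores *-ed parameters and multiple parameter lists):

  // Apparent signatures: the explicit params followed by each prefix of
  // the defaulted params (each prefix is one way a call may abbreviate).
  def apparentSigs(explicit: List[String], defaulted: List[String]): Set[List[String]] =
    defaulted.inits.map(explicit ::: _).toSet

  def ambiguous(e1: List[String], d1: List[String],
                e2: List[String], d2: List[String]): Boolean =
    (apparentSigs(e1, d1) intersect apparentSigs(e2, d2)).nonEmpty

  // Class Z above: foo(String, Int = 42) vs foo(String) are ambiguous,
  // since both admit the one-argument shape List("String").
  ambiguous(List("String"), List("Int"), List("String"), Nil)   // true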

  --Rex

martin odersky

Nov 15, 2011, 5:23:26 PM
to Rex Kerr, scala-debate
On Tue, Nov 15, 2011 at 10:29 PM, Rex Kerr <ich...@gmail.com> wrote:
On Tue, Nov 15, 2011 at 3:20 PM, martin odersky <martin....@epfl.ch> wrote:
On Tue, Nov 15, 2011 at 6:28 PM, Rex Kerr <ich...@gmail.com> wrote:
After yet another "Scala is too complex" discussion, it occurred to me that, although these discussions very often mistake novelty for complexity, or mistake the complexity of the problem for the complexity of the solution, there _are_ still a number of areas where Scala could benefit from simplification (i.e. the resulting code would be simpler, even if Scala as a language were more complex, since it would allow a simple solution to be used where a complex one now must be).


I'm ignoring the obvious style choice ones like "make Scala like Haskell".

I was curious what areas other people have identified, and what, if any, solutions they've identified?  Or, perhaps people will take issue with some of mine?

My list includes:

(1) Specialization.  The basic concept isn't too bad, but the implementation is so full of bugs and limitations that employing it in anything but the most straightforward cases is painful at best.  I really don't know how to solve this except for serious manpower to be devoted to solving the problems, or to avoid the "feature".  Conceptual problems include enforced independence of types leading to combinatorial explosion of classes, inability to specify solutions for particular types or type combinations, inability to inherit, and more.
Solution: ?

Specialization will indeed take time to get right and push further. My advice for now would be: Treat it as a systems feature. Don't use @specialized in your code, but simply profit from the improved performance of specialized functions and tuples.

For those of us trying to create high-performance code now, this is of small comfort.  But, granted, newcomers to Scala need pay no attention to the man behind the curtain.

Yes. But note that you already get much more than Java provides.
Help is welcome. Specialization is very, very hard; that's why no other language has it.
 

(2) No MyType.  The intent behind
  abstract class A {
    def clone(): MyType
  }
  class B extends A {
    def clone() = new B
  }
is amazingly obvious, yet Scala provides no simple way to achieve this.
Solution: Yes, there are issues with contravariance (which causes issues with taking MyType as a parameter of a method), but at least the compiler could just bail out with an error instead of not even having the feature.

MyType has a lot of hidden complexities. All formal treatments of MyType have included "precise types", which are anathema to OOP.

If MyType cannot be in contravariant position, and generation of MyType is acyclic terminating in a method that is declared in the current class, then I don't think there can be a problem.  Detecting this will require a bit of tooling, but I don't think it's conceptually very tricky to get something MyTypelike (i.e. that will solve the currently difficult typing problems with something simple that does what you want 90% of the time).

MyType only in covariant position is (a) quite restrictive and (b) requires complicated rules. Furthermore, it's not always what you want. For instance, you might have an abstraction with several implementation classes, but you do not want MyType to specialize further in the implementation classes. In the end you get very limited usefulness for considerable complexity. Scala's abstract types are more general because they can be used anywhere, and add very little extra cost in notation.
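For readers who haven't seen the abstract-type encoding Martin refers to, a sketch (using `duplicate` rather than `clone` to sidestep AnyRef's own clone):

  abstract class A { type Self <: A; def duplicate: Self }
  class B extends A { type Self = B; def duplicate = new B }
  // An implementation class that should NOT specialize further simply
  // fixes Self once and for all at its own level.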


 

(3) Basic features (mostly FP) are missing from parts of the library, causing unnecessarily verbose and hard-to-follow code.
  (a) It seems like "re-invent |>" is the answer to a StackOverflow question about once a week.

That's intentional. I believe |> is not idiomatic Scala style; and I would tend to use method chaining instead. Generally, trying to do point-free style in Scala is a dead end.

I don't use |> for point-free style; I use it when I need to refer to the same thing multiple times.

Example:
  { val temp = list.map(_ < 5).filter(f); temp ::: temp.reverse }
  list.map(_ < 5).filter(f) |> { xs => xs ::: xs.reverse }


In fact, I think that shows precisely what's wrong with |>! I have noted a tendency among the more functional Scala users to avoid local val definitions at all cost. This is wrong! Good functional programming uses lots of meaningful names for every intermediate result. I personally think your first line without |> is much clearer than the second one.

You can do even better by picking meaningful names and avoiding unnecessary punctuation:

  val firstHalf = list map f filter (_ < 5)
  firstHalf ++ firstHalf.reverse

Do you see the difference in style? Reduced punctuation, increased meaning. Every closure, every special symbol has a cost in complexity. Where you can, avoid it!
  
 

That looks like a play on words to me. You have to specify (including all corner cases!) what ambiguous means. That specification is hard enough for plain overloading. Adding default arguments into the mix risks making it totally incomprehensible.

Named arguments are harder than default arguments, and they're already not checked at definition time, leading to stuff like:

object Oops {
  def f(i: Int, j: String) = j*i
  def f(j: String, i: Int) = i.toString*j.length
}

scala> Oops.f(i = 5, j = "Hi")
<console>:10: error: ambiguous reference to overloaded definition,
both method f in object Oops of type (j: String, i: Int)String
and  method f in object Oops of type (i: Int, j: String)String
match argument types (i: Int,j: java.lang.String)
              Oops.f(i=5, j="Hi")

That's not a problem, because the two f's can still be distinguished by passing arguments positionally. So they should not be thrown out at definition time.

Cheers

 -- Martin

Tony Morris

Nov 15, 2011, 5:47:39 PM
to scala-...@googlegroups.com
On 11/16/2011 08:23 AM, martin odersky wrote:
> I have noted a tendency among the more functional Scala users to avoid
> local val definitions at all cost.

Where?

Michael Schmitz

Nov 15, 2011, 6:08:57 PM
to Rex Kerr, scala-debate
What is MyType (#2 -- I understand the example intuitively, however)?
What languages have it? And what is the preferred way in Scala to work
around the lack of this feature?

On Tue, Nov 15, 2011 at 9:28 AM, Rex Kerr <ich...@gmail.com> wrote:

John Nilsson

Nov 15, 2011, 6:26:35 PM
to scala-...@googlegroups.com

I don't think it's OOP that is the problem. You assume parameter ordering in your argument.

If we assume indexed parameters, it's conceivable that point-free composition would work by matching names instead.

Or some other mechanism to declare roles besides order.

BR,
John

Chris Twiner

Nov 15, 2011, 6:32:19 PM
to Michael Schmitz, Rex Kerr, scala-debate
http://www.scala-lang.org/node/6649

this.type example

http://scalada.blogspot.com/2008/02/thistype-for-chaining-method-calls.html

but that's actually the singleton type of the object. So you use
abstract type members:

http://code.google.com/p/scala-scales/wiki/VirtualConstructorPreSIP

and you pick when to allow a set type. The problem is the boilerplate
for providing both a class / trait that can be extended and one that
is directly usable.

You can ignore the rest of that link, though, until code generation in
a Scala compiler plugin is truly possible (within the same compilation
unit - you can generate outside it simply enough).

The thing is, Gerolf's plugin might be more what one is after anyway
(except for the currently necessary separate compilation units), if you
don't also require extensibility:

https://github.com/gseitz/Lensed

A combo of the two would be lovely, i.e. allow fluent immutable but
parameterised classes, but also allow functions that capture those
transformations. I can dream ----

@Kevin - my salivation is starting to dry up waiting for your solution ^_^

Daniel Spiewak

Nov 15, 2011, 7:06:23 PM
to scala-...@googlegroups.com

I don't think it's OOP that is the problem. You assume parameter ordering in your argument.

Not really, no.  What I'm assuming is polymorphic modules.  If the function namespace is polymorphic based on a value (generally modeled as an implicit parameter), then there is a dependency from that function value onto the polymorphic module.  Whether that parameter is first or last is irrelevant; it must still be specified *before* you can have a useful function value.  By definition, that polymorphic module is going to be the most specific parameter, and also by definition, it must be specified first (logically, if not lexically).

Daniel

Daniel Sobral

Nov 15, 2011, 7:53:06 PM
to Rex Kerr, Erik Osheim, scala-debate
On Tue, Nov 15, 2011 at 17:17, Rex Kerr <ich...@gmail.com> wrote:
Bad:
  <=> looks way too much like the Pascal <> "not equal" operator and the scalaz <===> "distance" operator.
  ** looks like a collection operator--multiplication on matrices or somesuch (although I understand the historical attraction).  Use something that doesn't conflict with the "doubling means over a collection" meme.  Also, this has the same precedence as *, which yields surprising results.  Use something with higher precedence like \^ or ~^.

Well, that's their problem for having something that looks like Perl's compare, isn't it? :-)
 
 
Not sure:
  !== does not match the scalaz /== (and I have also seen =/=), but as long as it is typesafe, that's fine.  I think scalaz made the wrong (i.e. inconsistent) choice on that one; as long as people coordinate on !== vs. =/= for "typesafe not-equals" I think it's fine.

/= is Haskell. I can understand why Scalaz would pick it, but I think it is the wrong choice for Scala.
 
Good:
  === should be defined everywhere.  Typesafe equals is great.  (I think this is typesafe...if not, this is the worst possible choice of symbol.)

The lack of extensibility is a bit problematic, but can't one just define the appropriate implicits?


Odersky once actually proposed giving up on having == be based on equals and, instead, having it based on an Eq[-T], with a fallback to equals, IIRC. That is one proposal I wish had gone forward...

I wonder if it would be useful to pick up all these little proposals that never saw the light of the day and write them up as SIP? At least they'd get documented and preserved in a more searchable place, and even if rejected we could have a rationale for the decision attached to it.

By the way, the generalization for copy is lenses. They might be more verbose than the modify you suggested, but they are composable and have some theory about them and their properties.
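For the record, a minimal lens sketch (a get/set pair; illustrative, not any particular library's encoding):

  case class Lens[A, B](get: A => B, set: (A, B) => A) {
    def modify(f: B => B)(a: A): A = set(a, f(get(a)))
    def andThen[C](that: Lens[B, C]): Lens[A, C] =   // composes A -> B -> C
      Lens(a => that.get(get(a)), (a, c) => set(a, that.set(get(a), c)))
  }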

--
Daniel C. Sobral

I travel to the future all the time.

Tony Morris

Nov 15, 2011, 7:56:37 PM
to scala-...@googlegroups.com
On 11/16/2011 10:53 AM, Daniel Sobral wrote:
> /= is Haskell. I can understand why Scalaz would pick it, but I think
> it is the wrong choice for Scala.
A lot of these names are simply to avoid clashing, rather than because
they were the most preferred.

André van Delft

Nov 15, 2011, 8:35:06 PM
to scala-debate
On 15 nov, 21:20, martin odersky <martin.oder...@epfl.ch> wrote:
> MyType has a lot of hidden complexities. All formal treatments of MyType
> have included "precise types", which are anathema to OOP.

Last year you wrote "To make it work and useful in contravariant
positions, you need to also introduce exact types. So it's a much
bigger can of worms than it looks at first". http://www.scala-lang.org/node/6649

I think it is a kind of confusion that one would want to use "MyType"
in contravariant positions; that is where the can of worms is. IMO
you should only want to use "MyType" in covariant positions. It is
useful enough there.

From the language specification:
"p.type, where p is a path pointing to a value expected to conform
(§6.1) to scala.AnyRef. The type denotes the set of values consisting
of
null and the value denoted by p"

If you changed the latter part of the definition to "the set of
values consisting of null and all instances of the class of p", would
there be any complexities? (Apart from breaking the "trick" of
Tightened Pattern Match, but that use case might be covered in another
way.)
The "covarianceness" would remain the same, IMO.
Would the new definition be equal to what you mean by "exact type"?
Anyway, I would want exactly this type, not the rather vague MyType
that Kim Bruce described.

About a decade ago, I built such an experimental extension to Java:
types "This" and "p.This" seemed well definable, and implementation
was quite easy (looking back, that is).
The extension also had inheritable constructors, also named "This".
"This" became the type of "this", and also the return type of the
clone() method and of the inheritable constructors.

These new constructs seemed useful to enrich Java collections with a
filter method, and with two constructors prescribed by the Javadoc
comments (one parameterless, one taking a Collection as a parameter).

E.g., interface Collection<E> would get:

  This();
  This(Collection<E> col);
  This filter(Predicate<E> p);

In AbstractCollection<E> this would become:

  public This() {}
  public This(Collection<E> col) { addAll(col); }
  public This filter(Predicate<E> p) {
    This result = new This();
    ... // add all suitable elements to result
    return result;
  }

I think (but do not know yet) that there would be more use cases like
this.
I wonder what the complexities for something similar in Scala would
be.
I don't see how something similar may be done using abstract types.
(I wrote part of this text on 20 June in the scala-language group; my
use case there was a bit flawed).

Daniel Sobral

Nov 15, 2011, 8:50:47 PM
to martin odersky, Rex Kerr, scala-debate
On Tue, Nov 15, 2011 at 20:23, martin odersky <martin....@epfl.ch> wrote:


(3) Basic features (mostly FP) are missing from parts of the library, causing unnecessarily verbose and hard-to-follow code.
  (a) It seems like "re-invent |>" is the answer to a StackOverflow question about once a week.

That's intentional. I believe |> is not idiomatic Scala style; and I would tend to use method chaining instead. Generally, trying to do point-free style in Scala is a dead end.

I don't use |> for point-free style; I use it when I need to refer to the same thing multiple times.

Example:
  { val temp = list.map(_ < 5).filter(f); temp ::: temp.reverse }
  list.map(_ < 5).filter(f) |> { xs => xs ::: xs.reverse }


In fact, I think that shows precisely what's wrong with |>! I have noted a tendency among the more functional Scala users to avoid local val definitions at all cost. This is wrong! Good functional programming uses lots of meaningful names for every intermediate result. I personally think your first line without |> is much clearer than the second one.

You can do even better by picking meaningful names and avoiding unnecessary punctuation:

  val firstHalf = list map f filter (_ < 5)
  firstHalf ++ firstHalf.reverse

Do you see the difference in style? Reduced punctuation, increased meaning. Every closure, every special symbol has a cost in complexity. Where you can, avoid it!
  

I find this very interesting, because I do not use |> like Rex mentions. Instead, I use it *precisely* to get in line with method chaining. For example

(list map f filter p) |> h flatMap g

instead of

h(list map f filter p) flatMap g

or even

val tmp = list map f filter p
h(tmp) flatMap g

In this case, I could add h with the enrich my library pattern, but it's rather cumbersome for single definitions. I *like* chaining infix methods, but functions completely break up the flow.
 

Rex Kerr

Nov 16, 2011, 12:01:47 AM
to martin odersky, scala-debate
On Tue, Nov 15, 2011 at 5:23 PM, martin odersky <martin....@epfl.ch> wrote:


On Tue, Nov 15, 2011 at 10:29 PM, Rex Kerr <ich...@gmail.com> wrote:

For those of us trying to create high-performance code now, this is of small comfort.  But, granted, newcomers to Scala need pay no attention to the man behind the curtain.

Yes. But note that you already get much more than Java provides.
Help is welcome. Specialization is very, very hard; that's why no other language has it.

This is on the very top of my list of things to try to help with, but the barrier to entry is enormous, as one has to not only solve the conceptual problems with the current implementation, but also fix bugs in the existing implementation, and be highly aware of the workings of the compiler.  It would probably take me a month to be productive, and barring a major career change I can't afford that kind of time.
 
 
If MyType cannot be in contravariant position, and generation of MyType is acyclic terminating in a method that is declared in the current class, then I don't think there can be a problem.  Detecting this will require a bit of tooling, but I don't think it's conceptually very tricky to get something MyTypelike (i.e. that will solve the currently difficult typing problems with something simple that does what you want 90% of the time).

MyType only in covariant position is (a) quite restrictive and (b) requires complicated rules. Furthermore, it's not always what you want. For instance, you might have an abstraction with several implementation classes, but you do not want MyType to specialize further in the implementation classes. In the end you get very limited usefulness for considerable complexity. Scala's abstract types are more general because they can be used anywhere, and add very little extra cost in notation.

I couldn't really characterize the giant CanBuildFroms dangling off of most collections methods as "very little extra cost".  There's even a use case in the docs to prevent people from having to think about these.  Even an extra extends Creates[C[X]] starts getting annoying (and is less useful).  If, for example, you don't want MyType to specialize further, just override the generator method(s) to be the exact type, and have the compiler understand what this means.

Easy?  No.  Solves every possible desired case?  No.  But it's the kind of feature that can shift an imposing amount of complexity away from library coders, which means more people can code libraries, and those who already are coding libraries can often get their work done faster and with less chance for error.
 
 
I don't use |> for point-free style; I use it when I need to refer to the same thing multiple times.

Example:
  { val temp = list.map(_ < 5).filter(f); temp ::: temp.reverse }
  list.map(_ < 5).filter(f) |> { xs => xs ::: xs.reverse }


In fact, I think that shows precisely what's wrong with |>! I have noted a tendency among the more functional Scala users to avoid local val definitions at all cost. This is wrong! Good functional programming uses lots of meaningful names for every intermediate result. I personally think your first line without |> is much clearer than the second one.

Fair enough, but my second example (with effect), and Daniel Sobral's example, with interleaved method calls and functions, are a lot harder to argue with.  Personally, I often write code where the intermediate variables are just nuisance variables.  There isn't always an obvious description for them; it's a manipulation in process.  In those cases, I prefer for the variable to appear and disappear inline, since it's really irrelevant except as a temporary placeholder.

Avoiding f6(f5(f3(x.f1.f2).f4)).f7 messes is a good enough reason on its own, though.
 
You can do even better by picking meaningful names and avoiding unnecessary punctuation:

  val firstHalf = list map f filter (_ < 5)
  firstHalf ++ firstHalf.reverse

Do you see the difference in style? Reduced punctuation, increased meaning. Every closure, every special symbol has a cost in complexity. Where you can, avoid it!

Agreed; I chose a poor example, and yours is arguably a better way to do it.
 
  
Named arguments are harder than default arguments, and they're already not checked at definition time, leading to stuff like:

object Oops {
  def f(i: Int, j: String) = j*i
  def f(j: String, i: Int) = i.toString*j.length
}

scala> Oops.f(i = 5, j = "Hi")
<console>:10: error: ambiguous reference to overloaded definition,
both method f in object Oops of type (j: String, i: Int)String
and  method f in object Oops of type (i: Int, j: String)String
match argument types (i: Int,j: java.lang.String)
              Oops.f(i=5, j="Hi")

That's not a problem, because the two f's can still be distinguished by passing arguments positionally. So they should not be thrown out at definition time.

I didn't say they should be thrown out.  I'm just saying that the compiler solves this problem already, which I would consider harder than merely detecting ambiguity with default arguments.  Though not trivial, it's really not that hard (unless I'm missing something major) as I showed in my previous post.

  --Rex

Ben Hutchison

unread,
Nov 16, 2011, 12:52:00 AM11/16/11
to Erik Osheim, scala-debate, Rex Kerr
On Wed, Nov 16, 2011 at 5:43 AM, Erik Osheim <er...@plastic-idolatry.com> wrote:
> Have you looked at https://github.com/azavea/numeric? After a bunch of
> testing and profiling it gets good performance (e.g. comparable to
> direct code) under Scala 2.9. I'd be interested in your comments or
> critiques.

Some feedback on the code & ideas in your repo:

1. The speedups from Numeric specialisation are really welcome. I have
the feeling that's the most widely accepted part of your proposal.
After all, most people using or evaluating Numeric would/will need
non-boxed math to make serious use of Numeric.

2. The other changes, those not strictly required to specialise
Numeric, are more controversial and less backwards-compatible: new
operators, incompatibly changing the hierarchy, the conversion
framework. I don't really have use cases that require these
personally. Some of them seem to conflict with my aspirations to make
Numeric operations work for a broader range of data types, typically
non-scalars like Intervals and Vectors. Features like typesafe
equality don't seem relevant only to numbers in particular, and
overlap other libraries' domains.

If you could "unbundle" these changes into smaller, more focused
change sets, it might make them easier to standardize, especially the
"low hanging fruit" of Numeric specialisation.

-Ben

martin odersky

unread,
Nov 16, 2011, 1:53:54 AM11/16/11
to Daniel Sobral, Rex Kerr, Erik Osheim, scala-debate
On Wed, Nov 16, 2011 at 1:53 AM, Daniel Sobral <dcso...@gmail.com> wrote:
On Tue, Nov 15, 2011 at 17:17, Rex Kerr <ich...@gmail.com> wrote:
Bad:
  <=> looks way too much like the Pascal <> "not equal" operator and the scalaz <===> "distance" operator.
  ** looks like a collection operator--multiplication on matrices or somesuch (although I understand the historical attraction).  Use something that doesn't conflict with the "doubling means over a collection" meme.  Also, this has the same precedence as *, which yields surprising results.  Use something with higher precedence like \^ or ~^.

Well, that's their problem for having something that looks like Perl's compare, isn't it? :-)
 
 
Not sure:
  !== does not match the scalaz /== (and I have also seen =/=), but as long as it is typesafe, that's fine.  I think scalaz made the wrong (i.e. inconsistent) choice on that one; as long as people coordinate on !== vs. =/= for "typesafe not-equals" I think it's fine.

/= is Haskell. I can understand why Scalaz would pick it, but I think it is the wrong choice for Scala.
 
Good:
  === should be defined everywhere.  Typesafe equals is great.  (I think this is typesafe...if not, this is the worst possible choice of symbol.)

The lack of extensibility is a bit problematic, but can't one just define the appropriate implicits?


Odersky once actually proposed giving up on having == be based on equals and, instead, having it based on an Eq[-T], with a fallback to equals, iirc. That is one proposal I wish had gone forward...

The proposal might still go forward at some point, but it's more involved than I first thought. The problem is that the proposal cannot be seen in isolation. It also influences automatic equality for case classes. Then, throw in the need for backwards compatibility and things are not so easy anymore...

 -- Martin


Lukas Rytz

unread,
Nov 16, 2011, 2:22:12 AM11/16/11
to Rex Kerr, martin odersky, scala-debate
As I tried to convey before: the first implementation of default arguments supported overloads. The reason we disabled
it was the non-deterministic naming scheme. The added spec complexity might have supported the decision, but it was
not the primary reason.

Rex Kerr

unread,
Nov 16, 2011, 10:48:20 AM11/16/11
to Lukas Rytz, martin odersky, scala-debate

On Wed, Nov 16, 2011 at 2:22 AM, Lukas Rytz <lukas...@epfl.ch> wrote:

As I tried to convey before: the first implementation of default arguments supported overloads. The reason we disabled
it was the non-deterministic naming scheme. The added spec complexity might have supported the decision, but it was
not the primary reason.

What do you mean by "non-deterministic naming scheme"?  There's no inherent difference in determinism between default arguments and a list of methods with the same name, each adding one more argument (with all previous arguments named as they were before), until you start using named parameters, where you've _already_ made the decision to detect potential problems at use-sites rather than at declaration.

Or do you mean the (hidden) names of the default values, which would then have to be shared across several methods?  I have a hard time envisioning a scenario where the compiler would become confused about which variables to use.

I guess I'm not seeing the issue.

  --Rex

Jason Zaugg

unread,
Nov 16, 2011, 11:04:01 AM11/16/11
to Rex Kerr, Lukas Rytz, martin odersky, scala-debate
On Wed, Nov 16, 2011 at 4:48 PM, Rex Kerr <ich...@gmail.com> wrote:
Or do you mean the (hidden) names of the default values, which would then have to be shared across several methods?  I have a hard time envisioning a scenario where the compiler would become confused about which variables to use.

I guess I'm not seeing the issue.

Yep, that's what Lukas was referring to. I suppose that for binary compatibility, you should be able to add a second overloaded method with defaults without requiring a recompile of the clients of the existing method. You could encode the signature (perhaps just the erased parameter types) of the method into the names of the default getters. That would have a nice C++ feel to it! Those getters are also virtual, so you have to make sure that sub-classes end up with the same names.

-jason

Rex Kerr

unread,
Nov 16, 2011, 11:53:42 AM11/16/11
to Jason Zaugg, Lukas Rytz, martin odersky, scala-debate
On Wed, Nov 16, 2011 at 11:04 AM, Jason Zaugg <jza...@gmail.com> wrote:
On Wed, Nov 16, 2011 at 4:48 PM, Rex Kerr <ich...@gmail.com> wrote:
Or do you mean the (hidden) names of the default values, which would then have to be shared across several methods?  I have a hard time envisioning a scenario where the compiler would become confused about which variables to use.

I guess I'm not seeing the issue.

Yep, that's what Lukas was referring to. I suppose that for binary compatibility, you should be able to add a second overloaded method with defaults without requiring a recompile of the clients of the existing method.

Okay, I agree that would be nice.
 
You could encode the signature (perhaps just the erased parameter types) of the method into the names of the default getters. That would have a nice C++ feel to it!

Just because C++ does something doesn't mean it's a bad idea.  If you want power and transparency, you _must_ do something like that.  The current choice in Scala sacrifices apparent simplicity and power in the language for easier implementation in the compiler.  In the short run, that's understandable: there is limited manpower that can be devoted to the compiler, so it makes sense to shift some of the burden onto the users.  But I don't think this should be the long-term goal.

If you want the capability and are willing to sacrifice a little bit of transparency, allow an optional annotation for a string added to the method names.  Clunky?  A bit.  As clunky as not even having the functionality?  I doubt it.

Alternatively, encode the parameter names.  Then you state: if you want binary compatibility, you have to use different names for your default parameters.  So for a parameter called "index", instead of f$default$1 you would have f$default$index if there was no clash on the name between methods, and f$default$index$1 and f$default$index$2 if there was.  If the type was the same _and_ the default value was the same, then multiple methods could share f$default$index.  This naming scheme would also make Java interop a bit simpler.  (I tend to know that something is called "index" before I realize it is the second argument with a default.)
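
For orientation, what the compiler emits today is roughly this (a simplified sketch; the synthetic getter is real, but the exact details may differ):

  def f(index: Int = 4): Int = index * 2
  // desugars approximately to:
  //   def f(index: Int): Int = index * 2
  //   def f$default$1: Int = 4
  // and a call site f() compiles to f(f$default$1) -- hence the clash
  // when two overloads would both need an f$default$1.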
 
Although I try to avoid doubling up method names when possible, sometimes you have _exactly the same operation conceptually_ and therefore using multiple names distracts from what is going on.  I not infrequently have things like

  def display(s: String, columns: Int = 80) { ... }
  def display(f: File, columns: Int) { ... }
  def display(f: File) { display(f,80) }
  def display(mc: MyClass, columns: Int ) { ... }
  def display(mc: MyClass) { display(mc,80) }
  ...

Naming things displayString, displayFile really does not help understanding at all here.  Implicitly converting everything to a Displayable (so you can have one display(d: Displayable, columns: Int = 80)) is even more work (though I have taken that approach also at times, and will do so more when implicit classes are implemented).
 
Those getters are also virtual, so you have to make sure that sub-classes end up with the same names.

Use whatever method that is currently used to make sure the sub-classes know how to get the default parameters at all.  Solving already-solved problems is easy.

Anyway, I agree that solving this will not magically make all complaints about the complexity of Scala go away.

  --Rex

Jason Zaugg

unread,
Nov 16, 2011, 12:14:24 PM11/16/11
to Rex Kerr, Lukas Rytz, martin odersky, scala-debate
On Wed, Nov 16, 2011 at 5:53 PM, Rex Kerr <ich...@gmail.com> wrote:

You could encode the signature (perhaps just the erased parameter types) of the method into the names of the default getters. That would have a nice C++ feel to it!

Just because C++ does something doesn't mean it's a bad idea.  If you want power and transparency, you _must_ do something like that.  The current choice in Scala sacrifices apparent simplicity and power in the language for easier implementation in the compiler.  In the short run, that's understandable: there is limited manpower that can be devoted to the compiler, so it makes sense to shift some of the burden onto the users.  But I don't think this should be the long-term goal.

If you want the capability and are willing to sacrifice a little bit of transparency, allow an optional annotation for a string added to the method names.  Clunky?  A bit.  As clunky as not even having the functionality?  I doubt it.

Alternatively, encode the parameter names.  Then you state: if you want binary compatibility, you have to use different names for your default parameters.  So for a parameter called "index", instead of f$default$1 you would have f$default$index if there was no clash on the name between methods, and f$default$index$1 and f$default$index$2 if there was.  If the type was the same _and_ the default value was the same, then multiple methods could share f$default$index.  This naming scheme would also make Java interop a bit simpler.  (I tend to know that something is called "index" before I realize it is the second argument with a default.)

Param names can be different in a subclass.

scala> class A { def foo(a: Int = 0) = a }
defined class A

scala> class B extends A { override def foo(b: Int) = b }
defined class B

scala> new B().foo()
res0: Int = 0
 
Anyway, obviously these things can be encoded one way or another, so I'll quit poking holes.

Although I try to avoid doubling up method names when possible, sometimes you have _exactly the same operation conceptually_ and therefore using multiple names distracts from what is going on.  I not infrequently have things like

  def display(s: String, columns: Int = 80) { ... }
  def display(f: File, columns: Int) { ... }
  def display(f: File) { display(f,80) }
  def display(mc: MyClass, columns: Int ) { ... }
  def display(mc: MyClass) { display(mc,80) }
  ...

Naming things displayString, displayFile really does not help understanding at all here.  Implicitly converting everything to a Displayable (so you can have one display(d: Displayable, columns: Int = 80)) is even more work (though I have taken that approach also at times, and will do so more when implicit classes are implemented).

Giving them different names helps clients to understand compiler error messages, helps argument type inference, allows eta expansion, and has a bunch of other little benefits.

But when you want to give a single name, I think the small amount of boilerplate involved with implicits is worth it. I would prefer an implicit parameter to a view.

  def display[D: Display](d: D)
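
Spelled out, that might look like this (a sketch; the Display trait and its members are invented here, not an existing API):

  trait Display[A] {
    def render(a: A, columns: Int): String
  }
  object Display {
    implicit val stringDisplay: Display[String] = new Display[String] {
      def render(s: String, columns: Int) = s.take(columns)
    }
  }
  def display[D: Display](d: D, columns: Int = 80) {
    println(implicitly[Display[D]].render(d, columns))
  }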
 
  
Those getters are also virtual, so you have to make sure that sub-classes end up with the same names.

Use whatever method that is currently used to make sure the sub-classes know how to get the default parameters at all.  Solving already-solved problems is easy.

:) 

Anyway, I agree that solving this will not magically make all complaints about the complexity of Scala go away.

Pity that, otherwise we ought to add them post haste.

-jason

Rex Kerr

unread,
Nov 16, 2011, 1:17:22 PM11/16/11
to Jason Zaugg, scala-debate
This is getting off-topic, but it's still interesting, so:


On Wed, Nov 16, 2011 at 12:14 PM, Jason Zaugg <jza...@gmail.com> wrote:
On Wed, Nov 16, 2011 at 5:53 PM, Rex Kerr <ich...@gmail.com> wrote:
Although I try to avoid doubling up method names when possible, sometimes you have _exactly the same operation conceptually_ and therefore using multiple names distracts from what is going on.  I not infrequently have things like

  def display(s: String, columns: Int = 80) { ... }
  def display(f: File, columns: Int) { ... }
  def display(f: File) { display(f,80) }
  def display(mc: MyClass, columns: Int ) { ... }
  def display(mc: MyClass) { display(mc,80) }
  ...

Naming things displayString, displayFile really does not help understanding at all here.  Implicitly converting everything to a Displayable (so you can have one display(d: Displayable, columns: Int = 80)) is even more work (though I have taken that approach also at times, and will do so more when implicit classes are implemented).

Giving them different names helps clients to understand compiler error messages, helps argument type inference, allows eta expansion, and has a bunch of other little benefits.

Given the limitations of the JVM stack trace (i.e. it tells you method names but not which method when arguments are overloaded), I agree with the first one.

The other two seem again like shifting work from the compiler-writer to the programmer.  Type inference?  It's just a mildly larger search space.  So search it.  Eta expansion?  Static typing to the rescue: the types are not the same.
 
But when you want to give a single name, I think the small amount of boilerplate involved with implicits is worth it. I would prefer an implicit parameter to a view.

  def display[D: Display](d: D)

What benefit do you gain here?  Well--okay, I can see one now, but what benefit would one gain if SI-5191 were fixed?  (Or is that the issue?)

  --Rex

P.S. I think the MyType and collections methods solutions _might_ make a significant dent in the perceived complexity of Scala.  Of course there's the initial shock value of (xs, ys).zipped.map(_ + 2*_) that is hard to offset (hopefully followed by wonder at how much has happened with so little code), but things like
  def +: [B >: A, That] (elem: B)(implicit bf: CanBuildFrom[ArraySeq[A], B, That]): That
are kind of imposing.  Even
  def +: [B >: A] (elem: B): this.class[B]
is scary enough, what with the weird >: and the backwards-acting +: (where this.class is MyType, of course).  Part of the problem is that it's not obvious what That is--this lends great power, but for all anyone reading the signature knows, That could be a Function0.  I'm not sure we're willing to give up the extra power, though.  I do quite like the string interoperability, for instance.

Jason Zaugg

unread,
Nov 16, 2011, 6:15:59 PM11/16/11
to Rex Kerr, scala-debate
On Wed, Nov 16, 2011 at 7:17 PM, Rex Kerr <ich...@gmail.com> wrote:
This is getting off-topic, but it's still interesting, so:

In -debate? Never! 

Giving them different names helps clients to understand compiler error messages, helps argument type inference, allows eta expansion, and has a bunch of other little benefits.

Given the limitations of the JVM stack trace (i.e. it tells you method names but not which method when arguments are overloaded), I agree with the first one.

The stack trace gives you the line number, so with some digging you get the particular method. I'm more worried about the scalac error of "could not apply arguments of types ... to <one of these six methods>". Sure, it's just like Java, other than for the uncertainty caused by type inference at, or prior to, the call site.

The other two seem again like shifting work from the compiler-writer to the programmer.  Type inference?  It's just a mildly larger search space.  So search it.  Eta expansion?  Static typing to the rescue: the types are not the same.

I don't agree that complexity for the implementor/spec-writer necessarily buys simplicity for the programmer in this case. Change a signature; break argument or eta-expansion inference in an unrelated area of your program; try to clean up. (Admittedly, this can occur today when adding an overloaded method, but given that I'm not cheerleading static overloading in the first place, I'm not going to weigh up the relative suckiness of the two problems.)
 
But when you want to give a single name, I think the small amount of boilerplate involved with implicits is worth it. I would prefer an implicit parameter to a view.

  def display[D: Display](d: D)
 
What benefit do you gain here?  Well--okay, I can see one now, but what benefit would one gain if SI-5191 were fixed?  (Or is that the issue?)

There are a few ways to express this:

1. def display(a: String)
2. def display(a: Displayable)
3. def display[A <% Displayable](a: A)
4. def display[A: Display](a: A)

#1 violates the sacred Rule of Ortiz -- you ask your caller to implicitly convert an argument to a `String`, perhaps by providing a suite of conversion methods, hurting type safety in other places in that scope where `String` is used. Not what you suggested, but I thought I'd place it on the table before saying it was off the table.
#2 is better, as the conversion is to a dedicated type. But the conversion will be applied to the argument *before* it arrives in your method.
#3 is pretty much like #2, but you would also be able to accept, say, `List[A]` and convert each element of the collection to `Displayable`.
#4 means you don't actually need to allocate a `Displayable` wrapper; instead you just pass in the `Display` instance, to which you can pass the `A`. Furthermore, you can write implicits to provide, say, `Displayable[(A, List[A])]`, given a `Displayable[A]`. Relying on the caller to invoke an implicit view, or requiring a view bound, is a bit clunkier when you get to that.
 
P.S. I think the MyType and collections methods solutions _might_ make a significant dent in the perceived complexity of Scala.  Of course there's the initial shock value of (xs, ys).zipped.map(_ + 2*_) that is hard to offset (hopefully followed by wonder at how much has happened with so little code), but things like
  def +: [B >: A, That] (elem: B)(implicit bf: CanBuildFrom[ArraySeq[A], B, That]): That
are kind of imposing.  Even
  def +: [B >: A] (elem: B): this.class[B]
is scary enough, what with the weird >: and the backwards-acting +: (where this.class is MyType, of course).  Part of the problem is that it's not obvious what That is--this lends great power, but for all anyone reading the signature knows, That could be a Function0.  I'm not sure we're willing to give up the extra power, though.  I do quite like the string interoperability, for instance.

I too would prefer it if there were a simple version of `map`/`flatMap` implemented by the silent majority of the collections that handle arbitrary element types. But this could be encoded with an abstract type, rather than with `MyType`, with only a small cost to the implementor, and, given the right smarts in Scaladoc, minimal cost to the API user. But it can't easily co-exist with the general form of these operations: for example, for-comprehensions won't work if `map` is overloaded.
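
A sketch of that abstract-type encoding (all names invented for illustration):

  trait Simple[A] {
    type Repr[X] <: Simple[X]          // "same shape" of collection
    def map[B](f: A => B): Repr[B]     // a simple map, no CanBuildFrom
  }
  class Box[A](val value: A) extends Simple[A] {
    type Repr[X] = Box[X]
    def map[B](f: A => B): Box[B] = new Box(f(value))
  }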

-jason

Rex Kerr

unread,
Nov 16, 2011, 11:54:36 PM11/16/11
to Jason Zaugg, scala-debate
On Wed, Nov 16, 2011 at 6:15 PM, Jason Zaugg <jza...@gmail.com> wrote:
On Wed, Nov 16, 2011 at 7:17 PM, Rex Kerr <ich...@gmail.com> wrote:
This is getting off-topic, but it's still interesting, so:

In -debate? Never! 

Giving them different names helps clients to understand compiler error messages, helps argument type inference, allows eta expansion, and has a bunch of other little benefits.

Given the limitations of the JVM stack trace (i.e. it tells you method names but not which method when arguments are overloaded), I agree with the first one.

The stack trace gives you the line number, so with some digging you get the particular method. I'm more worried about the scalac error of "could not apply arguments of types ... to <one of these six methods>". Sure, it's just like Java, other than for the uncertainty caused by type inference at, or prior to, the call site.

The other two seem again like shifting work from the compiler-writer to the programmer.  Type inference?  It's just a mildly larger search space.  So search it.  Eta expansion?  Static typing to the rescue: the types are not the same.

I don't agree that complexity for the implementor/spec-writer necessarily buys simplicity for the programmer in this case. Change a signature; break argument or eta-expansion inference in an unrelated area of your program; try to clean up. (Admittedly, this can occur today when adding an overloaded method, but given that I'm not cheerleading static overloading in the first place, I'm not going to weigh up the relative suckiness of the two problems.)

I really think this is a non-issue.  If you have an actual ambiguity, the code should catch it _at the declaration site_.  If you do not have an ambiguity, other code will not change when properly typed.  If you are playing with named arguments in order to make your life more interesting, then I would argue that catching even possible collisions of named arguments should happen at the declaration site.
 
 
But when you want to give a single name, I think the small amount of boilerplate involved with implicits is worth it. I would prefer an implicit parameter to a view.

  def display[D: Display](d: D)
 
What benefit do you gain here?  Well--okay, I can see one now, but what benefit would one gain if SI-5191 were fixed?  (Or is that the issue?)

There are a few ways to express this:

1. def display(a: String)
2. def display(a: Displayable)
3. def display[A <% Displayable](a: A)
4. def display[A: Display](a: A)

#1 violates the sacred Rule of Ortiz -- you ask your caller to implicitly convert an argument to a `String`, perhaps by providing a suite of conversion methods, hurting type safety in other places in that scope where `String` is used. Not what you suggested, but I thought I'd place it on the table before saying it was off the table.

Agreed: this is obviously a bad solution.
 
#2 is better, as the conversion is to a dedicated type. But the conversion will be applied to the argument *before* it arrives in your method.

Indeed.  It will contain all the information to display, packaged in some form suitable for display, but without any knowledge of how to display it.
 
#3 is pretty much like #2, but you would also be able to accept, say, `List[A]` and convert each element of the collection to `Displayable`.

I don't think this would be hard with #2 either, though Displayable would need a join method.
 
#4 means you don't actually need to allocate a `Displayable` wrapper; instead you just pass in the `Display` instance, to which you can pass the `A`. Furthermore, you can write implicits to provide, say, `Displayable[(A, List[A])]`, given a `Displayable[A]`. Relying on the caller to invoke an implicit view, or requiring a view bound, is a bit clunkier when you get to that.

But if the method itself is actually going to do serious work, the Display instance is just going to have to package up or present the information in A in some form, so you'll end up generating a wrapper there instead.
 
P.S. I think the MyType and collections methods solutions _might_ make a significant dent in the perceived complexity of Scala.  Of course there's the initial shock value of (xs, ys).zipped.map(_ + 2*_) that is hard to offset (hopefully followed by wonder at how much has happened with so little code), but things like
  def +: [B >: A, That] (elem: B)(implicit bf: CanBuildFrom[ArraySeq[A], B, That]): That
are kind of imposing.  Even
  def +: [B >: A] (elem: B): this.class[B]
is scary enough, what with the weird >: and the backwards-acting +: (where this.class is MyType, of course).  Part of the problem is that it's not obvious what That is--this lends great power, but for all anyone reading the signature knows, That could be a Function0.  I'm not sure we're willing to give up the extra power, though.  I do quite like the string interoperability, for instance.

I too would prefer it if there was a simple version of `map`/`flatMap` implemented by the silent majority of the collections that handle arbitrary element types. But this could be encoded with an abstract type, rather than with `MyType`, with only a small cost to the implementor

I'm not sure this ever quite worked smoothly in my old Stack Overflow question, "Minimal framework in Scala for collections with inheriting return type", but I never ended up using that strategy.  In any case, it requires a level of understanding of generic types and a level of discipline that aren't necessary with (limited) MyType capabilities.

  --Rex

Daniel Sobral

unread,
Nov 17, 2011, 10:13:08 AM11/17/11
to Rex Kerr, Jason Zaugg, scala-debate
On Thu, Nov 17, 2011 at 02:54, Rex Kerr <ich...@gmail.com> wrote:
>
> I really think this is a non-issue.  If you have an actual ambiguity, the
> code should catch it _at the declaration site_.  If you do not have an

Can it? In that case, every parameter would need non-intersecting LUBs
and GLBs. Take, for example:

def f(x: String, y: Int = 0) = x
def f(x: String, y: Double = 0.0) = x

Though they are not ambiguous at the declaration site, f("abc") is
ambiguous. Or, consider something even worse:

def f[T1 >: Nothing <: String](x: T1, y: Int = 0) = x
def f[T2 >: Null <: AnyRef](x: T2, y: Double = 0.0) = x

T1 is not more specific than T2, and T2 is not more specific than T1
-- T1 could be Nothing, while T2 could be AnyRef. Yet, both can be
String.

It seems to me that the only way to guarantee non-ambiguity at the
declaration site is to demand that all types are fully disjunct.

Rex Kerr

unread,
Nov 17, 2011, 11:03:44 AM11/17/11
to Daniel Sobral, Jason Zaugg, scala-debate
On Thu, Nov 17, 2011 at 10:13 AM, Daniel Sobral <dcso...@gmail.com> wrote:
On Thu, Nov 17, 2011 at 02:54, Rex Kerr <ich...@gmail.com> wrote:
>
> I really think this is a non-issue.  If you have an actual ambiguity, the
> code should catch it _at the declaration site_.  If you do not have an

Can it? In that case, every parameter would need non-intersecting LUBs
and GLBs. Take, for example:

def f(x: String, y: Int = 0) = x
def f(x: String, y: Double = 0.0) = x

Though they are not ambiguous at the declaration site,

They are totally ambiguous in the context of default parameters.  My pseudocode to Martin catches this as an error.  I wouldn't propose to make the above valid code; you could only overload methods with default parameters when the defaulted forms were unambiguous.
 
f("abc") is
ambiguous. Or, consider something even worse:

def f[T1 >: Nothing <: String](x: T1, y: Int = 0) = x
def f[T2 >: Null <: AnyRef](x: T2, y: Double = 0.0) = x

T1 is not more specific than T2, and T2 is not more specific than T1
-- T1 could be Nothing, while T2 could be AnyRef. Yet, both can be
String.

I would tend to use the erasure for determining ambiguity, which makes this no different than your first example.  (Scala already uses erasures for determining whether you're defining the same method twice.)
 
It seems to me that the only way to guarantee non-ambiguity at the
declaration site is to demand that all types are fully disjunct.

The non-default list of types must be different, and the shorter non-default list must not, when defaults are explicitly given, extend to the longer non-default list.

If you want to avoid explicit-parameter-name ambiguity also, you must also consider all truncations of the parameter list that then launch into names, and the sorted-by-name remaining (name,type) pairs must not match either exactly or in any truncation.

(Right now, the compiler only catches this ambiguity at the use-site; this renders certain overloaded methods unusable with fully named parameters, if the names are chosen to collide, as I showed in my Oops example.)

Anyway, the pairwise check method even in the most difficult case would be no more than a couple dozen lines of code (assuming the compiler already has methods to extract parameter names and erased types and such in a compact way).  If you wanted to produce an ordering instead of an equivalence, it'd be a little longer (though the only point in doing that would be if O(n^2) check was intolerable for methods with the same name; could reduce that to O(n log n) via an ordering).
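
As a toy illustration of that pairwise check (a sketch, positional calls only, assuming defaulted parameters trail; all names invented):

  case class Param(erased: Class[_], hasDefault: Boolean)

  // every argument-list shape a positional call can produce by
  // omitting a trailing run of defaulted parameters
  def shapes(ps: List[Param]): List[List[Class[_]]] = {
    val required = ps.takeWhile(p => !p.hasDefault).length
    (required to ps.length).toList.map(n => ps.take(n).map(_.erased))
  }

  // two overloads are potentially ambiguous if any call shape coincides
  def clash(a: List[Param], b: List[Param]): Boolean =
    shapes(a).exists(shapes(b).contains)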

  --Rex

Paul Phillips

unread,
Nov 18, 2011, 12:38:53 PM11/18/11
to Rex Kerr, Jason Zaugg, scala-debate
On Wed, Nov 16, 2011 at 8:54 PM, Rex Kerr <ich...@gmail.com> wrote:
> I really think this is a non-issue.  If you have an actual ambiguity, the
> code should catch it _at the declaration site_.

It can't, even without introducing defaults. With defaults the
ambiguity can only grow.

class C {
  protected def f(x: String) = 1
}
object Foo extends C {
  def f(x: AnyRef) = 2
}

Is f("abc") ambiguous? From inside Foo yes, from anywhere else no.

scala> Foo.f("abc")
res0: Int = 2

Rex Kerr

unread,
Nov 18, 2011, 2:13:29 PM11/18/11
to Paul Phillips, Jason Zaugg, scala-debate

This isn't ambiguous because String is more specific than AnyRef, and the rule is (or the proposed rule would be) to use the most specific type signature that matches.

It _does_ raise the issue of visibility changes causing code to break in non-obvious ways. 

  --Rex

martin odersky

unread,
Nov 18, 2011, 3:39:43 PM11/18/11
to Rex Kerr, Paul Phillips, Jason Zaugg, scala-debate
On Fri, Nov 18, 2011 at 8:13 PM, Rex Kerr <ich...@gmail.com> wrote:
On Fri, Nov 18, 2011 at 12:38 PM, Paul Phillips <pa...@improving.org> wrote:
On Wed, Nov 16, 2011 at 8:54 PM, Rex Kerr <ich...@gmail.com> wrote:
> I really think this is a non-issue.  If you have an actual ambiguity, the
> code should catch it _at the declaration site_.

It can't, even without introducing defaults.  With defaults the
ambiguity can only grow.

class C {
 protected def f(x: String) = 1
}
object Foo extends C {
 def f(x: AnyRef) = 2
}

Is f("abc") ambiguous? From inside Foo yes, from anywhere else no.

This isn't ambiguous because String is more specific than AnyRef, and the rule is (or the proposed rule would be) to use the most specific type signature that matches.

That's not what the rule is. The rule takes into account both the receiver and the arguments. If you change that,
lots of code will break, including the whole collections library. But that was only one example of why ambiguity of overload resolution cannot be resolved at the declaration site. There are many others. Here's another example:

  class C[T] {
    def foo(x: T)
    def foo(x: String)
  }
  val x: C[String]
  x.foo

Cheers

 -- Martin

Paul Phillips

unread,
Nov 18, 2011, 5:31:21 PM11/18/11
to Rex Kerr, Jason Zaugg, scala-debate
On Fri, Nov 18, 2011 at 11:13 AM, Rex Kerr <ich...@gmail.com> wrote:
> This isn't ambiguous because String is more specific than AnyRef, and the
> rule is (or the proposed rule would be) to use the most specific type
> signature that matches.

In addition to "that's not the rule" and "the rule takes into account
both receiver and arguments", the rule takes into account inheritance.
String is more specific than AnyRef, but a method defined in a
subclass is more specific than one defined in a superclass. Plenty of
code depends on it.

Rex Kerr

unread,
Nov 19, 2011, 1:16:16 AM11/19/11
to martin odersky, Paul Phillips, Jason Zaugg, scala-debate
On Fri, Nov 18, 2011 at 3:39 PM, martin odersky <martin....@epfl.ch> wrote:


On Fri, Nov 18, 2011 at 8:13 PM, Rex Kerr <ich...@gmail.com> wrote:
On Fri, Nov 18, 2011 at 12:38 PM, Paul Phillips <pa...@improving.org> wrote:
On Wed, Nov 16, 2011 at 8:54 PM, Rex Kerr <ich...@gmail.com> wrote:
> I really think this is a non-issue.  If you have an actual ambiguity, the
> code should catch it _at the declaration site_.

It can't, even without introducing defaults.  With defaults the
ambiguity can only grow.

class C {
 protected def f(x: String) = 1
}
object Foo extends C {
 def f(x: AnyRef) = 2
}

Is f("abc") ambiguous? From inside Foo yes, from anywhere else no.

This isn't ambiguous because String is more specific than AnyRef, and the rule is (or the proposed rule would be) to use the most specific type signature that matches.

That's not what the rule is. The rule takes into account both the receiver and the arguments. If you change that,
lots of code will break including the whole collections library.

Fair enough--just use the same rule.
 
But that was only one example why ambiguity of overload resolution cannot be resolved at the declaration site. There are many others. Here's another example:

  class C[T] {
    def foo(x: T)
    def foo(x: String)
  }
  val x: C[String]
  x.foo

There's no ambiguity there if one uses erased types (as I mentioned before, but omitted this time).

Likewise, whatever rule one already has about subclassing trumping or not trumping more specific method types (as Paul brought up) can just be used again.

Anyway, I'm not strongly advocating that one find errors at declaration time for everything, just in the case of default parameters, since the consequence of type signature collisions can be particularly hard to intuit in that case.  I am, however, strongly advocating the position that one can know (without extraordinary work) whether or not there is a potential for ambiguity at declaration time, and one could choose to act on that information.

  --Rex

martin odersky

unread,
Nov 19, 2011, 9:13:59 AM11/19/11
to Rex Kerr, Paul Phillips, Jason Zaugg, scala-debate
You mean, pick the foo(String) because the erasure of the foo(T) is foo(Object)? What about this one, then:


class C[T] {
  def foo(x: T)
  def foo(x: Throwable)
}
val x: C[Exception]
x.foo

? If you still say that the foo(x: Throwable) should be picked you are in direct contradiction with the ways overloading resolution is done in Java and in Scala.

I hope Paul and I have given enough arguments why it's hopeless to decide ambiguity at the declaration site, at least if you want to stay compatible with the way things are done in Java and Scala.

Cheers

 -- Martin

Rex Kerr

unread,
Nov 19, 2011, 10:45:09 AM11/19/11
to martin odersky, Paul Phillips, Jason Zaugg, scala-debate
On Sat, Nov 19, 2011 at 9:13 AM, martin odersky <martin....@epfl.ch> wrote:

You mean, pick the foo(String) because the erasure of the foo(T) is foo(Object)? What about that one then:


class C[T] {
  def foo(x: T)
  def foo(x: Throwable)
}
val x: C[Exception]
x.foo

? If you still say that the foo(x: Throwable) should be picked you are in direct contradiction with the ways overloading resolution is done in Java and in Scala.

Okay, agreed.  Compatibility would break--bad idea.

But you can still detect default argument ambiguities, because in that case there is nothing to be compatible with--you use the same rules you've already got on potentially conflicting reduced sets of parameters.  The compiler may not be set up to make this easy, but conceptually I don't think it's that hard.  (Practical difficulty can be a good reason to avoid something, however, especially in the short-to-medium term.)

  --Rex

Naftoli Gugenheim

unread,
Nov 22, 2011, 1:12:05 AM11/22/11
to Rex Kerr, scala-debate
On Tue, Nov 15, 2011 at 12:28 PM, Rex Kerr <ich...@gmail.com> wrote:
  (a) It seems like "re-invent |>" is the answer to a StackOverflow question about once a week.

If I'm not completely missing something, it seems that Scala actually has a limited, special version of |> --- called match. If we had |>, would you ever need the 'match' keyword?

x match {
  case n if n%2 == 0 => 2
}

Isn't that roughly the same as

({ case n if n%2 == 0 => 2 }: PartialFunction[Int, Int]) apply x

or

x |> { case n if n%2 == 0 => 2 }

?

Naftoli Gugenheim

unread,
Nov 22, 2011, 1:13:05 AM11/22/11
to Rex Kerr, scala-debate
Or to put it differently: Why should match only take a PartialFunction?

Lars Hupel

unread,
Nov 22, 2011, 8:43:54 AM11/22/11
to scala-...@googlegroups.com
> Or to put it differently: Why should match only take a PartialFunction?

I wondered the very same thing. The difference between `x match { ... }`
and `{ ... } apply x` is that they generate different bytecode. The
former doesn't actually produce an instance of `PartialFunction` (where
the pattern match code has to be duplicated for `isDefinedAt`).
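
Roughly, a pattern-matching anonymous function expands to something like this (a sketch; the actual compiler output differs in detail):

  val pf = new PartialFunction[Int, Int] {
    def apply(n: Int): Int = n match {
      case n if n % 2 == 0 => 2
    }
    def isDefinedAt(n: Int): Boolean = n match {   // pattern duplicated
      case n if n % 2 == 0 => true
      case _               => false
    }
  }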

Adriaan Moors

unread,
Nov 22, 2011, 6:16:14 PM11/22/11
to Lars Hupel, scala-...@googlegroups.com
A match generates a "quizzable" function (which some say would be a better name for a partial function). This allows implementing a method such as collect, which relies both on the result returned from the partial function and on whether there is such a result (well, whether it is defined) in the first place.

Also, Martin has just come up with an efficient yet elegant way to get rid of isDefinedAt -- more on that later (or check the commit logs for the last few hours ;-))

cheers
adriaan

√iktor Ҡlang

unread,
Nov 22, 2011, 6:23:43 PM11/22/11
to adriaa...@epfl.ch, Lars Hupel, scala-...@googlegroups.com

Lol, already pushed? :-)
 




--
Viktor Klang

Akka Tech Lead
Typesafe - Enterprise-Grade Scala from the Experts

Twitter: @viktorklang

Rex Kerr

unread,
Nov 22, 2011, 6:41:56 PM11/22/11
to adriaa...@epfl.ch, scala-...@googlegroups.com
If you mean the change I think you mean, are you sure the synchronization won't result in a substantial performance penalty for simple partial functions (if isDefinedAt is actually removed)?

  --Rex

Naftoli Gugenheim

unread,
Nov 24, 2011, 7:55:17 PM11/24/11
to Lars Hupel, scala-...@googlegroups.com
Okay, but that's an implementation detail. It still seems that match is a limited case of |>.

Piotr Kołaczkowski

unread,
Nov 25, 2011, 12:19:31 PM11/25/11
to scala-...@googlegroups.com
On 2011-11-15 18:28, Rex Kerr wrote:
> After yet another "Scala is too complex" discussion, it occurred to me
> that, although these discussions very often mistake novelty for
> complexity, or mistake the complexity of the problem with the complexity
> of the solution, there _are_ still a number of areas where Scala could
> benefit from simplification (i.e. the resulting code would be simpler,
> even if Scala as a language was more complex, since it would allow a
> simple solution to be used where a complex one now must be).
>
> I'm ignoring the obvious style choice ones like "make Scala like Haskell".
>
> I was curious what areas other people have identified, and what, if any,
> solutions they've identified? Or, perhaps people will take issue with
> some of mine?
>
> My list includes:
>
> (1) Specialization. The basic concept isn't too bad, but the
> implementation is so full of bugs and limitations that employing it in
> anything but the most straightforward cases is painful at best. I
> really don't know how to solve this except for serious manpower to be
> devoted to solving the problems, or to avoid the "feature". Conceptual
> problems include enforced independence of types leading to combinatorial
> explosion of classes, inability to specify solutions for particular
> types or type combinations, inability to inherit, and more.
> Solution: ?


I don't like the whole concept of specialization done by the static
compiler. This feels sooo C++ - I mean, an ugly hack, useful for 1% of
cases, but with lots of sharp edges. The right place for specialization
is in the JVM - it has all the information about which types are
actually used, so HotSpot could do this much better. I'd also like
to see, some day, good support for building small concrete immutables,
e.g. for representing complex numbers - which could also benefit from
specialization and object inlining. Again - this is the JVM's task, not
scalac's. So the solution would be to lobby Oracle for appropriate JVM
improvements. Or at least to move the effort to OpenJDK, create a
working prototype for the feature, and then get it included in the
official JDK.

BTW: Type-based JVM-level code specialization can also help inlining of
virtual calls from megamorphic call sites - and these are very frequent
in Scala.


As for the rest of your points, I agree completely and I add one minor
thing:

Generic type inference in pattern matching should be more consistent
with the generic type inference in other parts of the language.

class A
case class B[T](param: T)

val a = new A
val b = B(a) // infers B[A], ok

Now let's try similar thing with pattern matching:

def test(obj: AnyRef): B[A] = {
  obj match {
    case b @ B(a: A) => b
  }
}

<console>:12: error: type mismatch;
found : B[Any]
required: B[A]
Note: Any >: A, but class B is invariant in type T.
You may wish to define T as -T instead. (SLS 4.5)
case b @ B(a: A) => b
^

To fix this I have to add an ugly "asInstanceOf[]", suggesting something is
not statically type-safe, although it perfectly is. Things like this
make Scala feel complex, although it need not be.
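
Spelled out, the workaround looks like this (a sketch; the cast is statically justified by the "a: A" pattern, but the compiler cannot see that):

  def test(obj: AnyRef): B[A] = obj match {
    case b @ B(a: A) => b.asInstanceOf[B[A]]   // safe here, but ugly
  }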

Anyway, probably the whole "type inference" part of the language is a
source of many surprises. I don't know if much can be done about this,
though.

Regards,
Piotr

Rex Kerr

unread,
Nov 25, 2011, 1:16:46 PM11/25/11
to Piotr Kołaczkowski, scala-...@googlegroups.com


2011/11/25 Piotr Kołaczkowski <pkol...@elka.pw.edu.pl>

W dniu 2011-11-15 18:28, Rex Kerr pisze:

After yet another "Scala is too complex" discussion, it occurred to me
that, although these discussions very often mistake novelty for
complexity, or mistake the complexity of the problem with the complexity
of the solution, there _are_ still a number of areas where Scala could
benefit from simplification (i.e. the resulting code would be simpler,
even if Scala as a language was more complex, since it would allow a
simple solution to be used where a complex one now must be).

I'm ignoring the obvious style choice ones like "make Scala like Haskell".

I was curious what areas other people have identified, and what, if any,
solutions they've identified?  Or, perhaps people will take issue with
some of mine?

My list includes:

(1) Specialization.  The basic concept isn't too bad, but the
implementation is so full of bugs and limitations that employing it in
anything but the most straightforward cases is painful at best.  I
really don't know how to solve this except for serious manpower to be
devoted to solving the problems, or to avoid the "feature".  Conceptual
problems include enforced independence of types leading to combinatorial
explosion of classes, inability to specify solutions for particular
types or type combinations, inability to inherit, and more.
Solution: ?


I don't like the whole concept of specialization done by the static compiler. This feels sooo C++ - I mean, an ugly hack, useful for 1% of cases, but with lots of sharp edges. The right place for specialization is in the JVM - it has all the information about which types are actually used, so HotSpot could do this much better.

(Prescript: I think the sharp edges are mostly the results of bugs.  In theory, it should be almost entirely transparent.)

The JVM has the usage patterns but, critically, _not_ the guarantees about what is supposed to happen.  It doesn't even understand the difference between a generic argument and an Object!  Without universal access of fields via methods, it really has a ton of work to do, because it has to do speculative box-lifting of the form
  class Example { Object o; }   // Know that o = java.lang.Long(x)
  class Example$L { long x; }   // Autogenerate this
and then find all bytecode for example.o and replace it with a check for the class of Example$L with unboxed operations plus a branch to retain the Example with normal (possibly boxed) operations.

The compiler has a MUCH easier time of it because it can sort out acceptable use cases.

I agree that it would be nice if this were implemented (and done so efficiently), but I think this would be by far the most complex optimization transformation present in the JVM because it is nonlocal (i.e. the specialization of Example would require changes not just to Example code but to all code referring to it), and the cost of maintaining that might exceed the benefits of specialization.

I suspect that typesafe macros (which are nearly C++ templates) will fix some of the problem, but not in a way to make things seem less like C++ if you pay attention to the inner workings.

Alternatively, specialization could compile _not_ the actual classes used, but a _class bytecode generator_ which would, on first use in the code, generate and load the appropriate specialized class (if done at runtime; or generate and compile it, if done at compile time).  That way you could just blithely write Function3[@specialized -T1, @specialized -T2, @specialized -T3, @specialized +R] and you wouldn't have to deal with 10k different classes in general; each program would get those classes that actually were used.  This would be FAR less work than having the JVM try to do the equivalent without understanding what was going on.

Also, don't underestimate the value of specialization.  Using FP methods on primitives--which relies on creating Function objects--is fantastically slower than direct access unless you use specialization or do the equivalent manually.  The number of use cases may not be that large while writing a library, but if you can't provide high-performance access to your library, the places where you have to use less desirable code grow and grow.  Without specialization, Scala techniques are for I/O, while Java-style (perhaps within Scala) is for computation.  With specialization, Scala handles both in typical style.  It's a huge difference.
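
As a rough sketch of the runtime variant (every name below is hypothetical, and the actual bytecode generation is elided):

  import java.util.concurrent.ConcurrentHashMap

  object OnDemand {
    private val cache = new ConcurrentHashMap[String, Class[_]]

    def specializedClass(template: String, types: List[Class[_]]): Class[_] = {
      val key = template + types.map(_.getName).mkString("(", ",", ")")
      val cached = cache.get(key)
      if (cached != null) cached
      else {
        val cls = generateAndLoad(template, types)  // hypothetical generator
        val raced = cache.putIfAbsent(key, cls)
        if (raced == null) cls else raced
      }
    }

    // placeholder: a real implementation would emit bytecode (e.g. via ASM)
    // and define it in a fresh ClassLoader
    private def generateAndLoad(template: String, types: List[Class[_]]): Class[_] =
      sys.error("bytecode generation elided")
  }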

  --Rex

Chris Marshall

unread,
Nov 25, 2011, 12:58:52 PM11/25/11
to pkol...@elka.pw.edu.pl, scala-...@googlegroups.com
pkol...@elka.pw.edu.pl wrote:

>
> The right place for specialization is in the JVM - it has all the information about which types are
> actually used

You have heard of erasure, I trust?

Piotr Kołaczkowski

unread,
Nov 25, 2011, 3:43:16 PM11/25/11
to scala-...@googlegroups.com, scala-...@googlegroups.com
On 2011-11-25 18:58, Chris Marshall wrote:

Yes, but I don't think it is a problem.
A whole lot of optimisations in the JVM are speculative and do not rely on
the type system. Just detect tight loops where lots of boxing occurs and
specialize from there, based on type information gathered at runtime.

This should work e.g. in such a scenario:

trait Function {
  def apply(x: Any): Any
}

class Duplicate extends Function {
  def apply(x: Any): Any = x match {
    case a: Int => a * 2
    case a: Long => a * 2
    case a: Double => a * 2
  }
}

And now the calling code:

val function = new Duplicate
for (i <- 1 to 10000000)
  function(someConst) // here the boxing occurs


Step 1: Detect the line where lots of boxing occurs.
Step 2: Dynamically generate a specialized apply method for the type of
the argument observed - along the way eliminating all the match cases
except one.
Step 3: Patch the loop with a call to the newly generated code,
removing the boxing.
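
Roughly what step 2 would produce for an Int argument (written as Scala for readability; a real JIT would generate machine code directly, and the name is invented):

  class DuplicateSpecInt extends Duplicate {
    def applyInt(x: Int): Int = x * 2   // the one surviving case, no boxing
  }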

Regards,
Piotr

Paul Phillips

unread,
Nov 26, 2011, 2:36:52 AM11/26/11
to Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com
2011/11/25 Rex Kerr <ich...@gmail.com>:

> Alternatively, specialization could compile _not_ the actual classes used,
> but a _class bytecode generator_ which would on the first use in the code
> generate and load the appropriate specialized class (if done at runtime, or
> generate and compile the appropriate specialized class if done at compile
> time).  That way you could just blithely write Function3[@specialize -T1,
> @specialized -T2, @specialized -T3, @specialized +R] and you wouldn't have
> to deal with 10k different classes in general; each program would get those
> classes that actually were used.  This would be FAR less work than having
> the JVM try to do the equivalent without understanding what was going on.

I thought of doing this a while ago and I keep buttonholing people
trying to get someone to give me a reason it won't work. (I usually
find out why something won't work the hard way, but in this case I
don't have the time if I can avoid it.) So far nobody has come up with
a reason. That you propose it independently makes it sound that much
more plausible.

Piotr Kołaczkowski

unread,
Nov 26, 2011, 3:58:09 AM11/26/11
to scala-...@googlegroups.com, Rex Kerr, scala-...@googlegroups.com
On 2011-11-26 08:36, Paul Phillips wrote:

I like the idea. A minor problem I can see is running in restricted
environments where you cannot install your own classloaders, e.g. in
unsigned applets. There *are* people who do use Scala for unsigned
applets. So there would have to be at least some global compiler switch
to turn off specialization entirely, both in the compiled code and in the
Scala library (or a separate non-specialized Scala library jar).

Regards,
Piotr

Nate Nystrom

unread,
Nov 26, 2011, 5:46:35 AM11/26/11
to Paul Phillips, Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com

It's definitely plausible. Eric Allen's NextGen implementation of
generics for Java did something similar: generating a template for the
generic class, then using a class loader to specialize the class on
demand. An unreleased version of X10 took the same approach.

Nate

martin odersky

unread,
Nov 26, 2011, 6:00:06 AM11/26/11
to nate.n...@usi.ch, Paul Phillips, Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com
Generation on demand is very plausible, and we should do this also for tuples, functions, etc., thus breaking the 22 barrier. For specialization there's the additional cost that for a function with N specialized variants we need N subclasses and also N versions of every method
mentioning the specialized type parameters. We can save on the classes, but not on the methods.

Cheers

 -- Martin



--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

Nathaniel Nystrom

unread,
Nov 26, 2011, 6:16:07 AM11/26/11
to martin odersky, Paul Phillips, Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com

True. I was not thinking about Scala's method specialization. Both NextGen and X10 specialized generic classes on all type parameters. I don't recall what NextGen did, but the approach we implemented in X10 was to translate generic methods into generic inner classes, reducing the problem to class specialization. All of this never got out of a proof-of-concept phase, however. The main complication, and the reason we abandoned the whole endeavor, was that the X10 compiler was generating Java source, not bytecode, and it was difficult to generate code acceptable to javac. The plan was to revisit generics after completing the bytecode backend, but I left IBM before that happened and I'm not sure what they're doing now.

Nate


Matthew Pocock

unread,
Nov 26, 2011, 7:08:05 AM11/26/11
to Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com


2011/11/25 Rex Kerr <ich...@gmail.com>


2011/11/25 Piotr Kołaczkowski <pkol...@elka.pw.edu.pl>

 
Alternatively, specialization could compile _not_ the actual classes used, but a _class bytecode generator_ which would on the first use in the code generate and load the appropriate specialized class (if done at runtime, or generate and compile the appropriate specialized class if done at compile time).  That way you could just blithely write Function3[@specialize -T1, @specialized -T2, @specialized -T3, @specialized +R] and you wouldn't have to deal with 10k different classes in general; each program would get those classes that actually were used.  This would be FAR less work than having the JVM try to do the equivalent without understanding what was going on.

This would be ideal in most situations. Unfortunately, not every deployment scenario allows you to do bytecode generation. For example, I'm not sure this would work very well on Android. However, I guess there could be a post-processor stage that chews through a jar and generates a jar containing all the *actually used* cases, and then you could bundle this with your class-loader-challenged application.
 
Matthew





--
Dr Matthew Pocock
Integrative Bioinformatics Group, School of Computing Science, Newcastle University
skype: matthew.pocock
tel: (0191) 2566550

Paul Butcher

unread,
Nov 26, 2011, 9:59:53 AM11/26/11
to Matthew Pocock, Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com
On 26 Nov 2011, at 12:08, Matthew Pocock wrote:
> This would be ideal in most situations. Unfortunately, not every deployment scenario allows you to do bytecode generation. For example, I'm not sure this would work very well on android.

I'm certain that it would not work at all on Android :-)

--
paul.butcher->msgCount++

Snetterton, Castle Combe, Cadwell Park...
Who says I have a one track mind?

http://www.paulbutcher.com/
LinkedIn: http://www.linkedin.com/in/paulbutcher
MSN: pa...@paulbutcher.com
AIM: paulrabutcher
Skype: paulrabutcher

Daniel Sobral

unread,
Nov 27, 2011, 10:16:21 AM11/27/11
to Paul Butcher, Matthew Pocock, Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com
On Sat, Nov 26, 2011 at 12:59, Paul Butcher <pa...@paulbutcher.com> wrote:
> On 26 Nov 2011, at 12:08, Matthew Pocock wrote:
>> This would be ideal in most situations. Unfortunately, not every deployment scenario allows you to do bytecode generation. For example, I'm not sure this would work very well on android.
>
> I'm certain that it would not work at all on Android :-)

Well, ok, it wouldn't work on Android. But Scala *isn't* working on
Android right now because _static_ generation is resulting in way too
much code.

I think the idea of having a build step that generates it statically
for the closed set of usages that it can find might be the best way to
go, particularly used with SBT. Libraries could leave it open, and
then applications would "close" it and generate the required files.

Much like a link step, sadly.

--
Daniel C. Sobral

I travel to the future all the time.

Kevin Wright

unread,
Nov 27, 2011, 10:27:51 AM11/27/11
to Daniel Sobral, Paul Butcher, Matthew Pocock, Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com
In practice though, the volume of static generation and the size of the Scala library itself can be largely ameliorated by using ProGuard.
--
Kevin Wright
mail: kevin....@scalatechnology.com
gtalk / msn : kev.lee...@gmail.com
vibe / skype: kev.lee.wright
steam: kev_lee_wright

"My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger" ~ Dijkstra

Paul Butcher

unread,
Nov 29, 2011, 10:16:26 AM11/29/11
to Kevin Wright, Daniel Sobral, Matthew Pocock, Rex Kerr, Piotr Kołaczkowski, scala-...@googlegroups.com
On 27 Nov 2011, at 15:27, Kevin Wright wrote:
> In practice though, the volume of static generation and the size of the scala library itself can be largely ameliorated by using proguard.

Except that, for reasons we still haven't (as far as I'm aware) got to the bottom of, something changed between 2.8.x and 2.9.x, with the result that even after being run through ProGuard the Scala library is often too large to fit on an Android device. See this thread for a recent discussion of the issue:

https://groups.google.com/forum/#!topic/scala-debate/El3QZEbIwHg

Sophie

unread,
Nov 30, 2011, 12:10:52 PM11/30/11
to scala-...@googlegroups.com
On 2011-11-15 15:29:36 -0600, Rex Kerr <ich...@gmail.com> said:

> If you want to solve the named argument problem, that's tricky but
> relatively doable.

I have never understood why named arguments are not treated as part of the method name (as in Smalltalk, perhaps prying out the first parameter better than Smalltalk does), e.g.

object . move from: aPlace to: anotherPlace by: aTime via: aRoute

(Smalltalk would do object.moveFrom: ... to: ... by: ... via: ...)

The method name would be move:from:to:by:via:. Parameter ordering can be treated as fixed, and positional invocation can even be allowed for those who won't have it any other way:

object . move (aPlace, anotherPlace, aTime, aRoute)

Just an observation; I know there is zero chance of anything like this happening.
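
For comparison, a small sketch of how close today's named arguments already get; Place, Route, and Mover are made-up types:

  case class Place(name: String)
  case class Route(name: String)

  class Mover {
    // Conceptually, the "name" of this method would be move:from:to:by:via:
    def move(from: Place, to: Place, by: Double, via: Route): Unit =
      println("moving from " + from.name + " to " + to.name +
              " in " + by + " h via " + via.name)
  }

  val m = new Mover
  m.move(from = Place("A"), to = Place("B"), by = 2.5, via = Route("M1"))
  m.move(Place("A"), Place("B"), 2.5, Route("M1")) // positional form still works

What this cannot express is the part Sophie is after: the keywords being part of the method's identity, so that move:from:to: and move:from:via: would be two different methods.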


Cédric Beust ♔

unread,
Nov 30, 2011, 1:39:26 PM11/30/11
to Sophie, scala-...@googlegroups.com
On Wed, Nov 30, 2011 at 9:10 AM, Sophie <itsm...@hotmail.com> wrote:
I have never understood why named arguments are not treated as part of the method name (as in Smalltalk, perhaps prying out the first parameter better than Smalltalk does), e.g.

Probably because it means that parameter names now become part of the API, so a simple renaming will break existing users.

-- 
Cédric
 

Daniel Sobral

unread,
Nov 30, 2011, 1:53:34 PM11/30/11
to Cédric Beust ♔, Sophie, scala-...@googlegroups.com
2011/11/30 Cédric Beust ♔ <ced...@beust.com>:

That is already true with named arguments.

Josh Suereth

unread,
Nov 30, 2011, 9:47:50 PM11/30/11
to Daniel Sobral, Cédric Beust ♔, Sophie, scala-...@googlegroups.com
Exactly.  Argument names are part of the API.  Here's a fun one:

val x : Int => Int = _ + 1

x(v1 = 2)  // Right, v1 became part of our official API in 2.8.x


Josh Suereth

unread,
Nov 30, 2011, 9:48:57 PM11/30/11
to Sophie, scala-...@googlegroups.com
There's a decent answer:  Java interoperability.

If we made the names part of the method, then calling it from Java and, more importantly, *overriding* a Java method would be difficult.

- Josh


martin odersky

unread,
Dec 1, 2011, 3:16:00 AM12/1/11
to Josh Suereth, Daniel Sobral, Cédric Beust ♔, Sophie, scala-...@googlegroups.com
On Thu, Dec 1, 2011 at 3:47 AM, Josh Suereth <joshua....@gmail.com> wrote:
Exactly.  Argument names are part of the API.  Here's a fun one:

val x : Int => Int = _ + 1

x(v1 = 2)  // Right, v1 became part of our official API in 2.8.x

Actually, the official policy is that you should not rely on parameter names like `x` or `v1` being kept stable. If the parameter name is an intelligible word, users should be able to rely on it, and APIs should honor that by keeping the name stable or providing a deprecation path. But for identifiers that are just simple letter/digit combinations, that does not hold. Or so we decided when 2.8 came out. Admittedly, all of this is convention, not hardcoded; but then, most naming aspects are conventions.

Cheers

 -- Martin

Jason Zaugg

unread,
Dec 1, 2011, 3:22:37 AM12/1/11
to martin odersky, Josh Suereth, Daniel Sobral, Cédric Beust ♔, Sophie, scala-...@googlegroups.com
It's probably worth mentioning @deprecatedName, which allows parameter names to be evolved without breaking backward compatibility.
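
A small sketch of it in action; `tone`, `hz`, and the old name `rate` are made up:

  object Synth {
    // `rate` was the old parameter name; `hz` is the new one.
    def tone(@deprecatedName('rate) hz: Double): String = hz + " Hz"
  }

  Synth.tone(hz = 440.0)   // the current name
  Synth.tone(rate = 440.0) // deprecated, but still compiles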

-jason