[scala-debate] Mainstreaming Scala without trade-offs


Shelby Moore

unread,
Sep 11, 2013, 5:32:56 PM9/11/13
to scala-...@googlegroups.com
(3rd attempt to start a new thread; I don't know what I am doing wrong.
These keep appearing as replies to the mentioned thread.)

Carrying on from the discussion in "Making Scala (more) portable - cutting
ties with the JVM?" about the tension between ease of syntax for
aspiring Scala programmers (especially the droves using C-like
derivative languages) and the preferences of Scala devotees.

Perhaps we can eliminate the tension entirely and serve both markets.

Could there be rough acceptance (if not outright enthusiasm) that a
potentially worthwhile goal (if there are no trade-offs or costs) is for
Scala to gain language market share, so that we are more often offered
employment coding in Scala instead of in Java, C#, or another mainstream
language we like less than Scala? And to gain economy of scale for pushing
for changes to the JVM, or for rolling our own VM in the future.

I am working on a syntax (Copute) which I believe is simple and consistent
enough to appeal to the mainstream, yet also powerful enough to drive
demand away from Java and C#. It is not yet proven, but for the sake of
argument can we accept that such a syntax could be created by someone, or
else discuss why you think it is impossible or unlikely?

Though not yet implemented and proven, Copute's compiler shouldn't need a
typing engine. I believe the conversion to Scala is injective, and with a
subset of Scala features it should be bijective.

It should be straightforward to code REAL-TIME conversion engines
(compilers) from this proposed syntax to any Scala syntax style that one
prefers. Thus it should be possible to offer a configurable IDE plugin
which displays this proposed syntax in the Scala style that matches each
of our preferences and biases.

I detect an unspoken, justified fear that Scala devotees would lose control
or flexibility if a bunch of n00bs took over the direction of Scala. I
also don't want Scala to be so bound in inertia that it can't continue to
innovate and stay on the cutting edge of language design.

The simple solution I am proposing and implementing is an
orthogonal layer (syntax) that novices can choose if they prefer (yet it is
not forced on them, for they can choose to go straight to full Scala), which
doesn't impact Scala except to make Scala more popular. I don't expect
Scala's community to devote scarce resources to it, nor even to recommend it
officially, as I think I can handle the initial development and funding. I
hope I may even drive financial resources to help improve Scala in the
process, e.g. funding first-class disjunctions, possibly generalizing
tooling to be applicable to DSLs, etc.

I am composing my next post to elaborate some on my Copute design, so this
forum can form some opinion on whether this is a yawn or worthwhile.

David Landis

unread,
Sep 11, 2013, 7:52:12 PM9/11/13
to she...@coolpage.com, scala-debate
On Wed, Sep 11, 2013 at 5:32 PM, Shelby Moore <she...@coolpage.com> wrote:

Could there be rough acceptance (if not outright enthusiasm) that a
potentially worthwhile goal (if there are no trade-offs or costs) is for
Scala to gain language market share, so that we are more often offered
employment coding in Scala instead of in Java, C#, or another mainstream
language we like less than Scala? And to gain economy of scale for pushing
for changes to the JVM, or for rolling our own VM in the future.

Well it seems to be an admirable goal, but I'm surprised you seem to be focused so strongly on the syntax itself.

In my experience and from reading Stack Overflow and other things, I think more programmers are hindered by either:

a) implementation issues (e.g. the proper order to initialize fields in a class, what it means to map over a set, performance surprises due to boxing, etc, etc, etc) or, 
b) concept issues, e.g. monads, etc or,
c) tooling support to some extent

So I don't think the syntax itself is reducing Scala adoption -- and on the contrary I think it is one of the main *attractions* to many developers, who are enthusiastic about the much cleaner-than-Java look and reduction in boilerplate. Granted, there are always corner cases like type-lambdas and such that are pretty ugly, but you may go years as a professional Scala developer without writing one of those.
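For readers who haven't met one, a type lambda in Scala 2 looks roughly like this (a minimal sketch; Functor is a local stand-in trait, not the Scalaz one):

// Partially applying Either's first type parameter so it fits a one-hole
// type constructor requires the refinement-based "type lambda" syntax.
trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }

def eitherFunctor[L]: Functor[({ type E[R] = Either[L, R] })#E] =
  new Functor[({ type E[R] = Either[L, R] })#E] {
    def map[A, B](fa: Either[L, A])(f: A => B): Either[L, B] = fa.right.map(f)
  }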

Shelby Moore

unread,
Sep 11, 2013, 8:47:51 PM9/11/13
to scala-...@googlegroups.com
I haven't yet composed a good summary of Copute. The
stream-of-consciousness writing on the website (which was my research and
learning stage) is not current (and was abruptly interrupted circa late
2011 and again April/May 2012 forward to 2013). I am sort of begging you
not to read it :$ (-_- symbol puzzle for those who love symbols).

Here are a few (not a comprehensive list of) highlights of features that add
capabilities to Scala, just to see if I can get any feedback or interest.

1. Named tuples in Copute are concise, e.g. (index=0,name="Foo",true),
which is converted to the Scala value (0,"Foo",true){def index=_1; def
name=_2; def __3=_3} with the type (Int,String,Boolean){def index: Int;
def name: String; def __3: Boolean}. If, in the future, Scala had
first-class named tuples, then this would be easy to support in Copute
with no impact on pre-existing Copute code.
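A minimal sketch of what that translation could look like as compilable Scala today (my reading of it; the accessor names come from the example above):

import scala.language.reflectiveCalls

// An anonymous refinement of Tuple3 exposes the names as extra accessors,
// approximating the (0,"Foo",true){def index=_1; def name=_2; def __3=_3} value above.
val named: (Int, String, Boolean) { def index: Int; def name: String } =
  new Tuple3(0, "Foo", true) { def index = _1; def name = _2 }

println(named.index) // 0
println(named.name)  // Foo
println(named._3)    // true; the unnamed field keeps its positional accessor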

2. First-class disjunctions. Copute allows you to write them, but since
Scala doesn't type-check them, they get subsumed to Any for now. It is
possible I might try to find a way to emulate them in current Scala, but
it appears to be a can of worms with corner cases.

3. Traits are called INTERFACE (and there is a separate MIXIN syntax),
because they do not contain non-static implementation. This restraint and
optimization is allowed because Copute is supporting pure functional
(immutable, referentially transparent) programming only. For mutable
coding, use Scala. Note that pure functional programming doesn't mean
everything is a val, because the internals of a function are free to use
mutable algorithms and still remain safe for multi-threaded concurrency.

4. A concise syntax to write higher-kinded types for one special case.
Just use the keyword "Sub" anywhere in the interface, mixin, or class,
and this is converted to Scala e.g. as follows for Applicative:

trait Applicative[+Sub[A] <: Applicative[Sub,A], +A] { ...

5. A more concise way to write typeclasses, one that integrates well with
subtyping. Before I describe the syntax and translation to Scala: this
feature, as well as #4 above, is primarily motivated by the desire to
support category theory (a la Scalaz) in a more intuitive, less verbose
syntax with less tsuris.

Basically I understand a typeclass to be a structural type, such that code
can reference a member (e.g. a method) of that structural type on any
instance of a type which implements that structure. Some incorrectly refer
to this as ad-hoc polymorphism, but I understand the latter term to be a
more general concept combining structural typing with function
overloading.

Daniel Spiewak wrote the first example Scala typeclass I encountered:

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors

trait Monad[+M[_]] {
  def unit[A](a: A): M[A]
  def bind[A, B](m: M[A])(f: A => M[B]): M[B]
}

implicit object ThingMonad extends Monad[Thing] {
  def unit[A](a: A) = Thing(a)
  def bind[A, B](thing: Thing[A])(f: A => Thing[B]) = thing bind f
}

implicit object OptionMonad extends Monad[Option] {
  def unit[A](a: A) = Some(a)
  def bind[A, B](opt: Option[A])(f: A => Option[B]) = opt bind f
}

sealed trait Option[+A] {
  def bind[B](f: A => Option[B]): Option[B]
}

case class Some[+A](value: A) extends Option[A] {
  def bind[B](f: A => Option[B]) = f(value)
}

case object None extends Option[Nothing] {
  def bind[B](f: Nothing => Option[B]) = None
}

def sequence[M[_], A](ms: List[M[A]])(implicit tc: Monad[M]) = {
  ms.foldRight(tc.unit(List[A]())) { (m, acc) =>
    tc.bind(m) { a => tc.bind(acc) { tail => tc.unit(a :: tail) } }
  }
}

For comparison, I will show the proposed equivalent Copute code, which you
will note is drastically more concise as well as unified with subtyping:

INTERFACE Monad[A] {
  STATIC unit: A Sub[A]
  bind[B]: (A Sub[B]) Sub[B]
}

INTERFACE Option[A]: Monad[A] {
  Monad.unit(a) = Some(a)
}

CLASS Some[A](value: A): Option[A] {
  bind(f) = f(value)
}

OBJECT None: Option[All] {
  bind(_) = None
}

OBJECT sequence {
  apply[M[A]: Monad[A],A](ms): List[M[A]] M[List[A]] {
    ms.foldRight(M.unit(List[A]()))( (m, acc) {
      m.bind( (a) {acc.bind( (tail) {M.unit(a :: tail)} )} )
    })
  }
}

The proposed translation to Scala:

trait Monad[+Sub[A] <: Monad[Sub,A], +A] {
  def bind[B](_1: (A) => Sub[B]): Sub[B]
}

trait Option[+A] extends Monad[Option,A]

case class Some[A](value: A) extends Option[A] {
  def bind[B](f: (A) => Option[B]): Option[B] = f(value)
}

case object None extends Option[Nothing] with staticOption {
  def bind[B](_1: (Nothing) => Option[B]): Option[B] = None
}

trait staticMonad[+Sub[Any] <: Monad[Sub,Any]] {
  def unit[A](_1: A): Sub[A]
}

trait staticOption extends staticMonad[Option] {
  def unit[A](a: A): Option[A] = Some(a)
}

object Option extends staticOption
object Some extends staticOption

object Implicits {
  implicit object OptionImplicit extends staticOption
}
import Implicits._

object sequence {
  def apply[M[A] <: Monad[M,A],A](ms: List[M[A]])(implicit tc: staticMonad[M]): M[List[A]] = {
    ms.foldRight(tc.unit(List[A]()))( (m, acc) =>
      m.bind( (a) => acc.bind( (tail) => tc.unit(a :: tail) ) )
    )
  }
}

(Tangentially, if I can figure out how to elegantly support Scala's =>
anonymous function syntax in Copute's LL(k) grammar, then I will.)

Note that the subtypes which did not implement a STATIC extend the same
staticOption, and they don't create conflicting implicits.

Note how the Copute compiler must be smart enough to see that the closest
common supertype of Some[A] and None that is a Monad is Option[A], so that
it subsumes Sub[A] to Option[A], since Sub[A] can't be both Some[A] and
None when they share a common supertype which is a Monad.

Note that Copute STATIC methods that are implemented in the INTERFACE
where they are first declared can go directly in the companion Scala
object (same name as the trait), without the above-named static* hierarchy.

Also note that where Copute STATIC methods are not all implemented in the
same subtype, the static* hierarchy will have Sub2 etc. for each fork;
e.g. assuming bind were a STATIC, then:

trait staticMonad[+Sub[Any] <: staticMonad[Sub,Sub2,Any], +Sub2[Any] <: staticMonad[Sub,Sub2,Any]] {
  def unit[A](_1: A): Sub[A]
  def bind[A,B](_1: (A) => Sub2[B]): Sub2[B]
}

trait staticOption[+Sub2[Any] <: staticOption[Sub2,Any]] extends staticMonad[Option,Sub2,Any] {
  def unit[A](a: A): Option[A] = Some(a)
}
...

I believe I have worked out all the issues with this, yet if anyone sees a
corner case or flaw, please tell me.

Shelby

unread,
Sep 11, 2013, 9:40:36 PM9/11/13
to scala-...@googlegroups.com, she...@coolpage.com
On Thursday, September 12, 2013 7:52:12 AM UTC+8, Ðavîd L wrote:
On Wed, Sep 11, 2013 at 5:32 PM, Shelby Moore <she...@coolpage.com> wrote:

Could there be rough acceptance (if not outright enthusiasm) that a
potentially worthwhile goal (if there are no trade-offs or costs) is for
Scala to gain language market share, so that we are more often offered
employment coding in Scala instead of in Java, C#, or another mainstream
language we like less than Scala? And to gain economy of scale for pushing
for changes to the JVM, or for rolling our own VM in the future.

Well it seems to be an admirable goal, but I'm surprised you seem to be focused so strongly on the syntax itself.

In my experience and from reading Stack Overflow and other things, I think more programmers are hindered by either:

a) implementation issues (e.g. the proper order to initialize fields in a class, what it means to map over a set, 

I am addressing these by supporting only pure functional programming, which enables eliminating possibilities and syntax that complicate things. Also by simplifying the typeclass as shown in my prior message, which I need in order to properly do a standard library based on category theory (functor, applicative, monad), which will be very, very simple to understand, read (unlike Scalaz), and use. I think it will be much superior in every way (easier, clearer, fewer corner cases) to the ad-hoc implicits (i.e. That) paradigm of the current std library.
 
performance surprises due to boxing, etc, etc, etc) or, 

I don't consider this a beginner-level adoption issue. It is encountered after getting an application running. If I become invested in this, I will of course be working to solve these issues.
 
b) concept issues, e.g. monads, etc or,

Now you see I am addressing this in spades.
 
c) tooling support to some extent

It is much better than it was. And yes, I want to address this too if ever I am very invested.
 
So I don't think the syntax itself is reducing Scala adoption --

Strongly disagree. I plan to show this in the market. Have you actually polled people like me coming from C-like languages and trying to learn Scala, especially given that I did not even know Java very well (I learned Scala directly)? My experience was a boat-load of unnecessary head-scratching moments that slowed me down. And if I had not been as determined as I am to finally have the perfect language I want, i.e. if I was just being rational, I would have dumped Scala back in 2010 and settled on the best Java library I could find (which is precisely what most people are doing). And from what I gather, Gavin King thinks Scala is wrong about this, and he has a track record of knowing what mainstream Java programmers want. I think he takes it a little bit too far with long keywords, e.g. subtypeof, etc. The proposed Copute syntax tries to be very concise while avoiding symbols like the plague. I discussed Copute's fold comprehension IN-FROM-DO and the IF-IS-DO-ELSE unification of if-else and match-case in the thread mentioned in the OP. These are the sorts of syntax I think can really drive the "that's cool, I get it" reaction from the mainstream.
 
and on the contrary I think it is one of the main *attractions* to many developers, who are enthusiastic about the much cleaner-than-Java look and reduction in boilerplate.

Indeed, I am reducing Scala's boilerplate ;)
 
Granted, there are always corner cases like type-lambdas and such that are pretty ugly, but you may go years as a professional Scala developer without writing one of those.

This is why I moved the type of a function's arguments to after the parenthesized list of arguments.
 

Shelby

unread,
Sep 11, 2013, 10:04:36 PM9/11/13
to scala-...@googlegroups.com, she...@coolpage.com
On Thursday, September 12, 2013 7:52:12 AM UTC+8, Ðavîd L wrote: 
So I don't think the syntax itself is reducing Scala adoption -- and on the contrary I think it is one of the main *attractions* to many developers, who are enthusiastic about the much cleaner-than-Java look and reduction in boilerplate. Granted, there are always corner cases like type-lambdas and such that are pretty ugly, but you may go years as a professional Scala developer without writing one of those.

My aim is that Copute should be no less concise than, and often cleaner than, Scala. I think you can see that in the code I provided upthread.

And I aim to reduce the head-scratching moments for the beginner and aid rapid learning, while retaining all of what you say is an advantage. No trade-offs.

I forgot to thank you for the feedback. 

Suminda Dharmasena

unread,
Sep 12, 2013, 1:33:17 AM9/12/13
to scala-...@googlegroups.com, she...@coolpage.com
BTW, is this the repo for Copute: https://code.google.com/p/copute/

There is no code there. Perhaps you can put the code on GitHub or somewhere to start with. As with any open source project you have to expect about 90% will just use it, 9% will give feedback and 1% will contribute. So don't expect a following from the onset.

The ideal way to start would be to have a source-to-source compiler to a more established language, perhaps Scala, where the output code is also clean and human-readable. Also perhaps retain the comments in the Copute source for better comprehension, plus some additional comments on the transforms where they are not readily clear. Perhaps this would also give you feedback from the user base of the other language.

Justin du coeur

unread,
Sep 12, 2013, 1:07:49 PM9/12/13
to Shelby Moore III, scala-debate
Before I get into any specific comments, an important use-case question that I'm not clear on:

On Wed, Sep 11, 2013 at 8:47 PM, Shelby Moore <she...@coolpage.com> wrote:
Here are a few (not a comprehensive list of) highlights of features that add
capabilities to Scala, just to see if I can get any feedback or interest.

Is the intent that Copute be used *instead* of Scala, or *alongside* Scala?  That is, if I'm building a full-scale application, would I expect to use just Copute, or a mix of Copute and Scala?  My reactions are somewhat different, depending on the intent here.  (I'm not judging here either way, but it's crucial to grok what you're trying to accomplish...) 

Justin du coeur

unread,
Sep 12, 2013, 4:46:34 PM9/12/13
to Shelby Moore III, scala-debate
Some initial thoughts, having chewed on this for a few hours:

On Wed, Sep 11, 2013 at 8:47 PM, Shelby Moore <she...@coolpage.com> wrote:
1. Named tuples in Copute are concise

Interesting, although I'm not seeing the motivation offhand.  What's the use case?  This isn't something I'd felt the lack of.
 
2. First-class disjunctions. Copute allows you to write them, but since
Scala doesn't type-check them, they get subsumed to Any for now. It is
possible I might try to find a way to emulate them in current Scala, but
it appears to be a can of worms with corner cases.

This one makes me nervous, and leads to a lot of questions.

How does Copute deal with type-checking of the disjunctions?  How do I assign to them; more importantly, are *uses* of the disjunctions fully type-checked?  I'd consider them entirely off-limits if uses aren't fully checked -- indeed, I'd probably have to insist that all uses have a compiler warning if they aren't.

This is part of why I'm wondering about how this works with Scala, BTW.  If Copute is intended to be used just on its own, and does enough internal type-checking, I might be willing to countenance these; if folks are going to be using Copute side-by-side with Scala, I'd probably say that this is a show-stopper, that would probably prevent me from using the language for any serious work.

Even if working with Scala isn't the plan, it's still the JVM, so people *will* try to use it with other languages, and the type signatures are going to leak.  So I'd encourage you not to turn these into Any (which is a gigantic red flag for me), but instead into some sort of wrapper that can enforce at least *some* good behaviour.  For example, a disjoint type could become a wrapper around a hidden Any, and at least enforce at runtime that (a) only the expected types can be assigned to it, and (b) it could only be used with a PartialFunction, with some assertions to check that the function isDefinedAt all of the types contained in the wrapper.
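A rough sketch of the sort of wrapper being described (hypothetical names, nothing standard; it checks the PartialFunction at runtime rather than statically):

// A disjoint value hides its Any and can only be consumed through a
// PartialFunction, with a runtime check that the value is actually handled.
final class Disjoint2[A, B] private (private val value: Any) {
  def fold[R](pf: PartialFunction[Any, R]): R = {
    require(pf.isDefinedAt(value), s"unhandled alternative: ${value.getClass}")
    pf(value)
  }
}

object Disjoint2 {
  def first[A, B](a: A): Disjoint2[A, B]  = new Disjoint2[A, B](a)
  def second[A, B](b: B): Disjoint2[A, B] = new Disjoint2[A, B](b)
}

// Usage:
val d: Disjoint2[Int, String] = Disjoint2.first(42)
d.fold {
  case i: Int    => println(s"int $i")
  case s: String => println(s"string $s")
}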

(Which, now that I think on it, could probably be implemented in Scala today with appropriate use of macros.  I wonder if anybody's already built that?)
 
3. Traits are called INTERFACE (and there is a separate MIXIN syntax),
because they do not contain non-static implementation. This restraint and
optimization is allowed because Copute is supporting pure functional
(immutable, referentially transparent) programming only.

I've been staring at this for hours now, and I'm not getting it.  Why is this a good thing?  I mean, the power of traits is one of the things I love about Scala, specifically because years of working with Java led me to *despise* pure interfaces.  They sound good, sure, but in practice I've had too many times that I've built an interface that seemed like it was pure, only to gradually find that there were several methods I wanted that really, deeply, belonged on the interface.  It always led to annoying duplicate code in the implementations.

So I guess the question is -- why would I ever want an INTERFACE instead of a MIXIN?  I *hate* interfaces with a burning passion, mostly due to Java experience.


Which reminds me: the all-caps thing just plain bugs me.  This isn't anything rational -- it's just many years of Internetting, teaching me that all-caps means YELLING.  So on a purely aesthetic level, I look at this code and it feels like it's shouting at me, which gives me an instinctive negative gut reaction.

Minor detail, but since your stated objective is easy adoption it may be worth thinking about -- I literally have a very slight "Ow: take it away" reaction to your examples.
 
4. A concise syntax to write higher-kinded types for one special case.
Just use the keyword "Sub" anywhere in the interface, mixin, or class,
and this is converted to Scala e.g. as follows for Applicative:

trait Applicative[+Sub[A] <: Applicative[Sub,A], +A] { ...

So I have to point out, since we've talked a lot about things that people find head-scratchingly confusing, that this *totally* falls into that category for me.  I haven't yet managed to mentally untangle your example.  I suspect that it would come with time, but my initial reaction is "mysterious gobbledygook".

None of which is to say that it's bad, but to make the point that this sort of head-scratching is *very* much in the eye of the beholder, and that deep power usually winds up causing some...

Shelby

unread,
Sep 12, 2013, 5:07:16 PM9/12/13
to scala-...@googlegroups.com, Shelby Moore III
I am currently thinking alongside. Not 100% of our code can be purely functional, and I don't yet see a compelling (enough to justify divergence) improvement on the mutable syntax in Scala. Also, Scala is more general, and there will be cases where you can express code more optimally in Scala.

I am hopeful that, employing libraries written in Scala, some applications could be coded entirely in Copute.

Shelby

unread,
Sep 12, 2013, 5:55:29 PM9/12/13
to scala-...@googlegroups.com, Shelby Moore III
On Friday, September 13, 2013 4:46:34 AM UTC+8, Justin du Coeur wrote:
Some initial thoughts, having chewed on this for a few hours:

On Wed, Sep 11, 2013 at 8:47 PM, Shelby Moore <she...@coolpage.com> wrote:
1. Named tuples in Copute are concise

Interesting, although I'm not seeing the motivation offhand.  What's the use case?  This isn't something I'd felt the lack of.

For example, supporting naming for extractors:


Also, some programmers prefer names instead of field indexes to make code more self-documenting.
 
2. First-class disjunctions. Copute allows you to write them, but since
Scala doesn't type-check them, they get subsumed to Any for now. It is
possible I might try to find a way to emulate them in current Scala, but
it appears to be a can of worms with corner cases.

This one makes me nervous, and leads to a lot of questions.

How does Copute deal with type-checking of the disjunctions?  How do I assign to them; more importantly, are *uses* of the disjunctions fully type-checked?  I'd consider them entirely off-limits if uses aren't fully checked -- indeed, I'd probably have to insist that all uses have a compiler warning if they aren't.

They are type-checked as subsumed to Any, which is the same as if you had written Any.

Indeed you should not expect the types to be checked, because for now there is no such capability in Scala, and Copute outputs Scala and doesn't include a typing engine.

I am thinking it is more efficient (both in man-hours to implement and in runtime speed) to try to get this capability added to Scala than to implement a complete typing engine for Copute.

This is part of why I'm wondering about how this works with Scala, BTW.  If Copute is intended to be used just on its own, and does enough internal type-checking, I might be willing to countenance these; if folks are going to be using Copute side-by-side with Scala, I'd probably say that this is a show-stopper, that would probably prevent me from using the language for any serious work.

How would the subsumption to Any stop you from using this with Scala? Scala subsumes to Any also.

Even if working with Scala isn't the plan, it's still the JVM, so people *will* try to use it with other languages, and the type signatures are going to leak.  So I'd encourage you not to turn these into Any (which is a gigantic red flag for me), but instead into some sort of wrapper that can enforce at least *some* good behaviour.

How is subsumption to Any not a reasonable behavior? You are forced to specialize the type with match-case before you can do anything with it besides pass it along as an Any.
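To make that concrete, here is what the subsumption looks like in plain Scala today (nothing Copute-specific):

// The least upper bound of Int and String is Any, so the result is typed Any...
def pick(flag: Boolean): Any = if (flag) 42 else "forty-two"

// ...and the only way to recover anything useful is a runtime type-case.
pick(true) match {
  case n: Int    => println(n + 1)
  case s: String => println(s.length)
  case other     => println(s"unexpected: $other") // nothing stops this arm from being needed
}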
 
 For example, a disjoint type could become a wrapper around a hidden Any, and at least enforce at runtime that (a) only the expected types can be assigned to it, and (b) it could only be used with a PartialFunction, with some assertions to check that the function isDefinedAt all of the types contained in the wrapper.

Runtime typing would be a much weaker guarantee, opening holes.

The point is that you can write the types now, even though they aren't checked; if they are ever checked later, you can expect to find incongruences in your code that you did not know existed. This is no worse than the Scala you write now, where there may be incongruences you are not aware of, because everything is subsumed to Any.

(Which, now that I think on it, could probably be implemented in Scala today with appropriate use of macros.  I wonder if anybody's already built that?)
 
3. Traits are called INTERFACE (and there is a separate MIXIN syntax),
because they do not contain non-static implementation. This restraint and
optimization is allowed because Copute is supporting pure functional
(immutable, referentially transparent) programming only.

I've been staring at this for hours now, and I'm not getting it.  Why is this a good thing?  I mean, the power of traits is one of the things I love about Scala, specifically because years of working with Java led me to *despise* pure interfaces.  They sound good, sure, but in practice I've had too many times that I've built an interface that seemed like it was pure, only to gradually find that there were several methods I wanted that really, deeply, belonged on the interface.  It always led to annoying duplicate code in the implementations.

So I guess the question is -- why would I ever want an INTERFACE instead of a MIXIN?  I *hate* interfaces with a burning passion, mostly due to Java experience.

I think what you hate about Java is single inheritance (the lack of multiple inheritance and mixins). Thus, I think you are conflating this with the benefits of an interface that contains no implementation.

The benefit of an interface is that implementation (concretion or instance construction) is orthogonal to type injection (i.e. abstraction). Specifically the I and D of the SOLID principles:



Did you not get that the key benefit of that code I presented is that it removes the boilerplate for encoding typeclasses and unifies typeclasses with subtyping? I suppose you are going to need to try out a std library built on top of this to fully appreciate the benefits as compared to the implicits (That) paradigm employed for the Scala std library. Odersky mentioned[1] that he was surprised that they did not end up employing higher-kinded types in the std library. Well, the reason is that the std library violates category theory[2].

[1] "Fighting Bit Rot with Types" Odersky & Moors
 
Which reminds me: the all-caps thing just plain bugs me.  This isn't anything rational -- it's just many years of Internetting, teaching me that all-caps means YELLING.  So on a purely aesthetic level, I look at this code and it feels like it's shouting at me, which gives me an instinctive negative gut reaction.

The all-caps is much less annoying in the Eclipse IDE, with its monospace font and purple syntax highlighting on the keywords. I agree it looks horrendous as displayed in this variable-width font used at Google Groups. I don't know what font you are viewing with in your email client.

The point is that when the code is displayed in a monospace font without syntax highlighting (e.g. in HTML, PDF, Word documents, etc.), the viewer can detect the keywords more rapidly.

For those who hate it, I will definitely try to get a feature into the IDE (and hopefully GitHub too) to toggle the keywords to lowercase. And the all-caps is unnecessary when the keywords are colored.

I am also open to abandoning the all-caps keywords entirely if I get feedback from all sectors asking me to do so.

Minor detail, but since your stated objective is easy adoption it may be worth thinking about -- I literally have a very slight "Ow: take it away" reaction to your examples.

I had the same "ouch" reaction when I saw it in the variable-width font at the Google Groups website UI. It caused me to doubt my original decision. But again, I think most code will be displayed in a monospace font, either syntax-highlighted or not. It is the latter "not" case that I am trying to address, e.g. when one quickly types some code in a non-IDE text editor.
 
 
4. A concise syntax to write higher-kinded types for one special case.
Just use the keyword "Sub" anywhere in the interface, mixin, or class,
and this is converted to Scala e.g. as follows for Applicative:

trait Applicative[+Sub[A] <: Applicative[Sub,A], +A] { ...

So I have to point out, since we've talked a lot about things that people find head-scratchingly confusing, that this *totally* falls into that category for me.  I haven't yet managed to mentally untangle your example.  I suspect that it would come with time, but my initial reaction is "mysterious gobbledygook".

None of which is to say that it's bad, but to make the point that this sort of head-scratching is *very* much in the eye of the beholder, and that deep power usually winds up causing some...

Indeed, now you understand what the person trying to learn Scala feels when they see all these constructs they don't understand, with no quick-reference card from which to quickly assimilate them.

Copute hides that from you. You will only see it if you are looking at the Scala code Copute generates. That is a higher-kinded type. It says that the Sub(type) must be a subtype of the interface it implements, and that the Sub(type) is known to the interface it implements.

That is absolutely necessary if you want to unify subtyping and typeclasses, as I've shown.

Shelby

unread,
Sep 12, 2013, 7:59:59 PM9/12/13
to scala-...@googlegroups.com, she...@coolpage.com
(originally written Mon 7 Feb 2011 - 4:38)

Every 10 years we need a new programming language paradigm


In 1975 I started using “structured programming” techniques in assembly language, and became a true believer.
In 1983 a new era dawned for me as I started doing some C programming on Unix and MS-DOS. For the next five years, I would be programming mixed C/assembly systems running on a variety of platforms including microcoded bit-slice graphics processors, PCs, 68K systems, and mainframes. For the five years after that, I programmed almost exclusively in C on Unix, MS-DOS, and Windows.
Another new era began in 1994 when I started doing object-oriented programming in C++ on Windows. I fell in love with OO, but C++ I wasn’t so sure about. Five years later I came across the Eiffel language, and my feelings for C++ quickly spiraled toward “contempt.”
The following year, 2000, I made the switch to Java and I’ve been working in Java ever since.

About now, it is time for the one that follows the Java (virtual machine, garbage collection, no pointers, everything is an object) paradigm.

Kevin Wright

unread,
Sep 12, 2013, 8:12:26 PM9/12/13
to Shelby, scala-debate

The one that follows is the one that preceded... ML, Lisp and Smalltalk all predate 1975.

Everything to *really* be an object (as in Smalltalk). None of this rubbish with primitives or pseudo-types like "void".

Declarative

A strong type system

Meta programming

The VM and garbage collection can stay, though I'd like to see stack allocation available too.


Simon Schäfer

unread,
Sep 12, 2013, 8:19:27 PM9/12/13
to scala-...@googlegroups.com

On 09/11/2013 11:32 PM, Shelby Moore wrote:
> [...]
I don't think the choice of language is important for mainstream
programmers, where I count enterprise programmers as mainstream
programmers. Such developers tend to use frameworks the whole day
because they don't spend the whole day writing code; thus they are not
skilled enough _as programmers_ to code large systems completely by
themselves.

Instead, they spend their time convincing customers to buy their
products, supporting them with their problems, and creating (in my
opinion) boring frontends for their systems while wasting their time
trying to understand the overly complex frameworks they have to use.

There is no choice of a language; there is just the employer or the
customer who tells you which ecosystem, which framework or which library to
use and how the product should look. Languages are chosen because
they have an ecosystem that fulfills needs.

Scala is chosen by developers thinking that they can do their job with
it in the best possible way (and probably more so than with other languages,
because of its insane awesomeness). And from all that I have seen in
the last years, the majority of Scala programmers aren't exactly
mainstream programmers.

Thus, while again I can only speak for myself (though I don't think I'm
alone), I can only urge you to please reduce the number of posts you
throw at the lists, as you did lately - they are just annoying and don't
contribute to Scala's further development as much as you may think. The
majority of Scala developers don't share your problems understanding its
syntax, your way of solving problems, or your problems with the colleagues
you work with.

Feel free to open another list for Copute after you have brought your points
to an end, and I wish you the best in finding some contributors. But don't
expect there will be many.

Scala is chosen because it is enjoyable to write code in that language, but
mainstream programmers will not use Scala because the language is cool -
they will use it because they are told to.

Simon

Shelby

unread,
Sep 12, 2013, 8:32:00 PM9/12/13
to scala-...@googlegroups.com, she...@coolpage.com
(originally written Sat 11 Jun 2011 - 22:36)

Noting the duplicative syntax of Scala in the numerous ways to write a function.

def len(s: String) = s.length
def len(s: String) = {s.length}
def len(s: String): Int = {return s.length}
def len: String => Int = s => s.length
def len: String => Int = _.length
def len = (s: String) => s.length
object len {
   def apply(s: String) = s.length
}

That is a fair amount (of convoluted differences) for a newbie to memorize if they want to be able to read Scala code. And that isn't even anywhere near all of Scala.

Ken Scambler

unread,
Sep 12, 2013, 8:50:22 PM9/12/13
to Shelby, scala-debate
That's not how people learn.

These examples are the product of 4 easy rules:
1) Return types are optional
2) Functions can be values
3) _ is a neat shortcut for lambda functions
4) The apply() method gives your object function-like syntax

Nobody individually rote-learns the Cartesian product of all language rules; this is a non sequitur.




Shelby

unread,
Sep 12, 2013, 9:08:20 PM9/12/13
to scala-...@googlegroups.com, Shelby
5) def can declare a function with parentheses, be assigned a non-function value (which declares a function that has no input parameters), or be assigned a value which is a function.
6) return can be used where it is entirely unnecessary (and then the return type is no longer optional). (Note that in Copute, return is only allowed to short-circuit.)
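A small illustration of item 6 in plain Scala:

def len1(s: String) = s.length                  // result type inferred
// def bad(s: String) = { return s.length }     // won't compile: a method containing `return` needs an explicit result type
def len2(s: String): Int = { return s.length }  // compiles, though the `return` is entirely unnecessary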

So we had to memorize only one item less than my list had, and note that some of those numbered items contain multiple points.

Shelby

unread,
Sep 12, 2013, 9:26:13 PM9/12/13
to scala-...@googlegroups.com, she...@coolpage.com
I went searching my old writings for this explanation of higher-kinded types.

(originally written on Sun 12 Jun 2011 - 15:14)

Higher Kinded type abstraction (a/k/a generics or polymorphism)

Higher Kinded types are actually quite easy to understand.

See the illustrations for section 5, on pages 6 and 7:


Value:  5      [1,2,3]      abstract     abstract         abstract
Type:   Int    List[Int]    List[T]      Pair[U, V]       Iterable[T, Sub[_]]
Kind:   *      *            * -> *       * -> * -> *      * -> (* -> *) -> *

Concrete (a/k/a "proper") types (e.g. Int, List[Int]) can be instantiated (a/k/a "value"). Abstract types (e.g. List[T], Pair[U, V]) are type constructors, i.e. they can construct a concrete type when their type parameters (e.g. T, U, V) are specified.

A higher kinded type system allows the type parameter for an abstract type to be an abstract type, instead of requiring it be a concrete type. Without higher-kinded typing we can do (pseudocode follows):

CLASS Iterable[T, Sub]
{
  filter : T -> Bool -> Sub
}

CLASS ListInt inherits Iterable[Int, ListInt] {}

Or:

class Iterable[T]
{
  filter : T -> Bool -> Iterable[T]
}

class List[T] inherits Iterable[T]
{
  filter : T -> Bool -> List[T]  // an override
}

Neither of the above enforces (at compile time, i.e. static typing) the return type of filter for subtypes of Iterable.

Whereas with higher-kinded types we can keep the SPOT (single-point-of-truth) in the base Iterable class:

class Iterable[T, Sub[_]]
{
  filter : T -> Bool -> Sub[T]
}

class List[T] inherits Iterable[T, List] {}

Note that higher-kinded typing can be performed with C++ templates, but this is fugly and verbose, and remember that templates are reified at compile time, which AFAIR has some drawbacks, one of which relates to the Expression Problem.
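For reference, the SPOT version reads like this in actual Scala syntax (a sketch; MyIterable/MyList are made-up names so as not to clash with the standard library):

// The higher-kinded parameter Sub[_] lets the base trait fix filter's return type once.
trait MyIterable[T, Sub[_]] {
  def filter(p: T => Boolean): Sub[T]
}

final case class MyList[T](elems: List[T]) extends MyIterable[T, MyList] {
  def filter(p: T => Boolean): MyList[T] = MyList(elems.filter(p))
}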

Shelby

unread,
Sep 12, 2013, 10:08:48 PM9/12/13
to scala-...@googlegroups.com, Shelby
Yeah stack allocation would be great for optimizing speed in some cases. We can drop into JNI and use C for that. Why would we want to do that in a high-level language?

And I am betting immutability (referential transparency) is going to play a much larger role, as it optimizes many facets including parallelism and concurrency (two distinct concepts).

I looked briefly at Standard ML and OCaml. Their syntax is even more unfamiliar (than Scala's) to those coming from the C-like trajectory that I had quoted. For example, I guess you can do generics using some combination of ML signatures and functors, but I've already forgotten. I don't know if ML has definition-site variance? So I am agreeing with you, yet saying Scala is much more reachable for the mainstream (and also due to compatibility with Java and the JVM). That mass of mainstream programmers wasn't ready for ML in 1980, and ML still isn't ready for them. Now they are ready to move up a level in abstractions, but the steeper the ascent, the fewer who will make that climb soon.

If everything is going to be an object with a meaningful type, then how does it make any sense that Scala chose not to implement first-class disjunctions? It really perplexes me how Scala ended up with such a large hole and even went to great lengths to work around it with automatic subsumption to Any. Is there some great advantage to subsumption to Any and not supporting first-class disjunctions? Subsumption to Any causes real problems, e.g. implementing equals properly:
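The classic shape of that problem, as a stand-in illustration (plain Scala; Meters is just a made-up example class):

// equals takes Any, so the argument arrives untyped and must be
// re-discriminated at runtime; forget the type-case and it silently misbehaves.
class Meters(val value: Double) {
  override def equals(other: Any): Boolean = other match {
    case that: Meters => value == that.value
    case _            => false
  }
  override def hashCode: Int = value.hashCode
}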


Regarding metaprogramming, I haven't had time to research the proposals for Scala macros. Whatever they do, I hope there is an option to see the generated Scala code. Shouldn't macros just be DSLs? Shouldn't Copute be a DSL? These are issues I need to research more. I would hope to find opportunities for unification. Perhaps that tangential discussion doesn't belong in this thread and will just distract my focus from completing Copute.

My definition of declarative is:


> The declarative property is where there can exist only one possible set of
> statements that can express each specific modular semantic.
>
> The imperative property[3] is the dual, where semantics are inconsistent under
> composition and/or can be expressed with variations of sets of statements.

This definition of declarative is distinctively local in semantic scope, meaning that it requires that a modular semantic maintain its consistent meaning regardless of where and how it is instantiated and employed in global scope. Thus each declarative modular semantic should be intrinsically orthogonal to all possible others, and not an impossible (due to incompleteness theorems) global algorithm or model for witnessing consistency, which is also the point of “More Is Not Always Better” by Robert Harper, Professor of Computer Science at Carnegie Mellon University, one of the designers of Standard ML.

Examples of these modular declarative semantics include category theory functors (e.g. the Applicative), nominal typing, namespaces, named fields, and, w.r.t. the operational level of semantics, pure functional programming.

Thus well designed declarative languages can more clearly express meaning, albeit with some loss of generality in what can be expressed, yet a gain in what can be expressed with intrinsic consistency.

An example of the aforementioned definition... (cf. the aforementioned URL to read more)

Shelby

unread,
Sep 12, 2013, 11:14:33 PM9/12/13
to scala-...@googlegroups.com
On Friday, September 13, 2013 8:19:27 AM UTC+8, Simon Schäfer wrote:

On 09/11/2013 11:32 PM, Shelby Moore wrote: 
> [...] 
I don't think the choice of language is important for mainstream
programmers, where I count enterprise programmers as mainstream
programmers. Such developers tend to use frameworks the whole day
because they don't spend the whole day writing code; thus they are not
skilled enough _as programmers_ to code large systems completely by
themselves.

Instead, they spend their time convincing customers to buy their
products, supporting them with their problems, and creating (in my
opinion) boring frontends for their systems while wasting their time
trying to understand the overly complex frameworks they have to use.

There is no choice of a language; there is just the employer or the
customer who tells you which ecosystem, which framework or which library to
use and how the product should look. Languages are chosen because
they have an ecosystem that fulfills needs.

Agreed the described type of "salesman" programmer does exist. The Camel Has Two Humps research on programmer aptitude supports this.

Disagree that all programmers fall into your binary taxonomy, DumbWorthlessLanguageAgnostic and AwesomeScalaIsPerfect.

I very much disagree with your concept that the world is composed of people who don't really program, and the rest who use some awesome language such as Scala (and I assume you would include Haskell and ML), while per your taxonomy we must necessarily exclude the mainstream languages C++, C#, Java, Python, and JavaScript. I almost want to believe you didn't realize what you wrote. The assertion is so ludicrously detached from reality.

Do you really believe the Scala world is so exceptional that you should just ignore attempts to add DSLs (e.g. Copute) to Scala that can add to the ecosystem?

Agreed, programmers choose languages to get a job done. Libraries are important. Also, finding others with the necessary skillset is important too. Here follows an example of me recently trying to convince some expert C++ programmers to learn and use Scala for an upstart project that I was interested in contributing programming to:

(discussion continues to the next page of that thread)

What you will find is that if most programmers don't know Scala, it is impossible to convince anyone to use Scala for a project.

You can't ignore the rest of the world, unless you just want a toy language, regardless of how capable the language is.

Apparently ML has been an awesome language for decades and it is still a toy, because there is no way I can use it commercially; it doesn't have enough penetration.


Scala is chosen by developers thinking that they can do their job with
it in the best possible way (and probably more so than with other languages,
because of its insane awesomeness). And from all that I have seen in
the last years, the majority of Scala programmers aren't exactly
mainstream programmers.

I like Scala too, for that reason. Do you have any technical reason why what I have shown for Copute can't help and improve the ecosystem?

Are you implying that the technical issues I have presented are not issues and that no one in the Scala world needs them?

(I hope you will say that; then it will be very instructive to you when you later realize you were blinded by the stupendous arrogance you are displaying in this post.)

Thus, while again I can only speak for myself (though I don't think I'm
alone), I can only urge you to please reduce the number of posts you
throw at the lists, as you did lately - they are just annoying and don't
contribute to Scala's further development as much as you may think.

I think your email client can filter on the thread subject line.

Are you sure that no one here has any interest in the issues I am discussing?

If you can convince me of that, then I would agree it would be a waste of my time to post here.
 
The majority of Scala developers don't share your problems understanding its
syntax, your way of solving problems, or your problems with the colleagues
you work with.

Do you think Scalaz is acceptable for a category theory library solution?

Do you think it doesn't matter if it is or not, because the current Scala std library is good enough and you don't think my idea can create something better?

Do you think a majority of Scala developers and all future programmers who might consider Scala agree with you?

Can you say something that will convince me of that?
 
Feel free to open another list for Copute after you have brought your points
to an end, and I wish you the best in finding some contributors. But don't
expect there will be many.

Do you really think I am here to promote by riding on the coattails of Scala's audience? What audacity you possess.

If I succeed in my goals, Scala will be riding on my coattails when it comes to a larger audience, dude. I bother to explain here hoping I might learn something, and also to let people know what I am doing and why, so they won't misconstrue it as a threat to Scala and will hopefully rather see it as one example of building the Scala ecosystem.
 
Scala is chosen because it is enjoyable to write code in that language, but
mainstream programmers will not use Scala because the language is cool -
they will use it because they are told to.

That is the dumbest thing I've read since I've been in this forum these past years.

Programmers are going to choose the language they love. The type of salesman programmer you mentioned upthread is not the type of programmer I have been working with my entire life. And none of the programmers I've worked with in my life know Scala.

May I call you Ostrich?

If you can convince me as I challenged you above, then I will most graciously shut up.

Shelby

unread,
Sep 12, 2013, 11:37:09 PM9/12/13
to scala-...@googlegroups.com, she...@coolpage.com
On Thursday, September 12, 2013 1:33:17 PM UTC+8, Suminda Dharmasena wrote:
BTW, is this the repo for Copute: https://code.google.com/p/copute/

That was back when I was transitioning away from HaXe and trying to decide what I would want in a computer language. John De Goes was kind enough to recommend I look at Scala. I evaluated every language I could find, e.g. Haskell, Scheme, Clojure, Groovy, Python, Ruby, etc, etc.

I started off being open to all ideas, including dynamically typed (i.e. uni-typed to Any) languages, and originally was thinking of allowing both dynamic and static typing.

Early in 2011, I realized Scala had really everything I was looking for, except that while learning about Haskell I came to realize the importance of immutability and referential transparency; then I realized that everything about the way I used to program, including GUIs and for example callbacks, had to change.

Then I realized I had to throw everything in the trash from a library standpoint and start from scratch; even the Scala library has to go, because they didn't use the maximum granularity of interfaces (traits), e.g. the method in Seq that I need for function parameters is polluted by the paradigms that conflict with functor as a base for a category theory library.

Scalaz gets around this by using implicits and structural typing, but this is intrinsically incongruent with subtyping, i.e. Scala is not Haskell. Scala is an inductive language with Top as the type that all types are a member of, and it has true subtyping. Haskell is a co-inductive language with Bottom as the type that all types are a member of, and it does not have subtyping (it is an illusion).
 
There is no code there. Perhaps you can put the code on GitHub or somewhere to start with. As with any open source project you have to expect about 90% will just use it, 9% will give feedback and 1% will contribute. So don't expect a following from the onset.

Of course I am going that direction soon.
 
The ideal way to start would be to have a source-to-source compiler to a more established language, perhaps Scala, where the output code is also clean and human-readable. Also perhaps retain the comments in the Copute source for better comprehension, plus some additional comments on the transforms where they are not readily clear. Perhaps this would also give you feedback from the user base of the other language.

Exactly what the current work on the compiler is targeting. 

Suminda Dharmasena

unread,
Sep 12, 2013, 11:52:52 PM9/12/13
to scala-...@googlegroups.com
@Shelby

Some level of critical and constructive aggression, attack and challenge can foster a further flow of feedback and comments. But you always have to keep yourself in check. There is a fine line, and where you cross this boundary changes from person to person and from culture to culture; what is acceptable in one culture may not be acceptable in another. You seem to be flying off the handle in your use of language in the last bit of your previous email. This might prevent continued discussion and the bouncing around of ideas & opinions to get a critical evaluation. Also, you seem to take a win-or-lose stance in an argument, as when a case is made you are willing to "most graciously shut up". Continued discussion and debate, keeping the argument at some level of respect, restraint and refined use of language, may uncover hidden gems of knowledge without anyone having to gracefully shut up.

My humble opinion, if I am to bring my 2c into how you foster and try to stimulate debate.

Shelby

unread,
Sep 13, 2013, 12:02:25 AM9/13/13
to scala-...@googlegroups.com
Because he told me I could not post here. He said no one is interested. He basically told me to shut up. So I am holding him to a higher level of factual proof. All I read to back up his being so arrogant to me is that he thinks all programmers are either stupid morons or users of a non-popular language.

Shelby

unread,
Sep 13, 2013, 12:59:15 AM9/13/13
to scala-...@googlegroups.com, she...@coolpage.com
According to Simon Schäfer's illogic, Jeff Atwood, the programmer who created Stack Overflow, is a dumb slave told what language to use and doesn't really know how to program, because he chose Ruby based on Steve Yegge's claim of being proficient in 3 days after being a Perl programmer for 8 years.


Of course, we went on to build Stack Overflow in Microsoft .NET. That's a big reason it's still as fast as it is. So one of the most frequently asked questions after we announced Discourse was:
Why didn't you build Discourse in .NET, too?

He continues:

Like any pragmatic programmer, I pick the appropriate tool for the job at hand. And as much as I may love .NET, it would be an extraordinarily poor choice for an 100% open source project like Discourse...
...Nobody accepts your patch to a core .NET class library no matter how hard you try. It always feels like you're swimming upstream...
...As I wrote five years ago:
I'm a pragmatist. For now, I choose to live in the Microsoft universe. But that doesn't mean I'm ignorant of how the other half lives. There's always more than one way to do it, and just because I chose one particular way doesn't make it the right way – or even a particularly good way. Choosing to be provincial and insular is a sure-fire path to ignorance. Learn how the other half lives. Get to know some developers who don't live in the exact same world you do. Find out what tools they're using, and why...
...However, I'd also be lying if I didn't mention that I truly believe the sort of project we are building in Discourse does represent most future software. If you squint your eyes a little, I think you can see a future not too far in the distance where .NET is a specialized niche outside the mainstream.
But why Ruby? Well, the short and not very glamorous answer is that I had narrowed it down to either Python or Ruby, and my original co-founder Robin Ward has been building major Rails apps since 2006. So that clinched it.
I've always been a little intrigued by Ruby, mostly because of the absolutely gushing praise Steve Yegge had for the language way back in 2006. I've never forgotten this...
...And he somehow made it all work together so well that you don't even notice that it has all that stuff. I learned Ruby faster than any other language, out of maybe 30 or 40 total; it took me about 3 days before I was more comfortable using Ruby than I was in Perl, after eight years of Perl hacking. It's so consistent that you start being able to guess how things will work, and you're right most of the time. It's beautiful. And fun. And practical...
...And of course the Ruby community is, and always has been, amazing. We never want for great open source gems and great open source contributors...
...Even if done in good will and for the best interests of the project, it's still a little scary to totally change your programming stripes overnight after two decades. I've always believed that great programmers learn to love more than one language and programming environment...

Shelby

unread,
Sep 13, 2013, 1:10:32 AM9/13/13
to scala-...@googlegroups.com, she...@coolpage.com
I have one more retort to my German antagonist, from Eric S. Raymond, the self-claimed 150-170 IQ genius who wrote The Cathedral and the Bazaar and The Art of Unix Programming.

http://esr.ibiblio.org/?p=4901 (National styles in hacking)

Presented for your amusement: Three stereotypical hackers from three different countries, described relative to the American baseline.
 
The German: Methodical, good at details, prone to over-engineering things, careful about tests. Territorial: as a project lead, can get mightily offended if you propose to mess with his orderly orderliness. Good at planned architecture too, but doesn’t deal with novelty well and is easily disoriented by rapidly changing requirements. Rude when cornered. Often wants to run things; just as often it’s unwise to let him.

With that, I think I'd better code rather than waste my time on nonsense. P.S. I have some German ancestry, as well as French, Welsh, and Native American Cherokee. But clearly I am a loud-mouthed American by culture.

Suminda Dharmasena

unread,
Sep 13, 2013, 1:39:26 AM9/13/13
to scala-...@googlegroups.com, she...@coolpage.com
Being too loud-mouthed may not help you. Some rebuffing may be needed in some cases, but a gentler approach can help.

Jason Zaugg

unread,
Sep 13, 2013, 2:34:58 AM9/13/13
to Shelby, scala-debate
On Thu, Sep 12, 2013 at 11:55 PM, Shelby <she...@coolpage.com> wrote:

Even if working with Scala isn't the plan, it's still the JVM, so people *will* try to use it with other languages, and the type signatures are going to leak.  So I'd encourage you not to turn these into Any (which is a gigantic red flag for me), but instead into some sort of wrapper that can enforce at least *some* good behaviour.

How is subsumption to Any not a reasonable behavior? You are forced to specialize the type with match-case before you can do anything with it besides pass it along as an Any.

I haven't been following this too closely, but I'll chime in here. Apologies if I've missed the point.

Previous efforts to support unboxed unions (i.e. without using Either or a hypothetical OneOf3, ..., OneOfN) have run into problems with erasure. We had a compiler around at one stage (caveat: said compiler was cobbled together in a bar by a certain prolific scalac hacker) that would type `if (true) a: A else b: B` as `A|B`. But when it comes time to typecase, you can only do so if the erased types of `A` and `B` are rich enough. So you can't write:

   def foo[A, B](ab: A | B) = ab match { case a: A => ...; case b: B => ... }

You'd need a scheme by which values of these types were at least boxed in `class Union[T](value: Any, tpe: TypeTag[T])` and where pattern matching could use those reified types at runtime to pick the right case.
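Roughly, a sketch of that scheme (illustrative only, nothing that exists in scalac today; it needs scala-reflect on the classpath):

  import scala.reflect.runtime.universe._

  // Box the value together with a reified tag for the otherwise-erased member type.
  final case class Union[T](value: Any, tpe: TypeTag[T])

  object Union {
    def of[T: TypeTag](value: T): Union[T] = Union(value, typeTag[T])
  }

  // A "typecase" can then consult the reified type instead of the erased class.
  def describe(u: Union[_]): String =
    if (u.tpe.tpe <:< typeOf[Int]) "an Int"
    else if (u.tpe.tpe <:< typeOf[String]) "a String"
    else "something else"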

-jason

Suminda Dharmasena

unread,
Sep 13, 2013, 2:49:24 AM9/13/13
to scala-...@googlegroups.com, Shelby
A lightweight Metaobject could do the trick. Metaobjects slow things down though.

Or have I missed something?

Shelby

unread,
Sep 13, 2013, 2:59:56 AM9/13/13
to scala-...@googlegroups.com, Shelby
Thanks. That adds to my knowledge about possible reasons why it hasn't been implemented.

But why does that stop it from being implemented? There are many things that can't be typed due to erasure; e.g. you didn't refuse to implement generics because they are erased. Keeping that ("crazy idea drunken stupor inspiration") implementation for when the types are known would still be quite powerful, just as generics are quite powerful even though they are erased.

In short, why conflate type erasure with typing capabilities? Type erasure has some negative implications, yet it also has some positive implications. I hope we are not saying that we never get first-class unions until we decide to reify typing.

That problem above can be solved with implicits which automatically box in that case:

def foo[A, B](ab: OneOf2[A, B]) = ...

implicit def box[A, B](ab: A | B): OneOf2[A, B] = ab match { case a: A => OneOf2_1[A, B](a); case b: B => OneOf2_2[A, B](b) }

So yeah we lose the first-class and revert to boxing and unboxing in the erased case. A macro could do that automatically or even the compiler.
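For clarity, OneOf2 and friends above are hypothetical names; a minimal sketch of what they could look like:

  // Hypothetical boxed disjunction (sketch only, not an existing library):
  sealed trait OneOf2[+A, +B]
  final case class OneOf2_1[+A, +B](value: A) extends OneOf2[A, B]
  final case class OneOf2_2[+A, +B](value: B) extends OneOf2[A, B]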

Suminda Dharmasena

unread,
Sep 13, 2013, 3:43:54 AM9/13/13
to scala-...@googlegroups.com, Shelby
About the GC point you raise below.

To put it a different way: both GC and ARC help a developer avoid doing explicit memory management. If you were to bet a sizeable amount on either of:
GC remaining mainstream
ARC becoming mainstream

would you put your money only on GC? I would spread my bets. With some R&D, ARC may find applications outside Objective-C / Clang and be perfected. GC could perhaps coexist or might lose prominence.

If you were to future-proof a solution, which route would you take? Purely reliant on GC, or currently reliant on GC but exploring alternatives with emerging technology?

Just trying to bring a different perspective.

Suminda

Shelby

unread,
Sep 13, 2013, 3:46:20 AM9/13/13
to scala-...@googlegroups.com, she...@coolpage.com
On Friday, September 13, 2013 1:39:26 PM UTC+8, Suminda Dharmasena wrote:
Being too loud-mouthed may not help you. Some rebuffing may be needed in some cases, but a gentler approach can help.

When there are people irrationally conspiring to make one shut up, one has to choose either to leave or to pound them into dust (of course without being rude, just by using logic), because they won't stop until you do. Humans are very territorial, and they become irrational and blind to their biases when they feel their territory is threatened. The greatest don't suffer this, and that is why they are hyper-successful:

(Ego is for little people)

I remember how everyone wanted to sweep the following under the rug, but the fact is that the same issues remain true two years later:


> Scala, as a language, has some profoundly interesting ideas in it...
> But it's also a very complex language.
> The number of concepts I had to explain to new members of our team
> for even the simplest usage of a collection was surprising:
> implicit parameters, builder typeclasses, "operator overloading", return type inference,
> etc. etc. Then the particulars: what's a Traversable vs. a TraversableOnce?
> GenTraversable? Iterable? IterableLike? Should they be choosing the most general
> type for parameters, and if so what was that? What was a =:= and where could
> they get one from?

I am proposing to solve that, yet I am told to shut up. Is that what the community wants for a reputation?

Then the following paragraph is nearly exactly what I have been saying (and I formed my opinion independently).

> In addition to the concepts and specific implementations that Scala introduces,
> there is also a cultural layer of what it means to write idiomatic Scala. The
> most vocal — and thus most visible — members of the Scala community at large
> seem to tend either towards the comic buffoonery of attempting to compile their
> Haskell using scalac or towards vigorously and enthusiastically reinventing the
> wheel as a way of exercising concepts they'd been struggling with or curious
> about. As my team navigated these waters, they would occasionally ask things
> like: "So this one guy says the only way to do this is with a bijective map on a
> semi-algebra, whatever the hell that is, and this other guy says to use a
> library which doesn't have docs and didn't exist until last week and that he
> wrote. The first guy and the second guy seem to hate each other. What's the
> Scala way of sending an HTTP request to a server?" We had some patchwork code
> where idioms which had been heartily recommended and then hotly criticized on
> Stack Overflow threads were tried out, but at some point a best practice
> emerged: ignore the community entirely.

And another of my points exactly.

> In hindsight, I definitely
> underestimated both the difficulty and importance of learning (and teaching)
> Scala. Because it's effectively impossible to hire people with prior Scala
> experience (of the hundreds of people we've interviewed perhaps three had Scala
> experience, of those three we hired one), this matters much more than it might
> otherwise. If we take even the strongest of JVM engineers and rush them into
> writing Scala, we increase our maintenance burden with their funky code; if we
> invest heavily in teaching new hires Scala they won't be writing production code
> for a while, increasing our time-to-market. Contrast this with the default for
> the JVM ecosystem: if new hires write Java, they're productive as soon as we can
> get them a keyboard.

I mean, does anyone here even realize all the special cases in something as mundane as constructor parameters, or are you all just already too close to the language to realize what it looks like to a newbie:


Even the following I am proposing to solve, because the only way you solve it is by making Scala so popular and ubiquitous that you much less often need to interoperate with Java:

> we found ourselves having to superimpose four different
> levels of mental model — the Scala we wrote, the Java we didn't write, the
> bytecode it all compiles into, and the actual problem we were writing code to
> solve...
> Even with services that only used Scala libraries, the choice was never between
> Java and Scala; it was between Java and Scala-and-Java.

Kudos, the remainder of Yammer's criticisms have been addressed to some extent, e.g. IDE, build, and Scala version binary compatibility (Martin threw down the gauntlet on that one, I read!). I read that Simon contributed to the IDE work, so praise unto him also. Why he wants to discourage someone from fixing the other issues above, I can only assume is territorial myopia.

> Don't ever use a for-loop. Creating a new object for the loop closure,
> passing it to the iterable, etc., ends up being a forest of invokevirtual calls,
> even for the simple case of iterating over an array. Writing the same code as a
> while-loop or tail recursive call brings it back to simple field access and
> gotos.

Copute will solve that.
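For context, a rough sketch of the contrast Yammer is describing (my illustration; how much HotSpot recovers by inlining will vary):

  val xs = Array(1, 2, 3, 4)

  // The for-loop desugars to xs.foreach(x => ...): a function object, plus a
  // virtual call per element (and the captured var gets boxed into an IntRef).
  var sum1 = 0
  for (x <- xs) sum1 += x

  // The while-loop is plain index arithmetic, local variable access, and gotos.
  var sum2 = 0
  var i = 0
  while (i < xs.length) { sum2 += xs(i); i += 1 }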

> Don't ever use scala.collection.mutable.

Copute will solve that.

> Replacing a
> scala.collection.immutable.HashMap with a java.util.concurrent.ConcurrentHashMap
> in a wrapper also produced a large performance benefit for a strictly read-only
> workload.

Do parallelized (.par) immutables solve this?
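For reference, my understanding is that .par gives data-parallel bulk operations, not a lock-free concurrent map, so it may address a different problem than the per-lookup read cost Yammer measured. A tiny sketch:

  val m = Map("a" -> 1, "b" -> 2, "c" -> 3)
  val total = m.par.values.sum   // the fold is split across a fork-join pool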

> Always use private[this]. Doing so avoids turning simple field access into an
> invokevirtual on generated getters and setters. Generally HotSpot would end up
> inlining these, but inside our inner serialization loop this made a huge
> difference.

Why can't this be an invokeinterface?


I will study this more.
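For my own notes, a minimal illustration of the distinction in Scala 2 (my sketch):

  class Counter {
    private var a = 0        // still gets generated (private) accessor methods
    private[this] var b = 0  // object-private: plain field reads/writes, no accessors

    def bump(): Unit = {
      a += 1  // goes through the generated accessor (usually inlined by HotSpot)
      b += 1  // direct getfield/putfield
    }
  }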

> Avoid closures. Ditching Specs2 for my little JUnit wrapper meant that the
> main test class for one of our projects (~600-700 lines) no longer took three
> minutes to compile or produced 6MB of .class files. It did this by not capturing
> everything as closures. At some point, we stopped seeing lambdas as free and
> started seeing them as syntactic sugar on top of anonymous classes and thus
> acquired the same distaste for them as we did anonymous classes.

It is good to know that closures have a cost. I had been thinking it is better to encourage explicit function parameters.

Shelby

unread,
Sep 13, 2013, 5:25:37 AM9/13/13
to scala-...@googlegroups.com, she...@coolpage.com
On Friday, September 13, 2013 3:46:20 PM UTC+8, Shelby wrote: 
> Avoid closures. Ditching Specs2 for my little JUnit wrapper meant that the
> main test class for one of our projects (~600-700 lines) no longer took three
> minutes to compile or produced 6MB of .class files. It did this by not capturing
> everything as closures. At some point, we stopped seeing lambdas as free and
> started seeing them as syntactic sugar on top of anonymous classes and thus
> acquired the same distaste for them as we did anonymous classes.

It is good to know that closures have a cost. I had been thinking it is better to encourage explicit function parameters.

Perhaps many of you have already read the following; today was my first read:


Scala loses a lot of benefits in this world, because features like closures have overhead on the JVM.  Hopefully when Java adopts closures, this overhead can be alleviated, but it is there right now.

Java 8 has closures. But Scala can't make it the default target, because Dalvik doesn't support them yet.

However, there is a backport to Java 5, 6, and 7, and the author expects the performance to be the same, so perhaps closures were not specially optimized for Java 8?

Shelby

unread,
Sep 13, 2013, 6:12:44 AM9/13/13
to scala-...@googlegroups.com, she...@coolpage.com
Here is an example of why a simplified (and more powerful!) category theory std library would be superior to the baroque std library we have now.

Repeated parameters of type A* arrive backed by an Array[A]. Thus List.apply[A](xs: A*) = xs.toList, i.e. the array is copied into a List, which is a duplicated operation, since the compiler already built the Array argument from the sequence of arguments in the source code.

Rather, with the category theory library I have been coding, List[A] will extend the Monoid[A] interface, so the compiler can build any Monoid[A] directly. So then List.apply[A](xs: List[A]) = xs.
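A rough sketch of the idea, using a conventional typeclass-style Monoid rather than the exact interface described above (the real saving would need the compiler to target the Monoid instead of building the argument array at all):

  trait Monoid[M] {
    def empty: M
    def combine(x: M, y: M): M
  }

  implicit def listMonoid[A]: Monoid[List[A]] = new Monoid[List[A]] {
    def empty: List[A] = Nil
    def combine(x: List[A], y: List[A]): List[A] = x ++ y
  }

  // Status quo: the compiler packs 1, 2, 3 into an array-backed Seq, and toList copies it.
  val today: List[Int] = List(1, 2, 3)

  // Sketch: a compiler that knows List[Int] is a Monoid could emit this shape directly,
  // threading each argument into the result with no intermediate array.
  val m = listMonoid[Int]
  val direct: List[Int] = m.combine(m.combine(1 :: Nil, 2 :: Nil), 3 :: Nil)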

Sergey Scherbina

unread,
Sep 13, 2013, 8:26:55 AM9/13/13
to Shelby, scala-...@googlegroups.com
It's possible to use shapeless' Coproducts for first-class disjoint unions:

  import shapeless._
  import ops.coproduct._

  type Union = Int :+: String :+: Boolean :+: CNil

  implicit def union[T](t: T)(implicit inj: Union Inject T) = inj(t)

  def sum(xs: Union*) = xs map (_ unify) map {
    case n: Int => n
    case s: String => s.length()
    case b: Boolean => if (b) 1 else 0
  } reduce (_ + _)

  sum(1,"hello", true)                            //> res0: Int = 7

  // sum(1,"hello", true, 2.0)
  // Compiler forbids wrong type: 2.0 : Double

--
With regards,
Sergey Scherbina


2013/9/13 Shelby <she...@coolpage.com>


Justin du coeur

unread,
Sep 13, 2013, 10:17:49 AM9/13/13
to Shelby, scala-debate
On Thu, Sep 12, 2013 at 5:55 PM, Shelby <she...@coolpage.com> wrote:
On Friday, September 13, 2013 4:46:34 AM UTC+8, Justin du Coeur wrote:
Interesting, although I'm not seeing the motivation offhand.  What's the use case?  This isn't something I'd felt the lack of.

For example, supporting naming for extractors:


Fair enough -- I think it's a bit minor, but having a more concise way to extract complex patterns seems nice.
 
Also, some programmers prefer names instead of field indexes, to make code more self-documenting.

That's reasonable.  I don't tend to use raw tuples much outside of pretty constrained circumstances, but I can see that this would be helpful if I used them more.
 
[Disjunctions] 
They are type-checked as subsumed to Any, which is the same as if you had written Any.

Well, yes -- but I generally don't allow Any (or even AnyRef) in my code.  I usually consider appearances of AnyRef to be design bugs, and flag them for re-examination and fixing.  The only exception I can think of is the "signature" of Actors -- and they get away with it *specifically* because they are received in a rather constrained way, so I have a little more faith that it isn't going to result in ad-hoc casting, and the runtime support to protect against mistakes is at least adequate.
 
This is part of why I'm wondering about how this works with Scala, BTW.  If Copute is intended to be used just on its own, and does enough internal type-checking, I might be willing to countenance these; if folks are going to be using Copute side-by-side with Scala, I'd probably say that this is a show-stopper, that would probably prevent me from using the language for any serious work.

How would the subsumption to Any stop you from using this with Scala? Scala is subsuming to Any also.

Well, yes -- if I used disjunctions as a pattern.  I don't, for exactly this reason.
 
Even if working with Scala isn't the plan, it's still the JVM, so people *will* try to use it with other languages, and the type signatures are going to leak.  So I'd encourage you not to turn these into Any (which is a gigantic red flag for me), but instead into some sort of wrapper that can enforce at least *some* good behaviour.

How is subsumption to Any not a reasonable behavior? You are forced to specialize the type with match-case before you can do anything with it besides pass it along as an Any.

Yeah, but it tends to lead to ad-hockery, which is my primary concern.  I've seen too many instances where folks took an Any (or Object, if we're talking about Java), said, "Oh, I *know* that this must be an Int, because my code flow surely only allows that", and did an ad-hoc asInstanceOf[] -- only to have it turn into a weird and subtle bug when the code flows changed later.  Heck, I've made that mistake myself more times than I care to think about.

So my assertion (and you can certainly argue the point, although I *think* I'm correct) is that the only appropriate way to handle a disjunction is with a PartialFunction, preferably with enforcement that the PF is defined over all of the types in the disjunction.  That at least says that you have thought about the problem a bit, and will often kick up errors much sooner and more reliably if the code changes.
 
 For example, a disjoint type could become a wrapper around a hidden Any, which at least enforces at runtime that (a) only the expected types can be assigned to it, and (b) it can only be used with a PartialFunction, with some assertions to check that the function isDefinedAt all of the types contained in the wrapper.

Runtime typing would be a much weaker guarantee, opening holes.

Than compile-time?  Absolutely, but I'll take something over nothing.

And note that I'm saying that it should assert-check *all* of the types of the disjunction over the PF -- so you're only dependent on this code path getting executed once during testing, *not* that it gets exhaustively checked.  That seems at least much likelier to catch errors during debugging than in production.  Inferior to compile-time type checking, to be sure (and if I could think of a way for a macro to do the enforcement at compile-time I'd be suggesting that instead), but at least much better than you'll get with a raw Any, since the intended types of the disjunction are being propagated in the code.
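To make it concrete, here is a rough sketch of the sort of wrapper I have in mind (illustrative only; I've used two handler functions rather than a PartialFunction, and ClassTags for the runtime checks):

  import scala.reflect.ClassTag

  // (a) only the expected types can be put in; (b) the only way to read it out is to
  // supply a handler for every member type, so coverage is forced at the call site.
  final class Disjoint2[A, B] private (value: Any) {
    def fold[R](fa: A => R, fb: B => R)(implicit ta: ClassTag[A], tb: ClassTag[B]): R =
      value match {
        case ta(a) => fa(a)
        case tb(b) => fb(b)
        case other => sys.error(s"Disjoint2 held unexpected value: $other")
      }
  }

  object Disjoint2 {
    def first[A, B](a: A): Disjoint2[A, B]  = new Disjoint2[A, B](a)
    def second[A, B](b: B): Disjoint2[A, B] = new Disjoint2[A, B](b)
  }

  // usage: Disjoint2.first[String, Vector[Int]]("hello").fold(_.length, _.sum)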
 
The point is you can write the types now, but they aren't checked, and then later, if ever they are checked, you can expect to find incongruences in your code that you did not know existed. This is no worse than the Scala you write now, where there may be incongruences that you are not aware of, because all are subsumed to Any.

Right -- but again, I just plain don't *do* that.  I consider it a bug when I find myself backed into a corner and forced to use Any/AnyRef, even when that seems like it can't possibly fail.  When I get to the point of hiring more engineers and have to write coding standards, that will likely be one of them.

Context: one of the reasons I use Scala is that it allows me to do crazy-powerful things while preserving full type safety.  If I didn't care about that so much, I might use Ruby or something like that.  I rely on the compiler to tell me when I'm doing something stupid, and I consider that really, really important for large-scale code.

Given that, Any and AnyRef *must* be considered pretty evil in application code, used only in very limited circumstances and with a lot of thought and design around them.  This disjunction mechanism seems likely to introduce a lot more Any's into the system -- and worse yet, to bleed them into class APIs.  That seems likely to encourage more bugs unless the machinery provides ways to mitigate those problems.

Seriously: my other quibbles so far are minor.  *This* would probably rule out Copute for any serious enterprise-grade work in my book, simply because I'd have to worry about engineers bleeding Any's into the codebase.  (Hence my suggestion about a compiler warning: that would at least allow me to forbid use of that mechanism without approval.)
 
I think what you hate about Java is single-inheritance (lack of multiple inheritance and mixins). Thus, I think you are conflating this with the benefits of an interface that contains no implementation.

It's possible that you mean something very different by "interface" and "mixin" than I do, but I don't think you're listening to what I'm saying.

Take an Interface with properties A, B and C.  Given those, it is *extremely* common for me to want to define functions D and E, which are entirely functions *over* A, B and C, and are essentially universal to any implementation of the interface.  The best place to put those functions, in my experience, is typically in the same trait.

Yes, it is *possible* to put the function definitions into a mixin -- but from a factoring POV that's generally inappropriate, and tends to lead to boilerplate, at least in the sense of "if you include this interface you always include this mixin".  These functions are defined in terms of these properties; from a code-cohesion POV, they are generally best defined *with* the properties.  And I rarely know in advance that this will be the case, until I'm well into development -- it's usually the result of refactoring.
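Concretely, the shape I mean is something like this (names invented for illustration):

  trait Account {
    // the abstract "properties" A, B and C
    def balance: BigDecimal
    def overdraftLimit: BigDecimal
    def currency: String

    // D and E: defined entirely over the abstracts above and effectively universal
    // to any implementation, so they live in the same trait
    def available: BigDecimal = balance + overdraftLimit
    def describe: String = s"$available $currency available"
  }

  class CheckingAccount(val balance: BigDecimal) extends Account {
    val overdraftLimit: BigDecimal = 500
    val currency: String = "USD"
  }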

There might be benefits, yes, and I'm open to that argument.  But keep in mind that, to me, the price looks *extremely* high, and I have to wonder if it's worth it.

(It is also possible that these interfaces should only be used in extremely restricted circumstances; I'm really not clear on how you intend Copute to be used in normal application programming.)
 
The benefit of an interface is that implementation (concretion or instance construction) is orthogonal to type injection (i.e. abstraction).

Dude -- I've programmed most of the OO languages ever written.  I've been hearing this argument since the invention of Java.  I know what is *good* about interfaces -- I'm just not certain (based on what you are saying) that you are grokking what's *bad* about them, and why lots of us were happy that Scala moved away from that excessively rigid approach.

Putting it more simply: in my abstract example above, where do the implementations of functions D and E live?  If the implementation is *not* in the same place as the definition of A, B and C, why not?

I think you're extremely focused on high-level type theory, which is lovely.  I'm not: I mainly care about day-to-day down-in-the-details application engineering, and very little matters as much to me as good factoring of the application code.  In practice, I have *never* found myself wishing that I was using interfaces instead of traits, so I need more convincing about why and when you'd want to use them.
 
Which reminds me: the all-caps thing just plain bugs me.  This isn't anything rational -- it's just many years of Internetting, teaching me that all-caps means YELLING.  So on a purely aesthetic level, I look at this code and it feels like it's shouting at me, which gives me an instinctive negative gut reaction.

The all-caps is much less annoying in the Eclipse IDE with its monospace font and purple syntax highlighting on the keywords. I agree it looks horrendous as displayed in this variable-width font used at google groups. I don't know what font you are viewing with in your email client.

Granted, this sort of thing is in the eye of the beholder.  But I have to point out that the purple highlighting is something you can turn off if you dislike it, whereas the all-caps is inherent in the language.  (Or is it?? See below.)

The point is that when the code is displayed in a monospace font without syntax highlighting (e.g. in HTML, PDF, Word documents, etc) then the viewer can detect the keywords more rapidly.

For those who hate it, I will definitely try to get a feature into the IDE (and hopefully Github too) to toggle the keywords to lowercase. And the all-caps is unnecessary when the keywords are colored.

Okay, now I'm confused: does the case of the keywords matter?  Your examples had led me to assume that the all-caps was required.  If they are case-insensitive I might have other concerns (mostly that they consume more namespace), but this is a more minor issue.
 
I am also open to abandoning the all-caps keywords entirely if I get feedback from all sectors asking me to do so.

Fair enough.  Consider this a vote against.
 
I had the same "ouch" reaction when I saw in this variable-width font display at the google groups' website UI. Caused me to doubt my original decision. But again, I think most code will be displayed in a monospace font, either syntax highlighted or not. It is the latter "not" case that I am trying to address, e.g. when one quickly types some code in a non-IDE text editor.

Honestly, I think it's too high a price for an edge case.  The looks aside, I find all-caps a pain in the ass to type -- either I have to bring in my capslock key (which I am not used to when coding, and isn't built into my touch-typing) or I have to hold down the Shift.  Either way, it slows down my typing noticeably, and would likely annoy me a lot in practice.
 
Indeed, now you understand what the person trying to learn Scala feels when they see all these constructs they don't understand and there is no quick reference card where they can quickly assimilate.

Copute hides that from you. You will only see it if you are looking at the Scala code Copute generates. That is a higher-kinded type. It says that the Sub(type) must be a subtype of the interface it implements, and the Sub(type) is known to the interface that the Sub(type) implements.

That is absolutely necessary if you want to unify subtyping and typeclasses, as I've shown.
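For readers following along, a rough Scala rendering of the constraint being described (my own illustration, not Copute's generated code):

  // The Sub parameter must itself implement the interface, and the interface can
  // refer to Sub in its signatures (e.g. to return "the same concrete type").
  trait Combinable[Sub <: Combinable[Sub]] { self: Sub =>
    def combine(other: Sub): Sub
  }

  final case class Tags(values: Set[String]) extends Combinable[Tags] {
    def combine(other: Tags): Tags = Tags(values ++ other.values)
  }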

Hmm.  That may well be fair and appropriate.  I will say that, as a down-in-the-trenches engineer, spending 95% of my time writing application code and only 5% writing high-level libraries, it comes across as fairly ethereal and hard to relate to.  (Then again, I haven't found scalaz easy going either.)

That's relevant only from an audience POV.  You've talked a lot about trying to relate to ordinary engineers coming in from Java.  Most of them aren't going to have the slightest *clue* what you're talking about here, and the subject is likely to intimidate the bejeezus out of them.

If you're going to be successful, you need to think about who you're talking to.  This construct sounds much more interesting to high-level library programmers than to day-to-day engineers who know nothing of typeclasses and don't particularly want to.  It's an intriguing idea (although I'm not well-qualified to evaluate its merits), but I suspect will tend to drive away much of your target audience if you oversell it...

Justin du coeur

unread,
Sep 13, 2013, 11:54:12 AM9/13/13
to Shelby, scala-debate
I think you're contradicting yourself here.  You've been complaining extensively that Scala violates the principle of least surprise.  But what you're complaining about here is almost entirely sugar that is unnecessary, but which is *allowed* in Scala precisely to make it easier for Java programmers to pick it up...

Justin du coeur

unread,
Sep 13, 2013, 12:10:41 PM9/13/13
to Jason Zaugg, Shelby, scala-debate
On Fri, Sep 13, 2013 at 2:34 AM, Jason Zaugg <jza...@gmail.com> wrote:
You'd need a scheme by which values of these types were at least boxed in `class Union[T](value: Any, tpe; TypeTag[T])` and where pattern matching could use those reified types at runtime to pick the right case.

Which is just about what I was thinking in my suggestion, yes.  It seems like a bit of a PITA to write, but intuitively feels like macros might be able to reduce the boilerplate involved.  And if Copute was compiling to Scala anyway, might not actually be a bad way to handle such cases...

Justin du coeur

unread,
Sep 13, 2013, 12:22:31 PM9/13/13
to Shelby, scala-debate
On Wed, Sep 11, 2013 at 9:40 PM, Shelby <she...@coolpage.com> wrote:
Also by simplifying the typeclass as shown in my prior message, which I need in order to properly do a standard library based on category theory (functor, applicative, monad), one that will be very simple to understand, read, and use (unlike Scalaz).

You've said this repeatedly, and I'm finding myself skeptical.  Yes, your explanation of the concepts posted downthread is a significantly better explanation of most, and helps to motivate your design.  But seriously: if you think that the average programmer is going to look at that and not run away screaming, I think you're fooling yourself.

Your goal is good, and I do hope you succeed in it.  But I sincerely don't believe, based on the discussion so far, that you understand the average programmer anywhere near as well as you think you do.

Of course, scalaz is even worse in terms of comprehensibility.  But I suspect most of us here assume that only a tiny fraction of engineers care enough to even *try* to grok this stuff, so it doesn't matter so much.  And I do strongly suspect that the *conceptual* hurdles are far more important than the *syntactic* ones...

Justin du coeur

unread,
Sep 13, 2013, 12:31:16 PM9/13/13
to Shelby, scala-debate
On Thu, Sep 12, 2013 at 10:08 PM, Shelby <she...@coolpage.com> wrote:
If everything is going to be an object with a meaningful type, then how does it make any sense that Scala chose not to implement first-class disjunctions? That is really perplexing me, how Scala obtained such a large hole

You've made a big deal about this, but I think you've lost perspective.  Again, from my viewpoint as a workaday engineer, this is an edge case.  Yes, it is occasionally an annoying edge case that I have to work around, but honestly -- I don't even notice the lack most of the time.

So while I agree that it would be lovely to fix this, it's an extremely minor detail in terms of what I look for in a language.  And (back to my previous point), if using it means encouraging my engineers to put Any into their function signatures, I will treat it as a misfeature and absolutely forbid its use.

Regarding metaprogramming, I haven't had time to research the proposals for Scala macros. Whatever they do, I hope there is an option to see the generated Scala code. Shouldn't macros just be DSLs?

You have that backwards.  Macros are an implementation mechanism by which one *writes* more powerful DSLs.  They're the enabling tool.
 
Shouldn't Copute be a DSL?

I've been wondering that, yes.  I'd be more likely to take Copute seriously if it was done as a library that plugged into Scala, instead of a separate and somewhat idiosyncratic language.  That may or may not match your language goals, but it's a point worth factoring in.

Justin du coeur

unread,
Sep 13, 2013, 12:45:32 PM9/13/13
to Shelby, scala-debate
On Thu, Sep 12, 2013 at 11:14 PM, Shelby <she...@coolpage.com> wrote:
Here follows an example of myself recently trying to convince some expert C++ programmers to learn and use Scala for an upstart project that I was interested in contributing programming to:

(discussion continues to the next page of that thread)

What you will find is that if most programmers don't know Scala, it is impossible to convince anyone to use Scala for a project.

That isn't exactly a convincing argument.  Indeed, it simply displays the same flaw that Suminda has been trying to point out, which you seem to be ignoring.  In both cases, you (metaphorically) wandered into somebody else's house; declared that you are smarter than everyone present; started lecturing them pedantically; and wound up effectively calling other people stupid for disagreeing with you.  (And then getting defensive and "you're all picking on me" when you met resistance.)

That is pretty much the *least* effective way to convince anybody of anything.  Moving a community is a subtle game, and being right is by no stretch of the imagination sufficient to winning the argument.  It is absolutely essential that you engage in give-and-take, and frankly you have to be pretty damned thick-skinned to play that game successfully, since at *best* you are going to step on ideas that people are invested in.

In other words, it isn't level ground, and wishing that it was doesn't do you any favors.  You are trying to change the way other people are doing things, and the onus is *very* much on you to prove, very concretely, that you are correct -- and to engage very sincerely with the community to understand the shortcomings in your own ideas and find the necessary compromises.

Or more bluntly: a bull in a china shop doesn't convince anybody of anything, except that they want to shoo the bull away...

Justin du coeur

unread,
Sep 13, 2013, 12:48:38 PM9/13/13
to Shelby, scala-debate
On Fri, Sep 13, 2013 at 1:10 AM, Shelby <she...@coolpage.com> wrote:
I have one more retort to my German antagonist, from Eric S. Raymond the self-claimed 150 - 170 IQ genius who wrote The Cathedral and The Bazaar and The Art of Unix Programming.

Shelby, seriously: you're overquoting one source, and it's coming across as Argument By Authority.  I know Eric personally (through LARP circles), and while he's a smart guy, the name-dropping makes you sound like you're trying to hide behind him...

Shelby

unread,
Sep 13, 2013, 3:16:53 PM9/13/13
to scala-...@googlegroups.com, Shelby
But I can't:

def f(xs: List[Union]) = ...

f(List("test"))

Thus it isn't really first-class. We need the support in the compiler.

Sergey Scherbina

unread,
Sep 13, 2013, 3:52:10 PM9/13/13
to Shelby, scala-...@googlegroups.com
2013/9/13 Shelby <she...@coolpage.com>

But I can't:

def f(xs: List[Union]) = ...

f(List("test"))

Thus it isn't really first-class. We need the support in the compiler.


Of course, you can! Just try it :)

Actually, the type Union* is exactly Seq[Union], so it isn't so different from List[Union].
I used Union* in the sample only for brevity at invocation.

For Lists it looks like:

  import shapeless._
  import ops.coproduct._

  type Union = Int :+: String :+: Boolean :+: CNil

  implicit def union[T](t: T)(implicit inj: Union Inject T) = inj(t)

  def sum(xs: List[Union]) = xs map (_ unify) map {
    case n: Int => n
    case s: String => s.length()
    case b: Boolean => if (b) 1 else 0
  } reduce (_ + _)

  sum(List(1,"hello", true))                            //> res0: Int = 7

  // sum(List(1,"hello", true, 2.0))
  // Compiler forbids wrong type: 2.0 : Double

Shelby

unread,
Sep 13, 2013, 4:01:35 PM9/13/13
to scala-...@googlegroups.com, Shelby
On Friday, September 13, 2013 10:17:49 PM UTC+8, Justin du Coeur wrote:
On Thu, Sep 12, 2013 at 5:55 PM, Shelby <she...@coolpage.com> wrote: 
[Disjunctions] 
They are type-checked as subsumed to Any, which is the same as if you had written Any.

Well, yes -- but I generally don't allow Any (or even AnyRef) in my code.  I usually consider appearances of AnyRef to be design bugs,

Well, yes, I consider them uni-typed typing holes too. But the point you haven't addressed is: how do you write heterogeneous collections without using Any? I really have to see this magic you know that no one else seems to know.
   
Given that, Any and AnyRef *must* be considered pretty evil in application code, used only in very limited circumstances and with a lot of thought and design around them.

Indeed. Please answer the above question then.
 
I think what you hate about Java is single-inheritance (lack of multiple inheritance and mixins). Thus, I think you are conflating this with the benefits of an interface that contains no implementation.
It's possible that you mean something very different by "interface" and "mixin" than I do, but I don't think you're listening to what I'm saying.

Take an Interface with properties A, B and C.  Given those, it is *extremely* common for me to want to define functions D and E, which are entirely functions *over* A, B and C, and are essentially universal to any implementation of the interface.  The best place to put those functions, in my experience, is typically in the same trait.

Yes, it is *possible* to put the function definitions into a mixin -- but from a factoring POV that's generally inappropriate, and tends to lead to boilerplate, at least in the sense of "if you include this interface you always include this mixin".  These functions are defined in terms of these properties; from a code-cohesion POV, they are generally best defined *with* the properties.  And I rarely know in advance that this will be the case, until I'm well into development -- it's usually the result of refactoring.

One factor is that abstract (no implementation) traits are just Java interfaces, thus are more interoperable with Java. Refer upthread to the comments from Yammer about the Java interoperability mess. However, that is not my main justification.

There is no extra boilerplate in Copute: just have your MIXIN extend the INTERFACE, then extend the MIXIN only where you need the default functionality.

If you don't want the default method to be overridden, then you can place it in the INTERFACE as a STATIC that inputs the INTERFACE as its first parameter. You then call these as Name.func(x, ...) where x is an instance of Name. I suppose I could support syntactic sugar for this case so you can call them as x.func(...).
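In Scala terms, the shape would be roughly this (my illustration, not actual Copute output):

  trait Shape {
    def width: Double
    def height: Double
  }

  object Shape {
    // non-overridable helper that takes the interface as its first parameter
    def area(s: Shape): Double = s.width * s.height
  }

  // called as Shape.area(myShape); sugar could later allow myShape.area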

The main justification is that interface is orthogonal to implementation. This is a crucial design concept for code reuse and modularity. The consumer of an interface should not be partial to any particular implementation of subtypes, but rather only to its documented semantics. By putting a default overridable implementation in the interface (trait), the developer is able to avoid writing a complete specification of the interface, and thus consumers of the interface will rely on what they interpret the semantics to be by studying the default implementation. This will destroy code reuse.

One of my big goals with Copute is that I want to change the entire economy of open-source. For example, I can't work around this stupid ego bug in Firefox, because I can't modularly swap my code into Firefox:

(what I predicted -- happened)

Thus programmers can't be paid per module; rather they have to be paid by large corporations. I want to change that, so that we are paid by module. I have a very specific idea about how to accomplish this, which I will not discuss right now. Technicals first, before marketing.

There might be benefits, yes, and I'm open to that argument.  But keep in mind that, to me, the price looks *extremely* high, and I have to wonder if it's worth it.

No high cost, i.e. no boilerplate when extending the base MIXIN. Major gains in code reuse discipline.
  
The benefit of an interface is that implementation (concretion or instance construction) is orthogonal to type injection (i.e. abstraction).

Dude -- I've programmed most of the OO languages ever written.  I've been hearing this argument since the invention of Java.

Now you've finally read the correct argument unarguably elucidated.
  
I think you're extremely focused on high-level type theory, which is lovely.  I'm not: I mainly care about day-to-day down-in-the-details application engineering, and very little matters as much to me as good factoring of the application code.  In practice, I have *never* found myself wishing that I was using interfaces instead of traits, so I need more convincing about why and when you'd want to use them.

I am focused on what you are focused on, you just don't yet see all the factors I have worked out in my mind.
 
Which reminds me: the all-caps thing just plain bugs me.  This isn't anything rational -- it's just many years of Internetting, teaching me that all-caps means YELLING.  So on a purely aesthetic level, I look at this code and it feels like it's shouting at me, which gives me an instinctive negative gut reaction.

The all-caps is much less annoying in the Eclipse IDE with its monospace font and purple syntax highlighting on the keywords. I agree it looks horrendous as displayed in this variable-width font used at google groups. I don't know what font you are viewing with in your email client.

Granted, this sort of thing is in the eye of the beholder.  But I have to point out that the purple highlighting is something you can turn off if you dislike it, whereas the all-caps is inherent in the language.  (Or is it?? See below.)

The current proposal is yes, they are inherent in the language, and thus to turn off the all-caps you need an IDE or renderer.
 
The point is that when the code is displayed in a monospace font without syntax highlighting (e.g. in HTML, PDF, Word documents, etc) then the viewer can detect the keywords more rapidly.

For those who hate it, I will definitely try to get a feature into the IDE (and hopefully Github too) to toggle the keywords to lowercase. And the all-caps is unnecessary when the keywords are colored.

Okay, now I'm confused: does the case of the keywords matter?  Your examples had led me to assume that the all-caps was required.  If they are case-insensitive I might have other concerns (mostly that they consume more namespace), but this is a more minor issue.

You can view them in lowercase only by having an IDE or renderer that displays them in lowercase for you. The compiler requires all-caps for those keywords (note that the "in" and "io" annotations, for definition-site contravariance (- in Scala) and invariance (no annotation in Scala), are required to be lowercase, because type parameters are required to be all-caps so as to be distinguished from type names, which are required to have the first letter uppercase and the rest lowercase).
 
 
I am also open to abandoning the all-caps keywords entirely if I get feedback from all sectors asking me to do so.

Fair enough.  Consider this a vote against.

Then we lose the ability to distinguish keywords easily in code that is viewed in a text editor or in html. And how often are you likely to view the code in a non-IDE setting where it would annoy you?
  
I had the same "ouch" reaction when I saw in this variable-width font display at the google groups' website UI. Caused me to doubt my original decision. But again, I think most code will be displayed in a monospace font, either syntax highlighted or not. It is the latter "not" case that I am trying to address, e.g. when one quickly types some code in a non-IDE text editor.

Honestly, I think it's too high a price for an edge case.  The looks aside, I find all-caps a pain in the ass to type -- either I have to bring in my capslock key (which I am not used to when coding, and isn't built into my touch-typing) or I have to hold down the Shift.  Either way, it slows down my typing noticeably, and would likely annoy me a lot in practice.

When you are typing in your IDE, then you will type and view them as lowercase. Your IDE will hide from you that the text file contains them as all-caps.
 
Indeed, now you understand what the person trying to learn Scala feels when they see all these constructs they don't understand and there is no quick reference card where they can quickly assimilate.

Copute hides that from you. You will only see it if you are looking at the Scala code Copute generates. That is a higher-kinded type. It says that the Sub(type) must be a subtype of the interface it implements, and the Sub(type) is known to the interface that the Sub(type) implements.

That is absolutely necessary if you want to unify subtyping and typeclasses, as I've shown.

Hmm.  That may well be fair and appropriate.  I will say that, as a down-in-the-trenches engineer, spending 95% of my time writing application code and only 5% writing high-level libraries, it comes across as fairly ethereal and hard to relate to.  (Then again, I haven't found scalaz easy going either.)

But it impacts what type of library you can use, and I have already made arguments that the category theory library is going to be much easier to use and more performant. I have to prove that. And I assume we agree that Scalaz isn't easy for you to use (and I don't know if it is more performant).
 
That's relevant only from an audience POV.  You've talked a lot about trying to relate to ordinary engineers coming in from Java.  Most of them aren't going to have the slightest *clue* what you're talking about here, and the subject is likely to intimidate the bejeezus out of them.

They won't need to. The libraries will be easier to use and more performant than the std library you use now.
 
If you're going to be successful, you need to think about who you're talking to.  This construct sounds much more interesting to high-level library programmers than to day-to-day engineers who know nothing of typeclasses and don't particularly want to.  It's an intriguing idea (although I'm not well-qualified to evaluate its merits), but I suspect will tend to drive away much of your target audience if you oversell it...

Thus driving them to it, not away from it. 

Shelby

unread,
Sep 13, 2013, 4:19:42 PM9/13/13
to scala-...@googlegroups.com, Shelby
Hmm. But can I?

def sum(xs: List[Union]) = xs.head match {
    case n: Int => n
    case s: String => s.length()
    case b: Boolean => if (b) 1 else 0
  }

If not, it doesn't mean I couldn't fiddle with Shapeless to get it to support the type category theory library I've been coding. I will need to study the Shapeless implementation in more detail.

It would be great if this would allow me to support disjunctions seamlessly without a change to the Scala compiler.

Sergey Scherbina

unread,
Sep 13, 2013, 4:45:43 PM9/13/13
to Shelby, scala-...@googlegroups.com
2013/9/13 Shelby <she...@coolpage.com>


Hmm. But can I?

def sum(xs: List[Union]) = xs.head match {
    case n: Int => n
    case s: String => s.length()
    case b: Boolean => if (b) 1 else 0
  }

If not, it doesn't mean I couldn't fiddle with Shapeless to get it to support the type category theory library I've been coding. I will need to study the Shapeless implementation in more detail.

It would be great if this would allow me to support disjunctions seamlessly without a change to the Scala compiler.


Yes, you can, but you should use the 'unify' method before matching:

  def sum(xs: List[Union]) = xs.head.unify match {
    case n: Int => n
    case s: String => s.length()
    case b: Boolean => if (b) 1 else 0
  }

Justin du coeur

unread,
Sep 13, 2013, 4:53:58 PM9/13/13
to Shelby, scala-debate
On Fri, Sep 13, 2013 at 4:01 PM, Shelby <she...@coolpage.com> wrote:
On Friday, September 13, 2013 10:17:49 PM UTC+8, Justin du Coeur wrote:
Well, yes -- but I generally don't allow Any (or even AnyRef) in my code.  I usually consider appearances of AnyRef to be design bugs,

Well, yes, I consider them uni-typed typing holes too. But the point you haven't addressed is: how do you write heterogeneous collections without using Any? I really have to see this magic you know that no one else seems to know.

I don't -- that is, I don't try to unify the types.  I would live with the limitation, and use, eg, case classes with Option fields.  It's clunky, but it works and is safe.
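For instance, something like this contrived sketch:

  // Instead of a List[Any] of mixed values, keep a typed record with optional fields.
  final case class Row(id: Long, label: Option[String] = None, score: Option[Double] = None)

  val rows = List(Row(1, label = Some("a")), Row(2, score = Some(0.5)))
  val labels = rows.flatMap(_.label)   // List("a"), fully typed throughout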

Or other mechanisms -- the best approach depends on the circumstance.  The point is, you seem to be assuming that people can't live without heterogeneous collections.  That doesn't match my experience -- indeed, it's extremely rare that I want truly heterogeneous collections.  Collections whose values are subtyping a parent, sure -- but true heterogeneity?  I don't think I've even *wanted* that any time in the past five years.  The only times I've ever done it was by accident, fixed as soon as the compiler complained at me.

In other words, you're seeing a massive critical problem that I regard as, at most, a mild nuisance in practical application code.  It's possible that you have encountered situations where it *is* a huge problem, but I've built enough complex, huge systems that I'm a tad skeptical that this is a deathly-important weakness.

There is no extra boilerplate in Copute: just have your MIXIN extend the INTERFACE, then extend the MIXIN only where you need the default functionality.

That's still boilerplate, and is still poor factoring.  As I said above, if there is only one rational implementation (which is *very* common in my experience), then having to mix it in is an unnecessary nuisance, and a recipe for accidental code duplication by people who don't happen to notice that there is One True Implementation existing somewhere else.
 
If you don't want the default method to be overridden, then you can place it in the INTERFACE as a STATIC that inputs the INTERFACE as its first parameter. You then call these as Name.func(x, ...) where x is an instance of Name. I suppose I could support syntactic sugar for this case so you can call them as x.func(...).

I would recommend that -- it at least provides a workaround to my major concern about code duplication.  That said:
 
The main justification is that interface is orthogonal to implementation. This is a crucial design concept for code reuse and modularity.

I can see that you are making this as a fundamental assumption.  I don't happen to agree, and I no longer find the arguments persuasive.  I spent many years going down that road, and found that in practice, it leads to *poorer* code reuse and *worse* modularity.  It's an assertion that sounds nice, but in my experience doesn't match reality.

Instead, my experience has been that Scala's traits get reuse and modularity exactly correct: implementation is fine on the trait, but should depend on the trait's abstracts.  You leave abstract the things that are intended to vary, and provide implementation for the ones that aren't.  Whether the trait happens to be a pure interface or not is a matter of happenstance, and not terribly relevant.
 
The consumer of an interface should not be partial to any particular implementation of subtypes, but rather only to its documented semantics. By putting a default overridable implementation in the interface (trait), the developer is able to avoid writing a complete specification of the interface, and thus consumers of the interface will rely on what they interpret the semantics to be by studying the default implementation. This will destroy code reuse.

Stuff and nonsense.  I reuse such traits all the time.  Most of Scala's standard library is made up of such traits.  Are you claiming that nobody ever reuses them?  And I rarely look at the implementation unless there is a compelling reason to do so -- I use the documentation, exactly the same way I would with an abstract interface.
 
One of my big goals with Copute is that I want to change the entire economy of open-source.

Ambitious, but okay -- I can appreciate ambitious.  But you're going to fail if you demand that everybody think like you, and sacrifice functionality that already exists in other languages that they find useful.

No high cost, i.e. no boilerplate when extending the base MIXIN. Major gains in code reuse discipline.

If every implementation class has to mix in the mixin, that's still boilerplate.  Concise boilerplate is still boilerplate.

The point is that when the code is displayed in a monospace font without syntax highlighting (e.g. in HTML, PDF, Word documents, etc) then the viewer can detect the keywords more rapidly.

For those who hate it, I will definitely try to get a feature into the IDE (and hopefully Github too) to toggle the keywords to lowercase. And the all-caps is unnecessary when the keywords are colored.

Okay, now I'm confused: does the case of the keywords matter?  Your examples had led me to assume that the all-caps was required.  If they are case-insensitive I might have other concerns (mostly that they consume more namespace), but this is a more minor issue.

You can view them in lowercase only by having an IDE or renderer that displays them in lowercase for you.

That seems deeply unwise in the JVM world.  Symbols are case-sensitive, so having the IDE lie about case seems like a magnificent recipe for hard-to-understand bugs.

Then we lose the ability to distinguish keywords easily in code that is viewed in a text editor or in html. And how often are you likely to view the code in a non-IDE setting where it would annoy you?

Um -- rarely?  Never?  Again, you're making a gigantic deal out of something that many of us regard as a non-problem.  I find it mildly useful to have syntax highlighting in Eclipse, but I edit un-highlighted code in, e.g., Notepad++ all the time, and rarely if ever find that to be an issue.  The all-caps symbols, OTOH, would probably *always* bother me.
 
But it impacts what type of library you can use, and I have already made arguments that the category theory library is going to be much easier to use and more performant. I have to prove that. And I assume we agree that Scalaz isn't easy for you to use (and I don't know if it is more performant).

Correct -- but like I said, my problem there is more the concepts than the symbols.  (I am deeply hoping that Greg Meredith's forthcoming book helps me there.)

Simon Schäfer

unread,
Sep 13, 2013, 5:18:17 PM9/13/13
to scala-...@googlegroups.com
Shelby,

you wrote a lot during the last day, and just to clarify, I don't plan to respond to much of what you said. To me, it looks like everything I said was misinterpreted by you. And because I can't tell whether you did that by accident or out of rage at a guy not thinking you are as great as you want to convince everyone you are, I will try to round out this discussion.

Just to give you a hint at the beginning: what do you expect of a guy coming to a community where he isn't a very well known or respected member, and telling everyone that only he can see all the solutions needed to solve all the problems they have? Now this user starts being aggressive due to - in his opinion - the ignorance and stupidity of individual, but respected, members of the community when they try to contradict him.

You can't really expect that anyone will give much back to such a user. And just to be clear, the above is exactly how I perceive you.

Some quotes:

> I very much disagree with your concept that the world is composed of
people who don't program, and the rest who use some awesome language
such as Scala

> [...] he thinks all programmers are either stupid morons or use a
non-popular language.

> He basically told me to shut up.

> Why he wants to discourage someone from fixing the other issues above I
can only assume is territorial myopia.

I have never written anything of the above and never thought it. As I said, I won't clarify any further what I meant (hopefully others understood it better than you did) - I don't think you are smart enough to give other members of a community, regardless of who they are, the respect they deserve, and thus I don't think that you are able to take part in a discussion conducted in the spirit of trying to improve.

> [...] then Jeff Atwood, the programmer who created StackOverflow, is
a dumb slave [...]

> [...] from Eric S. Raymond the self-claimed 150 - 170 IQ genius who
wrote The Cathedral and The Bazaar and The Art of Unix Programming.

There is no reason, not a single one, to back up your own opinion with famous people. They are human too, and it is possible that such people have been completely wrong their entire lives. Normally such quotes are only made by people who are unable to think for themselves; you don't want others to have that opinion of you.

I'm sure I care much more about Scala and its success than you do - contributing to the IDE is only one part of what I have done for that community. I know that you wrote all that because you want to contribute your own part, but the way you are interacting is not the way one interacts with a community - especially not a community you don't know in depth. Not sure how you will react to that, but I need to recommend you this book:

http://www.amazon.com/How-Win-Friends-Influence-People/dp/0671723650

I wrote my first mail because I hoped to learn something from you, and I also hoped that you might learn something from me as well. And even though you said the same, I can't believe anymore that you really meant it.

Simon

Shelby

unread,
Sep 13, 2013, 5:21:57 PM9/13/13
to scala-...@googlegroups.com, Shelby
I only claimed one case was unnecessary, which is the only case Copute eliminates:

def len(s: String): Int = {return s.length}

I said "That is a fair amount (of convoluted differences) for a newbie to memorize if they want to be able to read Scala code. And that isn't even any where near all of Scala". The point I was making is that simplifying as much as we can (beyond the declaration of a function) is important, since even the most basic issue of declaring a function is already more complex in Scala than in Java, C++, and C.

Also, Copute gets rid of the = when it is followed by a brace-enclosed block, to make it more consistent with (and familiar to programmers using) those popular languages.

Shelby

unread,
Sep 13, 2013, 5:48:02 PM9/13/13
to scala-...@googlegroups.com
On Saturday, September 14, 2013 5:18:17 AM UTC+8, Simon Schäfer wrote: 
Just to give you a hint at the beginning: what do you expect of a guy coming to a community where he isn't a very well known or respected member

I suggest you focus on arguing the points and drop the personality attacks and ad hominem if you want my respect. I respect those who can argue factually. I don't respect authority-by-inertia.

I haven't seen you argue one technical point in my thread. All I have seen is you advising me not to speak, and advising me that programmers don't need an easier Scala DSL with all the goodness and less of the complexity. You told me I would receive no interest at all in my ideas, and basically told me your personal opinion of me. So you should expect that I respond to such ad hominem (devoid of technical argument) as I did.
 
and telling that everyone but him is blind in seeing all the
solutions needed to solve all the problems they have.

Never have I said I propose to solve all problems. There you go again with your weasel-worded, passive-aggressive ad hominem.
 
Now this user
starts in being aggressive due to - in his opinion - existing ignorance
and stupidity of single, but respected members of the community when
they try to contradict.

I don't characterize debating the facts as aggressive. I think it is rational discussion. I am sorry if you view technical processes as political. That is your problem, not mine.
 
You can't really assume that anyone is giving much back to such a user.
And just to be clear, the above is exactly how I observe you.

There is no rational or technical benefit for me to kowtow to those protecting their orderly orderliness.

If you can convince me technically, that is helpful. I learned something potentially very important from Sergey so this thread has probably been a net benefit to me. Some other readers may have gained a few insights also, I dunno.
 
Some quotes:

 > I very much disagree with your concept that the world is composed of
people who don't program, and the rest who use some awesome language
such as Scala

 > [...] he thinks all programmers are either stupid morons or use a
non-popular language.

 > He basically told me to shut up.

 > Why he wants to discourage someone from fixing the other issues above I
can only assume is territorial myopia.

I have never written anything like the above and never thought it.

Try again to read your prior post.
 
other members of a community, regardless of who they are, the respect they
deserve,

I give everyone respect, except those who have irrational anti-social behavior, in which case I may respect their technical accomplishments, but I have to be wary about trusting their judgement in a social context.
 
thus I don't think that you are able to take part in a
discussion aimed at improvement.

I am trying to improve something. You have a right to your opinion. Yet I've seen no rational or technical points from you in this thread.
 
 > [...] then Jeff Atwood, the programmer who created StackOverflow, is
a dumb slave [...]

 > [...] from Eric S. Raymond the self-claimed 150 - 170 IQ genius who
wrote The Cathedral and The Bazaar and The Art of Unix Programming.

There is no reason, not a single one, to back up your own opinion with famous
people. They are human too, and it is possible that such people are
completely wrong their entire lives. Normally such quotes are only made
by people who cannot think for themselves; you don't want others
to have that opinion of you.

Next you will be telling me which books I can't quote from.
 
I'm sure I care much more about Scala and its success than you do -

And your proof is?

contributing to the IDE is only one part of what I have done for that
community. I know that you wrote all that because you want to contribute
your own part, but the way you are interacting is not the way one
interacts with a community - especially not a community you don't know
in depth. Not sure how you will react to that, but I need to recommend you
this book:

I am nice to those who are nice to me.

I am not going to agree to be told that I can't express and present logical arguments for my POV.

That is orthogonal to being nice and leaving the authority-by-inertia and ad hominem out of it.

Now can we please stop this noise? Do you have anything technical or a logical argument to contribute to this thread?

Shelby

unread,
Sep 13, 2013, 6:37:52 PM9/13/13
to scala-...@googlegroups.com, Shelby
On Saturday, September 14, 2013 4:53:58 AM UTC+8, Justin du Coeur wrote:
On Fri, Sep 13, 2013 at 4:01 PM, Shelby <she...@coolpage.com> wrote:
On Friday, September 13, 2013 10:17:49 PM UTC+8, Justin du Coeur wrote:
Well, yes -- but I generally don't allow Any (or even AnyRef) in my code.  I usually consider appearances of AnyRef to be design bugs,

Well yes, I consider them uni-typed typing holes too. But the point you haven't addressed is: how do you write heterogeneous collections without using Any? I really would have to see the magic you know that no one else seems to know.

I don't -- that is, I don't try to unify the types.  I would live with the limitation, and use, eg, case classes with Option fields.  It's clunky, but it works and is safe.

Don't you mean you use Either fields or OneOfX? I don't see how an Option would help you when you have two or more types (which are not subtypes of a common type other than Any) in your collection.

Or other mechanisms -- the best approach depends on the circumstance.  The point is, you seem to be assuming that people can't live without heterogeneous collections.  That doesn't match my experience -- indeed, it's extremely rare that I want truly heterogeneous collections.  Collections whose values are subtyping a parent, sure -- but true heterogeneity?  I don't think I've even *wanted* that any time in the past five years.  The only times I've ever done it was by accident, fixed as soon as the compiler complained at me.

That may be true. Nevertheless I don't want to come to a case where I really need it and don't have it. I don't like arguing away corner cases, just because I never expect to encounter them, because usually where you encounter a corner case is where you can't refactor, i.e. reuse.

When I say "reuse", I mean where we can't refactor the libraries, because we are interopting with them. Imagine the implausibility of refactoring the Scala std library for example or any popular library.

Disjunctions can eliminate a cartesian product of function overloading, i.e. your function can take N arguments of M types each, or else you must write M^N overloaded methods (one per combination of argument types).
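A minimal Scala sketch of that trade-off, using today's boxed Either (the names are made up for illustration):

// Covering two possible argument types by overloading:
def describe(x: Int): String    = "int: " + x
def describe(x: String): String = "string: " + x

// The overload count multiplies with each additional parameter that can vary.
// With a (boxed) disjunction, one signature covers all the combinations:
def describeE(x: Either[Int, String]): String = x match {
  case Left(i)  => "int: " + i
  case Right(s) => "string: " + s
}

describeE(Left(42))        // "int: 42"
describeE(Right("hello"))  // "string: hello"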

In other words, you're seeing a massive critical problem that I regard as, at most, a mild nuisance in practical application code.  It's possible that you have encountered situations where it *is* a huge problem, but I've built enough complex, huge systems that I'm a tad skeptical that this is a deathly-important weakness.

I will concede your point here. I may be overestimating their importance. Yet I am not sure, and I don't want to risk it. So if Sergey's suggestion about Shapeless works, then I can have the disjunctions without subsuming them to Any nor the performance overhead (and boilerplate) of boxing them into Either or OneOfX.
 
There is no extra boilerplate in Copute, just have your MIXIN extend the INTERFACE, then only extend the MIXIN where you need the default functionality.

That's still boilerplate, and is still poor factoring.

No, because...
 
 As I said above, if there is only one rational implementation (which is *very* common in my experience), then having to mix it in is unnecessary nuisance, and a recipe for accidental code duplication by people who don't happen to notice that there is One True Implementation existing somewhere else.

Because I already wrote you can put it in a STATIC in this case where there is only one rational implementation.
 
If you don't want the default method to be overridden, then you can place it in the INTERFACE as a STATIC that inputs the INTERFACE as its first parameter. You then call these as Name.func(x, ...) where x is an instance of Name. I suppose I could support syntactical sugar for this case so you can call them as x.func(...).

I would recommend that -- it at least provides a workaround to my major concern about code duplication.  That said:

So why did you continue to argue above? Let's try to keep the noise level of our posts down and focus on points that are still in contention.
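For concreteness, the STATIC-with-sugar idea quoted above corresponds roughly to this ordinary Scala pattern (names hypothetical):

object Example {
  trait Name {
    def label: String
  }

  object Name {
    // Non-overridable default behavior, written once, outside the trait:
    def describe(x: Name, prefix: String): String = prefix + x.label
  }

  // The proposed sugar would let callers write x.describe(...); in Scala today
  // an implicit class provides the same convenience:
  implicit class NameOps(x: Name) {
    def describe(prefix: String): String = Name.describe(x, prefix)
  }
}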
 
The main justification is that interface is orthogonal to implementation. This is crucial design concept for code reuse and modularity.

I can see that you are making this as a fundamental assumption.  I don't happen to agree, and I no longer find the arguments persuasive.  I spent many years going down that road, and found that in practice, it leads to *poorer* code reuse and *worse* modularity.  It's an assertion that sounds nice, but in my experience doesn't match reality.

Please give me an example. I am confident you are confused or conflating some issues.

Instead, my experience has been that Scala's traits get reuse and modularity exactly correct: implementation is fine on the trait, but should depend on the trait's abstracts.  You leave abstract the things that are intended to vary, and provide implementation for the ones that aren't.  Whether the trait happens to be a pure interface or not is a matter of happenstance, and not terribly relevant.

All I have done is separate the abstract part that goes in the INTERFACE from the implementation part that goes in the MIXIN. You haven't lost any functionality as pertains to pure functional programming (that Copute is restricted to). For mutable programming, yes the trait would offer you some additional flexibility.

And that separation forces you to put overridable implementation in the MIXIN, which is correct from a reuse POV. This does not reduce your flexibility at all as pertains to pure functional programming (to which Copute is restricted).

Please show me an example otherwise?
 
The consumer of an interface should not be partial to any particular implementation of subtypes, yet rather only to its documented semantics. By putting a default overrideable implementation in the interface (trait), the developer is able to avoid writing a complete specification of the interface, and thus consumers of the interface will rely on what they interpret the semantics to be by studying the default implementation. This will destroy code reuse.

Stuff and nonsense.

Careful ;)
 
 I reuse such traits all the time.  Most of Scala's standard library is made up of such traits.  Are you claiming that nobody ever reuses them?

Does claiming that tractors are more efficient than plows mean claiming that plows can't till the soil?
 
 And I rarely look at the implementation unless there is a compelling reason to do so -- I use the documentation, exactly the same way I would with an abstract interface.

I wrote upthread that allowing a default implementation in an interface allows the programmer to avoid writing documentation. I didn't write that they would always avoid writing documentation. I think this discipline is helpful, especially since it doesn't cost anything in flexibility and it doesn't cost anything in boilerplate for the class extending the MIXIN. And it only costs a single terse extra line of code to write, "MIXIN Name: InterfaceName {", because Copute eliminates the need to rewrite the signatures.
 
One of my big goals with Copute, is I want to change the entire economy of open-source.

Ambitious, but okay -- I can appreciate ambitious.  But you're going to fail if you demand that everybody think like you, and sacrifice functionality that already exists in other languages that they find useful.

I haven't sacrificed any functionality except for requiring pure functional programming.

No high cost, i.e. no boilerplate when extending the base MIXIN. Major gains in code reuse discipline.

If every implementation class has to mix in the mixin, that's still boilerplate.  Concise boilerplate is still boilerplate.

Here is the example difference for the worst case where the default should be overridable.

In Scala:

trait Foo {
   def f: Sig = implementation
}

class Fooish extends Foo

In Copute:

INTERFACE Foo {
   f: Sig
}

MIXIN FooImpl: Foo {
   f = implementation
}

CLASS Fooish: FooImpl 

The point is that when the code is displayed in a monospace font without syntax highlighting (e.g. in HTML, PDF, Word documents, etc) then the viewer can detect the keywords more rapidly.

For those who hate it, I will definitely try to get a feature into the IDE (and hopefully Github too) to toggle the keywords to lowercase. And the all-caps is unnecessary when the keywords are colored.

Okay, now I'm confused: does the case of the keywords matter?  Your examples had led me to assume that the all-caps was required.  If they are case-insensitive I might have other concerns (mostly that they consume more namespace), but this is a more minor issue.

You can view them in lowercase only by having an IDE or renderer which displays them in lowercase for you.

That seems deeply unwise in the JVM world.  Symbols are case-sensitive, so having the IDE lie about case seems like a magnificent recipe for hard-to-understand bugs.

Not if I disallow the lowercase forms of the keywords from being used as identifiers. Thanks for pointing this out. I will go add this to the lexer (terminals grammar) now.

Then we lose the ability to distinguish keywords easily in code that is viewed in a text editor or in html. And how often are you likely to view the code in a non-IDE setting where it would annoy you?

Um -- rarely?  Never?  Again, you're making a gigantic deal out of something that many of us regard as a non-problem.  I find it mildly useful to have syntax highlighting in Eclipse, but I edit un-highlighted code in, eg, Notepad++ all the time, and rarely if ever find that to be an issue.  The all-caps symbols, OTOH, would probably *always* bother me.

Okay I hear your opinion. And I will continue listening to others, but after I have implemented the following solution for them to try. I understand the mainstream loves syntax highlighting, thus in the case where it isn't available, then I want the code to appear as if the keywords are highlighted in all-caps. I don't see a problem with you editing code in Notepad++ using lowercase keywords, as you can run it through a preprocessor (or non-default compiler flag) before it is compiled. That makes it just a tad difficult enough to compile correctly for newbies that you will hopefully not display the lowercase version of the code publicly without some wrapper around copy+paste to convert it to all-caps keywords. Thus encouraging you to display it either as syntax highlighted lowercase with such a wrapper on copy+paste, or as plain text with all-caps keywords. Thus achieving my goal without burdening you at all (since you are knowledgeable enough to use tooling).

Perhaps I just create more confusion than it is worth, in which case I will abandon this idea.
 
But it impacts what type of library you can use, and I already made arguments that the category theory library is going to be much easier to use and more performant. I have to prove that. And I assume we agree that Scalaz isn't easy for you to use (and I don't know if it is more performant).

Correct -- but like I said, my problem there is more the concepts than the symbols.  (I am deeply hoping that Greg Meredith's forthcoming book helps me there.)

I can explain this easily to you. And I will. You won't need to read no darn book, hehe. 

Shelby

unread,
Sep 13, 2013, 6:53:52 PM9/13/13
to scala-...@googlegroups.com, Shelby
I remembered the other justification for this design. So that you can read an INTERFACE very cleanly, i.e. it is self-documenting from signatures, without the noise of implementation interspersed.

In summary, 3 benefits with no loss of flexibility, and only 1 extra terse line of boilerplate:

1. Java interfaces so better interopt with Java.
2. Forcing overrideable default implementation out of the interface, to force documentation discipline.
3. Separating signatures from implementation to enable rapid eyeballing of expected functionality, even without accessing a scaladoc. It is just cleaner all around. And better factored.

Shelby

unread,
Sep 13, 2013, 7:05:04 PM9/13/13
to scala-...@googlegroups.com, Jason Zaugg, Shelby
If the Shapeless solution is boxing, then I am thinking we still need first-class union support in the compiler. Boxing is inefficient. 
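For context, the unboxed alternative being alluded to is Miles Sabin's well-known Curry-Howard encoding of union types (sketched here from memory, not a library API); it avoids both Any and boxing, at the cost of heavy type-level machinery:

object UnboxedUnion {
  type Not[A]      = A => Nothing
  type NotNot[A]   = Not[Not[A]]
  type Or[A, B]    = Not[Not[A] with Not[B]]
  type Union[A, B] = { type Check[X] = NotNot[X] <:< Or[A, B] }

  // Statically accepts Int or String, nothing else, with no wrapper allocated:
  def size[T: Union[Int, String]#Check](t: T): Int = t match {
    case i: Int    => i
    case s: String => s.length
  }

  size(23)       // 23
  size("foo")    // 3
  // size(1.0)   // does not compile
}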

Shelby

unread,
Sep 13, 2013, 7:11:26 PM9/13/13
to scala-...@googlegroups.com, Shelby
On Saturday, September 14, 2013 12:31:16 AM UTC+8, Justin du Coeur wrote:
On Thu, Sep 12, 2013 at 10:08 PM, Shelby <she...@coolpage.com> wrote:
If everything is going to be a object with a meaningful type, then how does it make any sense that Scala chose not to implement first-class disjunctions? That is really perplexing me how Scala obtained such a large hole

You've made a big deal about this, but I think you've lost perspective.  Again, from my viewpoint as a workaday engineer, this is an edge case.  Yes, it is occasionally an annoying edge case that I have to work around, but honestly -- I don't even notice the lack most of the time. So while I agree that it would be lovely to fix this, it's an extremely minor detail in terms of what I look for in a language.

I am not sure it is an edge case in the general reuse scenario that I mentioned in my prior reply. You may be correct, but I am hedging my bets.
 
And (back to my previous point), if using it means encouraging my engineers to put Any into their function signatures, I will treat it as a misfeature and absolutely forbid its use.

Thanks for making this point. I understand that it shouldn't subsume to Any without a compiler option to turn these into warnings or errors. And depending on the efficiency of a boxed solution (if that is what Shapeless is doing), then it still might be necessary to have a compiler option to turn these into warnings.
 
Regarding metaprogramming, I haven't had time to research the proposals for Scala macros. Whatever they do, I hope there is an option to see the generated Scala code. Shouldn't macros just be DSLs?

You have that backwards.  Macros are an implementation mechanism by which one *writes* more powerful DSLs.  They're the enabling tool.

I.e. they are just DSLs ;)
 
Shouldn't Copute be a DSL?

I've been wondering that, yes.  I'd be more likely to take Copute seriously if it was done as a library that plugged into Scala, instead of a separate and somewhat idiosyncratic language.  That may or may not match your language goals, but it's a point worth factoring in.

I haven't explored if it is possible and the current state of tooling for DSLs. Right now I am trying to get to proof-of-concept stage with the least tsuris. Yes I am definitely open to this. I would rather it be fully integrated in the Scala tooling. And I will work towards that goal if it is reasonable. 

Shelby

unread,
Sep 13, 2013, 7:48:22 PM9/13/13
to scala-...@googlegroups.com, Shelby
On Saturday, September 14, 2013 12:45:32 AM UTC+8, Justin du Coeur wrote:
On Thu, Sep 12, 2013 at 11:14 PM, Shelby <she...@coolpage.com> wrote:
Here follows an example of myself recently trying to convince some expert C++ programmers to learn and use Scala for an upstart project that I was interested to contribute programming to:

(discussion continues to the next page of that thread)

What you will find is that if most programmers don't know Scala, it is impossible to convince anyone to use Scala for a project.

That isn't exactly a convincing argument.  Indeed, it simply displays the same flaw that Suminda has been trying to point out, which you seem to be ignoring.  In both cases, you (metaphorically) wandered into somebody else's house; declared that you are smarter than everyone present; started lecturing them pedantically; and wound up effectively calling other people stupid for disagreeing with you.  (And then getting defensive and "you're all picking on me" when you met resistance.)

Triangulation helps formulate rationality w.r.t. our interpretations, because we are often blinded by biases and emotion.

Actually they were invading my house, as evidenced by my winning the poll with 85% support for my technical arguments:


Both of the principals asked for my feedback:



And I was asked by others in that forum on that day to go present my analysis in that thread.

And I did not belabor the point about languages and conceded it to them:


And even convinced the person who was arguing with me to take interest in Scala:


See where sulking causes rationality to go ;)
 
That is pretty much the *least* effective way to convince anybody of anything.  Moving a community is a subtle game, and being right is by no stretch of the imagination sufficient to winning the argument.

Sorry, I disagree. Being correct and implementing wins. One aspect of being correct is being sure that your stuff is essentially needed and there is pent-up demand for it.
 
 It is absolutely essential that you engage in give-and-take, and frankly you have to be pretty damned thick-skinned to play that game successfully, since at *best* you are going to step on ideas that people are invested in.

Yeah I take and you give. Joke.

Seriously the way to get others invested in your ideas, is incorporate their good ideas. And recognize them for it and thank them. But succumbing to crap ideas and political nonsense will get you precisely what you got from your years with Ada.

The best is "talk is cheap, show me the darn code".

So it means I need to stop this. I've gained some ideas from this thread (some from you). Thanks.
 
In other words, it isn't level ground, and wishing that it was doesn't do you any favors.  You are trying to change the way other people are doing things,

Yeah, but it is not you all I am trying to change. It is the people who haven't yet adopted Scala. It is a much larger audience than the small group here. Here I am mainly trying to explain to you what I am doing and to get any technical discussion that can be helpful. I am not expecting to change anyone's mind here now. The extent to which you have shown some critical interest is a pleasant surprise. It has given me inspiration that I am on to something potentially worthwhile.
 
and the onus is *very* much on you to prove, very concretely, that you are correct -- and to engage very sincerely with the community to understand the shortcomings in your own ideas and find the necessary compromises.

Exactly what I have been doing.

Shelby

unread,
Sep 13, 2013, 8:05:21 PM9/13/13
to scala-...@googlegroups.com, Shelby
Given a function:

def f[T<:Plus] = _:T + _:T + _:T

In Copute, you can write f|(List(Some(1), Some(2), Some(3)), Some(3), Some(4))| and obtain a result List(Some(8), Some(9), Some(10)).

This is because List and Option will implement the interface Applicative.apply.

That isn't difficult to understand and the code for Applicative.apply is very easy to study if you want to see the plumbing.
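A hand-written plain Scala sketch of the same result, assuming a concrete Int version of f (the Copute call above is hypothetical syntax; this only shows the plumbing it would have to generate):

def f(a: Int, b: Int, c: Int): Int = a + b + c

val xs: List[Option[Int]] = List(Some(1), Some(2), Some(3))
val y: Option[Int] = Some(3)
val z: Option[Int] = Some(4)

// Lift f over the nested List/Option structure by hand:
val result: List[Option[Int]] =
  xs.map(ox => for { a <- ox; b <- y; c <- z } yield f(a, b, c))
// result == List(Some(8), Some(9), Some(10))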

Further explanations will be of this genre.

Shelby

unread,
Sep 13, 2013, 8:35:00 PM9/13/13
to scala-...@googlegroups.com, Shelby
I used to get emotionally hurt when people rebuffed me for what seemed like no logical reason, just the silver-backed Aping. Because I used to strive to be likable and I was very popular in high school and college. I had the best parties, I was a jock, had beautiful gfs, school was relatively easy (rarely had to go to class), could get pissed drunk then run a 4:30 mile on a hangover, etc..

But as I've gotten older and more serious about my work, I have adopted what appears to match the following quote from ESR. It doesn't mean I won't try to be likeable, but when someone is just playing the beta-male wannabe alpha-male game with me, I just go into my sigma-male mode ("don't care, show me the technicals").


I’m not speaking abstractly here. I’ve always been more interested in doing the right thing than doing what would make me popular, to the point where I generally figure that if I’m not routinely pissing off a sizable minority of people I should be pushing harder. In the language of psychology, my need for external validation is low; the standards I try hardest to live up to are those I’ve set for myself. But one of the differences I can see between myself at 25 and myself at 52 is that my limited need for external validation has decreased.

I can clearly see who the alpha/sigma-males are here, because they are courteous, factually astute, helpful, and not at all offended except by ignorance (yet even then they allow some patience and understanding). 

Shelby

unread,
Sep 14, 2013, 2:37:19 AM9/14/13
to scala-...@googlegroups.com, Shelby
I misspoke. There are two more cases Copute does not allow; they are unnecessary in Copute because it is possible to specify the return type on the anonymous function.

def len: String => Int = s => s.length
def len: String => Int = _.length

Also, where I wrote upthread that a pattern match wildcard would use ?, I changed that to agree with Scala's use of _. Now the ? is only syntactic sugar for Option(al). The main difference from Scala's treatment of _ is that it now always means unspecified or unused, thus is never a shorthand for an anonymous function parameter. The _1 to _N are for anonymous function parameters, which have the advantage over Scala that ordering doesn't matter and a parameter can be used more than once in the function expression.
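To illustrate the Scala behavior being contrasted (the _1 to _N forms are hypothetical Copute syntax, so only the Scala side is shown):

// In Scala each underscore in a function literal is a fresh parameter, bound in order of appearance:
val add: (Int, Int) => Int = _ + _              // (a, b) => a + b

// To reuse a parameter, or use parameters out of order, you must name them explicitly:
val square: Int => Int = x => x * x             // _ * _ would mean (a, b) => a * b
val flipSub: (Int, Int) => Int = (a, b) => b - a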

Shelby

unread,
Sep 14, 2013, 6:57:56 AM9/14/13
to scala-...@googlegroups.com, Shelby
On Friday, September 13, 2013 10:17:49 PM UTC+8, Justin du Coeur wrote: 
Which reminds me: the all-caps thing just plain bugs me.  This isn't anything rational -- it's just many years of Internetting, teaching me that all-caps means YELLING.  So on a purely aesthetic level, I look at this code and it feels like it's shouting at me, which gives me an instinctive negative gut reaction.

The all-caps is much less annoying in the Eclipse IDE with its monospace font and purple syntax highlighting on the keywords. I agree it looks horrendous as displayed in this variable-width font used at google groups. I don't know what font you are viewing with in your email client.

Granted, this sort of thing is in the eye of the beholder.  But I have to point out that the purple highlighting is something you can turn off if you dislike it, whereas the all-caps is inherent in the language.  (Or is it?? See below.)

The point is that when the code is displayed in a monospace font without syntax highlighting (e.g. in HTML, PDF, Word documents, etc) then the viewer can detect the keywords more rapidly.

For those who hate it, I will definitely try to get a feature into the IDE (and hopefully Github too) to toggle the keywords to lowercase. And the all-caps is unnecessary when the keywords are colored.

Okay, now I'm confused: does the case of the keywords matter?  Your examples had led me to assume that the all-caps was required.  If they are case-insensitive I might have other concerns (mostly that they consume more namespace), but this is a more minor issue.
 
I am also open to abandoning the all-caps keywords entirely if I get feedback from all sectors asking me to do so.

Fair enough.  Consider this a vote against.
 
I had the same "ouch" reaction when I saw in this variable-width font display at the google groups' website UI. Caused me to doubt my original decision. But again, I think most code will be displayed in a monospace font, either syntax highlighted or not. It is the latter "not" case that I am trying to address, e.g. when one quickly types some code in a non-IDE text editor.

Honestly, I think it's too high a price for an edge case.  The looks aside, I find all-caps a pain in the ass to type -- either I have to bring in my capslock key (which I am not used to when coding, and isn't built into my touch-typing) or I have to hold down the Shift.  Either way, it slows down my typing noticeably, and would likely annoy me a lot in practice. 

I decided to abandon the all-caps keywords. I think you are correct. To maximize adoption, I should minimize unfamiliarity from the other popular languages, and also not create unnecessary differences from Scala syntax. Thanks for convincing me.

This was a stupid idea in hindsight. I got enamored with it when I was thinking the ultra-short keywords would serve as delimiters in place of symbols, e.g. DO instead of parentheses around the right operand of IF, e.g. IF expr DO expr .... But this doesn't work well for the keywords longer than 2-3 characters, and now `do` isn't even always the delimiter, because I improved the syntax to allow a block or a `do` expr, e.g.:

if expr do expr ...

Or:

if expr {
   stmts
}
...

Justin du coeur

unread,
Sep 14, 2013, 11:30:22 AM9/14/13
to Shelby, scala-debate
On Fri, Sep 13, 2013 at 5:21 PM, Shelby <she...@coolpage.com> wrote:
I only claimed one case was unnecessary, which is the only case Copute eliminates:

def len(s: String): Int = {return s.length}

Everyone agrees that it is unnecessary.  As I said, AFAIK that was left in mainly for the ease of adoption of all those newbie programmers you claim to be so concerned about.

Also Copute gets rid of the = when followed by a braced enclosed block, to make it more consistent with (and familiar to programmers using) those popular languages.

That is plausible, provided it is done correctly, since the ability to not have it is mildly controversial in Scala.  I assume you always require a return type signature?

Justin du coeur

unread,
Sep 14, 2013, 5:33:21 PM9/14/13
to Shelby, scala-debate
On Fri, Sep 13, 2013 at 6:37 PM, Shelby <she...@coolpage.com> wrote:
On Saturday, September 14, 2013 4:53:58 AM UTC+8, Justin du Coeur wrote:
I don't -- that is, I don't try to unify the types.  I would live with the limitation, and use, eg, case classes with Option fields.  It's clunky, but it works and is safe.

Don't you mean you use Either fields or OneOfX? I don't see how an Option would help you when you have two or more types (which are not subtypes of common type other than Any) in your collection. 

No, I rarely use either of those, precisely because they don't work very well.  Instead, I usually resign myself to more complex data structures with distinct, strongly-typed optional fields for the different types that might be captured.  That's a weak solution, but it's safe and often permits me to capture other surrounding data that turns out to be relevant.

But really -- I just plain don't hit this very often in my ordinary, real-world code, so I don't spend a lot of time worrying about it.
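For instance, a minimal sketch (field names made up):

// Instead of a List[Any] mixing unrelated values, a record whose strongly-typed
// optional fields cover the cases that may occur:
case class Entry(user: Option[String] = None, count: Option[Int] = None)

val entries: List[Entry] = List(
  Entry(user = Some("ann")),
  Entry(count = Some(42))
)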
 
When I say "reuse", I mean where we can't refactor the libraries, because we are interopting with them. Imagine the implausibility of refactoring the Scala std library for example or any popular library.

Disjunctions can eliminate a cartesian product of function overloading, i.e. your function can take N arguments of M types each, or else you must write M^N overloaded methods (one per combination of argument types).

So is that your intended use case?  One of the challenges I'm having here is that I don't really understand your target applications.  For day-to-day coding, Copute looks like it mostly adds features I don't care about, and introduces problems I really don't want to deal with.  If your focus is on high-level libraries, it is possible that that's a lesser concern -- that's not my area of expertise.  (But it also is largely irrelevant to most folks just starting out in this environment, very few of whom want to move quickly into high-level library design.)

And I have to underscore my previous point: disjunctions that simply become Any at the JVM level are probably a strong net negative to me -- I wouldn't be willing to work with any library that exposed them, because I wouldn't be able to trust the robustness of the resulting application code in the face of enhancement and maintenance.  Either is kind of weak, but I at least *mostly* trust it not to introduce subtle bugs.
 
I will concede your point here. I may be overestimating their importance. Yet I am not sure, and I don't want to risk it. So if Sergey's suggestion about Shapeless works, then I can have the disjunctions without subsuming them to Any nor the performance overhead (and boilerplate) of boxing them into Either or OneOfX.

I'd recommend exploring this.  Frankly, the performance overhead is a relatively minor detail -- boxing is a fact of life in the JVM, and most of us writing application code don't spend a lot of time worrying about it except in pathological cases.  

And the boilerplate of Either would be less important if we're talking about generated code.  Generated code is allowed to have as much boilerplate as it likes, pretty much.  I worry that the resulting *usage* code in Scala, using values from generated Copute code, might be fattier than I like, but that's a lesser evil than exposed Any's.
 
 As I said above, if there is only one rational implementation (which is *very* common in my experience), then having to mix it in is unnecessary nuisance, and a recipe for accidental code duplication by people who don't happen to notice that there is One True Implementation existing somewhere else.

Because I already wrote you can put it in a STATIC in this case where there is only one rational implementation.

Hmm.  Makes me twitch on an aesthetic level, and I think it's rather confusing, since what I'm talking about is not *conceptually* static -- after all, if you're passing in an instance, then it's an instance method almost by definition.  (The sometime C++ engineer in me thinks quite strongly in those terms -- if you're passing the instance as a parameter, it's a method; if not, it's static.)

I'll grant you that it may be a practical solution to the problem, but I wind up questioning the keyword STATIC -- that isn't what most people *mean* when they say "static", and it's a very unusual usage.  I find it unintuitive.
 
The main justification is that interface is orthogonal to implementation. This is crucial design concept for code reuse and modularity.

I can see that you are making this as a fundamental assumption.  I don't happen to agree, and I no longer find the arguments persuasive.  I spent many years going down that road, and found that in practice, it leads to *poorer* code reuse and *worse* modularity.  It's an assertion that sounds nice, but in my experience doesn't match reality.

Please give me an example. I am confident you are confused or conflating some issues.

... no.  Sorry, Shelby, but you've gotten my goat, and I'm going to back off from the conversation before I get heated.  I'm failing to keep my temper in check, so it's probably time for me to give up on trying to help you.

I have to observe that your choice of rhetoric is poor.  You've made a big deal about other folks getting ad hominem at you, but the reason that's happening is because your conversational style is full of flamebait like this.  I've been drifting too close to responding in kind, and that means it's time to go away.  I've been participating in this conversation because it was intellectually interesting, and might possibly result in something useful, but ultimately I don't actually *care* very much.  Trying to help out isn't worth being insulted -- I have better things to do.

Is it not obvious that the above, in the context of this conversation, is insulting?  Just how unwelcome it is that you are brushing off my experience (which is almost certainly at least equal to yours in terms of practical engineering) with "you are confused"?  I'm not sure whether you honestly don't realize how nasty and arrogant you sound in the aggregate, or if you do it intentionally.  Once or twice wouldn't be a big deal -- everyone falls into these traps occasionally -- but you've done things like this rather frequently in the course of talking to various people over the past week or two.

Either way, if you don't learn to control it, you're very likely to self-sabotage any serious efforts you make in the open-source arena.  Making an OSS project succeed depends at *least* as much on good management as it does on being right -- working closely with the community, listening very hard to their critiques, and taking their comments with the utmost seriousness.  I probably spend a third of my time doing that for my project, and it's not easy -- it requires swallowing my own ego, constantly asking whether I've gotten the details wrong, and making course adjustments every single day based on the input I'm getting from the folks around me.  It doesn't mean that I follow every one of their suggestions -- but in practice I *do* probably incorporate over half of them into my designs, and I never, *ever* insult the intelligence of the members of the community.

Your project, and your call -- you get to manage it as you like.  But I point out that you've managed to drive away pretty much everyone here who showed interest.  That doesn't bode well, and you may want to learn from it...

Justin du coeur

unread,
Sep 14, 2013, 5:43:27 PM9/14/13
to Shelby, scala-debate
(Mostly dropping off, but skimming for New and Different:)

On Fri, Sep 13, 2013 at 6:53 PM, Shelby <she...@coolpage.com> wrote:
I remembered the other justification for this design. So that you can read an INTERFACE very cleanly, i.e. it is self-documenting from signatures, without the noise of implementation interspersed.

Huh.  Okay, I will concede that it does accomplish that.  But does it matter?  I mean, I very nearly always use APIs based on the Scaladocs, very rarely based on the code itself -- that's pretty much SOP for most projects of real scale nowadays.  So I have to question whether it's a *significant* benefit, or whether the standard tools have obviated the problem away...

Justin du coeur

unread,
Sep 14, 2013, 5:56:21 PM9/14/13
to Shelby, scala-debate
Following on from my earlier point:

On Fri, Sep 13, 2013 at 8:35 PM, Shelby <she...@coolpage.com> wrote:
But as I've gotten older and more serious about my work, I have adopted what appears to match the following quote from ESR. It doesn't mean I won't try to be likeable, but when someone is just playing the beta-male wannabe alpha-male game with me, I just go into my sigma-male mode ("don't care, show me the technicals").

That's not a bad strategy -- heaven knows, I often do it myself.  (Both professionally and in the various clubs I'm involved with.)  But to play that game successfully, it absolutely *demands* both a thick skin and the discipline to back off from an argument before it gets heated.  That's *very* hard, but you will generally undermine yourself when you fail to do so.

And keep in mind that it isn't sufficient for running a successful larger-scale project.  It's possible for you to design Copute yourself, and maybe even to build it yourself.  But *popularizing* it (which I gather is another goal) is in many ways a management / marketing problem, and calls for very different mental tools.  I recommend thinking about how you're going to accomplish that.  (I've learned a good deal of that through spending a number of years as Product and/or Project Manager on various projects, as well as technical lead.  *Very* different skills.)

And despite Eric's big name in the OSS world, it's worth noting that he hasn't actually *led* any really major OSS projects that I'm aware of.  I don't know whether he covers the nitty-gritty problems of leading successful OSS projects in The Cathedral and the Bazaar (I haven't read it), but there are a number of books specifically on the subject of managing and leading such projects that have come out in the past decade.  You might do well to internalize a few of them...

Shelby

unread,
Sep 15, 2013, 5:52:08 AM9/15/13
to scala-...@googlegroups.com, Shelby
I started writing some documentation (to make sure I implement the compiler to spec) which clarifies:

Shelby

unread,
Sep 15, 2013, 6:48:39 AM9/15/13
to scala-...@googlegroups.com, Shelby
On Sunday, September 15, 2013 5:33:21 AM UTC+8, Justin du Coeur wrote:
On Fri, Sep 13, 2013 at 6:37 PM, Shelby <she...@coolpage.com> wrote:
On Saturday, September 14, 2013 4:53:58 AM UTC+8, Justin du Coeur wrote:
I don't -- that is, I don't try to unify the types.  I would live with the limitation, and use, eg, case classes with Option fields.  It's clunky, but it works and is safe.

Don't you mean you use Either fields or OneOfX? I don't see how an Option would help you when you have two or more types (which are not subtypes of common type other than Any) in your collection. 

No, I rarely use either of those, precisely because they don't work very well.  Instead, I usually resign myself to more complex data structures with distinct, strongly-typed optional fields for the different types that might be captured.  That's a weak solution, but it's safe and often permits me to capture other surrounding data that turns out to be relevant.

But really -- I just plain don't hit this very often in my ordinary, real-world code, so I don't spend a lot of time worrying about it.

We need disjunctions when we can't coalesce different types to have the same supertype (other than Any), which will happen whenever we reuse modules written by others, where we can't refactor their types to inherit from the supertypes we would need in order to mash the whole universe together.

Disjunctions are fundamentally about the Expression Problem (which I have provided links to upthread already) or more importantly the fragile base class problem.

Without disjunctions, I can't meet my big picture goal of forwarding modularity and reuse across the universe (not just within the local team), as I explained to you upthread.
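A minimal sketch of the situation I mean (the library types are made up):

// Two types from third-party modules we cannot refactor to share a supertype:
final class LibAUser(val name: String)
final class LibBAccount(val id: Int)

// Mixing them in one collection collapses the element type to a useless common supertype:
val mixed = List(new LibAUser("ann"), new LibBAccount(42))   // inferred List[AnyRef]

// Today's boxed workaround; a first-class disjunction would instead give the
// precise type "LibAUser or LibBAccount" with no wrapper:
val typed: List[Either[LibAUser, LibBAccount]] =
  List(Left(new LibAUser("ann")), Right(new LibBAccount(42)))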

In addition to Sergey's revelation about Shapeless above, I've found out that Paul Phillips is the hacker Jason Zaugg was referring to upthread:


I found the old discussion about union types (contains Adriaan Moors terse explanation of a possible future overhaul of Scala's type system):


Shapeless provides another way to think about tuples:

 
When I say "reuse", I mean where we can't refactor the libraries, because we are interopting with them. Imagine the implausibility of refactoring the Scala std library for example or any popular library.

Disjunctions can eliminate a cartesian product of function overloading, i.e. your function can take N arguments of M types each, or else you must write M^N overloaded methods (one per combination of argument types).

So is that your intended use case?

Yes the inability to refactor case.

For example if I am mashing up a collection of data structures returned by different modules written by different teams all over the world.
 
 One of the challenges I'm having here is that I don't really understand your target applications.  For day-to-day coding, Copute looks like it mostly adds features I don't care about, and introduces problems I really don't want to deal with.  If your focus is on high-level libraries, it is possible that that's a lesser concern -- that's not my area of expertise.  (But it also is largely irrelevant to most folks just starting out in this environment, very few of whom want to move quickly into high-level library design.)

You entirely miss the point which is that users need to consume libraries and have them all work together. Modularity is what makes that happen.

Right now you choose one big monolithic library and you are stuck with whatever is compatible with it. You can't pick and choose granularly, because we don't write libraries or userland code in a way that composes without refactoring.

The Expression Problem or more importantly the fragile base class problem.

And I have to underscore my previous point: disjunctions that simply become Any at the JVM level are probably a strong net negative to me -- I wouldn't be willing to work with any library that exposed them, because I wouldn't be able to trust the robustness of the resulting application code in the face of enhancement and maintenance.  Either is kind of weak, but I at least *mostly* trust it not to introduce subtle bugs.
 
I will concede your point here. I may be overestimating their importance. Yet I am not sure, and I don't want to risk it. So if Sergey's suggestion about Shapeless works, then I can have the disjunctions without subsuming them to Any nor the performance overhead (and boilerplate) of boxing them into Either or OneOfX.

I'd recommend exploring this.  Frankly, the performance overhead is a relatively minor detail -- boxing is a fact of life in the JVM, and most of us writing application code don't spend a lot of time worrying about it except in pathological cases.

I must have disjunctions. I realized now why I can't live without them as explained above. They are essential to the very reason I am bothering to create Copute.

Btw, I will tie this technical point into your comments about my politics too (see next reply).
 
And the boilerplate of Either would be less important if we're talking about generated code.  Generated code is allowed to have as much boilerplate as it likes, pretty much.  I worry that the resulting *usage* code in Scala, using values from generated Copute code, might be fattier than I like, but that's a lesser evil than exposed Any's.
 
 As I said above, if there is only one rational implementation (which is *very* common in my experience), then having to mix it in is unnecessary nuisance, and a recipe for accidental code duplication by people who don't happen to notice that there is One True Implementation existing somewhere else.

Because I already wrote you can put it in a STATIC in this case where there is only one rational implementation.

Hmm.  Makes me twitch on an aesthetic level, and I think it's rather confusing, since what I'm talking about is not *conceptually* static -- after all, if you're passing in an instance, then it's an instance method almost by definition.  (The sometime C++ engineer in me thinks quite strongly in those terms -- if you're passing the instance as a parameter, it's a method; if not, it's static.)

My quick explanation of type theory in layman's terms today from first principles may elucidate:


In particular, note that `this` plays no role in subtyping with typeclasses. I quote myself, "Type classes are a form of polymorphism (i.e. extensibility) where the `this` parameter is a type parameter of the operations. The operations for a concrete type can be matched nominally (by name) as shown in the OP or even structurally (although this can be erased and sometimes cause reflection on the JVM and thus is probably inefficient)". Then read the rest of that post to compare to the role of `this` in subtyping.
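A minimal Scala sketch of that style, with a hypothetical Plus type class (ordinary Scala 2 implicits, not Copute syntax):

// The `this` parameter becomes a type parameter of the operations:
trait Plus[T] {
  def plus(a: T, b: T): T
}

object Plus {
  implicit val intPlus: Plus[Int] = new Plus[Int] {
    def plus(a: Int, b: Int): Int = a + b
  }
}

// Polymorphic over any T that has a Plus instance, resolved at compile time:
def sum3[T](a: T, b: T, c: T)(implicit p: Plus[T]): T =
  p.plus(p.plus(a, b), c)

sum3(1, 2, 3)   // 6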

So either it is an overrideable method (in which case put it in a mixin) or it is not (so you can put it in a static).

The point of putting overrideable methods in the mixin is separation-of-concerns, a fundamental design principle. Your point is to favor SPOT (single-point-of-truth), but putting it in the mixin maintains SPOT and factors the code more correctly.

First you think about interface which means specification. Then you think about implementation, c.f. ESR's famous book The Art of Unix Programming.

Aesthetically it is beautiful, because the boundary is clean: overrideable implementation goes in the mixin and/or class. Specification goes in the interface.
 
I'll grant you that it may be a practical solution to the problem, but I wind up questioning the keyword STATIC -- that isn't what most people *mean* when they say "static", and it's a very unusual usage.  I find it unintuitive.

It is the same historical meaning of the static keyword: a function that doesn't change per instance.

Now I add a new capability to static, which is that you can implement it in SUB types. That is to support the benefits of typeclasses, cf. the prior links and also this one:


And some new discussion of problems with the std library:

 
The main justification is that interface is orthogonal to implementation. This is crucial design concept for code reuse and modularity.

I can see that you are making this as a fundamental assumption.  I don't happen to agree, and I no longer find the arguments persuasive.  I spent many years going down that road, and found that in practice, it leads to *poorer* code reuse and *worse* modularity.  It's an assertion that sounds nice, but in my experience doesn't match reality.

Please give me an example. I am confident you are confused or conflating some issues.

... no.  Sorry, Shelby, but you've gotten my goat, and I'm going to back off from the conversation before I get heated.  I'm failing to keep my temper in check, so it's probably time for me to give up on trying to help you.

I have to observe that your choice of rhetoric is poor.  You've made a big deal about other folks getting ad hominem at you, but the reason that's happening is because your conversational style is full of flamebait like this.

 I was correct. Now we see above you were conflating `this` with subtyping.

Sorry I speak matter-of-factly. I didn't intend it as an insult.
 
 I've been drifting too close to responding in kind, and that means it's time to go away.  I've been participating in this conversation because it was intellectually interesting, and might possibly result in something useful, but ultimately I don't actually *care* very much.  Trying to help out isn't worth being insulted -- I have better things to do.

Hey go reread your post laden with insults where you misjudged my role at the Bitcointalk.org forum.

Yet even so, I wasn't responding with any personal angle. I was saying factually that I could see you were conflating issues. I was hoping if you showed me an example, I could clarify it for you more eloquently than if I introduced more verbosity of discussion.

Is it not obvious that the above, in the context of this conversation, is insulting?  Just how unwelcome it is that you are brushing off my experience (which is almost

I didn't brush anything off; I just know that you aren't focused on the area of type theory, because you already told me so upthread.

Thus you come at this from a practical engineering standpoint, which sometimes yields great insights, e.g. your point about not subsuming to Any and not using all-caps keywords was very helpful and I conceded those points to you.

Yet when it comes to type theory, you may be in an area where your experience may not tell the entire picture.
 
certainly at least equal to yours in terms of practical engineering) with "you are confused"?  I'm not sure whether you honestly don't realize how nasty and arrogant you sound in the aggregate, or if you do it intentionally.  Once or twice wouldn't be a big deal -- everyone falls into these traps occasionally -- but you've done things like this rather frequently in the course of talking to various people over the past week or two.

I suggest rereading the entire thread (on Google Groups) on a sober day when your emotions have faded away. I am confident you will see me in a much more positive way.

I reread my posts and the only time I got aggressive was to those who were being discourteous to me or judging me personally. Other than that, I've tried to remain on factual points without political or emotional bias.
 
Either way, if you don't learn to control it, you're very likely to self-sabotage any serious efforts you make in the open-source arena.  Making an OSS project succeed depends at *least* as much on good management as it does on being right -- working closely with the community, listening very hard to their critiques, and taking their comments with the utmost seriousness.

I am going to address this point in my next reply. Prepare for surprising retort.
 
 I probably spend a third of my time doing that for my project, and it's not easy -- it requires swallowing my own ego, constantly asking whether I've gotten the details wrong, and making course adjustments every single day based on the input I'm getting from the folks around me.  It doesn't mean that I follow every one of their suggestions -- but in practice I *do* probably incorporate over half of them into my designs, and I never, *ever* insult the intelligence of the members of the community.

Yeah, but how to make that work most efficiently? I explain my idea in my next reply.

Your project, and your call -- you get to manage it as you like.  But I point out that you've managed to drive away pretty much everyone here who showed interest.  That doesn't bode well, and you may want to learn from it...

Realize I was attacked from the moment I started repeating often-heard criticism of Scala. You even said my points about no more operators were absurd.

A few were kind enough to offer replies and suggestions in the midst of all that heavy emotion hanging over the thread, yourself included. 

You insulted me several times but I didn't get angry (just corrected you and moved on), yet the only thing I remember saying to you is that I thought you were confused about one issue. I think I was correct.

Shelby

unread,
Sep 15, 2013, 7:26:55 AM9/15/13
to scala-...@googlegroups.com, Shelby
One thing that more than annoys me (it makes it difficult for me to study the code) when reading Scala source code at Github is that I can barely find the code buried in all that Scaladoc markup. And I can hardly read the comments because of all that noisy markup.

I am a programmer. I want to look at the code.

I hate noise (might be surprised to hear that given the number of posts I have made, but that is only because of the politics...normally I run away from politics).

If I get around to doing a Copudoc, the markup will go at the bottom of the file out of the way of code. And the IDE either should be taught how to render that markup in place and/or the marked up versions should be separate from non-marked up versions. I don't want to have to go rummaging around for Copudoc output, when I have the code right in front of my face in a text editor.

Shelby

unread,
Sep 15, 2013, 7:49:54 AM9/15/13
to scala-...@googlegroups.com, Shelby
On Sunday, September 15, 2013 5:56:21 AM UTC+8, Justin du Coeur wrote:
Following on from my earlier point:

On Fri, Sep 13, 2013 at 8:35 PM, Shelby <she...@coolpage.com> wrote:
But as I've gotten older and more serious about my work, I have adopted what appears to match the following quote from ESR. It doesn't mean I won't try to be likeable, but when someone is just playing the beta-male wannabe alpha-male game with me, I just go into my sigma-male mode ("don't care, show me the technicals").

That's not a bad strategy -- heaven knows, I often do it myself.  (Both professionally and in the various clubs I'm involved with.)  But to play that game successfully, it absolutely *demands* both a thick skin

Agreed.
 
and the discipline to back off from an argument before it gets heated.

My strategy is to back off against (i.e. be reasonable with) all reasonable people (e.g. yourself, I think). And to absolutely crush into the dust (courteously, with force of logic and figuratively, not literal violence), escalating tit-for-tat if they don't relent, those unreasonable people whose unrelenting aim is to teach you that they are higher on the beta-male (brown-nosed collectivism) social ladder, and who, if you don't conform, will pull every kind of political maneuver to try to put you down and get you banned out of the group they feel is their group. Then you quickly find out if the group is worth being in, because either the group is controlled by them or it is not. The former groups are in paralysis (group-think, etc.), and the latter are productive.
 
 That's *very* hard, but you will generally undermine yourself when you fail to do so.

Not hard for me. I am reasonable. I will never kowtow to a collective paralysis. (Sorry to repeat, but no Ada for me; hope you don't repeat... my dead-ends have been personal, not getting stuck in some group-think morass.)

And keep in mind that it isn't sufficient for running a successful larger-scale project.  It's possible for you to design Copute yourself, and maybe even to build it yourself.  But *popularizing* it (which I gather is another goal) is in many ways a management / marketing problem, and calls for very different mental tools.

Programmers will use the best tool for the job. Marketing is about getting the information to them. One of the best ways to market to programmers is to build a market where they can earn money and use the best tool for the job.

What are the markets where I can use Scala and earn money? Typesafe isn't even focused on Android, rather on the server. Remember I've written programs that have been downloaded by millions. I am aiming for the consumer markets.
 
 I recommend thinking about how you're going to accomplish that.  (I've learned a good deal of that through spending a number of years as Product and/or Project Manager on various projects, as well as technical lead.  *Very* different skills.)

I visualize a world, like my successes, where the technical lead is the product/project manager.

The modularity is crucial so as to increase what 1 (or 2 or 3) programmers can do by reusing modules.

Small teams. Less politics. Less waste. Less arguing. Less pontificating. More coding. More profit.

And despite Eric's big name in the OSS world, it's worth noting that he hasn't actually *led* any really major OSS projects that I'm aware of.  I don't know whether he covers the nitty-gritty problems of leading successful OSS projects in The Cathedral and the Bazaar (I haven't read it), but there are a number of books specifically on the subject of managing and leading such projects that have come out in the past decade.  You might do well to internalize a few of them...

Here is my surprising retort. I don't want to lead a large project. I want to change the way we do projects, such that we are mostly all building and reusing each other's modules.

So I want to eliminate the politics that you are saying I need to read books about.

In my way of thinking, Linus's model is so 90s. Time for the next big paradigm shift. 

P.S. Then I can piss people off when they don't agree with me, and it won't matter one iota.

Shelby

unread,
Sep 15, 2013, 9:01:55 AM9/15/13
to scala-...@googlegroups.com, Shelby
On Sunday, September 15, 2013 6:48:39 PM UTC+8, Shelby wrote:
On Sunday, September 15, 2013 5:33:21 AM UTC+8, Justin du Coeur wrote:
On Fri, Sep 13, 2013 at 6:37 PM, Shelby <she...@coolpage.com> wrote: 
When I say "reuse", I mean where we can't refactor the libraries, because we are interopting with them. Imagine the implausibility of refactoring the Scala std library for example or any popular library.

Disjunctions can eliminate a cartesian product of function overloading, i.e. your function can take N arguments, each of M possible types, or else you must write one overloaded method per combination (M^N of them).
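
(To make the combinatorial point concrete, a minimal Scala sketch; Either stands in for a first-class disjunction, and all the names are illustrative only:)

object DisjunctionDemo {
  // Without a disjunction type, each accepted type needs its own overload...
  def show(x: Int): String    = "Int: " + x
  def show(x: String): String = "String: " + x

  // ...and with two such parameters the overloads multiply (2 x 2 = 4 here,
  // M^N in general for N parameters of M accepted types each).
  def pair(a: Int, b: Int): String       = show(a) + ", " + show(b)
  def pair(a: Int, b: String): String    = show(a) + ", " + show(b)
  def pair(a: String, b: Int): String    = show(a) + ", " + show(b)
  def pair(a: String, b: String): String = show(a) + ", " + show(b)

  // Encoding the disjunction with Either collapses this to one method,
  // at the cost of boxing each argument at the call site.
  def pairE(a: Either[Int, String], b: Either[Int, String]): String = {
    def one(x: Either[Int, String]) = x.fold(i => "Int: " + i, s => "String: " + s)
    one(a) + ", " + one(b)
  }

  def main(args: Array[String]): Unit = {
    println(pair(1, "two"))               // picks the (Int, String) overload
    println(pairE(Left(1), Right("two"))) // one method covers all four shapes
  }
}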

So is that your intended use case?

Yes, the case where refactoring is not an option is the intended use case.

For example if I am mashing up a collection of data structures returned by different modules written by different teams all over the world.
 
 One of the challenges I'm having here is that I don't really understand your target applications.  For day-to-day coding, Copute looks like it mostly adds features I don't care about, and introduces problems I really don't want to deal with.  If your focus is on high-level libraries, it is possible that that's a lesser concern -- that's not my area of expertise.  (But it also is largely irrelevant to most folks just starting out in this environment, very few of whom want to move quickly into high-level library design.)

You entirely miss the point which is that users need to consume libraries and have them all work together. Modularity is what makes that happen.

Right now you choose one big monolithic library and you are stuck with whatever is compatible with it. You can't pick and choose granularly, because we don't write libraries nor userland code in a way that composes without refactoring.

The Expression Problem or more importantly the fragile base class problem.
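
(For concreteness, a minimal Scala sketch of the fragile base class problem; Bag and CountingBag are hypothetical names of mine, not from any library:)

class Bag {
  private var items = List.empty[Int]
  def add(x: Int): Unit = { items = x :: items }
  def addAll(xs: Seq[Int]): Unit = xs.foreach(add)  // internal detail: calls add
  def size: Int = items.size
}

// A subclass that depends on that internal detail of the base class.
class CountingBag extends Bag {
  var count = 0
  override def add(x: Int): Unit = { count += 1; super.add(x) }
  override def addAll(xs: Seq[Int]): Unit = { count += xs.size; super.addAll(xs) }
}

object FragileBaseDemo extends App {
  val b = new CountingBag
  b.addAll(Seq(1, 2, 3))
  println(b.count)  // 6, not 3: super.addAll calls the overridden add, double-counting.
                    // If Bag later stops (or starts) calling add from addAll, the
                    // subclass silently changes behavior -- the fragile base class problem.
}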

...
 
Because, as I already wrote, you can put it in a STATIC in this case where there is only one rational implementation.

Hmm.  Makes me twitch on an aesthetic level, and I think it's rather confusing, since what I'm talking about is not *conceptually* static -- after all, if you're passing in an instance, then it's an instance method almost by definition.  (The sometime C++ engineer in me thinks quite strongly in those terms -- if you're passing the instance as a parameter, it's a method; if not, it's static.)

My quick explanation of type theory in layman's terms today from first principles may elucidate:


In particular, note that `this` plays no role in subtyping with typeclasses. I quote myself, "Type classes are a form of polymorphism (i.e. extensibility) where the `this` parameter is a type parameter of the operations. The operations for a concrete type can be matched nominally (by name) as shown in the OP or even structurally (although this can be erased and sometimes cause reflection on the JVM and thus is probably inefficient)". Then read the rest of that post to compare to the role of `this` in subtyping.
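
(To illustrate the distinction in plain Scala, a minimal sketch of my own; Show is a stand-in typeclass, not from the std library:)

// In subtype polymorphism, show would be a method dispatched on `this`.
// In a typeclass, the "receiver" is just an ordinary parameter, and the type
// it belongs to is a type parameter of the operations.
trait Show[A] {
  def show(a: A): String
}

object Show {
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = "Int(" + a + ")"
  }
  implicit val stringShow: Show[String] = new Show[String] {
    def show(a: String): String = "\"" + a + "\""
  }
}

object ShowDemo extends App {
  // The operation is selected by the (implicit) instance for A, not by `this`.
  def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

  println(describe(42))    // Int(42)
  println(describe("hi"))  // "hi"
}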

So either it is an overrideable method (in which case put it in a mixin) or it is not (so you can put it in a static).

The point of putting overrideable methods in the mixin is separation-of-concerns, a fundamental design principle. Your point is to favor SPOT (single-point-of-truth), but putting it in the mixin maintains SPOT and factors the code more correctly.

First you think about interface which means specification. Then you think about implementation, c.f. ESR's famous book The Art of Unix Programming.

Aesthetically it is beautiful, because the boundary is clear: overrideable implementation goes in the mixin and/or class; specification goes in the interface.
 
I'll grant you that it may be a practical solution to the problem, but I wind up questioning the keyword STATIC -- that isn't what most people *mean* when they say "static", and it's a very unusual usage.  I find it unintuitive.

It is the same historical meaning of the static keyword: a function that doesn't vary per instance.

Now I add a new capability to static, which is you can implement it in SUB types. That is to support the benefits of typeclasses, c.f. the prior links and also this one:


And some new discussion of problems with the std library:

 
The main justification is that interface is orthogonal to implementation. This is a crucial design concept for code reuse and modularity.

I can see that you are making this as a fundamental assumption.  I don't happen to agree, and I no longer find the arguments persuasive.  I spent many years going down that road, and found that in practice, it leads to *poorer* code reuse and *worse* modularity.  It's an assertion that sounds nice, but in my experience doesn't match reality.

Please give me an example. I am confident you are confused or conflating some issues.

... no.  Sorry, Shelby, but you've gotten my goat, and I'm going to back off from the conversation before I get heated.  I'm failing to keep my temper in check, so it's probably time for me to give up on trying to help you.

I have to observe that your choice of rhetoric is poor.  You've made a big deal about other folks getting ad hominem at you, but the reason that's happening is because your conversational style is full of flamebait like this.

 I was correct.

Read from the linked post down to Paul Phillips wishing we had pure interfaces:

Rex Kerr

unread,
Sep 15, 2013, 11:21:40 AM9/15/13
to Shelby, scala-debate
On Sun, Sep 15, 2013 at 3:48 AM, Shelby <she...@coolpage.com> wrote:
On Sunday, September 15, 2013 5:33:21 AM UTC+8, Justin du Coeur wrote:

On Fri, Sep 13, 2013 at 6:37 PM, Shelby <she...@coolpage.com> wrote:

Please give me an example. I am confident you are confused or conflating some issues.

... no.  Sorry, Shelby, but you've gotten my goat, and I'm going to back off from the conversation before I get heated.  I'm failing to keep my temper in check, so it's probably time for me to give up on trying to help you.

I have to observe that your choice of rhetoric is poor.  You've made a big deal about other folks getting ad hominem at you, but the reason that's happening is because your conversational style is full of flamebait like this.

 I was correct. Now we see above you were conflating `this` with subtyping.

Sorry I speak matter-of-factly. I didn't intend it as an insult.

Speaking matter-of-factly can also be an insult.  For example, if I were to tell someone
  "You are dumber than average."
and then they take an IQ test and get 97, is it true that I didn't insult them?  It might be nice if one didn't need to pay attention to what is factual-but-insulting ("you are fat") but people do indeed take offense at having certain classes of unfavorable but true things pointed out, or at phrasing that suggests surety where they disagree and do not believe that the contrary is obvious.

And this is exactly why Justin's characterization of your conversational style was on-target.  Compare these levels of tactfulness:
  (1) Could you give an example?  This isn't obvious to me.
  (2) Please give me an example.  I'm pretty sure that this isn't actually the case.
  (3) Please give me an example.  I'm confident you're confused.
  (4) Give me an example.  I'm sure you're confused; you've barely thought about this at all.

You often come in at level (3) here: nominal politeness in phrasing mixed with phrasing that is a nominal insult regardless of truth-value.

If you actually want to interact with people, you have to allow that they might be right and you might be wrong, and do so in your phrasing.  Also, you have to allow that you and they may be unable to come to an agreement over who is right, and that in such case it is not necessarily true that they are the ones who are confused.  (You may think they are--indeed you probably think so in every particular case--but you also have to recognize that you'll have some error rate even if you can't perfectly detect which times you are the one in error.  It is courteous, therefore, to use less judgemental language.)

Anyway, that's enough of a tangent.

  --Rex

Shelby

unread,
Sep 15, 2013, 8:22:46 PM9/15/13
to scala-...@googlegroups.com, Shelby
Hi Rex,

I've read many of your posts in the Scala discussion groups, the bug tracker, and also at stackoverflow and elsewhere. So I know you are one of those extremely knowledgeable (more than me in many areas apparently) people that I had in mind when I wrote at the bitcointalk.org forum that the IQ level in the Scala community is extremely high, with Haskell's community perhaps higher. I wrote that before coming back to participate in these discussion groups recently following a more than several month hiatus. So the idea that I don't give respect to people here doesn't match the demonstrated reality of my behavior.

You are clearly biased by your emotions in this case. Let's review the evidence.

This is going to be illustrative (and I hope will cause people to apologize to me for misjudging me and to more carefully triangulate their subjectiveness in the future)...

P.S. it is time-consuming for me to write this post. I hope all will feel a little bit embarrassed at wasting my time, forcing me to enumerate what is obvious from the objective evidence. But anyway, I do it below in order to make one attempt to be judged fairly, i.e. I will tip my hat at this windmill one time to show my respect for you all... see below...
Okay now let's review the history of what I wrote and what others wrote.

First Viktor Klang was probing me to see what logical argument I could make against certain operators, because he was convinced I could make none. Then I made the unarguable point that the operators we learned in primary school are more intuitive for all in society, and the operators that we learned in C are more intuitive for more programmers. Nothing I wrote up to that point was discourteous in any way. I challenge you to go find a counter-example.

Then Miles Sabin startled me with the most discourteous slander:


On Monday, September 9, 2013 4:57:50 PM UTC+8, Miles Sabin wrote:
On Sat, Sep 7, 2013 at 11:24 PM, √iktor Ҡlang <viktor...@gmail.com> wrote: 
> Would you so kindly sum it up for me? 

"Your ideas are intriguing to me and I wish to subscribe to your newsletter" ;-) 


On Monday, September 9, 2013 5:34:48 PM UTC+8, Miles Sabin wrote:
On Mon, Sep 9, 2013 at 10:13 AM, Shelby <she...@coolpage.com> wrote: 
> On Monday, September 9, 2013 4:57:50 PM UTC+8, Miles Sabin wrote: 

>> On Sat, Sep 7, 2013 at 11:24 PM, √iktor Ҡlang <viktor...@gmail.com> wrote: 
>> > Would you so kindly sum it up for me? 
>> 
>> "Your ideas are intriguing to me and I wish to subscribe to your 
>> newsletter" ;-) 


> Please don't tell me I am going to get that same sh$t here. Cripes, it is 
> scala-DEBATE. 

Exactly ... I think you're looking for scala-troll.

I informed him he broke two of the ground rules covered at scala-lang.org w.r.t. the discussion groups.


On Tuesday, September 10, 2013 9:12:55 AM UTC+8, Shelby wrote:
On Monday, September 9, 2013 5:34:48 PM UTC+8, Miles Sabin wrote:
On Mon, Sep 9, 2013 at 10:13 AM, Shelby <she...@coolpage.com> wrote: 
> On Monday, September 9, 2013 4:57:50 PM UTC+8, Miles Sabin wrote: 

>> On Sat, Sep 7, 2013 at 11:24 PM, √iktor Ҡlang <viktor...@gmail.com> wrote: 
>> > Would you so kindly sum it up for me? 
>> 
>> "Your ideas are intriguing to me and I wish to subscribe to your 
>> newsletter" ;-) 


> Please don't tell me I am going to get that same sh$t here. Cripes, it is 
> scala-DEBATE. 

Exactly ... I think you're looking for scala-troll.

Why does it harm you that I want to elaborate on my thoughts about symbols, responding to two people who were discussing that with me?

Why do you wish to discourage open discussion about Scala?

I was not rude to anyone, yet you were rude to me.

Thus I conclude you have an anti-social personality.

According to the rules that govern this forum, you should now receive a warning, because you violated two of the rules:

1. Being discourteous
2. Using the "troll" word.
 
===============================
The discussion between Justin du Coeur and myself has been fairly polite and productive, so I really didn't internalize all of his discourteous behavior (I don't like wasting my energy on nonsense emotions). When I went back to read the entire discussion, I was a little bit shocked at how many times he has done exactly what you are telling me not to do to him, yet I wasn't doing the same to him!


On Tuesday, September 10, 2013 9:44:37 AM UTC+8, Justin du Coeur wrote:
On Sun, Sep 8, 2013 at 1:34 PM, Shelby <she...@coolpage.com> wrote: 
In fact, my brain wants to filter anything that is not direct-to-the-point and intuitively, readily grasped.

I can entirely understand that, but I don't think it's an excuse.  This stuff *is* deep, and excelling at it requires a time investment to understand it.  And yes, that does mean reading the documentation -- not every detail of the spec, but at least the good overview books like the ones I mention above.

I really do get what you're saying.  But I think you're just plain taking the argument to the point of reductio ad absurdum -- it's coming across as "I don't personally get that symbol immediately, so it is Bad", which is subjective to the point of useless.  And I find it poorly aimed ...

Above he says I'm being absurd. And he says I am subjective to the point of uselessness and that I have poor aim.

Then there was the flamefest post from Justin, in which he got all his interpretations wrong (see my reply to him for the evidence):


On Saturday, September 14, 2013 12:45:32 AM UTC+8, Justin du Coeur wrote:
On Thu, Sep 12, 2013 at 11:14 PM, Shelby <she...@coolpage.com> wrote:
Here follows an example of myself recently trying to convince some expert C++ programmers to learn and use Scala for an upstart project that I was interested to contribute programming to:

(discussion continues to the next page of that thread)

What you will find is that if most programmers don't know Scala, it is impossible to convince anyone to use Scala for a project.

That isn't exactly a convincing argument.  Indeed, it simply displays the same flaw that Suminda has been trying to point out, which you seem to be ignoring.  In both cases, you (metaphorically) wandered into somebody else's house; declared that you are smarter than everyone present; started lecturing them pedantically; and wound up effectively calling other people stupid for disagreeing with you.  (And then getting defensive and "you're all picking on me" when you met resistance.)

And my reply to correct him without flaming him back.


On Saturday, September 14, 2013 7:48:22 AM UTC+8, Shelby wrote:
On Saturday, September 14, 2013 12:45:32 AM UTC+8, Justin du Coeur wrote:
On Thu, Sep 12, 2013 at 11:14 PM, Shelby <she...@coolpage.com> wrote:
Here follows an example of myself recently trying to convince some expert C++ programmers to learn and use Scala for an upstart project that I was interested to contribute programming to:

(discussion continues to the next page of that thread)

What you will find is that if most programmers don't know Scala, it is impossible to convince anyone to use Scala for a project.

That isn't exactly a convincing argument.  Indeed, it simply displays the same flaw that Suminda has been trying to point out, which you seem to be ignoring.  In both cases, you (metaphorically) wandered into somebody else's house; declared that you are smarter than everyone present; started lecturing them pedantically; and wound up effectively calling other people stupid for disagreeing with you.  (And then getting defensive and "you're all picking on me" when you met resistance.)

Triangulation helps formulate rationality w.r.t. our interpretations, because we are often blinded by biases and emotion.

Actually they were invading my house, as evidenced by my winning the poll with 85% support for my technical arguments:


Both of the principals asked for my feedback:



And I was asked by others in that forum on that day to go present my analysis in that thread.

And I did not belabor the point about languages and conceded it to them:


And even convinced the person who was arguing with me to take interest in Scala:

 
See where sulking causes rationality to go ;) 

 ==================================
Now let's look at the discussion between Justin and myself pertaining to the one case where I wrote something that could have been misinterpreted as an insult, and again we will see that Justin was insulting me numerous times if we use your "thin-skin" criteria.

First he conflated a static method (i.e. putting it in the companion object) with an override-able method. Scala does make static methods easier to overlook, because it doesn't scope them with the override-able method calls, i.e. you can't call a companion-object f(l: List[A]) by doing List().f(List()); you must call List.f(List()). But later in the discussion I explained to him that Copute could possibly provide that syntactical sugar.
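
(To spell out what I mean, a tiny Scala sketch; Box is a hypothetical class of mine:)

class Box[A](val value: A)

object Box {
  // A "static": it lives in the companion object and takes the instance as a parameter.
  def describe[A](b: Box[A]): String = "Box(" + b.value + ")"
}

object CompanionDemo extends App {
  val b = new Box(1)
  println(Box.describe(b))  // must be called through the companion object
  // b.describe             // does not compile: instances don't see companion methods
}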


On Friday, September 13, 2013 4:46:34 AM UTC+8, Justin du Coeur wrote:
On Wed, Sep 11, 2013 at 8:47 PM, Shelby Moore <she...@coolpage.com> wrote: 
3. Traits are called INTERFACE (and there is a separate MIXIN syntax),
because they do not contain non-static implementation. This restraint and
optimization is allowed because Copute is supporting pure functional
(immutable, referentially transparent) programming only.
 
I've been staring at this for hours now, and I'm not getting it.  Why is this a good thing?  I mean, the power of traits is one of the things I love about Scala, specifically because years of working with Java led me to *despise* pure interfaces.  They sound good, sure, but in practice I've had too many times that I've built an interface that seemed like it was pure, only to gradually find that there were several methods I wanted that really, deeply, belonged on the interface.  It always led to annoying duplicate code in the implementations.

So I replied and told him he was conflating things he liked with a non-point.


On Friday, September 13, 2013 5:55:29 AM UTC+8, Shelby wrote:
On Friday, September 13, 2013 4:46:34 AM UTC+8, Justin du Coeur wrote:
I've been staring at this for hours now, and I'm not getting it.  Why is this a good thing?  I mean, the power of traits is one of the things I love about Scala, specifically because years of working with Java led me to *despise* pure interfaces.  They sound good, sure, but in practice I've had too many times that I've built an interface that seemed like it was pure, only to gradually find that there were several methods I wanted that really, deeply, belonged on the interface.  It always led to annoying duplicate code in the implementations.

So I guess the question is -- why would I ever want an INTERFACE instead of a MIXIN?  I *hate* interfaces with a burning passion, mostly due to Java experience.

I think what you hate about Java is single-inheritance (lack of multiple inheritance and mixins). Thus, I think you are conflating this with the benefits of an interface that contains no implementation.

Then he replied showing that he still didn't remember the difference between a static method and an override-able method, and he added a marginal insult (by your criteria), "I don't think you're listening to what I'm saying".


On Friday, September 13, 2013 10:17:49 PM UTC+8, Justin du Coeur wrote:
On Thu, Sep 12, 2013 at 5:55 PM, Shelby <she...@coolpage.com> wrote: 
I think what you hate about Java is single-inheritance (lack of multiple inheritance and mixins). Thus, I think you are conflating this with the benefits on an interface that contains no implementation.

It's possible that you mean something very different by "interface" and "mixin" than I do, but I don't think you're listening to what I'm saying.

Take an Interface with properties A, B and C.  Given those, it is *extremely* common for me to want to define functions D and E, which are entirely functions *over* A, B and C, and are essentially universal to any implementation of the interface.  The best place to put those functions, in my experience, is typically in the same trait.

Yes, it is *possible* to put the function definitions into a mixin -- but from a factoring POV that's generally inappropriate 
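
(For concreteness, a minimal Scala sketch of the pattern Justin describes: abstract members plus helpers derived from them in one trait. The names are illustrative only.)

trait Rectangle {
  def width: Double   // abstract: varies per implementation
  def height: Double

  def area: Double      = width * height        // derived, essentially universal
  def perimeter: Double = 2 * (width + height)  // derived, essentially universal
}

case class SimpleRect(width: Double, height: Double) extends Rectangle

object TraitDemo extends App {
  println(SimpleRect(2, 3).area)       // 6.0  -- no duplicated implementation needed
  println(SimpleRect(2, 3).perimeter)  // 10.0
}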

 On the issue of disjunctions (which I have subsequently explained are a major issue for modular reuse), Justin hurled more marginal insults if we follow your ("thin-skin") criteria.


On Saturday, September 14, 2013 12:31:16 AM UTC+8, Justin du Coeur wrote:

On Thu, Sep 12, 2013 at 10:08 PM, Shelby <she...@coolpage.com> wrote:
If everything is going to be a object with a meaningful type, then how does it make any sense that Scala chose not to implement first-class disjunctions? That is really perplexing me how Scala obtained such a large hole

You've made a big deal about this, but I think you've lost perspective.

...

it's an extremely minor detail in terms of what I look for in a language.

...
 
Regarding meta programming, I haven't had time to research the proposals for Scala macros. What ever they do, I hope there is an option to see the generated Scala code. Shouldn't macros just be DSLs?

You have that backwards.

Then he continues to not understand the difference between a static method and an override-able method, even though I have told him explicitly (see the quote of myself in his post), and hurls more real (not marginal) insults: "Stuff and nonsense".


On Saturday, September 14, 2013 4:53:58 AM UTC+8, Justin du Coeur wrote:
On Fri, Sep 13, 2013 at 4:01 PM, Shelby <she...@coolpage.com> wrote: 
In other words, you're seeing a massive critical problem that I regard as, at most, a mild nuisance in practical application code.  It's possible that you have encountered situations where it *is* a huge problem, but I've built enough complex, huge systems that I'm a tad skeptical that this is a deathly-important weakness.

There is no extra boilerplate in Copute, just have your MIXIN extend the INTERFACE, then only extend the MIXIN where you need the default functionality.

That's still boilerplate, and is still poor factoring.  As I said above, if there is only one rational implementation (which is *very* common in my experience), then having to mix it in is unnecessary nuisance, and a recipe for accidental code duplication by people who don't happen to notice that there is One True Implementation existing somewhere else.
 
If you don't want the default method to be overridden, then you can place it in the INTERFACE as a STATIC that inputs the INTERFACE as its first parameter. You then call these as Name.func(x, ...) where x is an instance of Name. I suppose I could support syntactical sugar for this case so you can call them as x.func(...).

I would recommend that -- it at least provides a workaround to my major concern about code duplication.  That said:
 
The main justification is that interface is orthogonal to implementation. This is a crucial design concept for code reuse and modularity.

I can see that you are making this as a fundamental assumption.  I don't happen to agree, and I no longer find the arguments persuasive.  I spent many years going down that road, and found that in practice, it leads to *poorer* code reuse and *worse* modularity.  It's an assertion that sounds nice, but in my experience doesn't match reality.

Instead, my experience has been that Scala's traits get reuse and modularity exactly correct: implementation is fine on the trait, but should depend on the trait's abstracts.  You leave abstract the things that are intended to vary, and provide implementation for the ones that aren't.  Whether the trait happens to be a pure interface or not is a matter of happenstance, and not terribly relevant.
 
The consumer of an interface should not be partial to any particular implementation of subtypes, yet rather only to its documented semantics. By putting a default overrideable implementation in the interface (trait), the developer is able to avoid writing a complete specification of the interface, and thus consumers of the interface will rely on what they interpret the semantics to be by studying the default implementation. This will destroy code reuse.

Stuff and nonsense.  I reuse such traits all the time.  Most of Scala's standard library is made up of such traits.  Are you claiming that nobody ever reuses them?  And I rarely look at the implementation unless there is a compelling reason to do so -- I use the documentation, exactly the same way I would with an abstract interface.
 
One of my big goals with Copute, is I want to change the entire economy of open-source.

Ambitious, but okay -- I can appreciate ambitious.  But you're going to fail if you demand that everybody think like you, and sacrifice functionality that already exists in other languages that they find useful.

No high cost, i.e. no boilerplate when extending the base MIXIN. Major gains in code reuse discipline.

If every implementation class has to mix in the mixin, that's still boilerplate.  Concise boilerplate is still boilerplate.

So then finally I replied and told him that he was confused and to provide an example, because after trying to explain to him several times, he still wasn't getting it, so I figured maybe an example might cause him to realize.


On Saturday, September 14, 2013 6:37:52 AM UTC+8, Shelby wrote:

On Saturday, September 14, 2013 4:53:58 AM UTC+8, Justin du Coeur wrote: 
In other words, you're seeing a massive critical problem that I regard as, at most, a mild nuisance in practical application code.  It's possible that you have encountered situations where it *is* a huge problem, but I've built enough complex, huge systems that I'm a tad skeptical that this is a deathly-important weakness.

I will concede your point here. I may be overestimating their importance. Yet I am not sure, and I don't want to risk it. So if Sergey's suggestion about Shapeless works, then I can have the disjunctions without subsuming them to Any and without the performance overhead (and boilerplate) of boxing them into Either or OneOfX.
 
There is no extra boilerplate in Copute, just have your MIXIN extend the INTERFACE, then only extend the MIXIN where you need the default functionality.

That's still boilerplate, and is still poor factoring.

No, because...
 
 As I said above, if there is only one rational implementation (which is *very* common in my experience), then having to mix it in is unnecessary nuisance, and a recipe for accidental code duplication by people who don't happen to notice that there is One True Implementation existing somewhere else.

Because I already wrote you can put it in a STATIC in this case where there is only one rational implementation.
 
If you don't want the default method to be overridden, then you can place it in the INTERFACE as a STATIC that inputs the INTERFACE as its first parameter. You then call these as Name.func(x, ...) where x is an instance of Name. I suppose I could support syntactical sugar for this case so you can call them as x.func(...).

I would recommend that -- it at least provides a workaround to my major concern about code duplication.  That said:

So why did you continue to argue above? Let's try to keep the noise level of our posts down and focus on points that are still in contention.
 
The main justification is that interface is orthogonal to implementation. This is a crucial design concept for code reuse and modularity.

I can see that you are making this as a fundamental assumption.  I don't happen to agree, and I no longer find the arguments persuasive.  I spent many years going down that road, and found that in practice, it leads to *poorer* code reuse and *worse* modularity.  It's an assertion that sounds nice, but in my experience doesn't match reality.
Please give me an example. I am confident you are confused or conflating some issues.


Thus the evidence clearly shows that my discussion falls into classification #2:

  (2) Please give me an example.  I'm pretty sure that this isn't actually the case.

And Justin repeatedly devolves his discussion to classification #3 or #4:
  
  (3) Please give me an example.  I'm confident you're confused.
  (4) Give me an example.  I'm sure you're confused; you've barely thought about this at all.
 
Having said that, I really appreciated the discussion with Justin. He helped me. And I was not offended by his remarks. So I expect him to not be offended by mine.

==========================
On that other nasty, wasteful "discussion" that Simon dragged me into, here are the links:


I rest my case your honor.

Simon Schäfer

unread,
Sep 16, 2013, 2:10:13 AM9/16/13
to scala-debate

Shelby,
>
> P.S. it is time-consuming for me to write this post. I hope all will
> feel a little bit embarrassed at wasting my time, forcing me to
> enumerate what is obvious from the objective evidence.
that is exactly your problem. No one forces you to do anything. Being
respectful doesn't mean telling others how great they are (I can assure
you these guys already know that they have your respect); it means being
friendly to the ones you think are wrong, or even worse, the ones you know
are dumb as hell.

It can also be a form of respect to tell someone you are wrong even if you
know that it's not the case. Another form of respect is to not respond
at all (for example because you know you can't end a discussion
otherwise, which would just waste people's time).

>
> I rest my case your honor.
The question is not if we can forgive you, the question is if you can
forgive others.


I beg everyone on this list not to respond further to Shelby's responses nor
to the responses of others. It should be clear that we do not understand
each other; thus there will be no agreement and no end to this
senseless and harmful discussion. If anyone wants to respond, do it to
the technical stuff that was written.

Simon

Shelby

unread,
Sep 16, 2013, 11:30:14 PM9/16/13
to scala-...@googlegroups.com
On Monday, September 16, 2013 2:10:13 PM UTC+8, Simon Schäfer wrote:

Shelby,
>
> P.S. it is time-consuming for me to write this post. I hope all will
> feel a little bit embarrassed at wasting my time, forcing me to
> enumerate what is obvious from the objective evidence.
that is exactly your problem. No one forces you to do anything.

You tried to force me to stop presenting my PoV.
 
Being
respectful doesn't mean to tell others how great they are (I can assure
you these guys already know that they have your respect), it means being
friendly to the ones you think are wrong or even worse the ones you know
are dumb as hell.

I have been friendly. You were not friendly. If you want to be friendly, I am happy to reciprocate.

How can I be friendly with someone who is telling me I shouldn't speak? The only way to oblige you was for me to go silent and not participate.
 
It can also be form of respect to tell someone you are wrong even if you
know that it's not the case.

I will never do that. Sorry.
 
Another form of respect is to not respond

Sorry I don't adhere to political rigor mortis.

I want to eliminate politics entirely with technology. Realistic or not, I see that as the only way to a highly prosperous future.

at all (for example because you know you can't end a discussion
otherwise, which would just waste peoples time).

Not responding because it is redundant, is something I do. If you repeat the same arguments, I will not respond.
 
>
> I rest my case your honor.
The question is not if we can forgive you, the question is if you can
forgive others.

Sure I love mutual forgiveness. I almost sent another msg asking "can we kiss and make up", but then I decided it would just add noise and annoy people more.

As for the guys who gouged my right eye out with a blunt object in 1999, thus destroying all the momentum I had with CoolPage: I lay in bed praying they would find peace.

You know you think you are important, but try losing an eye, and get some perspective on priorities.

Shelby

unread,
Sep 16, 2013, 11:41:34 PM9/16/13
to scala-...@googlegroups.com
On Tuesday, September 17, 2013 11:30:14 AM UTC+8, Shelby wrote:
On Monday, September 16, 2013 2:10:13 PM UTC+8, Simon Schäfer wrote:
It can also be form of respect to tell someone you are wrong even if you 
know that it's not the case.

I will never do that. Sorry.
 
Another form of respect is to not respond

Sorry I don't adhere to political rigor mortis.

I want to eliminate politics entirely with technology. Realistic or not, I see that as the only way to a highly prosperous future.

at all (for example because you know you can't end a discussion
otherwise, which would just waste peoples time).

Not responding because it is redundant, is something I do. If you repeat the same arguments, I will not respond.

Actually when it is not important and I see that by responding I will create animosity, I do not respond. But if I am in the middle of discussing something that I feel is important, then I can't do that. I want to make that distinction.

There are many times in social settings I hear something that I could better explain or contend, but I usually keep my lips sealed, or if I say anything I will try to make my point by overly agreeing with them, e.g. someone says something racial against negros, then if I wanted to make a point about where that will go without creating animosity, I might say, "yeah would be great if we let the president decide which groups are relocated to camps or controlled environs, so it can be eliminated without political interference of the targeted group". That would cause them to go silent given the current president is obama.

Shelby

unread,
Sep 17, 2013, 12:26:47 AM9/17/13
to scala-...@googlegroups.com, Shelby
I hope this is the last post on the non-technicals. I need to publicly correct my egregious conflation.

I will also summarize technicals below...


On Monday, September 16, 2013 8:22:46 AM UTC+8, Shelby wrote:
Hi Rex,

I've read many of your posts in the Scala discussion groups, the bug tracker, and also at stackoverflow and elsewhere. So I know you are one of those extremely knowledgeable (more than me in many areas apparently) people that I had in mind when I wrote at the bitcointalk.org forum that the IQ level in the Scala community is extremely high, with Haskell's community perhaps higher. I wrote that before coming back to participate in these discussion groups recently following a more than several month hiatus. So the idea that I don't give respect to people here doesn't match the demonstrated reality of my behavior.

You are clearly biased by your emotions in this case. Let's review the evidence.

This is going to be illustrative (and I hope will cause people to apologize to me for misjudging me and to more carefully triangulate their subjectiveness in the future)...

s/emotions/politics/

I should have taken more time to think carefully about what I was pointing out, because that word is loaded with ad hominem implications.

Clearly Rex wants communications be productive.

I interpreted that his subjectivity was leaning against me non-objectively (which the word "emotion" doesn't accurately capture), but another way of interpreting his objectivity is that "the group is always correct" when determining who is or isn't causing friction.

Frankly it is difficult to raise issues such as I did without causing friction:

1. Symbols as infix operators should only be those we learned in primary school or in C, unless the DSL is targeted to a group who knows those symbols. The Scala style guide already states this.

2. Claiming there may be a more refined syntax for a pure functional language which is similar to Scala's syntax.

3. Implying that maybe we have to go through the pain of overhauling the standard library yet again (ouch!).

4. Wondering why Scala didn't yet prioritize disjunctions as crucial given they are required for reuse of types in heterogeneous collections, when refactoring is not an option.

The best approach would be to just release code that corrected the above and say nothing unless it was already gaining adoption. That would eliminate much friction. Unfortunately I am not omniscient and I rely on the group to help me find optimum directions, so I discuss early and incrementally through the process of learning.

Viewing some of my old posts on the scala lists, it is clear that I did not yet have a well-formed understanding of many issues or how they related (disjointed conceptualization). It is possible at times I spoke too soon and needed to read more, yet I was impatient (which is both a positive and negative attribute).

There is no perfect way to handle politics, especially when one is pushing oneself to go faster. Politics is inherently slow, e.g. the communication and understanding overhead pointed out in The Mythical Man-Month.

Anyway, back to coding. Thanks to all those who have tolerated me, and I am trying to find a balance between my haste and the minimum necessary politics I need to reach a goal.

I don't like being a narcissistic prima donna and talking so much about myself. Ugh, embarrassing. Let's stay on the technicals please. I don't matter at all. It is the open-source code that matters.

Haoyi Li

unread,
Sep 17, 2013, 12:58:35 AM9/17/13
to Shelby, scala-...@googlegroups.com
It is the open-source code that matters.

I don't think this could be farther from the truth, in any situation. If you don't think empathizing and convincing people matters, you really shouldn't be surprised if other people show no empathy and remain unconvinced...

For all the times you've talked about code mattering, all I've seen is lengthy lectures, with no code at all! If you came in with a prototype that demonstrated the correctness and value of your ideas, I'm pretty sure it would generate some interest. 



--

Suminda Dharmasena

unread,
Sep 17, 2013, 4:40:09 AM9/17/13
to scala-...@googlegroups.com, Shelby
I agree with @Haoyi

Start with a Github projects with perhaps a Github.io project page.

Shelby

unread,
Sep 17, 2013, 7:57:40 PM9/17/13
to scala-...@googlegroups.com, Shelby
I was avoiding making this point in my argument against unfamiliar symbol operators, because I am embarrassed by the following webpage I wrote in 2011; there are some mistakes and naive conceptualizations. As you can see at the following link, I was originally designing to use overloaded operators and had worked out a strategy for supporting right-associative operators which could be a method of the left or right operand (something Scala can't do), so I could make the Applicative.apply method (<-) read in the correct order to agree with the order of the function parameters it was lifting to the Applicative.


P.S. Here are some other informational links at Tony's blog on the category theory stuff, and note he is using the normal Scala formulation for a typeclass (see my point below for relevance), not my formulation upthread:


Then I got sick and didn't look at my library code that was employing <- (and <-- for Functor map) for several months or more. Then when I came back to the code, I couldn't quickly understand what it was doing, because of the strange right associativity and method of the left or right operand flexibility. About this time, I was having the discussion with Python and C# programmers at ESR's blog asking them why they couldn't understand and appreciate folds and Applicative apply. It became abundantly clear to me that strange operators were just making the hurdle even more difficult. I decided if Applicative.apply was going to be used liberally throughout the code by most programmers, then I would just make some syntax sugar to hide the method calls, and thus now there is a special syntax {( ... )} which lifts a function call to Applicative parameters. That is even more concise than the <- and <-- operators version of the code, and when you look at the Scala code generated by that sugar, you will see the Applicative.apply and Functor.map method calls so it is easier to follow what is going on.
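
(To show roughly what that sugar would expand to, a hand-rolled sketch in plain Scala of my own; the trait and method names are not Copute's or Scalaz's:)

trait Applicative[F[_]] {
  def pure[A](a: A): F[A]
  def map[A, B](fa: F[A])(f: A => B): F[B]
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B]
}

object Applicative {
  implicit val optionApplicative: Applicative[Option] = new Applicative[Option] {
    def pure[A](a: A): Option[A] = Some(a)
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
    def ap[A, B](ff: Option[A => B])(fa: Option[A]): Option[B] =
      for (f <- ff; a <- fa) yield f(a)
  }

  // Lifting a two-argument function call to Applicative parameters: roughly
  // what a {( f(x, y) )} style sugar would generate.
  def lift2[F[_], A, B, C](f: (A, B) => C)(fa: F[A], fb: F[B])(implicit F: Applicative[F]): F[C] =
    F.ap(F.map(fa)(a => (b: B) => f(a, b)))(fb)
}

object LiftDemo extends App {
  import Applicative.lift2
  val add = (a: Int, b: Int) => a + b
  println(lift2(add)(Option(1), Option(2)))          // Some(3)
  println(lift2(add)(Option(1), (None: Option[Int]))) // None
}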

At that point, I decided to disallow non-familiar, non-left-associative symbol operators. I don't even like List.++; why not just overload List.+?

The argument in this thread (and the one it derived from) has been that strange symbols can make Scala code more concise, that they have to be learned and learning is part of life, that they often have a named equivalent and thus are optional, and that Scala's community doesn't want to be dumbed down to a least common denominator. Basically an argument that I must be (to the point of absurdity) against progress and learning and a higher-IQ community.

I have rebutted that by saying that those who want to replace the named methods with emoticons could have this configured into the IDE, so that they don't burden the rest of the mainstream with such a high learning-curve hurdle when reading Scala code. That is a win-win, because Scala as a top-tier language would have very significant benefits for all, e.g. more economy-of-scale for job offerings, more willingness of projects to choose Scala, and influence and funding for a better VM, etc.

For DSLs where the target audience is not the general Scala community, e.g. math matrix library, use of symbols appropriate to the target audience would not be a problem.


On Tuesday, September 17, 2013 12:58:35 PM UTC+8, Haoyi Li wrote:
It is the open-source code that matters.

I don't think this could be farther from the truth, in any situation.

You appear to disagree with yourself below. Logical consistency is important to me.
 
If you don't think empathizing and convincing people matters, you really shouldn't be surprised if other people show no empathy and remain unconvinced...

I didn't complain about people not empathizing with me. I did respond to those who were ad hominem hostile against my speech.

I also stated that my project was not yet at the stage where I expected anyone here to take it seriously.

I started this thread not about my project (c.f. the OP), but about the general concept of addressing some issues which may be stopping Scala from reaching first-tier language popularity, and about what may and may not be possible to address that.

For all the times you've talked about code mattering, all I've seen is lengthy lectures, with no code at all!

Try reading the thread again. There is code where I showed for the first time in the Scala world how to implement a typeclass that is integrated with subtyping. It has a deep implication:


(note probably can't make that strategy for fixing views in above link work without the revelation of code I provided upthread here)

And yeah more code at those links from me.
 
If you came in with a prototype that demonstrated the correctness and value of your ideas, I'm pretty sure it would generate some interest. 

Yes that is exactly what I was saying. So I dunno why you disagree with yourself above. Perhaps one of us is misinterpreting the other's point.

Shelby

unread,
Sep 17, 2013, 8:41:16 PM9/17/13
to scala-...@googlegroups.com, she...@coolpage.com
Cross-referencing to a thread about frustrations with inconsistencies when learning Scala the first time and lack of an appropriately focused, official quick-start tutorial:


(and he hasn't even begun to hit the corner cases in the standard library and other issues he doesn't know about yet ;)

Shelby

unread,
Sep 18, 2013, 12:58:20 AM9/18/13
to scala-...@googlegroups.com, she...@coolpage.com
A disadvantage of my proposal (with a fix proposed below) is that it only allows the typeclass (e.g. Monad) for each type (e.g. Option) to be implemented one way. Whereas Daniel's (i.e. the usual Scala typeclass) formulation allows one to pull into scope alternative implementations of Monad for any targeted type (e.g. Option). The following tutorial makes this clearer by explaining the way the well-known type Ordering works, as a non-higher-kinded example:


I highly recommend the above-linked tutorial to better understand what I was writing about. And they make the point that what I was doing is not just higher-kinded types, but full-blown F-bounded polymorphism (finally someone explained to me what that means in a way I can understand).
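
(A quick sketch of that Ordering flexibility, using only the standard library; the snippet is mine, not from the tutorial:)

object OrderingDemo extends App {
  val xs = List(3, 1, 2)

  // sorted takes an implicit Ordering[Int]; the typeclass pattern lets you
  // swap in an alternative instance explicitly (or by bringing one into scope).
  println(xs.sorted)                         // List(1, 2, 3) -- default Ordering[Int]
  println(xs.sorted(Ordering[Int].reverse))  // List(3, 2, 1) -- alternative instance

  // With the F-bounded/subtyping formulation, the single implementation chosen
  // by the type itself is the only one available; there is no second instance to pass.
}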


An advantage of my proposal is the programmer doesn't need to create an implicit scope to require the operations of a typeclass, thus for example a collection of Functor knows it has the essential map method. For example, in the proposed pattern (assuming Seq extends Functor) the programmer could write:

def f[A](xs: Seq[A]) = xs.map ...

Whereas in the more flexible typeclass pattern, the programmer must instead write:

def f[A](xs: Seq[A])(implicit tc: Monad[Seq]) = tc.map(xs ...

Actually that isn't sufficient in the general case where there might be two Seq parameters and thus two input implicits needed, so generally one would write:

def f[M[A] <: Seq[A], A](xs: M[A])(implicit tc: Monad[M]) = tc.map(xs ...

Thus one could call the more flexible version in two ways:

f(List(1,2,3))
f(List(1,2,3))(mapEveryOtherElem)

Compare the more complex example I wrote upthread, where I write the more flexible pattern first:

def sequence[M[_], A](xs: List[M[A]])(implicit tc: Monad[M]): M[List[A]] = { 
   xs.foldRight(tc.unit(List[A]()))( (m, acc) =>
      tc.bind(m)( a => tc.bind(acc)(tail => tc.unit(a :: tail)) ) 
   ) 
}

My proposed pattern:

def sequence[M[A]: Monad[A], A](xs: List[M[A]])(implicit tc: staticMonad[M]): M[List[A]] = { 
   xs.foldRight(tc.unit(List[A]()))( (m, acc) =>
      m.bind( a => acc.bind(tail => tc.unit(a :: tail)) ) 
   ) 
}


Yet Copute would simplify writing the proposed pattern:

def sequence[M[A]: Monad[A], A](xs): List[M[A]] M[List[A]] { 
    xs.foldRight(M.unit(List[A]()))( (m, acc) =>
        m.bind( a => acc.bind(tail => M.unit(a :: tail)) ) 
    ) 
}

At the definition site for typeclass, I could modify my proposal for Copute to write the more flexible form of typeclass.

However, at the use site, I could not do the transformation without access to Scala's AST for the computed types (i.e. I would need to figure out how to write Copute as a Scala plugin). Yet I could do the following in Copute syntax instead (note that : is an upper bound in Copute, i.e. <: in Scala):

def f[M[A]: Seq[A], A](xs: Seq[A]) = M.map(xs) ...

def sequence[M[A]: Monad[A], A](xs): List[M[A]] M[List[A]] { 
    xs.foldRight(M.unit(List[A]()))( (m, acc) =>
        M.bind(m)( a => M.bind(acc)(tail => M.unit(a :: tail)) ) 
    ) 
}

To determine which methods in the Copute interface to transfer into the typeclass, I propose it would be all those which access Sub (the F-bounded type parameter), because by definition these are operating on the concrete class implementation and thus should be patterned so they can be re-implemented in more than one way with the typeclass pattern (which is much more powerful than the implicit delegate pimp pattern).

I think I will probably shift to this new proposal.

On Thursday, September 12, 2013 8:47:51 AM UTC+8, Shelby wrote:
5. More concise way to write typeclasses, that integrates well with 
subtyping. Before I describe the syntax and translation to Scala, this
feature as well as #4 above are primarily motivated by the desire to
support category theory (a la Scalaz) in a more intuitive, less verbose,
and less tsuris syntax.

Basically I understand a typeclass to be a structural type, such that code
can reference a member (e.g. method) of that structural type on any
instance of a type which implements that structure. Some incorrectly refer
to this as ad-hoc polymorphism but I understand the latter term to be a
more general concept of combining structural typing with function
overloading.

Daniel Spiewak wrote the first example Scala typeclass I encountered:

http://www.codecommit.com/blog/ruby/monads-are-not-metaphors

trait Monad[+M[_]] {
  def unit[A](a: A): M[A]
  def bind[A, B](m: M[A])(f: A => M[B]): M[B]
}

implicit object ThingMonad extends Monad[Thing] {
  def unit[A](a: A) = Thing(a)
  def bind[A, B](thing: Thing[A])(f: A => Thing[B]) = thing bind f
}

implicit object OptionMonad extends Monad[Option] {
  def unit[A](a: A) = Some(a)
  def bind[A, B](opt: Option[A])(f: A => Option[B]) = opt bind f
}

sealed trait Option[+A] {
  def bind[B](f: A => Option[B]): Option[B]
}

case class Some[+A](value: A) extends Option[A] {
  def bind[B](f: A => Option[B]) = f(value)
}

case object None extends Option[Nothing] {
  def bind[B](f: Nothing => Option[B]) = None
}

def sequence[M[_], A](ms: List[M[A]])(implicit tc: Monad[M]) = {
  ms.foldRight(tc.unit(List[A]())) { (m, acc) =>
    tc.bind(m) { a => tc.bind(acc) { tail => tc.unit(a :: tail) } }
  }
}

For comparison, I will show some proposed equivalent functionality Copute
code, which you shall note is drastically more concise as well as unified
with subtyping:

INTERFACE Monad[A] {
   STATIC unit: A Sub[A]
   bind[B]: (A Sub[B]) Sub[B]
}

INTERFACE Option[A]: Monad[A] {
   Monad.unit(a) = Some(a)
}

CLASS Some[A](value: A): Option[A] {
   bind(f) = f(value)
}

OBJECT None: Option[All] {
   bind(_) = None
}

OBJECT sequence {
   apply[M[A]: Monad[A],A](ms): List[M[A]] M[List[A]] {
     ms.foldRight(M.unit(List[A]()))( (m, acc) {
        m.bind( (a) {acc.bind( (tail) {M.unit(a :: tail)} )} )
     })
   }
}

The proposed translation to Scala:

trait Monad[+Sub[A] <: Monad[Sub,A], +A] {
   def bind[B](_1: (A) => Sub[B]): Sub[B]
}

trait Option[+A] extends Monad[Option,A]

case class Some[A](value: A) extends Option[A] {
   def bind[B](f: (A) => Option[B]): Option[B] = f(value)
}

case object None extends Option[Nothing] with staticOption {
   def bind[B](_1: (Nothing) => Option[B]): Option[B] = None
}

trait staticMonad[+Sub[Any] <: Monad[Sub,Any]] {
   def unit[A](_1: A): Sub[A]
}

trait staticOption extends staticMonad[Option] {
   def unit[A](a: A): Option[A] = Some(a)
}

object Option extends staticOption
object Some extends staticOption

object Implicits {
   implicit object OptionImplicit extends staticOption
}
import Implicits._

object sequence {
   def apply[M[A] <: Monad[M,A],A](ms: List[M[A]])(implicit tc:
staticMonad[M]): M[List[A]] = {
     ms.foldRight(tc.unit(List[A]()))( (m, acc) =>
        m.bind( (a) => acc.bind( (tail) => tc.unit(a :: tail) ) )
     )
   }
}

(Tangentially, if I can figure out how to elegantly support Scala's =>
anonymous function syntax in Copute's LL(k) grammar, then I will.)

Note the subtypes which did not implement a STATIC extend the same
staticOption, and they don't create conflicting implicits.

Note how the Copute compiler must be smart enough to see that the common
Monad supertype of Some[A] and None is Option[A], so that it subsumes Sub[A]
to Option[A], since Sub[A] can't be both Some[A] and None when they
both have a common supertype which is a Monad.

Note that Copute STATIC methods that are implemented in the INTERFACE
where they are first declared, can go directly in the companion (same name
as trait) Scala object without the above named static* hierarchy.

Also note that where Copute STATIC methods are not all implemented in the
same subtype, then the static* hierarchy will have Sub2 etc for each fork,
e.g. assuming bind was a STATIC, then:

trait staticMonad[+Sub[Any] <: staticMonad[Sub,Sub2,Any], +Sub2[Any] <:
staticMonad[Sub,Sub2,Any]] {
   def unit[A](_1: A): Sub[A]
   def bind[A,B](_1: (A) => Sub2[B]): Sub2[B]
}

trait staticOption[+Sub2[Any] <: staticOption[Sub2,Any]] extends
staticMonad[Option,Sub2,Any] {
   def unit[A](a: A): Option[A] = Some(a)
}
...

I believe I have worked out all the issues with this, yet if anyone sees a
corner case or flaw, please tell me.

Shelby

unread,
Sep 18, 2013, 1:58:59 AM9/18/13
to scala-...@googlegroups.com, she...@coolpage.com
On Wednesday, September 18, 2013 12:58:20 PM UTC+8, Shelby wrote: 
However, at the use site, I could not do the transformation without access to Scala's AST for the computed types (i.e. I would need to figure out how to write Copute as a Scala plugin). Yet I could do the following in Copute syntax instead (note that : is an upper bound in Copute, i.e. <: in Scala):

def f[M[A]: Seq[A], A](xs: Seq[A]) = M.map(xs) ...

Correction (where & means conjunction, i.e. `with` in Scala, and noting that Seq[A]&F[A] is the parameter type, not the result type, which is inferred):

f[F[A]: Functor[A], A](xs): Seq[A]&F[A] = F.map(xs ...

Or maybe it is better to just write out the implicit (although one of my goals is to keep the implicit keyword out of Copute):

f[F[A]: Seq[A], A](xs)(implicit tc): F[A] Functor[F] = tc.map(xs ...

It would be much cleaner if in the future I could make a Scala plugin to aid Copute, so we could instead write it more simply (which was one of the motivations in my original proposal for doing the F-bounded polymorphism with subtyping).

f[A](xs): Seq[A] = xs.map ...

If that isn't already confusing enough to follow, there is another more complex issue implied. In my planned library, something like a Seq may implement more than one typeclass, e.g. Functor and Applicative, and some subtypes of Seq will mix in Monoid, etc.

Thus the type of the implicit tc parameter controls which (conjunction of) typeclass(es) are required by the method. So to allow maximum flexibility, I must allow writing the implicit parameters. I suppose the Copute compiler (with the aid of a Scala plugin that can see all the computed types in the AST) could be smart enough to figure this out in the future, and then the IDE could display which implicit parameters are inferred. So then it would still be possible to write the simplified form. However this still wouldn't help you when the desired typeclass was not specified in the inheritance tree of e.g. Seq, so you must write the implicits in that (perhaps rarer) case.

So I guess it wouldn't be an unreasonable strategy for my proposal to initially require writing the implicits, then later as the Copute compiler becomes smarter, writing them would be optional when the typeclass used in the method is implemented in the e.g. Seq. Perhaps I can still keep implicit methods out of Copute and only allow implicit objects.

Another option would be to abandon Copute's sugar for writing typeclasses, yet I find it enables quite SPOT and concise syntax to specify the default typeclasses for a type. And it gives the illusion (for easier comprehension) that they are subtypes even if I change my proposal to not make them subtypes (so that they can be reimplemented, which avoids the fragile base class problem).

Rex Kerr

unread,
Sep 18, 2013, 2:00:55 AM9/18/13
to Shelby, scala-debate
On Tue, Sep 17, 2013 at 4:57 PM, Shelby <she...@coolpage.com> wrote:
At that point, I decided to disallow non-familiar, non-left associative symbol operators. I don't even like List.++, why not just overload the List.+.

Or, to put it another way, at that point you decided to throw the baby out with the bathwater.

There is a reason that mathematicians use symbols, and it is because they clarify what is going on.  It's easier to conceptualize ∇v when written compactly like that than grad(v_x, v_y, v_z).  You can work with it symbolically.

And there is a reason why mathematicians don't just use + over and over again; sometimes you need to distinguish between different kinds of addition.  (Note ⊕ used for XOR, which is addition-like.)

I think you're looking at this backwards.  + is not a great thing to use just because everyone's familiar with addition.  It's great because it is a great concept to have.  You want that sort of convenience when dealing with something so fundamental yet powerful.  It's so useful that we teach it to everyone.

But just because you decide + is useful for everyone, it doesn't follow that ++ should be avoided by all programmers.  Instead, one should ask: is there a concept that we could powerfully associate with ++?  It had better be +-like, or it will confuse everyone.  But it had also better be very important to keep distinct from +.

And that is exactly what concatenation vs. addition of lists is: another kind of addition, one with different but equally valuable behavior.

And this doesn't go just for lists.  There are a variety of things that either mirror widely (if not universally) used mathematical notation (e.g. f: a -> b) which should be used.

Now, you can try to guess them all, or you can allow users to use the ones that make sense.  I've yet to see a language that hasn't forbidden operators that make sense, thereby transmuting difficult-to-understand material into an impossible morass of huge words or strange abbreviations.  Take list cons:

  case x :: y :: more =>

What would you replace :: with in order to make this any more clear?  What would you replace it with to avoid making it way _less_ clear?

Anyway, there is certainly a school of thought that says that you should forbid anything that can be confused.  But I'm much more interested in languages that make it possible for me to do something that otherwise would be beyond my reach than languages that keep me from reaching far enough to confuse newcomers.  Of course, the two goals overlap to some extent, since if I can't understand my old code, anything that would require such understanding will be beyond me (the maximum size of the projects I can handle will be reduced).  But sometimes there is a tradeoff, and then I am not much interested in having my wings clipped.

  --Rex

P.S. IDEs can turn symbols into lengthy descriptors too.

Shelby

unread,
Sep 18, 2013, 2:22:28 AM9/18/13
to scala-...@googlegroups.com, Shelby
I appreciate this.

On Wednesday, September 18, 2013 2:00:55 PM UTC+8, Rex Kerr wrote:

On Tue, Sep 17, 2013 at 4:57 PM, Shelby <she...@coolpage.com> wrote:
At that point, I decided to disallow non-familiar, non-left-associative symbol operators. I don't even like List.++; why not just overload List.+?

Or, to put it another way, at that point you decided to throw the baby out with the bathwater.

There is a reason that mathematicians use symbols, and it is because they clarify what is going on.  It's easier to conceptualize ∇v when written compactly like that than grad(v_x, v_y, v_z).  You can work with it symbolically.

Agreed. (I aced Differential and Integral Calculus and Linear Algebra at the university night school while still in high school, yet that was 30 years ago ;)
 
And there is a reason why mathematicians don't just use + over and over again; sometimes you need to distinguish between different kinds of addition.  (Note ⊕ used for XOR, which is addition-like.)

Now you are talking about overloading, and not compaction.

I think you're looking at this backwards.  + is not a great thing to use just because everyone's familiar with addition.  It's great because it is a great concept to have.  You want that sort of convenience when dealing with something so fundamental yet powerful.  It's so useful that we teach it to everyone.

But just because you decide + is useful for everyone, it doesn't follow that ++ should be avoided by all programmers.  Instead, one should ask: is there a concept that we could powerfully associate with ++?  It had better be +-like, or it will confuse everyone.  But it had also better be very important to keep distinct from +.

And that is exactly what concatenation vs. addition of lists is: another kind of addition, one with different but equally valuable behavior.

It is more confusing that + is overloaded in some scenarios yet not in others.

I can add two strings, I can add an element to a list, but I can't add two lists.

And ++ is traditionally a unary post and/or pre-increment operator, so that makes it very, very confusing for everyone coming from C-like languages (which is the mainstream).

And this doesn't go just for lists.  There are a variety of things that either mirror widely (if not universally) used mathematical notation (e.g. f: a -> b) which should be used.

Now, you can try to guess them all, or you can allow users to use the ones that make sense.  I've yet to see a language that hasn't forbidden operators that make sense, thereby transmuting difficult-to-understand material into an impossible morass of huge words or strange abbreviations.  Take list cons:

  case x :: y :: more =>

What would you replace :: with in order to make this any more clear?  What would you replace it with to avoid making it way _less_ clear?

What is wrong with case List(x, y)?

Anyway, there is certainly a school of thought that says that you should forbid anything that can be confused.  But I'm much more interested in languages that make it possible for me to do something that otherwise would be beyond my reach than languages that keep me from reaching far enough to confuse newcomers.

If I propose something that prevents your flexibility, please bring it to my attention. I am very much against making anything more difficult or less flexible for any of you.
 
  Of course, the two goals overlap to some extent, since if I can't understand my old code, anything that would require such understanding will be beyond me (the maximum size of the projects I can handle will be reduced).  But sometimes there is a tradeoff, and then I am not much interested in having my wings clipped.

I am just trying to think how we can make code as accessible and consistent as possible, while maintaining all the power we have now.

  --Rex

P.S. IDEs can turn symbols into lengthy descriptors too.

But the newbie doesn't have the IDE running; they are just reading docs in HTML, PDF, and elsewhere online to get a taste.

Rex Kerr

unread,
Sep 18, 2013, 2:40:51 AM9/18/13
to Shelby, scala-debate
On Tue, Sep 17, 2013 at 11:22 PM, Shelby <she...@coolpage.com> wrote:

And there is a reason why mathematicians don't just use + over and over again; sometimes you need to distinguish between different kinds of addition.  (Note ⊕ used for XOR, which is addition-like.)

Now you are talking about overloading, and not compaction.

Kind of.  a + b + c where all types are the same either way but the +'s mean different things is incredibly confusing or just plain won't work.

But if you now need two plus-like symbols and you're only allowed one, you have thrown away compaction because overloading is impossible.
 

I think you're looking at this backwards.  + is not a great thing to use just because everyone's familiar with addition.  It's great because it is a great concept to have.  You want that sort of convenience when dealing with something so fundamental yet powerful.  It's so useful that we teach it to everyone.

But just because you decide + is useful for everyone, it doesn't follow that ++ should be avoided by all programmers.  Instead, one should ask: is there a concept that we could powerfully associate with ++?  It had better be +-like, or it will confuse everyone.  But it had also better be very important to keep distinct from +.

And that is exactly what concatenation vs. addition of lists is: another kind of addition, one with different but equally valuable behavior.

It is more confusing that + is overloaded in some scenarios yet not in others.

I can add two strings, I can add an element to a list, but I can't add two lists.

Adding two strings is the weird one here.
 

And ++ is traditionally a unary post and/or pre-increment operator, so that makes it very, very confusing for everyone coming from C-like languages (which is the mainstream).

Yes, but unfortunately that operator is a really poor choice since it eats an extremely valuable symbol that can distinguish between addition/appending and concatenation.

Which is important unless all your collections are invariant.
 

And this doesn't go just for lists.  There are a variety of things that either mirror widely (if not universally) used mathematical notation (e.g. f: a -> b) which should be used.

Now, you can try to guess them all, or you can allow users to use the ones that make sense.  I've yet to see a language that hasn't forbidden operators that make sense, thereby transmuting difficult-to-understand material into an impossible morass of huge words or strange abbreviations.  Take list cons:

  case x :: y :: more =>

What would you replace :: with in order to make this any more clear?  What would you replace it with to avoid making it way _less_ clear?

What is wrong with case List(x, y)?

How do you access more?

  --Rex

P.S. Web interfaces can play all the IDE tricks.  Who learns a language by typing around in text editors without expecting to be exposed to all the guts?  If you want things pretty and highlighted and so on, you use an IDE.

Shelby

unread,
Sep 18, 2013, 3:21:51 AM9/18/13
to scala-...@googlegroups.com, Shelby
On Wednesday, September 18, 2013 2:40:51 PM UTC+8, Rex Kerr wrote:
On Tue, Sep 17, 2013 at 11:22 PM, Shelby <she...@coolpage.com> wrote:

And there is a reason why mathematicians don't just use + over and over again; sometimes you need to distinguish between different kinds of addition.  (Note ⊕ used for XOR, which is addition-like.)

Now you are talking about overloading, and not compaction.

Kind of.  a + b + c where all types are the same either way but the +'s mean different things is incredibly confusing or just plain won't work.

But if you now need two plus-like symbols and you're only allowed one, you have thrown away compaction because overloading is impossible.

I am trying to understand what you wrote. Perhaps your concern is similar to the famous JavaScript standard library design error: if I understand correctly, + is overloaded for both String and Number, yet there is an implicit conversion between the two types, so the overload is ambiguous when adding a String and a Number, and JavaScript silently chooses one of the ambiguous interpretations. I think there may be a tendency to conflate this implicits-ambiguity problem with using + for the concatenation operator.

Or perhaps you mean that it is confusing when we need to know the types of the operands to know what plus is doing, but I think that is the case in many instances in idiomatic Scala.

Or perhaps you mean where, for any two operand types, we may need distinct additive semantics and thus distinct operators. In this case, yes I agree, but do you have any examples? I can't think of one off the top of my head.
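For the first interpretation, Scala as it stands today has a milder cousin of the same confusion (a minimal sketch of current Scala 2 behavior):

  1 + "1"    // "11" -- resolved as string concatenation, not numeric addition
  "1" + 1    // "11" -- likewise
  // Neither is the numeric 2; this is the same kind of silent choice between
  // addition and concatenation described above for JavaScript.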
 
I think you're looking at this backwards.  + is not a great thing to use just because everyone's familiar with addition.  It's great because it is a great concept to have.  You want that sort of convenience when dealing with something so fundamental yet powerful.  It's so useful that we teach it to everyone.

But just because you decide + is useful for everyone, it doesn't follow that ++ should be avoided by all programmers.  Instead, one should ask: is there a concept that we could powerfully associate with ++?  It had better be +-like, or it will confuse everyone.  But it had also better be very important to keep distinct from +.

And that is exactly what concatenation vs. addition of lists is: another kind of addition, one with different but equally valuable behavior.

It is more confusing that + is overloaded in some scenarios yet not in others.

I can add two strings, I can add an element to a list, but I can't add two lists.

Adding two strings is the weird one here.

Is there any reason (besides the Javascript implicits ambiguity) why concatenation can't be thought of as an additive operation and thus use the + operator?
 
And ++ is traditionally a unary post and/or pre-increment operator, so that makes it very, very confusing for everyone coming from C-like languages (which is the mainstream).

Yes, but unfortunately that operator is a really poor choice since it eats an extremely valuable symbol that can distinguish between addition/appending and concatenation.

Why do we need to distinguish? As far as I can see so far, an additive operation on two lists can't have any other meaning, unless it is a typeclass that only operates on Lists that contain elements which implement the + operator, and then we define + as adding element-wise. But this is the wrong way to do it, since we should use Applicative for that anyway, as it is more general.
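Concretely, the two candidate meanings for "adding" two lists would be (a minimal sketch):

  val a = List(1, 2, 3)
  val b = List(10, 20, 30)

  val concatenated = a ++ b                                 // List(1, 2, 3, 10, 20, 30)
  val elementWise  = a.zip(b).map { case (x, y) => x + y }  // List(11, 22, 33)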

Which is important unless all your collections are invariant.

I don't understand. Do you mean immutable, yet that would apply only to += not +.
 
And this doesn't go just for lists.  There are a variety of things that either mirror widely (if not universally) used mathematical notation (e.g. f: a -> b) which should be used.

Now, you can try to guess them all, or you can allow users to use the ones that make sense.  I've yet to see a language that hasn't forbidden operators that make sense, thereby transmuting difficult-to-understand material into an impossible morass of huge words or strange abbreviations.  Take list cons:

  case x :: y :: more =>

What would you replace :: with in order to make this any more clear?  What would you replace it with to avoid making it way _less_ clear?

What is wrong with case List(x, y)?

How do you access more?

Couldn't we make List.unapply support List(x, y, more) by returning an Option[Seq[A | List[A]]]?

Another reason we need first-class disjunctions.
 
  --Rex

P.S. Web interfaces can play all the IDE tricks.  Who learns a language by typing around in text editors without expecting to be exposed to all the guts?  If you want things pretty and highlighted and so on, you use an IDE.

When I started learning Scala and I bet it is similar for most people, I was reading documents all over the place and there is no way we will get (even a simple majority of) them all to adhere to some IDE translation for obtuse symbols. Life is much too chaotic.

Whereas, the hardcore users who really want those obtuse symbols will most likely be able to copy+paste in their running IDE environment. This target is much more realistic to hit near 100% coverage.

Nevertheless, I would prefer to first focus on the low-hanging fruit where we can replace obtuse symbols by common ones without losing any benefits for any one and for example the List.unapply solution above so then we only have to explain one concept.

I really want to simplify and unify as much as possible, even if it isn't the standard library. Then let the market decide what is the defacto standard (that will happen any way, regardless what anyone thinks here).

Rex Kerr

unread,
Sep 18, 2013, 3:44:38 AM9/18/13
to Shelby, scala-debate
On Wed, Sep 18, 2013 at 12:21 AM, Shelby <she...@coolpage.com> wrote:
On Wednesday, September 18, 2013 2:40:51 PM UTC+8, Rex Kerr wrote:

On Tue, Sep 17, 2013 at 11:22 PM, Shelby <she...@coolpage.com> wrote:

And there is a reason why mathematicians don't just use + over and over again; sometimes you need to distinguish between different kinds of addition.  (Note ⊕ used for XOR, which is addition-like.)

Now you are talking about overloading, and not compaction.

Kind of.  a + b + c where all types are the same either way but the +'s mean different things is incredibly confusing or just plain won't work.

But if you now need two plus-like symbols and you're only allowed one, you have thrown away compaction because overloading is impossible.

I am trying to understand what you wrote. Perhaps your concern is similar to the famous JavaScript standard library design error: if I understand correctly, + is overloaded for both String and Number, yet there is an implicit conversion between the two types, so the overload is ambiguous when adding a String and a Number, and JavaScript silently chooses one of the ambiguous interpretations. I think there may be a tendency to conflate this implicits-ambiguity problem with using + for the concatenation operator.

That's one.  You can view it as either an implicit problem, or that there are two kinds of addition on numbers: regular addition, and concatenation of the string representation.  And they are ambiguous.
 

Or perhaps you mean that it is confusing when we need to know the types of the operands to know what plus is doing, but I think that is the case in many instances in idiomatic Scala.

Or perhaps you mean where, for any two operand types, we may need distinct additive semantics and thus distinct operators. In this case, yes I agree, but do you have any examples? I can't think of one off the top of my head.

concat vs. append on a covariant collection is another.
 
 
I think you're looking at this backwards.  + is not a great thing to use just because everyone's familiar with addition.  It's great because it is a great concept to have.  You want that sort of convenience when dealing with something so fundamental yet powerful.  It's so useful that we teach it to everyone.

But just because you decide + is useful for everyone, it doesn't follow that ++ should be avoided by all programmers.  Instead, one should ask: is there a concept that we could powerfully associate with ++?  It had better be +-like, or it will confuse everyone.  But it had also better be very important to keep distinct from +.

And that is exactly what concatenation vs. addition of lists is: another kind of addition, one with different but equally valuable behavior.

It is more confusing that + is overloaded in some scenarios yet not in others.

I can add two strings, I can add an element to a list, but I can't add two lists.

Adding two strings is the weird one here.

Is there any reason (besides the Javascript implicits ambiguity) why concatenation can't be thought of as an additive operation and thus use the + operator?

Strings are invariant, so + is okay.
 
 
And ++ is traditionally a unary post and/or pre-increment operator, so that makes it very, very confusing for everyone coming from C-like languages (which is the mainstream).

Yes, but unfortunately that operator is a really poor choice since it eats an extremely valuable symbol that can distinguish between addition/appending and concatenation.

Why do we need to distinguish? As far as I can see so far, an additive operation on two lists can't have any other meaning, unless it is a typeclass that only operates on Lists that contain elements which implement the + operator, and then we define + as adding element-wise. But this is the wrong way to do it, since we should use Applicative for that anyway, as it is more general.

Seq() + 5

What does this do?

Seq() + x

What does this do (for any type of x)?

Seq() + Seq(5)

What does this do?
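For reference, with today's spellings the answers differ, because element append silently widens the element type while concatenation does not (a minimal sketch):

  val xs = List(1, 2)

  val appended     = xs :+ "three"  // List[Any] -- the element type widens to Any
  val concatenated = xs ++ List(3)  // List[Int] -- still a list of Int

  // If both were spelled +, `xs + List(3)` could mean either "append List(3)
  // as a single element" (giving List[Any]) or "concatenate" (giving List[Int]).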
 

Which is important unless all your collections are invariant.

I don't understand. Do you mean immutable, yet that would apply only to += not +.

See above.

  --Rex
 

Shelby

unread,
Sep 18, 2013, 4:32:36 AM9/18/13
to scala-...@googlegroups.com, Shelby
Ah yes, it seems I knew this in the past and had forgotten the issue.

As I explained in my recent post in scala-language, we can't prevent the input types of the method of a covariant collection from subsuming to Any unless we restrict the type of elements of the collection:


So we can fix the problem above by forcing collections to have an element type restriction which is not Any, c.f. the C type parameter at my link above. This would arguably be an improvement in another way, because we would be forced to be explicit when we want to subsume the element type when adding to the collection. And we wouldn't need the separate operator for concatenation in most cases. Also I think this might be necessary to fix contains:


There would still be cases where the overloads would be ambiguous or not match, e.g. your Seq() has element type Nothing, thus nothing can be added to it without being explicit; as well, a Seq with an element type of Seq (or a disjunction including Seq) would have ambiguous overloads for concatenation and append and would force being explicit.

Being explicit wouldn't be so onerous and would be required rarely as far as I can see:

Seq().append(...)
Seq().concat(...)

I am really thinking out-of-the-box. Nothing is sacred.

Note I am thinking List(0, "", List(...)) is same as List(0, "") + List(...) which is the same as List(0, "") ++ List(...) because the non-null tail of a list is always a List. Nevertheless that doesn't hold true for other collections.

Shelby

unread,
Sep 18, 2013, 4:41:17 AM9/18/13
to scala-...@googlegroups.com, she...@coolpage.com
On Wednesday, September 18, 2013 1:58:59 PM UTC+8, Shelby wrote: 
Another option would be to abandon Copute's sugar for writing typeclasses, yet I find it enables quite a SPOT and concise syntax for specifying the default typeclasses for a type. And it gives the illusion (for easier comprehension) that they are subtypes even if I change my proposal to not make them subtypes (so they can be reimplemented, thus avoiding the fragile base class problem).

Jason Zaugg passed along other work in this area. I am cross referencing (since he intended it to go here):

Shelby

unread,
Sep 18, 2013, 8:26:13 AM9/18/13
to scala-...@googlegroups.com, Shelby
The use of the C parameter at the first link makes the collection invariant, so that is not a solution. The second link argues that Scala's subsumption to Any is the problem. If that could be fixed or selectively toggled off at the method definition site, then perhaps we could solve both the + issue we are discussing and also the contains problem.

Rex Kerr

unread,
Sep 18, 2013, 8:42:53 PM9/18/13
to Shelby, scala-debate
On Wed, Sep 18, 2013 at 1:32 AM, Shelby <she...@coolpage.com> wrote:
On Wednesday, September 18, 2013 3:44:38 PM UTC+8, Rex Kerr wrote:


Seq() + 5

What does this do?

Seq() + x

What does this do (for any type of x)?

Seq() + Seq(5)

What does this do?

Ah yes, it seems I knew this in the past and had forgotten the issue.

As I explained in my recent post in scala-language, we can't prevent the input types of the method of a covariant collection from subsuming to Any unless we restrict the type of elements of the collection:


So we can fix the problem above by forcing collections to have an element type restriction which is not Any, c.f. the C type parameter at my link above.

Seq() + "5" vs. Seq() + Seq("5")

Now it's AnyRef.

Or maybe Serializable.  Or Product.

"Not Any" really isn't a robust solution.
 
Being explicit wouldn't be so onerous and would be required rarely as far as I can see:

Seq().append(...)
Seq().concat(...)

I am really thinking out-of-the-box. Nothing is sacred.

Note I am thinking List(0, "", List(...)) is same as List(0, "") + List(...) which is the same as List(0, "") ++ List(...) because the non-null tail of a list is always a List. Nevertheless that doesn't hold true for other collections.

Now you have three things to remember: +, append, and concat.  Before you had two: + and ++.  Why is this better?

And you still have the problem of Product with Serializable, or whatever other ancestor happens to pertain to both branches.

val o: Object = ...
Seq() + o
Seq() + Seq(o)

Now what?

  --Rex
 

Shelby

unread,
Sep 18, 2013, 11:25:09 PM9/18/13
to scala-...@googlegroups.com, Shelby
On Thursday, September 19, 2013 8:42:53 AM UTC+8, Rex Kerr wrote:
On Wed, Sep 18, 2013 at 1:32 AM, Shelby <she...@coolpage.com> wrote:
On Wednesday, September 18, 2013 3:44:38 PM UTC+8, Rex Kerr wrote:


Seq() + 5

What does this do?

Seq() + x

What does this do (for any type of x)?

Seq() + Seq(5)

What does this do?

Ah yes, seems I knew this in the past and forgotten the issue.

As I explained in my recent post in scala-language, we can't prevent the input types of the method of a covariant collection from subsuming to Any unless we restrict the type of elements of the collection:


So we can fix the problem above by forcing collections to have an element type restriction which is not Any, c.f. the C type parameter at my link above.

Seq() + "5" vs. Seq() + Seq("5")

Now it's AnyRef.

Or maybe Serializable.  Or Product.

"Not Any" really isn't a robust solution.

The solution is not "Not Any". The solution is to not subsume the covariant type parameter, only subsume the input value. And thus my point is correct:

 

Being explicit wouldn't be so onerous and would be required rarely as far as I can see:

Seq().append(...)
Seq().concat(...)

Btw, that is silly to add to an empty collection *explicitly*.

I am thinking that the append method (called by List.apply) will subsume so that Paul's example (at the above linked thread), List(None, Nil), works correctly, yet the + will not subsume, so it can be overloaded.

Rarer cases of boilerplate would arise when the compiler complains it can't resolve the overload, as for the case above where you explicitly append to an empty collection (which makes no sense in the explicit case anyway).

I am really thinking out-of-the-box. Nothing is sacred.

Note I am thinking List(0, "", List(...)) is same as List(0, "") + List(...) which is the same as List(0, "") ++ List(...) because the non-null tail of a list is always a List. Nevertheless that doesn't hold true for other collections.

Now you have three things to remember: +, append, and concat.  Before you had two: + and ++.  Why is this better?

Astute that you deduced I would make the overloaded + perform differently than append and concat w.r.t. subsumption.

As far as I can see, the casual user will almost never use append nor concat directly, instead use the List.apply constructor and + when being explicit.

And you still have the problem of Product with Serializable, or whatever other ancestor happens to pertain to both branches.

val o: Object = ...
Seq() + o
Seq() + Seq(o)

Now what?

No problem, c.f. my link.

Cheers,
Shelby
 

  --Rex
 

Shelby

unread,
Sep 18, 2013, 11:29:19 PM9/18/13
to scala-...@googlegroups.com, Shelby
On Thursday, September 19, 2013 11:25:09 AM UTC+8, Shelby wrote: 
Now you have three things to remember: +, append, and concat.  Before you had two: + and ++.  Why is this better?

Astute that you deduced I would make the overloaded + perform differently than append and concat w.r.t. subsumption.

As far as I can see, the casual user will almost never use append nor concat directly, instead use the List.apply constructor and + when being explicit.

Another benefit, besides unifying + with overloading, is that we would otherwise need two methods each for append and concat, one that subsumes and one that doesn't. So my overloading solution gives us 3 instead of 4 things to remember.

If we don't have the non-subsumed choice, then we can force type checking (to not subsume to Any or in the future the inferred union).

We absolutely need that. 

Shelby

unread,
Sep 18, 2013, 11:30:30 PM9/18/13
to scala-...@googlegroups.com, Shelby
On Thursday, September 19, 2013 11:29:19 AM UTC+8, Shelby wrote: 
If we don't have the non-subsumed choice, then we can force

s/we can force/we can't force/ 

Rex Kerr

unread,
Sep 19, 2013, 1:40:13 AM9/19/13
to Shelby, scala-debate
On Wed, Sep 18, 2013 at 8:25 PM, Shelby <she...@coolpage.com> wrote:
And you still have the problem of Product with Serializable, or whatever other ancestor happens to pertain to both branches.

val o: Object = ...
Seq() + o
Seq() + Seq(o)

Now what?

No problem, c.f. my link.

But you want to make collections not take anything in contravariant position so that although they're nominally covariant, they don't act it.  This works, but it's not always convenient.

  List(Some("herring")) ++ (if (foo) Some("salmon") else None)

would require an extra type annotation, for instance.

It's an interesting tradeoff to try.  It'd be nice to have a collections library to play with that took that approach so one could assess the impact on code reuse, efficient implementation, and other things that inheritance usually gets you.  (Note that this scheme removes a fair number of methods from the inheritance hierarchy.)

  --Rex

Shelby Moore

unread,
Sep 19, 2013, 5:41:24 AM9/19/13
to Rex Kerr, scala-debate
replying from mobile

> On Wed, Sep 18, 2013 at 8:25 PM, Shelby <she...@coolpage.com> wrote:
>
>> And you still have the problem of Product with Serializable, or whatever
>> other ancestor happens to pertain to both branches.
>>
>>>
>>> val o: Object = ...
>>> Seq() + o
>>> Seq() + Seq(o)
>>>
>>> Now what?
>>>
>>
>> No problem, c.f. my link.
>>
>
> But you want to make collections not take anything in contravariant
> position so that although they're nominally covariant, they don't act it.

Just to make sure we have the same understanding, specifically those
methods marked to not subsume would not subsume the element T unless
explicitly given a supertype. They would not for example subsume Some and
None to Option, unless the collection element type was already Option.

Yet there could be two methods, one that subsumes and one that doesn't so
you could call the one that subsumes instead of doing a cast, in the
example below. So then you get the best of both worlds.

And the compiler should warn when subsuming to Any, so you don't
accidentally make a mistake.

> This works, but it's not always convenient.
>
> List(Some("herring")) ++ (if (foo) Some("salmon") else None)
>
> would require an extra type annotation, for instance.

Agreed, or you could just use the named method which subsumes:

List(Some("herring")) append (if (foo) Some("salmon") else None)

Then no cast needed. The compiler error would be an indication it is
required, so it is not a silent issue.

> It's an interesting tradeoff to try. It'd be nice to have a collections
> library to play with that took that approach so one could assess the
> impact
> on code reuse, efficient implementation, and other things that inheritance
> usually gets you.

We can try it now employing Vlad's solution, but his solution would not
allow input supertypes to subsume the collection element type, so the cast
would not work. With Vlad's solution we would need to always use the
method without his solution in the case above (e.g. append). Also Vlad's
solution is inefficient.

So we really can't try this in full flavor until the compiler is fixed.

All the other solutions besides Vlad's that are available now without a
compiler fix, would require casting subtypes which is ridiculous. I assume
you read my latest post in the relevant thread in scala-language for the
details.

> (Note that this scheme removes a fair number of methods
> from the inheritance hierarchy.)

Hmmm, I didn't know that. Thanks.

Thanks for taking some interest in experimenting. Hope we can reach
consensus on fixing the compiler to allow this.

Shelby

unread,
Sep 19, 2013, 8:50:22 PM9/19/13
to scala-...@googlegroups.com, Shelby
On Friday, September 13, 2013 2:59:56 PM UTC+8, Shelby wrote:
On Friday, September 13, 2013 2:34:58 PM UTC+8, Jason Zaugg wrote:
On Thu, Sep 12, 2013 at 11:55 PM, Shelby <she...@coolpage.com> wrote:

Even if working with Scala isn't the plan, it's still the JVM, so people *will* try to use it with other languages, and the type signatures are going to leak.  So I'd encourage you not to turn these into Any (which is a gigantic red flag for me), but instead into some sort of wrapper that can enforce at least *some* good behaviour.

How is subsumption to Any not a reasonable behavior? You are forced to specialize the type with match-case before you can do anything with it besides pass it along as an Any.

I haven't been following this too closely, but I'll chime in here. Apologies if I've missed the point.

Previous efforts to support unboxed unions (i.e. without using Either or hypothetical OneOf3, ..., OneOfN) have run into problems with erasure. We've had a compiler around at one stage (caveat: said compiler was cobbled together in a bar by a certain prolific scalac hacker) that would type `if (true) a: A else b: B` as `A|B`. But when it comes time to typecase, you can only do so if the erased types of `A` and `B` are rich enough. So you can't write:

   def foo[A, B](ab: A|B) = ab match { case _: A => case _: B }

You'd need a scheme by which values of these types were at least boxed in `class Union[T](value: Any, tpe: TypeTag[T])` and where pattern matching could use those reified types at runtime to pick the right case.

When A and B are not type parameters (i.e. are concrete classes) this is only an issue when they have the same 0-kind, e.g. they are both List[_].

This is just the same issue as two overloads:

def foo(ab: List[String]) = ...
def foo(ab: List[Int]) = ...

Which can be solved with TypeTag context bounds.
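For example (a minimal sketch; `foo` here collapses the two overloads into one method that inspects the TypeTag at runtime, since the two overloads above would collide after erasure):

  import scala.reflect.runtime.universe._

  def foo[A: TypeTag](ab: List[A]): String = typeOf[A] match {
    case t if t =:= typeOf[String] => "handled as List[String]"
    case t if t =:= typeOf[Int]    => "handled as List[Int]"
    case _                         => "handled generically"
  }

  // foo(List("a"))   // "handled as List[String]"
  // foo(List(1, 2))  // "handled as List[Int]"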

When A and B are type parameters, then if the compiler were smart enough, we could have two overloads, one that inputs a boxed union and one that inputs the first-class union. The compiler could call the correct overload depending on whether A and B have the same 0-kind.

Shelby

unread,
Sep 20, 2013, 5:03:11 AM9/20/13
to scala-...@googlegroups.com, she...@coolpage.com
I am retracting the quoted prior post. On further thought (actually remembering my original logic from before I got sidetracked with illness), it appears Copute's design employing F-bounded polymorphism instead of typeclasses is correct. Btw, I had previously concluded that Copute's design was F-bounded polymorphism, so that isn't new to me; I had just forgotten.

Referring to the `sequence` example in the quoted post, the problem with typeclasses arises when the list is heterogeneous: the input implicit `tc` must then have an untyped Monad[Any] or an infeasible type Monad[first-class-union-of-types-in-the-list]. Monad[Any] is unimplementable: even if it employs match-cases for known types, there would still be a case _ => that must throw a runtime exception. And the variants of Monad[first-class-union-of-types-in-the-list] would be the Cartesian product of the existing types.

I had covered the implications of this inflexibility (that typeclasses can't do virtual inheritance) in my SO/SE Q&A on "Complete Solutions to the Expression Problem?", c.f. the section "Typeclass Solution Can't Virtually Inherit" in my first answer, which had 10 votes, and also see my second answer, which had -2 votes (note SO/SE deleted my Q&A so I link to my copy):


As far as I can see, the benefits of typeclasses don't really exist.

(Generic programming with views)

* Items in the collections aren’t required to implement Ordered,
  but Ordered uses are still statically type checked.
* You can define your own orderings without any additional library support

Well, in my opinion `min` shouldn't even be a method on Seq. That is a violation of separation-of-concerns. Instead, `min` should be a global function that inputs a Functor[Ordered], and/or a Functor[T] along with an (implicit) typeclass Ordered[T]. So then you can still use a typeclass for certain T without needing to support the entire world of T. So this gives you flexibility to choose an optimum strategy without refactoring code.
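A rough sketch of that shape, using the standard library's Iterable and Ordering in place of the Functor/Ordered formulation (the name `minOf` is hypothetical):

  def minOf[A](xs: Iterable[A])(implicit ord: Ordering[A]): A =
    xs.reduceLeft((a, b) => if (ord.lteq(a, b)) a else b)  // throws on an empty collection, like the standard min

  // minOf(List(3, 1, 2))           // 1
  // minOf(List("pear", "apple"))   // "apple"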

Yet we need to consider how `min` would interact with views (lazy collections), although min is essentially a fold, thus I guess it could force:


And the above requires another key point in my above linked "Complete Solutions to the Expression Problem?", in the section "Immutable Partial Solution" of my answer. For immutable data types, pimping a run-time type (e.g. to add the Ordered interface) doesn't have to wrap (hide and delegate to) the pimped type. Thus subtyping just works and we can add methods to anything at any time. This was another reason that I decided to make Copute target pure functional programming with immutable data types.

These concepts can be applied to Scala programming.


On Wednesday, September 18, 2013 12:58:20 PM UTC+8, Shelby wrote:

Shelby

unread,
Sep 20, 2013, 8:42:27 AM9/20/13
to scala-...@googlegroups.com, she...@coolpage.com
On Friday, September 20, 2013 5:03:11 PM UTC+8, Shelby wrote:
I am retracting the quoted prior post. On further thought (actually remembering my original logic from before I got sidetracked with illness), it appears Copute's design employing F-bounded polymorphism instead of typeclasses is correct. Btw, I had previously concluded that Copute's design was F-bounded polymorphism, so that isn't new to me; I had just forgotten.

Referring to the `sequence` example in the quoted post, the problem with typeclasses arises when the list is heterogeneous: the input implicit `tc` must then have an untyped Monad[Any] or an infeasible type Monad[first-class-union-of-types-in-the-list]. Monad[Any] is unimplementable: even if it employs match-cases for known types, there would still be a case _ => that must throw a runtime exception. And the variants of Monad[first-class-union-of-types-in-the-list] would be the Cartesian product of the existing types.

A potential solution might be if Scala's planned macros were able to parse the union type and create an implicit object satisfying TypeClass[first-class-union-of-types-in-the-list]; it could then create a match-case and delegate to each TypeClass[Type-from-the-union]. This would be declared at the call site. I suppose macros would need access to the type-annotated AST. Then we could view typeclasses as being as generally applicable as subtyping.

Perhaps this might work even if using Shapeless to model the unions.
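For concreteness, here is a hand-written approximation of what such a generated instance could look like, using a hypothetical Show typeclass and Either as a stand-in for the union Int | String:

  object UnionTypeclassSketch {
    trait Show[A] { def show(a: A): String }

    implicit val showInt: Show[Int] = new Show[Int] {
      def show(i: Int): String = s"Int($i)"
    }
    implicit val showString: Show[String] = new Show[String] {
      def show(s: String): String = s"String($s)"
    }

    // The "generated" instance delegates to each member's instance via a match-case.
    implicit def showIntOrString(implicit si: Show[Int],
                                 ss: Show[String]): Show[Either[Int, String]] =
      new Show[Either[Int, String]] {
        def show(e: Either[Int, String]): String = e match {
          case Left(i)  => si.show(i)
          case Right(s) => ss.show(s)
        }
      }
  }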