[Haskell-cafe] A Procedural-Functional Language (WIP)


Rik Howard

Oct 22, 2016, 8:19:02 AM
to haskel...@haskell.org
Dear Haskell Cafe Subscribers

On the recommendation of someone for whom I have great respect, I have just subscribed to this list; it was suggested as a good place for me to get feedback on a project I have been working on.  I am humbled by the level of discussion here, and it feels a bold step to request anybody's time for my words.

The linked document is a four-page work-in-progress summary: the length being stipulated, potential novelty being the other main requirement.  Given the requirements, the summary necessarily glosses over some details and is not yet, I fear, completely correct.  The conclusion is, more or less, the one at which I am aiming; the properties are, more or less, the ones that are needed.


The work arises from an investigation into functional programming syntax and semantics.  The novelty seems to be there, but there is also a question as to whether it is simply a gimmick.  I try to suggest that it is not but, by that stage, there have been many assumptions, so it is hard to be sure whether the suggestion is valid.  If anyone has any comments, questions or suggestions, they would be gratefully received.

Yours sincerely
Rik Howard

Richard Eisenberg

Oct 22, 2016, 3:35:47 PM
to Rik Howard, haskel...@haskell.org
Hi Rik,

I'm unsure what to make of your proposal, as it's hard for me to glean out what you're proposing. Do you have some sample programs written in your proposed language? What is the language's grammar? What is its type system (stated in terms of inference rules)? Having these concrete descriptors of the language would be very helpful in assessing this work.

Richard

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Joachim Durchholz

Oct 22, 2016, 3:48:30 PM
to haskel...@haskell.org
On 22.10.2016 at 14:18, Rik Howard wrote:
> Dear Haskell Cafe Subscribers
>
> On the recommendation of someone for whom I have great respect, I have
> just subscribed to this list; it was suggested as a good place for me
> to get feedback on a project I have been working on.  I am humbled by
> the level of discussion here, and it feels a bold step to request
> anybody's time for my words.
>
> The linked document is a four-page work-in-progress summary: the length
> being stipulated, potential novelty being the other main requirement.
> Given the requirements, the summary necessarily glosses over some
> details and is not yet, I fear, completely correct.

It is a programme for designing a programming language.
It is leaving out a number of central issues: How to approach
modularity, whether it should have opaque types (and why), whether there
should be subtypes or not, how the type system is supposed to deal with
arithmetic which has almost-compatible integer types and floating-point
types.
That's just off the top of my head, I am pretty sure that there are
other issues.

It is hard to discuss merits or problems at this stage, since all of
these issues tend to influence each other.

> The work arises from an investigation into functional programming syntax
> and semantics.  The novelty seems to be there, but there is also a
> question as to whether it is simply a gimmick.  I try to suggest that it
> is not but, by that stage, there have been many assumptions, so it is
> hard to be sure whether the suggestion is valid.  If anyone has any
> comments, questions or suggestions, they would be gratefully received.

One thing I have heard is that effects, subtypes and type system
soundness do not mix well. Subtypes are too useful to ignore, unsound
type systems are not worth the effort, so I find it a bit surprising
that the paper has nothing to say about the issue.

Just my 2c.
Regards,
Jo

Rik Howard

Oct 23, 2016, 4:52:58 AM
to Richard Eisenberg, haskel...@haskell.org
Hi Richard

thank you for your reply.  It is becoming apparent that my explanation can be made clearer.  I'm investigating a language that takes something like core Haskell (a Lambda Calculus augmented with let blocks) to satisfy its pure function requirements but that takes a different approach when it comes to IO by employing procedures.

For IO, a procedure returns only 'okay' or an error via the mechanism that a function would use for returning values; the non-deterministic values produced by the procedure are instead passed back through variable (out) parameters.  For example, to define a procedure to echo once from standard in to out:

    echo = try (read string) (write string) error

The value coming from standard-in is to be captured in the 'string' out-variable and so is available to be written to standard-out, the 'try' built-in being analogous to 'if'.
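To make the shape of 'try' concrete, here is a minimal pure sketch in Haskell, with Either standing in for the okay-or-error result; the names `tryE` and the modelling are mine, purely for illustration, not the note's actual syntax:

```haskell
-- Illustrative model only: Either stands in for the okay/error result,
-- and tryE plays the role of the 'try' built-in from the note.
tryE :: Either err ok -> (ok -> r) -> (err -> r) -> r
tryE result onOk onErr = either onErr onOk result

-- echo: on a successful read, pass the string through; otherwise report.
echo :: Either String String -> String
echo input = tryE input id ("error: " ++)

main :: IO ()
main = do
  putStrLn (echo (Right "hello"))
  putStrLn (echo (Left "end of input"))
```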

Rough (inconsistent) examples exist; the grammar and type system are in a slightly better state but not yet written up properly.  What I could quickly add as an appendix, I have added, but the notation needs to be made more standard for easier comparison.  I am working on another paper that will give a more concrete and standard presentation.  I hope that this goes some way to answering your questions; please say if it doesn't!

Rik



Sven SAULEAU

Oct 23, 2016, 7:35:35 AM
to Rik Howard, haskel...@haskell.org
Hi,

Seems interesting, but I have difficulty understanding it.  Some concrete examples would help.

Your example could be done in Haskell using the IO monad.
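For instance, one hedged rendering of the echo example with Haskell's IO monad; the helper names `handle` and `echoWith` are assumptions for illustration, not a standard API:

```haskell
import Control.Exception (IOException, try)

-- A sketch of the monadic rendering: the okay-or-error split lives in
-- 'handle', and 'echoWith' takes its read and write actions as
-- arguments so the pattern stands out.
handle :: Either IOException String -> String
handle (Left e)  = "error: " ++ show e
handle (Right s) = s

echoWith :: IO String -> (String -> IO ()) -> IO ()
echoWith readS writeS = try readS >>= writeS . handle

main :: IO ()
main = echoWith (pure "hello") putStrLn  -- getLine would be the real read action
```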

Wouldn’t procedures be inlined as a compilation optimisation in Haskell?

From my point of view, your proposal would degrade code readability without improving execution efficiency.

As said earlier, I may have misunderstood. Feel free to correct me.

Regards,
Sven

--

Sven SAULEAU - Xtuc

 

Développeur Web

con...@xtuc.fr

06 28 69 51 44

www.xtuc.fr

https://www.linkedin.com/in/svensauleau

Joachim Durchholz

Oct 23, 2016, 8:15:05 AM
to haskel...@haskell.org
On 23.10.2016 at 10:52, Rik Howard wrote:
> Hi Richard
>
> thank you for your reply. It is becoming apparent that my explanation
> can be made clearer. I'm investigating a language that takes something
> like core Haskell (a Lambda Calculus augmented with let blocks) to
> satisfy its pure function requirements but that takes a different
> approach when it comes to IO by employing procedures.

Are you aware how "monadic IO" became the standard in Haskell?
It was one of three competing approaches, and AFAIK one turned out to be
less useful, and the other simply wasn't ready in time (so it might
still be interesting to investigate).

> For IO, a procedure returns only 'okay' or an error via the mechanism
> that a function would use for returning values; the non-deterministic
> values produced by the procedure are instead passed back through
> variable (out) parameters.

What's the advantage here?

Given the obvious strong disadvantage that it forces callers into an
idiom that uses updatable data structures, the advantage better be
compelling.

> For example, to define a procedure to echo once from standard in to out:
>
> echo = try (read string) (write string) error
>
> The value coming from standard-in is to be captured in the 'string'
> out-variable and so is available to be written to standard-out, the
> 'try' built-in being analogous to 'if'.

What is the analogy?
That stuff is evaluated only on a by-need basis? That's already there in
Haskell.

> Rough (inconsistent) examples exist; its grammar and type system are in
> a slightly better state but not yet written up properly. What I could
> quickly add as an appendix, I have but the notation needs to be made
> more standard for easier comparison. I am working on another paper that
> will address the need for a more-concrete and standard presentation. I
> hope that this goes some way to answering your questions; please say if
> it doesn't!

Right now I fail to see what's new and better in this.

Regards,
Jo

Tom Ellis

Oct 23, 2016, 4:40:18 PM
to haskel...@haskell.org
On Sun, Oct 23, 2016 at 02:14:58PM +0200, Joachim Durchholz wrote:
> Are you aware how "monadic IO" became the standard in Haskell?
> It was one of three competing approaches, and AFAIK one turned out
> to be less useful, and the other simply wasn't ready in time

That's very tantalising. Can you link to a reference?!

Rik Howard

Oct 23, 2016, 5:44:56 PM
to Sven SAULEAU, haskel...@haskell.org
Hi Sven

thanks for the response.  Some examples will definitely be produced.  The syntax in the note is one that has been stripped down to support not much more than is needed for the note.  On reflection, this does not make for a good presentation.  A suggestion has been made to use the syntax of a standard reference to make a conceptual evaluation of the language easier.  I think this could be a good way to clarify the ideas in the note; if they stand, a more-concrete (or practical) syntax could then be considered.

The example could certainly be done in Haskell and many other languages.  I'm not sure of the optimisation that you mention; so far the emphasis has been more on matters of semantics and, in the note, on what effect the approach would have on a type system.  You may be right about the readability; some examples would be useful.  Coming soon!

Best
Rik

Rik Howard

Oct 23, 2016, 6:12:21 PM
to haskel...@haskell.org
All -- thanks for the feedback: it has been thought-provoking and enlightening.  There are some obvious deficiencies that can be addressed, which is probably worth doing before further discussion.  Please feel free to email me directly, otherwise I think I've taken enough space for the meantime on this valuable thread.

Best, R.



KC

Oct 23, 2016, 6:23:47 PM
to Rik Howard, haskel...@haskell.org

You may want to look at

Call-By-Push-Value:
A Functional/Imperative Synthesis
by Paul Blain Levy (Springer)

--

Sent from an expensive device which will be obsolete in a few months! :D

Casey
   


Joachim Durchholz

Oct 24, 2016, 12:54:06 AM
to haskel...@haskell.org
On 23.10.2016 at 22:40, Tom Ellis wrote:
> On Sun, Oct 23, 2016 at 02:14:58PM +0200, Joachim Durchholz wrote:
>> Are you aware how "monadic IO" became the standard in Haskell?
>> It was one of three competing approaches, and AFAIK one turned out
>> to be less useful, and the other simply wasn't ready in time
>
> That's very tantalising. Can you link to a reference?!

I think it is in one or more of these:

Tackling the Awkward Squad
A History of Haskell: being lazy with class
A retrospective on Haskell

(It's been a decade since I last read them so they might be not exactly
what you're after, but they should be close.)

Sven SAULEAU

Oct 24, 2016, 1:35:25 AM
to Rik Howard, haskel...@haskell.org
Hi,

Ok great.

Thanks,

Regards,
Sven

Rik Howard

Oct 24, 2016, 2:22:47 AM
to KC, haskel...@haskell.org
Thanks, I will.

R

Clinton Mead

Oct 24, 2016, 2:31:58 AM
to Joachim Durchholz, Haskell Cafe
Just curious, what was the IO approach that "wasn't ready in time"?

Ronald Legere

Oct 24, 2016, 10:50:21 AM
to haskel...@haskell.org
I must admit to some curiosity about this as well.  My recollection is that the original approach was to use lazy streams:
type Dialogue = [Response] -> [Request]
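That stream style can be modelled in a few lines of modern Haskell; the Request/Response constructors below are toy assumptions for illustration (the real Haskell 1.0 types had many more cases):

```haskell
-- A toy model of stream-style IO: the program is a lazy function from
-- Responses to Requests, tied into a knot by a pure "operating system".
data Request  = ReadLine | WriteLine String
data Response = Line String | Ok

-- echo: ask for a line; when the response arrives, write it back.
echo :: [Response] -> [Request]
echo resps = ReadLine : case resps of
  (Line s : _) -> [WriteLine s]
  _            -> []

-- run: answer each request against a fixed input, collecting the output.
run :: [String] -> ([Response] -> [Request]) -> [String]
run input prog = [s | WriteLine s <- reqs]
  where
    reqs  = prog resps
    resps = answer reqs input
    answer (ReadLine    : rs) (i:is) = Line i : answer rs is
    answer (ReadLine    : rs) []     = Ok     : answer rs []  -- crude end-of-input
    answer (WriteLine _ : rs) is     = Ok     : answer rs is
    answer []                 _      = []

main :: IO ()
main = print (run ["hello"] echo)
```

The laziness is essential: `reqs` and `resps` are mutually defined, which is exactly what made this style fragile to program in directly.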

This can be managed a bit better using continuations (Perhaps continuations can also be considered a separate approach?)

And now we have the IO monad (which can be defined in terms of the stream-based approach but is not implemented that way).

The only other approach I am aware of is Clean's "Unique types".  
 




--
Ron Legere (rjl...@gmail.com)
 C'est le temps que tu as perdu pour ta rose qui fait ta rose si importante

Simon Thompson

Oct 24, 2016, 12:40:20 PM
to Ronald Legere, Haskell Cafe
Miranda modelled IO as a (lazy) function from [Char] -> [Char] … pretty unwieldy to use “raw”, but it was made controllable by a set of combinators that could be seen as a proto-monadic library.
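That [Char] -> [Char] view survives in Haskell's `interact`. A one-line echo in that style (a sketch; here applied to a fixed string rather than stdin so the example is self-contained):

```haskell
-- Miranda-style IO as a lazy function on character streams.
-- 'interact echoOnce' would wire it to stdin/stdout.
echoOnce :: String -> String
echoOnce = unlines . take 1 . lines

main :: IO ()
main = putStr (echoOnce "hello\nworld\n")
```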

Simon Y.


On 24 Oct 2016, at 15:50, Ronald Legere <rjl...@gmail.com> wrote:

I must admit to some curiosity about this as well.  My recollection is that the original approach was to use lazy streams:
type Dialogue = [Response] -> [Request]

This can be managed a bit better using continuations (Perhaps continuations can also be considered a separate approach?)

And now we have the IO monad (which can be defined in terms of the stream-based approach but is not implemented that way).

The only other approach I am aware of is Clean's "Unique types".  

Simon Thompson | Professor of Logic and Computation 
School of Computing | University of Kent | Canterbury, CT2 7NF, UK
s.j.th...@kent.ac.uk | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt

Rik Howard

Oct 25, 2016, 3:12:16 AM
to Joachim Durchholz, haskel...@haskell.org
Hello Jo

apologies for the delayed reply!  Thank you for your time and thoughts.  (I'd not realised that some messages come only through the digest.  And apologies generally for taking more bandwidth after I said I wouldn't -- I hadn't realised then that there were emails lacking a response; this might be tl;dr as it is a response to two.)

It is a programme for designing a programming language.

From about this point in the design of the language, all you can do can be defined as functions (e.g., logical negation).  So introducing a symbol for an operation becomes something between syntactic sugar and an opportunity for optimisation.  At least, that's the thought process so far.

It is leaving out a number of central issues: How to approach modularity

The note needs to be clearer that abstraction, application and let-blocks form an extended lambda calculus, thereby giving higher-order functions; evaluation is to be lazy (or normal order, at least).  A packaging system has been conceived for the language, as have some other features, some of which may bear traces of novelty, but space is tight and the variable parameters need to be treated first.  The note tends to focus on the would-be type system because that is where the presence of out-vars makes itself most apparent.

whether it should have opaque types (and why), whether there should be subtypes or not, how the type system is supposed to deal with arithmetic which has almost-compatible integer types and floating-point types.  That's just off the top of my head, I am pretty sure that there are other issues.  It is hard to discuss merits or problems at this stage, since all of these issues tend to influence each other.

There are gaps in the design.  This step is to show that at least those considerations haven't been precluded in some way by the mix of features.

One thing I have heard is that effects, subtypes and type system soundness do not mix well. Subtypes are too useful to ignore, unsound type systems are not worth the effort, so I find it a bit surprising that the paper has nothing to say about the issue.

I'm not sure what you mean by effects (may I ask you to elaborate?  Or side effects maybe?) but subtypes would appear to offer an intuitive analogy with set theory.  It would mean extra look-ups in the deciding function to check inclusion, possibly using some sort of 'narrowable' type, and that would make showing soundness that much more involved.  Are there other beyond-the-norm complications?

Are you aware how "monadic IO" became the standard in Haskell?  It was one of three competing approaches, and AFAIK one turned out to be less useful, and the other simply wasn't ready in time (so it might still be interesting to investigate).

No, I'm not, it sounds fascinating.  Thank you for subsequently providing references.

> For IO, ... variable parameters. 
What's the advantage here?  Given the obvious strong disadvantage that it forces callers into an idiom that uses updatable data structures, the advantage better be compelling.

The out-vars are the same as other variables in terms of updating: they have to be fresh on the way in and can't be modified after coming out -- I should make that more clear -- or was that not what you meant?  The difference (I don't know that it can be called an advantage) is that, elsewhere, IO can be done pretty much wherever-whenever, whereas the insistence on a try-then-else for penultimate invocations ensures that such doing does not go unnoticed.

>... the 'try' built-in being analogous to 'if'.
What is the analogy?  That stuff is evaluated only on a by-need basis? That's already there in Haskell.

Yes.  And at the risk of labouring the point, 'if' has a true-false condition determining which expression to evaluate; 'try' has an okay-error condition for the same.

Right now I fail to see what's new and better in this.

Some languages allow IO expressions without any further thought being paid to the matter; some provide explicit mechanisms for dealing with IO.  The language in the note takes a mid-way approach, in some sense, that I'm not familiar with from elsewhere.  Assuming that this approach isn't in a language that I should know by now, could the approach not count as new?  It may be irrelevant on some level, I suppose.

I hope that this goes some way towards being an adequate response.  Once again, thank you for your invaluable feedback -- much appreciated!

R



Richard A. O'Keefe

Oct 25, 2016, 8:19:43 PM
to haskel...@haskell.org

On 25/10/16 3:50 AM, Ronald Legere wrote:
> The only other approach I am aware of is Clean's "Unique types".

That approach was also adopted in the programming language Mercury,
which includes both statically typed and moded logic programming
and statically typed functional programming, using Haskell-ish types.

It's not clear to me why Haskell, Clean, and Mercury don't already
qualify as "procedural-functional languages", especially when you
consider that the logic programming part of Mercury provides a
very clean approach to result arguments.

Joachim Durchholz

Oct 26, 2016, 12:36:14 AM
to Rik Howard, haskel...@haskell.org
On 25.10.2016 at 09:12, Rik Howard wrote:
> whether it should have opaque types (and why), whether there should
> be subtypes or not, how the type system is supposed to deal with
> arithmetic which has almost-compatible integer types and
> floating-point types. That's just off the top of my head, I am
> pretty sure that there are other issues. It is hard to discuss
> merits or problems at this stage, since all of these issues tend to
> influence each other.
>
>
> There are gaps in the design. This step is to show that at least those
> considerations haven't been precluded in some way by the mix of features.

The question is not whether these things are precluded, the question is
how you want to tackle them. The paper isn't even stating design goals here.

> One thing I have heard is that effects, subtypes and type system
> soundness do not mix well. Subtypes are too useful to ignore,
> unsound type systems are not worth the effort, so I find it a bit
> surprising that the paper has nothing to say about the issue.
>
>
> I'm not sure what you mean by effects (may I ask you to elaborate? Or
> side effects maybe?)

Yes.

> but subtypes would appear to offer an intuitive
> analogy with set theory.

That's the least interesting part of subtypes, actually.
The salient point of this and some other features is that they make it
easier to reason about a given program's properties, at the expense of
making programming harder.
(One of the major design points in designing a new programming language
is where exactly to place that trade-off, and a lot of streaks of genius
went into easing the burden on the programmer.)

> It would mean extra look-ups in the deciding
> function to check inclusion, possibly using some sort of 'narrowable'
> type, and that would make showing soundness that much more involved.
> Are there other beyond-the-norm complications?

Lots.
The basic concept of subtypes is simple, but establishing a definition
of "subtype" that is both useful and sound is far from trivial.

For example, mutable circles and mutable ellipses are not in a subtype
relationship to each other if there is an updating "scale" operation
with an x and y scaling factor (you cannot guarantee that a scaled
circle stays circular).
The design space for dealing with this is far from fully explored.

Also, subtypes and binary operators do not really mix; google for
"parallel type hierarchy". (The core of the problem is that if you make
Byte a subtype of Word, declaring the (+) operator in Word as Word ->
Word will preclude Byte from being a subtype because you want a
covariant signature in Byte but that violates subtyping rules for
functions. So you need parametric polymorphism, but now you cannot use
the simple methods for subtyping anymore.)
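Haskell's own answer to that trap is parametric polymorphism via Num: (+) keeps both operands at a single type, so no covariant override ever arises. A small illustration using the standard Data.Word types (Word8 standing in for "Byte"):

```haskell
import Data.Word (Word16, Word8)

-- No Byte <: Word subtyping here: (+) :: Num a => a -> a -> a fixes
-- both operands to the same type, sidestepping the parallel-hierarchy
-- problem at the cost of giving up implicit widening.
addBytes :: Word8 -> Word8 -> Word8
addBytes = (+)

addWords :: Word16 -> Word16 -> Word16
addWords = (+)

main :: IO ()
main = do
  print (addBytes 200 100)  -- wraps modulo 256
  print (addWords 200 100)
```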

> Are you aware how "monadic IO" became the standard in Haskell? It
> was one of three competing approaches, and AFAIK one turned out to
> be less useful, and the other simply wasn't ready in time (so it
> might still be interesting to investigate).
>
>
> No, I'm not, it sounds fascinating. Thank you for subsequently
> providing references.
>
> > For IO, ... variable parameters.
>
> What's the advantage here? Given the obvious strong disadvantage
> that it forces callers into an idiom that uses updatable data
> structures, the advantage better be compelling.
>
> The out-vars are the same as other variables in terms of updating: they
> have to be fresh on the way in and can't be modified after coming out --
> I should make that more clear

Oh, you don't have in-place updates, you have just initialization?
I missed that.

The key point to mention is that you want to maintain referential transparency.

BTW this still makes loops useless for putting values in variables,
because you can't update variables in an iteration; programmers will
still have to write recursive functions.
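The recursive idiom in question, for the record (a standard accumulator pattern, nothing specific to the proposal):

```haskell
-- What an imperative counting loop becomes without updatable variables:
-- the "loop variables" are threaded as arguments to a recursive helper.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

main :: IO ()
main = print (sumTo 10)
```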

BTW nobody who is familiar with functional languages would consider that
a disadvantage.
Speaking of user groups: I am not sure what crowd you want to attract
with your design. It's not necessary to put that into the paper, but one
of the things that went "er, what?" in the back of my head was that I
could not infer for whom this kind of language would be useful.

> -- or was that not what you meant?  The
> difference (I don't know that it can be called an advantage) is that,
> elsewhere, IO can be done pretty much wherever-whenever, whereas the
> insistence on a try-then-else for penultimate invocations ensures that
> such doing does not go unnoticed.

Sounds pretty much like the consequences of having the IO monad in Haskell.
I think you should elaborate the similarities and differences with how
Haskell does IO; that's a well-known standard, so it will make the
paper easier to read. Same goes for Clean and Mercury.

> Right now I fail to see what's new&better in this.
>
> Some languages allow IO expressions without any further thought being
> paid to the matter; some provide explicit mechanisms for dealing with
> IO. The language in the note takes a mid-way approach, in some sense,
> that I'm not familiar with from elsewhere. Assuming that this approach
> isn't in a language that I should know by now, could the approach not
> count as new? It may be irrelevant on some level, I suppose.

It's hard to tell whether it is actually new, too many details are missing.

> I hope that this goes some way towards being an adequate response. Once
> again, thank you for your invaluable feedback -- much appreciated!

You're welcome :-)

Regards
Jo

Rik Howard

Oct 26, 2016, 11:48:55 AM
to Joachim Durchholz, haskel...@haskell.org
Jo


The question is not whether these things are precluded, the question is how you want to tackle them. The paper isn't even stating design goals here.

The section on Types has been amended to note that these questions form a part of the ongoing work.
 
 
The salient point of this and some other features is that they make it easier to reason about a given program's properties, at the expense of making programming harder.

You put that point well.
 

The basic concept of subtypes is simple, but establishing a definition of "subtype" that is both useful and sound is far from trivial.

I hope that there is nothing that I've said that could be interpreted as me thinking otherwise.  As you mentioned elsewhere, though, they look too appealing to ignore.
 

For example, mutable circles and mutable ellipses are not in a subtype relationship to each other if there is an updating "scale" operation with an x and y scaling factor (you cannot guarantee that a scaled circle stays circular).
The design space for dealing with this is far from fully explored.

I'm not sure that the language could support mutable structures but I take your point that there are complications.
 

Also, subtypes and binary operators do not really mix; google for "parallel type hierarchy". (The core of the problem is that if you make Byte a subtype of Word, declaring the (+) operator in Word as Word -> Word will preclude Byte from being a subtype because you want a covariant signature in Byte but that violates subtyping rules for functions. So you need parametric polymorphism, but now you cannot use the simple methods for subtyping anymore.)

Clearly there is more to be done in this area.
 

The key point to mention is that you want to maintain referential transparency.

The document now mentions this.
 

Sounds pretty much like the consequences of having the IO monad in Haskell.

That seems fair to me although the broader impact on an entire program would be different I think.
 

I think you should elaborate the similarities and differences with how Haskell does IO; that's a well-known standard, so it will make the paper easier to read. Same goes for Clean and Mercury.

Something like that is addressed in Related Work.  Clean is already on the list but it sounds, from your comments and those of others, as if Mercury may be worth including as well. 
 

It's hard to tell whether it is actually new, too many details are missing.

Certainly you have spotted the vagueness in the types; however, I think that issue can be set aside for the moment from the point of view of novelty.  The language is purely functional with respect to functions and provides out-vars as the only mechanism for dealing with IO.  Let's assume for the moment that that all hangs together: if there is another language that does that, no novelty; otherwise, there is novelty.

Once again, your feedback has been useful and stimulating.  Many thanks!

Regards
Rik



Rik Howard

Oct 26, 2016, 6:34:03 PM
to Joachim Durchholz, haskel...@haskell.org
  
 
Let's assume for the moment that that all hangs together: if there is another language that does that, no novelty; otherwise, there is novelty.


Maybe not quite so clear-cut but still in the area, I think.



Richard A. O'Keefe

Oct 26, 2016, 7:05:54 PM
to haskel...@haskell.org
I'd like to point out that "procedural-functional" is not novel.
The programming language Euclid, an "industrial strength Pascal",
was sufficiently nailed down that a blue-and-white report from
Xerox PARC showed that it could be viewed as a pure functional
language. And I don't know if anyone ever pointed it out, but
the language used in Dijkstra's "A Discipline of Programming",
and in a number of papers subsequently, was constrained in the
same way, to the point where that language can be seen as a
sheep in wolf's clothing too.

I'd like to point out something else. We are looking at the
end of Moore's Law. If that end hasn't already overtaken us,
it's visible in the rear view mirror and coming up fast. HP,
for example, are revisiting the idea of having *LOTS* of CPUs
mixed in with the memory because conventional multicore has
its own problems. And the Square Kilometre Array telescope's
computers are definitely going to be chock full of FPGAs as
well as more conventional things like GPGPUs.

This means that in the foreseeable future we are going to need
to learn a new style of programming because the antique style,
as exemplified by say Java, just isn't going to scale.

APL was once described as "the language of the future for the
problems of the past". I suspect that WIP may be headed in
the same direction.

Disciple http://trac.ouroborus.net/ddc/ may be of interest.
Quoting that page,
"DDC is a research compiler used to investigate program transformation
in the presence of computational effects. It compiles a family of strict
functional core languages and supports region and effect. This extra
information provides a handle on the operational behaviour of code that
isn't available in other languages. Programs can be written in either a
pure/functional or effectful/imperative style, and one of our goals is
to provide both styles coherently in the same language."

Joachim Durchholz

Oct 27, 2016, 1:17:40 AM
to haskel...@haskell.org
On 27.10.2016 at 01:05, Richard A. O'Keefe wrote:
> This means that in the foreseeable future we are going to need
> to learn a new style of programming because the antique style,
> as exemplified by say Java, just isn't going to scale.

I think you underestimate the adaptability of existing languages.

Java has been preparing to move towards immutability and FP. At glacier-like
speed, admittedly, but once those massively multicore systems appear,
Java will be prepared to move.

Haskell can claim to be already there, but with regard to how many issues
remain unaddressed, it's no better than Java; the issues are just different.
IOW this is not a predetermined race.

Richard A. O'Keefe

Oct 27, 2016, 8:49:40 PM
to haskel...@haskell.org

On 27/10/16 6:17 PM, Joachim Durchholz wrote:
> On 27.10.2016 at 01:05, Richard A. O'Keefe wrote:
>> This means that in the foreseeable future we are going to need
>> to learn a new style of programming because the antique style,
>> as exemplified by say Java, just isn't going to scale.
>
> I think you underestimate the adaptability of existing languages.

Well, I don't think so.


>
> Java has been preparing to move towards immutability&FP. At glacier-like
> speed, admittedly, but once those massively multicore systems appear,
> Java will be prepared to move.

Nobody ever said Java (or any other language) can't ADD things.
The problem is that Java can't REMOVE the things that get in the
way without ceasing to be Java.

It's just like the way you can ADD things (complex arithmetic in C99,
threads in C11) to C, but you can't REMOVE the things that make C
horribly dangerous without it ceasing to be C (and thereby ceasing to
be useful in its admitted niche).

The fundamental operation in Java is the assignment statement.
It is fundamental to the Java Memory Model that when optimising
memory references the compiler is explicitly allowed to pretend
that threading doesn't exist.

If you fix those issues, you don't have Java any more.

> Haskell can claim to be already there, but wrt. how many issues have
> been staying unaddressed, it's no better than Java, it's just different
> issues.
> IOW this is not a predetermined race.

Nobody ever said it was. To be honest, I don't think ANY existing
language will survive unscathed. I really wasn't talking about a
race, simply making the point that we need new ideas, not just a
rehash of the old ones.

A very simple point: the more cores are running at once, the
sooner your program will run into trouble, if it runs into trouble
at all. And the more cores are running at once, the nastier it
gets for a human being trying to debug the code. So we're looking
for a language that can give us strong guarantees that certain
kinds of mistakes either CAN'T happen because the language cannot
express them or WON'T happen because it can verify that your
particular program doesn't do those bad things.

Joachim Durchholz

unread,
Oct 28, 2016, 1:11:30 AM10/28/16
to haskel...@haskell.org
Am 28.10.2016 um 02:49 schrieb Richard A. O'Keefe:
>
> Nobody ever said Java (or any other language) can't ADD things.
> The problem is that Java can't REMOVE the things that get in the
> way without ceasing to be Java.

Sure.
(Minor nitpick: Languages can change their nature by adding things, too.)

> It's just like the way you can ADD things (complex arithmetic in C99,
> threads in C11) to C, but you can't REMOVE the things that make C
> horribly dangerous without it ceasing to be C (and thereby ceasing to
> be useful in its admitted niche).

Sure, but still, it's a lot more of a grey area than you say: the
dangerous things in C++ are still there, but the language became much
less dangerous because more modern versions come with other constructs,
so you are no longer forced to use the horribly dangerous stuff.

> The fundamental operation in Java is the assignment statement.
> It is fundamental to the Java Memory Model that when optimising
> memory references the compiler is explicitly allowed to pretend
> that threading doesn't exist.
>
> If you fix those issues, you don't have Java any more.

Value types would fix those issues without making it non-Java. There
have been multiple projects to get them into the language, so the
knowledge and interest are there; multicore is just not prevalent enough
to make Oracle recognize their relevance and put their inclusion
high on the to-do list for the next language version.

Aliasing cannot be fixed in C++ because its constness annotations are
too weakly enforced to be useful to an optimizer.
In Java, this could be pretty different, because you can't
reinterpret_cast things unless you copy them to a byte buffer
beforehand, so the compiler does have all the guarantees.

>> Haskell can claim to be already there, but wrt. how many issues have
>> been staying unaddressed, it's no better than Java, it's just different
>> issues.
>> IOW this is not a predetermined race.
>
> Nobody ever said it was.

A certain smugness in a previous post implied something in that direction.
At least there's the idea that Haskell is in a better position than most
languages to adapt to that situation; I am sceptical, not because
Haskell is a bad language (I'd LOVE to code in Haskell) but because it
is missing some key elements to make it production-ready for general use,
so it's not even going to enter the race. (Some people are happy with
that situation, which I think is pretty selfish.)

> To be honest, I don't think ANY existing
> language will survive unscathed. I really wasn't talking about a
> race, simply making the point that we need new ideas, not just a
> rehash of the old ones.

New ideas and testbeds to see which of them hold up in practice.

> A very simple point: the more cores are running at once, the
> sooner your program will run into trouble, if it runs into trouble
> at all. And the more cores are running at once, the nastier it
> gets for a human being trying to debug the code.

Actually imperative languages are slowly coming to grips with that.
E.g. updatable data structures have generally fallen out of favor for
interthread communication, which has removed 90% of race conditions.

The rest is more on the level of specification problems. However, I am
not aware of Haskell helping with multithreading once IO comes into play
- does it?

> So we're looking
> for a language that can give us strong guarantees that certain
> kinds of mistakes either CAN'T happen because the language cannot
> express them or WON'T happen because it can verify that your
> particular program doesn't do those bad things.

I do have some ideas, but not even a proof of concept so it's much too
early to talk much about that.

Nicholls, Mark

unread,
Nov 3, 2016, 4:47:59 AM11/3/16
to haskel...@haskell.org
I'm writing an MSc project which touches on typeclasses (I am a relative noob at Haskell; I'm using it in comparison to other techniques).

I believe there is advice to avoid typeclasses stylistically? (See the Haskell wiki on style; allegedly because it leads to creeping Haskell extensions... overlapping instances etc.) ...which leads you into the "wild" (allegedly SPJ).

Is there some sort of potted pros vs cons? It's such a general question that it's actually hard to find something specific.

Or is it simply that:
some cases overlap and some cases are undecidable; throw in functional dependencies and type families and you're in a world of complexity you may not need?

vs

extensibility
?

CONFIDENTIALITY NOTICE

This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited.

While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data.

Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us.

MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT.

Olaf Klinke

unread,
Nov 3, 2016, 4:08:53 PM11/3/16
to nichol...@vimn.com, haskel...@haskell.org
Mark,

Haskell typeclasses are a wonderful thing to have. I dare say that typeclasses are what makes Haskell code modular and re-usable, because you can develop an algorithm "against the class interface" and users only need to supply a class instance for their own data type in order to use your algorithm. What the wiki warns about is misuse or over-use of typeclasses. Typeclasses express properties that different types have in common, and in that respect they are somewhat similar to OO classes. However, there is no hierarchy of inheritance.
It is true that typeclasses, particularly the more advanced ones with e.g. multiple parameters or functional dependencies, sometimes get in your way. One example is ambiguity arising from multi-parameter type classes: suppose you have a two-parameter class

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
class Foo a b where
  foo :: a -> b
  bar :: b -> (a, [Int])

Now you write a function using only bar:

f :: (Foo a b) => b -> Int
f = length . snd . bar

The compiler will complain that it does not know which Foo instance to use, because while bar itself mentions both class parameters a and b, snd.bar has type b -> [Int] and hence lost the information about the first class parameter. Adding a functional dependency b -> a helps in this case. But the promise that for every b there is only one a such that Foo a b holds may not be what you want. So what went wrong? Observe that

bar :: b -> (a,[Int])

is equivalent to two functions

bar1 :: b -> a
bar2 :: b -> [Int]

and bar2 probably did not belong in the class Foo. We've been cramming too much into one typeclass and should therefore split Foo into two classes.
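Concretely, the split might look like the following sketch (the class names Foo1 and Bar2 and the Bool instance are invented for illustration):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- bar1 keeps the part that genuinely relates a and b;
-- bar2 moves into its own single-parameter class.
class Foo1 a b where
  foo  :: a -> b
  bar1 :: b -> a

class Bar2 b where
  bar2 :: b -> [Int]

instance Bar2 Bool where
  bar2 b = if b then [1, 2, 3] else []

-- f now only constrains b, so no ambiguity can arise:
f :: Bar2 b => b -> Int
f = length . bar2
```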

Olaf

Brandon Allbery

unread,
Nov 3, 2016, 4:20:40 PM11/3/16
to Nicholls, Mark, haskel...@haskell.org

On Thu, Nov 3, 2016 at 4:47 AM, Nicholls, Mark <nichol...@vimn.com> wrote:
i believe there is advice to avoid typeclasses stylistically? (see haskell wiki on style, allegedly because it leads to creeping haskell extensions...overlapping instances etc)...which lead you into the "wild" (allegedly SPJ)

Typeclasses are great when used properly and for their intended purpose. There is a tendency for new Haskellers to overuse them, or to mistake them for OOP classes and try to use them as such (which won't work, but *will* cause the compiler to start suggesting increasingly-far-afield extensions as it tries to make sense of what the programmer was trying to do).

--
brandon s allbery kf8nh                               sine nomine associates
allb...@gmail.com                                  ball...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net

Nicholls, Mark

unread,
Nov 4, 2016, 4:32:05 AM11/4/16
to Brandon Allbery, haskel...@haskell.org
i like the example

but you say yourself it just means that bar2 should be elsewhere...?

i.e. an OO'er would just say you've created the wrong abstraction, but not... don't create the abstraction... creating the abstraction is the mantra OO'ers are stylistically indoctrinated in, in the name of extensibility (for good or evil)

so... is the problem with typeclasses that this modelling process is quite subtle and advanced, and thus should be avoided until you know what you're doing?
or something more?

(maybe the cost of the modelling process outweighs the benefit? even when you do know what you're doing?)


Excuse the spelling, sent from a phone with itty bitty keys; it's like trying to sew a button on a shirt with a sausage.

Nicholls, Mark

unread,
Nov 4, 2016, 5:57:53 AM11/4/16
to Brandon Allbery, haskel...@haskell.org

I’m nervous of the statement “used properly”…it’s a bit tautological, and the inference is often used the wrong way around…i.e. if it doesn’t work you didn’t use it properly….and if it did…then you did!

 

i.e. the logic goes.

Creating an open data abstraction is not a “proper” usage?

But not knowing future extensions requirements is a reality?

So having a language where data abstraction are open by default (unless you WANT to close them) is an attractive objective?

Typeclasses provide this?

Eureka!

Becomes DISASTER?

 

So is the objection really empirical? i.e. typeclasses will cause you problems IF you work like this (and you don’t know what you’re doing)?

 

Once you know where the problems are…best steer clear….once you know the pitfalls, you can use them “properly” (successfully)?

 

(ironically the process of finding the problems IS not using them properly!...but that’s life I think).

 

 

 

Tobias Dammers

unread,
Nov 4, 2016, 9:29:50 AM11/4/16
to Nicholls, Mark, haskel...@haskell.org
My rule of thumb is that if there is one obvious instance (or none) for
every given type, then your abstraction is a good candidate for a
typeclass; if multiple instances make sense, then it's a judgment call;
and if many instances could arbitrarily make sense or not depending on
the usage / circumstances, then it's probably not a good candidate for a
typeclass.

For example, Functor is a very good typeclass - every type can have
exactly zero or one lawful Functor instance; if you think about it a
little, this means that with Functor, you can never run into overlapping
or undecidable instances, and the open universe is a fairly benign
problem - even if you derive or implement an orphan instance, it will at
least be functionally and logically equivalent to other possible
instances.

Most of the typeclasses in base are tougher calls, but overall, I would
say they still make a lot of sense. I'd be hard pressed to come up with
a type that has more than one sensible Monad instance, for example.

Something like Aeson's ToJSON / FromJSON is kind of a judgment call.
On the one hand, they shouldn't be typeclasses, because ideally you want to
separate JSON format specifications from data types (i.e., the user code
of a given type should decide on the serialization format, not the type
itself); but on the other hand, more often than not one designs types
specifically for the purpose of representing a certain JSON schema, and
an extra translation step sits between these types and the actual domain
types (at least that's how I often do it).

On a side note; it's not like the open universe principle is a bad idea.
There are lots of benefits to be had here.



--
Tobias Dammers - tdam...@gmail.com

Miguel

unread,
Nov 4, 2016, 9:46:36 AM11/4/16
to Nicholls, Mark, Brandon Allbery, haskel...@haskell.org
More than one Monad instance is actually pretty easy: the type String -> (String, a) has different Monad instances if you treat it as a State monad, or as a combination of Reader and Writer. Thanks to Denis Moskvin for the inspiration.
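A sketch of this point (the newtype names SW and RW are invented here): both instances below are lawful Monads over the same underlying type String -> (String, a), yet already their `pure` behaves differently.

```haskell
newtype SW a = SW { runSW :: String -> (String, a) }  -- State-like
newtype RW a = RW { runRW :: String -> (String, a) }  -- Reader+Writer-like

instance Functor SW where
  fmap f (SW g) = SW $ \s -> let (s', a) = g s in (s', f a)

instance Applicative SW where
  pure a = SW $ \s -> (s, a)            -- threads the state through
  SW mf <*> SW ma = SW $ \s ->
    let (s1, f) = mf s
        (s2, a) = ma s1
    in (s2, f a)

instance Monad SW where
  SW m >>= k = SW $ \s ->
    let (s1, a) = m s in runSW (k a) s1

instance Functor RW where
  fmap f (RW g) = RW $ \s -> let (w, a) = g s in (w, f a)

instance Applicative RW where
  pure a = RW $ \_ -> ("", a)           -- reads the environment, writes nothing
  RW mf <*> RW ma = RW $ \s ->
    let (w1, f) = mf s
        (w2, a) = ma s
    in (w1 ++ w2, f a)

instance Monad RW where
  RW m >>= k = RW $ \s ->
    let (w1, a) = m s
        (w2, b) = runRW (k a) s
    in (w1 ++ w2, b)
```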

Nicholls, Mark

unread,
Nov 4, 2016, 10:06:26 AM11/4/16
to Tobias Dammers, haskel...@haskell.org
>Subject: Re: [Haskell-cafe] programming style...and type classes...
>
>My rule of thumb is that if there is one obvious instance (or none) for every
>given type, then your abstraction is a good candidate for a typeclass; if multiple
>instances make sense, then it's a judgment call; and if many instances could
>arbitrarily make sense or not depending on the usage / circumstances, then it's
>probably not a good candidate for a typeclass.

OK, that’s a bit more prescriptive...so if many instances make sense (Ord a) then what?

Seems like Ord is a "bad" typeclass?...it should really be a dictionary data type? And then you simply define which order you want?

There needs to be 1 (or 0) instances because this is all resolved compile time?....i.e. so we (the programmer/compiler) need a function from types, and function name into instances...if there 0 instance...then boom...not defined..so you need to do something, that’s fixable....if 2...then boom...you're in trouble?

Tobias Dammers

unread,
Nov 4, 2016, 11:10:19 AM11/4/16
to Nicholls, Mark, haskel...@haskell.org
On Fri, Nov 04, 2016 at 02:06:18PM +0000, Nicholls, Mark wrote:
> >Subject: Re: [Haskell-cafe] programming style...and type classes...
> >
> >My rule of thumb is that if there is one obvious instance (or none) for every
> >given type, then your abstraction is a good candidate for a typeclass; if multiple
> >instances make sense, then it's a judgment call; and if many instances could
> >arbitrarily make sense or not depending on the usage / circumstances, then it's
> >probably not a good candidate for a typeclass.
>
> OK, that’s a bit more prescriptive...so if many instances make sense (Ord a) then what?
>
> Seems like Ord is a "bad" typeclass?...it should really be a dictionary data type? And then you simply define which order you want?

Ord is a "bad" typeclass if you interpret it as "the ordering rules for
this type"; but it is OK if you interpret it as "the default ordering
rules for this type". Which, arguably, makes more sense for some types
(Int) and less for others (Text). However, in the case of Ord, there is
little harm in defining a default instance and using more specialized
sorting functions that take an explicit ordering function; this stuff
is simple enough for these things to not be very intrusive at all, and
giving arguable cases the benefit of the doubt is, I guess, considered
pragmatic.
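For example, the standard library already supports both styles directly (a minimal sketch; the function names are invented):

```haskell
import Data.List (sort, sortBy)
import Data.Ord (comparing)

-- the default Ord instance covers the common case...
ascending :: [Int] -> [Int]
ascending = sort

-- ...and an explicit comparison function covers the specialized one:
byLength :: [String] -> [String]
byLength = sortBy (comparing length)
```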

>
> There needs to be 1 (or 0) instances because this is all resolved
> compile time?....i.e. so we (the programmer/compiler) need a function
> from types, and function name into instances...if there 0
> instance...then boom...not defined..so you need to do something,
> that’s fixable....if 2...then boom...you're in trouble?

Sort of, yes. You can't have two instances of the same typeclass for the
same type. However, because instances are always global, and can be
defined separately from both the type and the typeclass, it is possible
to write instances which, when imported, break the code that imports
them by introducing conflicting instances. This is why the
recommendation is to always define instances either in the module that
defines the type, or the module that defines the typeclass.

Instances that are defined separately from both are called "orphan
instances", and people generally avoid them where possible, but
occasionally practical concerns make them necessary. The most common
reason for orphan instances is to avoid dependencies: suppose you write
a library that defines an alternative string type, and someone else
writes a library that defines useful typeclasses for string-like types.
Ideally, you want your custom string type to have instances for those
typeclasses, but you can't put them in the typeclass library (because
you don't control it), and you don't want to put them in your own
library, because then your own library has to depend on the typeclass
library even though most people won't need that functionality. In such a
situation, you'd bite the bullet and provide an extra library that
contains just the instances, peppered with warnings about orphan
instances.

Another example is when you write a program that uses types from some
third-party library and wants to marshal them to and from JSON - for
that to work, you need to implement ToJSON and FromJSON instances for
those types, but since you control neither the type nor the JSON
typeclasses, the only choice you have is to make orphan instances in
your program. In that case, the damage is limited, because you won't
expose the orphan instances to the outside world, so it's usually sort
of acceptable.

Nicholls, Mark

unread,
Nov 4, 2016, 11:31:35 AM11/4/16
to Tobias Dammers, haskel...@haskell.org
>> >My rule of thumb is that if there is one obvious instance (or none)
>> >for every given type, then your abstraction is a good candidate for a
>> >typeclass; if multiple instances make sense, then it's a judgment
>> >call; and if many instances could arbitrarily make sense or not
>> >depending on the usage / circumstances, then it's probably not a good
>candidate for a typeclass.
>>
>> OK, that’s a bit more prescriptive...so if many instances make sense (Ord a)
>then what?
>>
>> Seems like Ord is a "bad" typeclass?...it should really be a dictionary data
>type? And then you simply define which order you want?
>
>Ord is a "bad" typeclass if you interpret it as "the ordering rules for this type";
>but it is OK if you interpret it as "the default ordering rules for this type".

:-)
This is similar linguistic trickery to earlier.
You can always select a specific instance and claim it as the default, and then your rule of thumb becomes very, very weak.

Let's order some tuples... what's the natural order?
Ordering the natural numbers may be "natural", but I quite often want the reverse order; then I either build custom functions to do this, or have to map between orders (reverse).


> Which,
>arguably, makes more sense for some types
>(Int) and less for others (Text). However, in the case of Ord, there is little harm
>in defining a default instance and using more specialized sorting functions that
>take an explicit ordering functions; this stuff is simple enough for these things to
>not be very intrusive at all, and giving arguable cases the benefit of the doubt, I
>guess, is considered pragmatic.

Ok...but really your rule of thumb says "Ord" is bad...I haven't got a problem with that...it may be "bad" but still useful enough to preserve...and the language isn't the arbiter of good practice...that's you.

Someone else has used this as a problematic typeclass in another mail (thanks).

you can decide :-)

Is Ord a bad typeclass? Or is the rule of thumb a bit weak (and allows almost any typeclass if we insert the word "default" into the spec)?

(I think Ord is a "bad" typeclass if your heuristic is to survive... and I think your heuristic captures something of value; keep the heuristic and label Ord bad... but still use it.)


>
>>
>> There needs to be 1 (or 0) instances because this is all resolved
>> compile time?....i.e. so we (the programmer/compiler) need a function
>> from types, and function name into instances...if there 0
>> instance...then boom...not defined..so you need to do something,
>> that’s fixable....if 2...then boom...you're in trouble?
>
>Sort of, yes. You can't have two instances of the same typeclass for the same
>type. However, because instances are always global, and can be defined
>separately from both the type and the typeclass, it is possible to write instances
>which, when imported, break the code that imports them by introducing
>conflicting instances. This is why the recommendation is to always define
>instances either in the module that defines the type, or the module that defines
>the typeclass.


Which IS interesting.

Who's recommendation? Have you got a reference?

> Instances that are defined separately from both are called
>"orphan instances", and people generally avoid them where possible, but
>occasionally practical concerns make them necessary. The most common reason
>for orphan instances is to avoid dependencies: suppose you write a library that
>defines an alternative string type; and someone else writes a library that defines
>useful typeclasses for string-like types. Ideally, you want your custom string type
>to have instances for those typeclasses, but you can't put them in the typeclass
>library (because you don't control it), and you don't want to put them in your
>own library, because then your own library has to depend on the typeclass
>library even though most people won't need that functionality. In such a
>situation, you'd bite the bullet and provide an extra library that contains just the
>instances, peppered with warnings about orphan instances. Another example is
>when you write a program that uses types from some third-party library, and
>wants to marshal them to and from JSON - for that to work, you need to
>implement ToJSON and FromJSON instances for those types, but since you
>control neither the type nor the JSON typeclasses, the only choice you have is to
>make orphan instances in your program. In that case, the damage is limited,
>because you won't expose the orphan instances to the outside world, so it's sort
>of acceptable usually.

Ok...very interesting.

As I say if you've got a reference to some sort of "real world Haskell" book or something like that, then that’s brilliant....I can dig about the edge cases.


Will Yager

unread,
Nov 4, 2016, 1:16:09 PM11/4/16
to Tobias Dammers, haskel...@haskell.org

> On Nov 4, 2016, at 10:10, Tobias Dammers <tdam...@gmail.com> wrote:
>
> However, in the case of Ord, there is
> little harm in defining a default instance and using more specialized
> sorting functions that take an explicit ordering functions;

I disagree. This is the purpose of newtype wrappers. There are few scenarios where it's syntactically more convenient to pass around a bunch of random functions rather than just define a newtype with the behavior you want. GeneralizedNewtypeDeriving makes this trivial.
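A sketch of the newtype approach Will describes (the name Reversed is invented; only Ord is written by hand, while Num is borrowed from Int via GeneralizedNewtypeDeriving):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Data.List (sort)

-- a newtype carrying the ordering you want
newtype Reversed = Reversed Int
  deriving (Eq, Show, Num)

-- the hand-written instance flips the comparison,
-- so plain 'sort' now sorts descending
instance Ord Reversed where
  compare (Reversed a) (Reversed b) = compare b a
```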

Will

Tobias Dammers

unread,
Nov 4, 2016, 1:38:07 PM11/4/16
to Will Yager, haskell-cafe

As with most things "apply taste" and "apply brain"... uhm... apply.

Olaf Klinke

unread,
Nov 5, 2016, 11:46:03 AM11/5/16
to haskel...@haskell.org
While we have stated in this thread what typeclasses should _not_ be used for, we probably do not have carved out yet what `proper' use is. Please help me here.

There are some typeclasses, e.g. Eq, Ord, Num, Monoid, Functor, Monad and Category, that arise from category theory and algebra, so these may be regarded as "good" typeclasses even though a type can have several sensible instances. In that case, newtypes may express this. (Data.Monoid is full of such newtypes.)

In defense of Ord, there are in general many ways to totally order a set (read "set" as "total elements of a type"), and none of these ways can be preferred a priori. I would not say this is an inherently bad property of Ord. Similarly, there are many ways to put a monoid or ring structure on a set.

Sometimes a type class can be avoided. For example, there is Data.Foldable.toList. Hence, instead of writing a Foldable instance for your type and using a function from the Data.Foldable module, you could instead write your own toList conversion function and then fold that list. (Does anyone know a function involving Foldable that cannot be reduced to 'toList' followed by some list operations? Is [] the "free instance" of Foldable?)
However, this reduction may be much more inefficient than using the Foldable instance directly.
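As a small illustration (the Tree type is invented here), a fold via the instance and a fold via toList agree; the difference is purely operational:

```haskell
import qualified Data.Foldable as F

-- a tiny container with a Foldable instance
data Tree a = Leaf | Node (Tree a) a (Tree a)

instance Foldable Tree where
  foldr _ z Leaf         = z
  foldr f z (Node l x r) = foldr f (f x (foldr f z r)) l

sumDirect, sumViaList :: Tree Int -> Int
sumDirect  = sum               -- uses the instance directly
sumViaList = sum . F.toList    -- routes through an intermediate list
```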

As a contrived example of what I mean by "free instance", consider the following class that abstracts traversal over a sequence of unknown length.

class ListLike l where
  patternmatch :: l a -> Maybe (a, l a)

You can write ListLike instances for [], Vector, Seq and Set, but only [a] is the solution to the type equation

x = Maybe (a,x).

Then there are multi-parameter type classes. This has been described as metaphorical to a type-level Prolog language [1], since ultimately multi-parameter type classes (without methods) are relations between types.

Another debatable use case: You define many record types, each of which has a sort key component:

data A = A {foo :: Foo, bar :: Bar, keyA :: Int}
data B = B {baz :: Baz, keyB :: Int}
data C = C {... , keyC :: Int}

It might be convenient to define a typeclass

class HasSortKey a where
  key :: a -> Int

and endow the types A, B and C with the obvious instances. But one could also avoid the class by making a parametric type:

data Keyed a = Keyed {payload :: a, key :: Int}

and then let data A = A {foo :: Foo, bar :: Bar} and so forth. Which version would you say is `proper' use?
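To make the comparison concrete (a sketch; byKey and the Char payloads are invented), the parametric version needs no class at all and composes directly with ordinary functions:

```haskell
import Data.List (sortOn)

-- the parametric type from the message above
data Keyed a = Keyed { payload :: a, key :: Int }
  deriving (Eq, Show)

-- works for any payload type, no instances required
byKey :: [Keyed a] -> [Keyed a]
byKey = sortOn key
```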

-- Olaf


[1] https://www.reddit.com/r/haskell/comments/ck459/typed_typelevel_programming_in_haskell_part_i/

David Menendez

unread,
Nov 5, 2016, 11:06:30 PM11/5/16
to Olaf Klinke, haskell
On Sat, Nov 5, 2016 at 11:45 AM, Olaf Klinke <o...@aatal-apotheke.de> wrote:
While we have stated in this thread what typeclasses should _not_ be used for, we probably do not have carved out yet what `proper' use is. Please help me here.

It may be helpful to recall the original purpose of classes, which was to allow controlled, ad-hoc overloading. Otherwise, we’d be stuck using specialized functions for every type, passing around explicit equality tests, or stuck with the equivalent of deriving(Eq) on every type.

Now that we have qualified types, we can write algorithms which are generic over class members. This, I would say, is the sign that a class is useful. If you can’t usefully write code which is generic over a class, then you don’t gain much from using the class. (You do gain not having to come up with new names, though, and that’s not nothing.)

Some people claim that you shouldn’t define a class unless you can describe some sort of algebraic laws for it, but I think that’s too strong. For one thing, it would mean we lose equality and numeric operations for Float, which is one of the things classes were invented for in the first place.

Instead, define a class if (1) you have a set of operations that can be applied at multiple types but aren’t polymorphic, (2) you can give a formal or informal description of the operations which are enough to reasonably distinguish a good instance from a bad one, and (3) using a class makes your code easier to understand.

--

Ruben Astudillo

unread,
Nov 6, 2016, 10:02:44 AM11/6/16
to David Menendez, haskel...@haskell.org
On 06/11/16 00:06, David Menendez wrote:
> Some people claim that you shouldn’t define a class unless you can
> describe some sort of algebraic laws for it, but I think that’s too
> strong. For one thing, it would mean we lose equality and numeric
> operations for Float, which is one of the things classes were invented
> for in the first place.

I've heard that too and agree with your objection. I think what
underlies that critique is that introducing type classes without laws
makes it harder to see the "intuition" of what such a class ought to do
(this in the context of reading, not writing). I specifically remember
being a beginner and seeing

class RegexOptions regex compOpt execOpt

and not understanding what it really meant and why I had to sprinkle it
through my code. A good case of what you say is Data.Vector.Generic.Vector
(which arguably makes more sense as a Backpack module), where the
typeclass usage intuition is clear.

So in summary, what we want is "easy to understand usage" for typeclasses
more than laws. To that end, I think putting emphasis on the important
instances goes a great length in helping.

> Instead, define a class if (1) you have a set of operations that can
> be applied at multiple types but aren’t polymorphic, (2) you can give
> a formal or informal description of the operations which are enough to
> reasonably distinguish a good instance from a bad one, and (3) using a
> class makes your code easier to understand.

(2) hits the nail, specially when reading others code!

--
-- Ruben

Joachim Durchholz

unread,
Dec 12, 2016, 11:10:17 AM12/12/16
to Haskell Cafe
Sorry for replying so late, this got stuck in the outbox.

Am 26.10.2016 um 17:48 schrieb Rik Howard:
> Also, subtypes and binary operators do not really mix; google for
> "parallel type hierarchy". (The core of the problem is that if you
> make Byte a subtype of Word, declaring the (+) operator in Word as
> Word -> Word will preclude Byte from being a subtype because you
> want a covariant signature in Byte but that violates subtyping rules
> for functions. So you need parametric polymorphism, but now you
> cannot use the simple methods for subtyping anymore.)
>
>
> Clearly there is more to be done in this area.

That's a pretty well-explored part of design space.
Unfortunately, there seems to be a really hard trade-off between things
like modularity / extensibility, freedom from surprises, and soundness,
so most languages do not offer this.

Striking a useful compromise is hard.
Even integer arithmetic isn't easy; e.g. what's the type and value of
(byte 254)+(byte 254)? For some languages it is (word 510), for some it
is (byte -4), some say (bigint 510). I have seen strong arguments
brought forward for each of these answers, but the solutions are
mutually incompatible, and whatever answer you pick for your language,
you have to ignore the arguments in favor of the other choices.
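For reference, Haskell's own answer is modular arithmetic within the operand type, unless you widen explicitly (a sketch using Data.Word):

```haskell
import Data.Word (Word8, Word16)

-- stay in 8 bits: the sum wraps modulo 256
wrapped :: Word8
wrapped = 254 + 254

-- widen first: the sum is exact
widened :: Word16
widened = fromIntegral (254 :: Word8) + fromIntegral (254 :: Word8)
```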

> I think you should elaborate similarities and differences with how
> Haskell does IO, that's a well-known standard it is going to make
> the paper easier to read. Same goes for Clean&Mercury.
>
> Something like that is addressed in Related Work. Clean is already on
> the list but it sounds, from your comments and those of others, as if
> Mercury may be worth including as well.

Just for reference, e.g. which of your concepts are the same in the
other languages, which are different and what are the differences.

> It's hard to tell whether it is actually new, too many details are
> missing.
>
> Certainly you have spotted the vagueness in the types however I think
> that that issue can be momentarily set aside from the point of view of
> novelty.

Novelty is entirely uninteresting.
Usefulness is.
Novelty *may* point out new ways of constructing something that's more
useful than what we have, but you need to point out how the novelty will
help with that if you want to raise serious interest in your work.

Regards,
