_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.
It is a programme for designing a programming language.
It is leaving out a number of central issues: How to approach
modularity, whether it should have opaque types (and why), whether there
should be subtypes or not, how the type system is supposed to deal with
arithmetic which has almost-compatible integer types and floating-point
types.
That's just off the top of my head, I am pretty sure that there are
other issues.
It is hard to discuss merits or problems at this stage, since all of
these issues tend to influence each other.
> The work arises from an investigation into functional programming syntax
> and semantics. The novelty seems to be there but there is too a
> question as to whether it is simply a gimmick. I try to suggest that it
> is not but, by that stage, there have been many assumptions so it is
> hard to be sure whether the suggestion is valid. If anyone has any
> comments, questions or suggestions, they would be gratefully received.
One thing I have heard is that effects, subtypes and type system
soundness do not mix well. Subtypes are too useful to ignore, unsound
type systems are not worth the effort, so I find it a bit surprising
that the paper has nothing to say about the issue.
Just my 2c.
Regards,
Jo
Are you aware how "monadic IO" became the standard in Haskell?
It was one of three competing approaches, and AFAIK one turned out to be
less useful, and the other simply wasn't ready in time (so it might
still be interesting to investigate).
> For IO, a procedure returns only 'okay' or an error via the mechanism
> that a function would use for returning values; the non-deterministic
> values returned by the procedure are done so with variable parameters.
What's the advantage here?
Given the obvious strong disadvantage that it forces callers into an
idiom that uses updatable data structures, the advantage better be
compelling.
> For example, to define a procedure to echo once from standard in to out:
>
> echo = try (read string) (write string) error
>
> The value coming from standard-in is to be captured in the 'string'
> out-variable and so is available to be written to standard-out, the
> 'try' built-in being analogous to 'if'.
What is the analogy?
That stuff is evaluated only on a by-need basis? That's already there in
Haskell.
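For comparison, a rough Haskell rendering of that echo. This is only a
sketch: the 'try'-as-error-handling semantics are my reading of the
description, not the paper's definition, and 'echoWith' is a helper I
introduce here to keep the channels abstract.

```haskell
-- A rough Haskell analogue of the proposed 'echo', assuming the
-- note's 'try' corresponds to catching IO failure.
import Control.Exception (IOException, try)

-- read one value, write it out, or report the error
echoWith :: IO String -> (String -> IO ()) -> IO ()
echoWith input output = do
  r <- try input :: IO (Either IOException String)
  either (output . ("error: " ++) . show) output r

-- the stdin/stdout version corresponding to the note's example
echo :: IO ()
echo = echoWith getLine putStrLn
```

Specialising the reader and writer recovers the one-line echo; the
error case goes through the same output channel here only for brevity.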
> Rough (inconsistent) examples exist; its grammar and type system are in
> a slightly better state but not yet written up properly. What I could
> quickly add as an appendix, I have but the notation needs to be made
> more standard for easier comparison. I am working on another paper that
> will address the need for a more-concrete and standard presentation. I
> hope that this goes some way to answering your questions; please say if
> it doesn't!
Right now I fail to see what's new&better in this.
Regards,
Jo
That's very tantalising. Can you link to a reference?!
You may want to look at
Call-By-Push-Value: A Functional/Imperative Synthesis
by Paul Blain Levy (Springer)
--
Casey
I think it is in one or both of these:
Tackling the Awkward Squad
A History of Haskell: being lazy with class
A retrospective on Haskell
(It's been a decade since I last read them so they might be not exactly
what you're after, but they should be close.)
On 24 Oct 2016, at 15:50, Ronald Legere <rjl...@gmail.com> wrote:
> I must admit to some curiosity about this as well. My recollection was
> that the original approach was to use lazy streams
>     IO :: [Request] -> [Response]
> This can be managed a bit better using continuations (perhaps
> continuations can also be considered a separate approach?) And now we
> have the IO Monad (which can be defined in terms of the stream-based
> approach but is not implemented that way). The only other approach I
> am aware of is Clean's "Unique types".
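For readers who haven't seen it, that stream style can be sketched as
follows. This is a simplified pastiche of the Haskell 1.2 Dialogue
idea; the actual Report's constructor set differs.

```haskell
-- Sketch of stream-based IO: the program is a pure function from the
-- list of responses to the list of requests, consumed lazily.
data Request  = ReadChan String | AppendChan String String
  deriving (Eq, Show)

data Response = Success | Str String | Failure String
  deriving (Eq, Show)

type Dialogue = [Response] -> [Request]

-- echo one line from stdin to stdout, stream-style: emit a read
-- request first, then inspect the lazily supplied response to it
echoD :: Dialogue
echoD resps = ReadChan "stdin" :
  case resps of
    Str line : _ -> [AppendChan "stdout" line]
    _            -> []
```

Laziness is essential: the first request must be produced before any
response exists, which is exactly the fragility that made this style
lose out to monadic IO.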
On 25/10/16 3:50 AM, Ronald Legere wrote:
> The only other approach I am aware of is Clean's "Unique types".
That approach was also adopted in the programming language Mercury,
which includes both statically typed and moded logic programming
and statically typed functional programming, using Haskell-ish types.
It's not clear to me why Haskell, Clean, and Mercury don't already
qualify as "procedural-functional languages", especially when you
consider that the logic programming part of Mercury provides a
very clean approach to result arguments.
Question is not whether these things are precluded, question is how you
want to tackle them. It's not even stating design goals here.
> One thing I have heard is that effects, subtypes and type system
> soundness do not mix well. Subtypes are too useful to ignore,
> unsound type systems are not worth the effort, so I find it a bit
> surprising that the paper has nothing to say about the issue.
>
>
> I'm not sure what you mean by effects (may I ask you to elaborate? Or
> side effects maybe?)
Yes.
> but subtypes would appear to offer an intuitive
> analogy with set theory.
That's the least interesting part of subtypes, actually.
The salient point of this and some other features is that they make it
easier to reason about a given program's properties, at the expense of
making programming harder.
(One of the major design points in designing a new programming language
is where exactly to place that trade-off, and a lot of streaks of genius
went into easing the burden on the programmer.)
> It would mean extra look-ups in the deciding
> function to check inclusion, possibly using some sort of 'narrowable'
> type, and that would make showing soundness that much more involved.
> Are there other beyond-the-norm complications?
Lots.
The basic concept of subtypes is simple, but establishing a definition
of "subtype" that is both useful and sound is far from trivial.
For example, mutable circles and mutable ellipses are not in a subtype
relationship to each other if there is an updating "scale" operation
with an x and y scaling factor (you cannot guarantee that a scaled
circle stays circular).
The design space for dealing with this is far from fully explored.
Also, subtypes and binary operators do not really mix; google for
"parallel type hierarchy". (The core of the problem is that if you make
Byte a subtype of Word, declaring the (+) operator in Word as Word ->
Word will preclude Byte from being a subtype because you want a
covariant signature in Byte but that violates subtyping rules for
functions. So you need parametric polymorphism, but now you cannot use
the simple methods for subtyping anymore.)
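For contrast, here is how Haskell's parametric classes sidestep that
binary-method problem (a small sketch; 'addSame' and 'widenAdd' are my
names, not library functions):

```haskell
-- Num's (+) has type a -> a -> a: both operands must already have the
-- same type, so there is no subtype relation for covariance to break.
import Data.Word (Word16, Word8)

addSame :: Num a => a -> a -> a
addSame = (+)

-- addSame (1 :: Word8) (2 :: Word16) would be a type error;
-- widening must be explicit via fromIntegral.
widenAdd :: Word8 -> Word16 -> Word16
widenAdd b w = fromIntegral b + w
```

The price is exactly as described above: you get parametric
polymorphism instead of subsumption, and conversions become explicit.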
> Are you aware how "monadic IO" became the standard in Haskell? It
> was one of three competing approaches, and AFAIK one turned out to
> be less useful, and the other simply wasn't ready in time (so it
> might still be interesting to investigate).
>
>
> No, I'm not, it sounds fascinating. Thank you for subsequently
> providing references.
>
> > For IO, ... variable parameters.
>
> What's the advantage here? Given the obvious strong disadvantage
> that it forces callers into an idiom that uses updatable data
> structures, the advantage better be compelling.
>
> The out-vars are the same as other variables in terms of updating: they
> have to be fresh on the way in and can't be modified after coming out --
> I should make that more clear
Oh, you don't have in-place updates, you have just initialization?
I missed that.
The key point to mention is that you want to maintain referential
transparency.
BTW this still makes loops useless for putting values in variables,
because you can't update variables in an iteration; programmers will
still have to write recursive functions.
BTW nobody who is familiar with functional languages would consider that
a disadvantage.
Speaking of user groups: I am not sure what crowd you want to attract
with your design. It's not necessary to put that into the paper, but one
of the things that went "er, what?" in the back of my head was that I
could not infer for whom this kind of language would be useful.
> -- or was that not what you meant? The
> difference (I don't know that it can be called an advantage) is that IO
> can be done pretty much wherever-whenever but the insistence of a
> try-then-else for penultimate invocations forces that doing not to be
> unnoticed.
Sounds pretty much like the consequences of having the IO monad in Haskell.
I think you should elaborate similarities and differences with how
Haskell does IO; that's a well-known standard, so it is going to make
the paper easier to read. The same goes for Clean and Mercury.
> Right now I fail to see what's new&better in this.
>
> Some languages allow IO expressions without any further thought being
> paid to the matter; some provide explicit mechanisms for dealing with
> IO. The language in the note takes a mid-way approach, in some sense,
> that I'm not familiar with from elsewhere. Assuming that this approach
> isn't in a language that I should know by now, could the approach not
> count as new? It may be irrelevant on some level, I suppose.
It's hard to tell whether it is actually new, too many details are missing.
> I hope that this goes some way towards being an adequate response. Once
> again, thank you for your invaluable feedback -- much appreciated!
You're welcome :-)
Regards
Jo
Let's assume for the moment that all of that hangs together: if there is
another language that does that, no novelty; otherwise, there is novelty.
I'd like to point out something else. We are looking at the
end of Moore's Law. If that end hasn't already overtaken us,
it's visible in the rear view mirror and coming up fast. HP,
for example, are revisiting the idea of having *LOTS* of CPUs
mixed in with the memory because conventional multicore has
its own problems. And the Square Kilometre Array telescope's
computers are definitely going to be chock full of FPGAs as
well as more conventional things like GPGPUs.
This means that in the foreseeable future we are going to need
to learn a new style of programming because the antique style,
as exemplified by say Java, just isn't going to scale.
APL was once described as "the language of the future for the
problems of the past". I suspect that WIP may be headed in
the same direction.
Disciple http://trac.ouroborus.net/ddc/ may be of interest.
Quoting that page,
"DDC is a research compiler used to investigate program transformation
in the presence of computational effects. It compiles a family of strict
functional core languages and supports region and effect typing. This extra
information provides a handle on the operational behaviour of code that
isn't available in other languages. Programs can be written in either a
pure/functional or effectful/imperative style, and one of our goals is
to provide both styles coherently in the same language."
I think you underestimate the adaptability of existing languages.
Java has been preparing to move towards immutability&FP. At glacier-like
speed, admittedly, but once those massively multicore systems appear,
Java will be prepared to move.
Haskell can claim to be already there, but wrt. how many issues have
been staying unaddressed, it's no better than Java, it's just different
issues.
IOW this is not a predetermined race.
On 27/10/16 6:17 PM, Joachim Durchholz wrote:
> Am 27.10.2016 um 01:05 schrieb Richard A. O'Keefe:
>> This means that in the foreseeable future we are going to need
>> to learn a new style of programming because the antique style,
>> as exemplified by say Java, just isn't going to scale.
>
> I think you underestimate the adaptability of existing languages.
Well, I don't think so.
>
> Java has been preparing to move towards immutability&FP. At glacier-like
> speed, admittedly, but once those massively multicore systems appear,
> Java will be prepared to move.
Nobody ever said Java (or any other language) can't ADD things.
The problem is that Java can't REMOVE the things that get in the
way without ceasing to be Java.
It's just like the way you can ADD things (complex arithmetic in C99,
threads in C11) to C, but you can't REMOVE the things that make C
horribly dangerous without it ceasing to be C (and thereby ceasing to
be useful in its admitted niche).
The fundamental operation in Java is the assignment statement.
It is fundamental to the Java Memory Model that when optimising
memory references the compiler is explicitly allowed to pretend
that threading doesn't exist.
If you fix those issues, you don't have Java any more.
> Haskell can claim to be already there, but wrt. how many issues have
> been staying unaddressed, it's no better than Java, it's just different
> issues.
> IOW this is not a predetermined race.
Nobody ever said it was. To be honest, I don't think ANY existing
language will survive unscathed. I really wasn't talking about a
race, simply making the point that we need new ideas, not just a
rehash of the old ones.
A very simple point: the more cores are running at once, the
sooner your program will run into trouble, if it runs into trouble
at all. And the more cores are running at once, the nastier it
gets for a human being trying to debug the code. So we're looking
for a language that can give us strong guarantees that certain
kinds of mistakes either CAN'T happen because the language cannot
express them or WON'T happen because it can verify that your
particular program doesn't do those bad things.
Sure.
(Minor nitpick: Languages can change their nature by adding things, too.)
> It's just like the way you can ADD things (complex arithmetic in C99,
> threads in C11) to C, but you can't REMOVE the things that make C
> horribly dangerous without it ceasing to be C (and thereby ceasing to
> be useful in its admitted niche).
Sure, but still, it's a lot more grey area than you say - the dangerous
things in C++ are still there but the language became much less
dangerous because more modern versions come with other constructs so you
are not forced to use the horribly dangerous stuff anymore.
> The fundamental operation in Java is the assignment statement.
> It is fundamental to the Java Memory Model that when optimising
> memory references the compiler is explicitly allowed to pretend
> that threading doesn't exist.
>
> If you fix those issues, you don't have Java any more.
Value types would fix those issues without making it non-Java. There
have been multiple projects to get them into the language, so the
knowledge and interest is there; multicore is just not prevalent enough
to make Oracle recognize their relevance and put their inclusion
high on the to-do list for the next language version.
Aliasing cannot be fixed in C++ because its constness annotations are
too weakly enforced to be useful to an optimizer.
In Java, this could be pretty different because you can't
reinterpret_cast things unless you copy them to a byte buffer first, so
the compiler does have all the guarantees.
>> Haskell can claim to be already there, but wrt. how many issues have
>> been staying unaddressed, it's no better than Java, it's just different
>> issues.
>> IOW this is not a predetermined race.
>
> Nobody ever said it was.
A certain smugness in a previous post implied something in that direction.
At least there's the idea that Haskell is in a better position than most
languages to adapt to that situation; I am sceptical, not because
Haskell is a bad language (I'd LOVE to code in Haskell) but because it
is missing some key elements to make it production-ready for general use,
so it's not even going to enter the race. (Some people are happy with
that situation, which I think is pretty selfish.)
> To be honest, I don't think ANY existing
> language will survive unscathed. I really wasn't talking about a
> race, simply making the point that we need new ideas, not just a
> rehash of the old ones.
New ideas and testbeds to see which of them hold up in practice.
> A very simple point: the more cores are running at once, the
> sooner your program will run into trouble, if it runs into trouble
> at all. And the more cores are running at once, the nastier it
> gets for a human being trying to debug the code.
Actually imperative languages are slowly coming to grips with that.
E.g. updatable data structures have generally fallen out of favor for
interthread communication, which has removed 90% of race conditions.
The rest is more on the level of specification problems. However, I am
not aware of Haskell helping with multithreading once IO comes into play
- does it?
> So we're looking
> for a language that can give us strong guarantees that certain
> kinds of mistakes either CAN'T happen because the language cannot
> express them or WON'T happen because it can verify that your
> particular program doesn't do those bad things.
I do have some ideas, but not even a proof of concept so it's much too
early to talk much about that.
I believe there is advice to avoid typeclasses stylistically? (See the
Haskell wiki on style; allegedly because it leads to creeping Haskell
extensions...overlapping instances etc.) which leads you into the
"wild" (allegedly SPJ)
Is there some sort of potted pros vs cons? It's such a general question
that it's actually hard to find something specific.
or is it simply that?
some cases overlap and some cases are undecidable, throw in functional dependency and type families and you're in a world of complexity you may not need?
vs
extensibility
?
Haskell typeclasses are a wonderful thing to have. I dare say that typeclasses are what makes Haskell code modular and re-usable, because you can develop an algorithm "against the class interface" and users only need to supply a class instance for their own data type in order to use your algorithm. What the wiki warns about is misuse or over-use of typeclasses. Typeclasses express properties that different types have in common, and in that respect they are somewhat similar to OO classes. However, there is no hierarchy of inheritance.
It is true that typeclasses, particularly the more advanced ones with e.g. multiple parameters or functional dependencies, sometimes get into your way. One example is ambiguity arising from multi-parameter type classes: Suppose you have a two-parameter class
{-# LANGUAGE MultiParamTypeClasses,FunctionalDependencies #-}
class Foo a b where
    foo :: a -> b
    bar :: b -> (a, [Int])
Now you write a function using only bar:
f :: Foo a b => b -> Int
f = length . snd . bar
The compiler will complain that it does not know which Foo instance to use, because while bar itself mentions both class parameters a and b, snd.bar has type b -> [Int] and hence lost the information about the first class parameter. Adding a functional dependency b -> a helps in this case. But the promise that for every b there is only one a such that Foo a b holds may not be what you want. So what went wrong? Observe that
bar :: b -> (a,[Int])
is equivalent to two functions
bar1 :: b -> a
bar2 :: b -> [Int]
and bar2 probably did not belong in the class Foo. We've been cramming too much into one typeclass and should therefore split Foo into two classes.
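Concretely, the split suggested above might look like this (the class
names and the toy instance are mine, just to make the point runnable):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- split Foo so the [Int] part no longer mentions the parameter 'a'
class Foo a b where
  foo  :: a -> b
  bar1 :: b -> a

class Counted b where
  bar2 :: b -> [Int]

-- 'f' now constrains only 'b', so no ambiguity can arise
f :: Counted b => b -> Int
f = length . bar2

-- a toy instance, just to exercise 'f'
instance Counted Bool where
  bar2 _ = [1, 2, 3]
```

With bar2 in its own single-parameter class, the compiler no longer has
to guess an 'a' when type-checking f.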
Olaf
> I believe there is advice to avoid typeclasses stylistically? (See the
> Haskell wiki on style; allegedly because it leads to creeping Haskell
> extensions...overlapping instances etc.) which leads you into the
> "wild" (allegedly SPJ)
I’m nervous of the statement “used properly”…it’s a bit tautological, and the inference is often used the wrong way around…i.e. if it doesn’t work you didn’t use it properly….and if it did…then you did!
i.e. the logic goes.
Creating an open data abstraction is not a “proper” usage?
But not knowing future extensions requirements is a reality?
So having a language where data abstraction are open by default (unless you WANT to close them) is an attractive objective?
Typeclasses provide this?
Eureka!
Becomes DISASTER?
So is the objection really empirical?...i.e. typeclasses will cause you problems IF you work like this (and you don’t know what you’re doing)?
Once you know where the problems are…best steer clear….once you know the pitfalls, you can use them “properly” (successfully)?
(ironically the process of finding the problems IS not using them properly!...but that’s life I think).
For example, Functor is a very good typeclass - every type has at most
one lawful Functor instance; if you think about it a
little, this means that with Functor, you can never run into overlapping
or undecidable instances, and the open universe is a fairly benign
problem - even if you derive or implement an orphan instance, it will at
least be functionally and logically equivalent to other possible
instances.
Most of the typeclasses in base are tougher calls, but overall, I would
say they still make a lot of sense. I'd be hard pressed to come up with
a type that has more than one sensible Monad instance, for example.
Something like Aeson's ToJSON / FromJSON is kind of a judgment call -
On the one hand, they shouldn't be typeclasses, because ideally you want to
separate JSON format specifications from data types (i.e., the user code
of a given type should decide on the serialization format, not the type
itself), but on the other hand, more often than not one designs types
specifically for the purpose of representing a certain JSON schema, and
an extra translation step sits between these types and the actual domain
types (at least that's how I often do it).
On a side note; it's not like the open universe principle is a bad idea.
There are lots of benefits to be had here.
--
Tobias Dammers - tdam...@gmail.com
Ord is a "bad" typeclass if you interpret it as "the ordering rules for
this type"; but it is OK if you interpret it as "the default ordering
rules for this type". Which, arguably, makes more sense for some types
(Int) and less for others (Text). However, in the case of Ord, there is
little harm in defining a default instance and using more specialized
sorting functions that take an explicit ordering function; this stuff
is simple enough for these things to not be very intrusive at all, and
giving arguable cases the benefit of the doubt, I guess, is considered
pragmatic.
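That combination is cheap in practice: keep the default Ord instance
for plain 'sort', and pass an explicit comparison where another order
is wanted. A minimal sketch:

```haskell
import Data.List (sort, sortBy)
import Data.Ord (Down (..), comparing)

xs :: [Int]
xs = [3, 1, 2]

ascending, descending :: [Int]
ascending  = sort xs                     -- default Ord instance
descending = sortBy (comparing Down) xs  -- explicit, specialized ordering
```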
>
> There needs to be 1 (or 0) instances because this is all resolved
> compile time?....i.e. so we (the programmer/compiler) need a function
> from types, and function name into instances...if there 0
> instance...then boom...not defined..so you need to do something,
> that’s fixable....if 2...then boom...you're in trouble?
Sort of, yes. You can't have two instances of the same typeclass for the
same type. However, because instances are always global, and can be
defined separately from both the type and the typeclass, it is possible
to write instances which, when imported, break the code that imports
them by introducing conflicting instances. This is why the
recommendation is to always define instances either in the module that
defines the type, or the module that defines the typeclass. Instances
that are defined separately from both are called "orphan instances", and
people generally avoid them where possible, but occasionally practical
concerns make them necessary.

The most common reason for orphan instances is to avoid dependencies:
suppose you write a library that defines an alternative string type, and
someone else writes a library that defines useful typeclasses for
string-like types. Ideally, you want your custom string type to have
instances for those typeclasses, but you can't put them in the typeclass
library (because you don't control it), and you don't want to put them
in your own library, because then your own library has to depend on the
typeclass library even though most people won't need that functionality.
In such a situation, you'd bite the bullet and provide an extra library
that contains just the instances, peppered with warnings about orphan
instances.

Another example is when you write a program that uses types from some
third-party library and wants to marshal them to and from JSON - for
that to work, you need to implement ToJSON and FromJSON instances for
those types, but since you control neither the type nor the JSON
typeclasses, the only choice you have is to make orphan instances in
your program. In that case, the damage is limited, because you won't
expose the orphan instances to the outside world, so it's sort of
acceptable usually.
> On Nov 4, 2016, at 10:10, Tobias Dammers <tdam...@gmail.com> wrote:
>
> However, in the case of Ord, there is
> little harm in defining a default instance and using more specialized
> sorting functions that take an explicit ordering functions;
I disagree. This is the purpose of newtype wrappers. There are few scenarios where it's syntactically more convenient to pass around a bunch of random functions rather than just define a newtype with the behavior you want. GeneralizedNewtypeDeriving makes this trivial.
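A sketch of the newtype approach being advocated; the wrapper and its
ordering here are illustrative inventions, not from any library:

```haskell
import Data.Char (toLower)
import Data.List (sort)

-- wrap the type, then override only the behaviour you want
newtype CaseFold = CaseFold String
  deriving (Eq, Show)

instance Ord CaseFold where
  compare (CaseFold a) (CaseFold b) =
    compare (map toLower a) (map toLower b)

-- note: Eq is still exact-match here; for a fully lawful Eq/Ord pair
-- you would case-fold the Eq instance as well
```

Now plain 'sort' on [CaseFold] does the case-insensitive thing, with no
comparator threaded through the call sites.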
Will
As with most things "apply taste" and "apply brain"... uhm... apply.
There are some typeclasses, e.g. Eq, Ord, Num, Monoid, Functor, Monad and Category, that arise from category theory and algebra, so these may be regarded as "good" typeclasses even though a type can have several sensible instances. In that case, newtypes may express this. (Data.Monoid is full of such newtypes.)
In defense of Ord, there are in general many ways to totally order a set (read "set" as "total elements of a type"), and none of these ways can be preferred a priori. I would not say this is an inherently bad property of Ord. Similarly, there are many ways to put a monoid or ring structure on a set.
Sometimes a type class can be avoided. For example, there is Data.Foldable.toList. Hence, instead of writing a Foldable instance for your type and using a function from the Data.Foldable module, you could instead write your own toList conversion function and then fold that list. (Does anyone know a function involving Foldable that cannot be reduced to 'toList' followed by some list operations? Is [] the "free instance" of Foldable?)
However, this reduction may be much more inefficient than using the Foldable instance directly.
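The claimed reduction, as a small sketch ('sumVia' and 'maxVia' are my
names for the factored versions):

```haskell
-- Foldable consumers factored through toList plus list functions
import Data.Foldable (toList)

sumVia :: (Foldable t, Num a) => t a -> a
sumVia = sum . toList

maxVia :: (Foldable t, Ord a) => t a -> a
maxVia = maximum . toList
```

For lists toList is the identity, so nothing is lost; for other
containers the intermediate list is where the inefficiency mentioned
above can creep in.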
As a contrived example of what I mean by "free instance", consider the following class that abstracts traversal over a sequence of unknown length.
class ListLike l where
    patternmatch :: l a -> Maybe (a, l a)
You can write ListLike instances for [], Vector, Sequence and Set, but only [a] is the solution to the type equation
x = Maybe (a,x).
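To make the "free instance" idea concrete (repeating the class
definition so the sketch is self-contained; 'drain' is my name for the
conversion):

```haskell
class ListLike l where
  patternmatch :: l a -> Maybe (a, l a)

-- the "free" instance: [] satisfies x = Maybe (a, x) directly,
-- and patternmatch is just uncons
instance ListLike [] where
  patternmatch []       = Nothing
  patternmatch (x : xs) = Just (x, xs)

-- every ListLike can be drained into the free instance
drain :: ListLike l => l a -> [a]
drain l = case patternmatch l of
  Nothing      -> []
  Just (x, l') -> x : drain l'
```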
Then there are multi-parameter type classes. These have been described as a type-level Prolog [1], since ultimately multi-parameter type classes (without methods) are relations between types.
Another debatable use case: You define many record types, each of which has a sort key component:
data A = A {foo :: Foo, bar :: Bar, keyA :: Int}
data B = B {baz :: Baz, keyB :: Int}
data C = C {... , keyC :: Int}
It might be convenient to define a typeclass
class HasSortKey a where
    key :: a -> Int
and endow the types A, B and C with the obvious instances. But one could also avoid the class by making a parametric type:
data Keyed a = Keyed {payload :: a, key :: Int}
and then let data A = A {foo :: Foo, bar :: Bar} and so forth. Which version would you say is `proper' use?
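Either version supports the same downstream code; with the parametric
variant, for instance, sorting works uniformly over any payload (a
sketch, using the Keyed type from the post):

```haskell
import Data.List (sortOn)

-- the parametric alternative described above
data Keyed a = Keyed { payload :: a, key :: Int }
  deriving (Eq, Show)

-- one sort function covers every payload type, no class needed
sortByKey :: [Keyed a] -> [Keyed a]
sortByKey = sortOn key
```

The class version would let you keep the flat record layouts at the
cost of one instance per type; the Keyed version trades a wrapper for
zero instances.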
-- Olaf
[1] https://www.reddit.com/r/haskell/comments/ck459/typed_typelevel_programming_in_haskell_part_i/
While we have stated in this thread what typeclasses should _not_ be used for, we probably do not have carved out yet what `proper' use is. Please help me here.
I've heard that too and agree with your objection. I think what
underlies that criticism is that introducing type classes without laws
makes it harder to see the "intuition" of what such a class ought to do
(this in the context of reading, not writing). I specifically remember
being a beginner and seeing

class RegexOptions regex compOpt execOpt

and not understanding what it really meant and why I had to sprinkle it
through my code. A good example of what you say is
Data.Vector.Generic.Vector (which arguably makes more sense as a
Backpack module), whose important instances make the intended usage of
the class clear.
So in summary, what we want is typeclasses with easy-to-understand
usage, more than laws. To that end, I think putting emphasis on the
important instances goes a long way in helping.
> Instead, define a class if (1) you have a set of operations that can
> be applied at multiple types but aren’t polymorphic, (2) you can give
> a formal or informal description of the operations which are enough to
> reasonably distinguish a good instance from a bad one, and (3) using a
> class makes your code easier to understand.
(2) hits the nail, specially when reading others code!
--
-- Ruben
Am 26.10.2016 um 17:48 schrieb Rik Howard:
> Also, subtypes and binary operators do not really mix; google for
> "parallel type hierarchy". (The core of the problem is that if you
> make Byte a subtype of Word, declaring the (+) operator in Word as
> Word -> Word will preclude Byte from being a subtype because you
> want a covariant signature in Byte but that violates subtyping rules
> for functions. So you need parametric polymorphism, but now you
> cannot use the simple methods for subtyping anymore.)
>
>
> Clearly there is more to be done in this area.
That's a pretty well-explored part of the design space.
Unfortunately, there seems to be a really hard trade-off between things
like modularity / extensibility, freedom from surprises, and soundness,
so most languages do not offer this.
Striking a useful compromise is hard.
Even integer arithmetic isn't easy; e.g. what's the type and value of
(byte 254)+(byte 254)? For some languages it is (word 510), for some it
is (word -4), some say (bigint 510). I have seen strong arguments
brought forward for each of these answers, but since these solutions are
mutually incompatible, whatever answer you pick for your language,
you have to ignore the arguments in favor of the other choices.
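GHC itself demonstrates two of those answers with its fixed-width
types, and the third with Integer (a sketch; the names are mine):

```haskell
import Data.Int (Int8)
import Data.Word (Word8)

unsignedWrap :: Word8
unsignedWrap = 254 + 254                       -- wraps: 508 mod 256 = 252

signedView :: Int8
signedView = fromIntegral (254 + 254 :: Word8) -- 252 as a signed octet = -4

bigAnswer :: Integer
bigAnswer = 254 + 254                          -- the "bigint" answer: 508
```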
> I think you should elaborate similarities and differences with how
> Haskell does IO, that's a well-known standard it is going to make
> the paper easier to read. Same goes for Clean&Mercury.
>
> Something like that is addressed in Related Work. Clean is already on
> the list but it sounds, from your comments and those of others, as if
> Mercury may be worth including as well.
Just for reference, e.g. which of your concepts are the same in the
other languages, which are different and what are the differences.
> It's hard to tell whether it is actually new, too many details are
> missing.
>
> Certainly you have spotted the vagueness in the types however I think
> that that issue can be momentarily set aside from the point of view of
> novelty.
Novelty is entirely uninteresting.
Usefulness is.
Novelty *may* point out new ways of constructing something that's more
useful than what we have, but you need to point out how the novelty will
help with that if you want to raise serious interest in your work.
Regards,