Not to beat a dead horse or anything, but before generics get added I
think you'd have to convince the language designers that generics are
desirable. A lot of people are finding the language works just fine
without them. I for one don't want to give up either the type-
checking safety or the awesome compile speed the current language
provides. Your mileage may vary.
The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
But I think that in order to do that kind of checking the compiler
will be slower than it is now, and I prefer fast compilers; otherwise
things like gorun wouldn't be useful.
--
André Moraes
http://andredevchannel.blogspot.com/
switch f := sometype.(type) {
case func(int), func(interface{}):
f(100)
}
rather than the current behaviour where f isn't recognised as callable and a compile error occurs.
Ellie
Eleanor McHugh
Games With Brains
http://feyeleanor.tel
----
if _, ok := reality.(Reasonable); !ok { panic(reality) }
That package existed at the beginning of the language, when we didn't
have the append/copy builtins.
If you just want dynamic containers, stick with slices and append/copy.
> The type vector.Vector is of type []interface{}. the Insert method has
> a parameter of type interface{}.
> So, a vector.Vector variable can hold different types of data at the
> same time. I think this is not consistent.
Sure, but in this case you have to be explicit when reading values,
either using a type switch or a type assertion.
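For example, something along these lines (a minimal sketch; the slice contents are invented):

// Reading mixed values back out of a []interface{} means being explicit
// about each possible type, here with a type switch.
package main

import "fmt"

func main() {
	values := []interface{}{1, "two", 3.0}
	for _, v := range values {
		switch x := v.(type) {
		case int:
			fmt.Println("int:", x)
		case string:
			fmt.Println("string:", x)
		default:
			fmt.Println("other:", x)
		}
	}
}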
> It fails to address the way Ada, Delphi, Modula-3, Eiffel, ML, D,
> Haskell, and .NET, just to name a few,
> implement generic types.
In what way do those languages avoid the question that Russ points out?
> Those languages/runtimes don't have the issues C++ has, because they
> have proper module
> systems, which allow for optimizations C++ cannot afford. C++ main
> issue is that it is supposed
> to be able to generate code compatible with archaic linkers.
It's not the module system that is at issue here. Go is a compiled language,
so for this discussion we can restrict ourselves to that space. The
question then boils down to: if you have a generic function, do you
generate a version of it specialized for each type (slower compiler) or
do you generate a version that does type switches at runtime (slower
execution time)?
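To make the two options concrete, a rough sketch (Max is just an illustrative name, not anything that exists today):

// Option 1: generate a specialised copy per type, as a template expansion
// would: fast at run time, but each instantiation costs compile time.
func MaxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func MaxFloat64(a, b float64) float64 {
	if a > b {
		return a
	}
	return b
}

// Option 2: compile one boxed version that decides the type at run time:
// compiled once, but every call pays for the type switch.
func Max(a, b interface{}) interface{} {
	switch x := a.(type) {
	case int:
		if y := b.(int); y > x {
			return y
		}
		return x
	case float64:
		if y := b.(float64); y > x {
			return y
		}
		return x
	}
	panic("unsupported type")
}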
Ian
I've spent a lot of time experimenting with different ways of implementing efficient generic container types, and the main problem I find is with code repetition. I think this could be considerably reduced by allowing the following statement to work as intended:

switch f := sometype.(type) {
case func(int), func(interface{}):
f(100)
}

rather than the current behaviour where f isn't recognised as callable and a compile error occurs.
> It's not a module system that is an issue. Go is a compiled language,
> so for this discussion we can restrict ourselves to that space. The
> question then boils down to: if you have a generic function, do you
> generate a version of it specialized for each type (slower compiler) or
> do you generate a version that does type switches at runtime (slower
> execution time)?
I don't think it's as dichotomous as that. ML, for example, has generic
(at least, polymorphic) functions that don't require specialised copies
and don't require type switches. What it /does/ require is uniform
representation, which is likely another route to your "slower execution"
but with different tradeoffs. (Go is committed to non-uniform representations,
but I wonder if a good cut at useful genericity would be a layer around
interfaces, which are what you have to use for polymorphism in Go
anyway.)
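For instance, a minimal sketch of such an interface layer (Stack and IntStack are made-up names):

// One boxed implementation plus a thin typed facade, which is roughly
// what a uniform representation gives ML for free.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

// IntStack is a hypothetical hand-written wrapper that restores static typing.
type IntStack struct{ s Stack }

func (s *IntStack) Push(v int) { s.s.Push(v) }
func (s *IntStack) Pop() int   { return s.s.Pop().(int) }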
Chris
--
Chris "allusive" Dollin
> On Thursday, March 1, 2012 5:47:12 PM UTC+1, Eleanor McHugh wrote:
> I've spent a lot of time experimenting with different ways of implementing efficient generic container types, and the main problem I find is with code repetition. I think this could be considerably reduced by allowing the following statement to work as intended:
> switch f := sometype.(type) {
> case func(int), func(interface{}):
> f(100)
> }
>
> rather than the current behaviour where f isn't recognised as callable and a compile error occurs.
>
> It works *as intended*. In the example above, 'f' in the only case clause can have only one type, 'interface{}' in this case.
>
> This version IMO works "as intended":
And IMO the current behaviour serves little (if any) practical purpose and is semantically confusing. The compiler knows the concrete type and can enforce type safety throughout the associated case, but instead it boxes the value in interface{} and effectively erases its type until a further type assertion is performed.
> func foo(x interface{}) {
> switch f := x.(type) {
> case func(int):
> f(100)
> case func(interface{}):
> f(200)
> }
> }
>
This is different to my example which would more correctly be:
func foo(x interface{}) {
switch f := x.(type) {
case func(int):
f(100)
case func(interface{}):
f(100)
}
}
It's the very fact that both code paths are identical save for the type of the function call that interests me, because it more closely approximates the use cases generics are intended to address, not to mention a considerable volume of the code I've written in recent months.
This simple change to the language spec would reduce code duplication when implementing generic algorithms and data structures without introducing an unreasonable compile-time overhead.
> This simple change to the language spec would reduce code duplication
> when implementing generic algorithms and data structures without
> introducing an unreasonable compile-time overhead.
It may be a simple change to the language spec but it's not a simple
change to the compiler. In effect the compiler would have to introduce
the type switches that you prefer not to write. This is of course
doable but at least to me it sounds like surprising behaviour. The
simple expression f(100) would turn into a runtime type switch.
And in truth I'm not sure it's a simple change to the language spec.
After all, presumably you don't want any special treatment for
case func(int), int:
So you would have to define some notion of "types that are compatible
enough for a type switch but are not actually identical." There is no
such notion in the language at present.
Ian
No, it need only introduce the additional cases to the existing type switch and then apply knowledge it already has to ensure that the expanded code is correct.
> This is of course
> doable but at least to me it sounds like surprising behaviour. The
> simple expression f(100) would turn into a runtime type switch.
case func(int), func(interface{}):
f(100)
would expand to
case func(int):
f(100)
case func(interface{}):
f(100)
> And in truth I'm not sure it's a simple change to the language spec.
> After all, presumably you don't want any special treatment for
> case func(int), int:
If the two cases would expand to identical code save for the type of the variable itself then this case would be just as reasonable as
case func(int), func(interface{}):
or
case int, uint:
and if they wouldn't then the compiler should report an error.
> So you would have to define some notion of "types that are compatible
> enough for a type switch but are not actually identical." There is no
> such notion in the language at present.
It isn't necessarily required because when the code is expanded it can be statically checked as mentioned above.
That's not to say that there wouldn't be value in bringing the reflect.Kind concept into the core language, but that's a discussion for another day.
>> This is of course
>> doable but at least to me it sounds like surprising behaviour. The
>> simple expression f(100) would turn into a runtime type switch.
>
> case func(int), func(interface{}):
> f(100)
>
> would expand to
>
> case func(int):
> f(100)
> case func(interface{}):
> f(100)
Fair enough, I suppose.
Still definitely makes me uneasy. It seems strange to write code that
is expanded multiple times with different types. Sort of generics
through the backdoor.
Ian
Whereas to me it seems entirely natural, but I have spent more than a decade in the Ruby world so my instinctive response to unavoidable code duplication is to abstract it behind a code generator for consistency.
> Sort of generics through the backdoor.
I suspect the CS-literate would disagree as this doesn't address such issues as callability, indexability, etc. To do that the concept expressed in the runtime by reflect.Kind would somehow have to be exposed at the type-system level and whilst I've pondered the use of a .(kind) switch it's a bit of a rabbit-hole which probably leads nowhere.
However this modification to type switches expresses a simple contract between programmer and compiler, which makes it very easy to reason about. The programmer is making the assertion that two types will behave the same way in a particular context and the compiler is already knowledgeable enough to put that assertion to the test. I think that's very much in keeping with the spirit of Go even if it might have a slightly spooky deus ex machina feel for people used to the current behaviour.
I also have a vested interest as I'm working on a package which would greatly benefit from such a feature.
Sequences seeks to be a generalised container library which provides optimised performance for slices and channels of all the basic scalar types Go provides, along with interfaces that allow user-defined types to implement their own optimisations. The most tedious aspect of implementing this is the code repetition involved: there are 20 basic types (including reflect.Value, which isn't really a basic type but is useful) and whilst there are cases which deviate I suspect I could reduce the code for handling their commonalities by 90% if my type switches could use cases as described above. This equates to several thousand conceptually redundant lines of code being pruned from the codebase, making it much easier to read and maintain.
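To make the repetition concrete, here's a hedged sketch of the sort of per-type switch I mean (Sum is an invented example, not the actual Sequences API):

// Every numeric case repeats the same loop, differing only in element type.
func Sum(container interface{}) interface{} {
	switch c := container.(type) {
	case []int:
		var total int
		for _, v := range c {
			total += v
		}
		return total
	case []int64:
		var total int64
		for _, v := range c {
			total += v
		}
		return total
	case []float64:
		var total float64
		for _, v := range c {
			total += v
		}
		return total
	// ...and so on for each of the remaining basic types
	}
	return nil
}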
it seems to me that this would be a hard thing to define well.
everywhere in Go, types are crystal clear - something is
either of a particular type or it's not. the types in
this proposal are not clear - they're a superposition
of the types in the case statement.
you could have a situation where this is OK:
switch x := x.(type) {case S, T: x.Foo()}
and this is OK:
switch x := x.(type) {case S, T: x.Bar()}
but this is not OK:
switch x := x.(type) {case S, T: x.Foo(); x.Bar()}
i think that would be weird.
we've got reflection for this kind of thing.
Possibly. I haven't given much thought to how it would be expressed in the spec.
> everywhere in Go, types are crystal clear - something is
> either of a particular type or it's not. the types in
> this proposal are not clear - they're a superposition
> of the types in the case statement.
Go already allows for type superpositions in the form of interfaces so this argument is misleading.
switch x := x.(type) {
case S:
case T, U:
case V, W, X:
default:
}
In this snippet both [T, U] and [V, W, X] have an effective type of interface{}, so their values have effectively been superposed in a type singularity which must be actively introspected (with a type switch, type assertion or as a reflect.Value) to be unboxed and manipulated. That leads to code such as:
switch x := x.(type) {
case S:
case T, U:
if t, ok := x.(T); ok {
} else {
}
case V, W, X:
switch x := x.(type) {
...
}
default:
}
But who would write this code?
> you could have a situation where this is OK:
>
> switch x := x.(type) {case S, T: x.Foo()}
>
> and this is OK:
>
> switch x := x.(type) {case S, T: x.Bar()}
>
> but this is not OK:
>
> switch x := x.(type) {case S, T: x.Foo(); x.Bar()}
Yes, you could. If either S or T had a Bar() method but the other didn't then this would be an error. An error which could be reported at compile time just as any other attempt to call Bar() on a type which doesn't support it would be reported.
> i think that would be weird.
And I think it would be helpful.
> we've got reflection for this kind of thing.
Reflection is orders of magnitude slower than using a type switch or assertion, and until that changes I'd prefer not to have to incur its cost unnecessarily. In the case we're discussing all the information needed for type checking is available at compile time so the overhead associated with reflection is completely unnecessary.
> Having a module system resolves the compile time issue. The major
> problem with C++ compile times is due to #include<> headers, which have
> to be parsed multiple times, as the compiler cannot be sure if their
> content remains constant.
I know. But that is separate from the issue of generics.
> Usually the most common solution is to generate a concrete type for each set of
> types. There are no duplicates of generic type instantiations, thus reducing the
> code bloat that templates in C++ are known for.
There is no code bloat when using a modern C++ compiler. But there is
compilation time bloat: every template function is recompiled for every
type for which it is instantiated.
I don't know what you mean when you say "a concrete type for each set of
types." What are the sets of types you are referring to?
> Other, not so mainstream, languages show that it is possible to have fast
> compile times with generic types, while avoiding code bloat.
This fact does not resolve the issue that Russ pointed out.
I agree with an earlier poster that you can reduce the compile time
impact somewhat if all types are the same size. Unfortunately that is
not the case for Go, because Go is designed to allow more precise
control over how types are laid out in memory.
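For example, with nothing beyond the standard unsafe package, the difference in value sizes is easy to see:

package main

import (
	"fmt"
	"unsafe"
)

type Pair struct{ A, B byte }

func main() {
	// Go values have different sizes and layouts, so a single compiled
	// function body cannot handle them all uniformly without boxing.
	fmt.Println(unsafe.Sizeof(byte(0)))    // 1
	fmt.Println(unsafe.Sizeof(int64(0)))   // 8
	fmt.Println(unsafe.Sizeof(Pair{}))     // 2
	fmt.Println(unsafe.Sizeof([3]int64{})) // 24
}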
Ian
>> Sort of generics through the backdoor.
>
> I suspect the CS-literate would disagree as this doesn't address such
> issues as callability, indexability, etc. To do that the concept
> expressed in the runtime by reflect.Kind would somehow have to be
> exposed at the type-system level and whilst I've pondered the use of a
> .(kind) switch it's a bit of a rabbit-hole which probably leads
> nowhere.
Right, but wouldn't a real generics solution cover what you need and
also be generally better? What I mean is, I don't think we should patch
the language to get around the fact that it doesn't have generics.
At least, we shouldn't do it unless we definitely decide that the
language is not going to get generics.
Ian
Generics might cover what I need but to date no proposal has proven attractive.
> What I mean is, I don't think we should patch
> the language to get around the fact that it doesn't have generics.
However if a simple, elegant extension to existing idioms were able to express many of the type isomorphisms that real-world code exhibits, then surely that would be a good thing?
> At least, we shouldn't do it unless we definitely decide that the
> language is not going to get generics.
Then decide not to add generics and focus on exploring how the existing type system can be generalised.
Simples! :) [0]
Ellie
[0] for non-UK readers: http://en.wikipedia.org/wiki/Aleksandr_Orlov_(advertising)
I'm not sure what you mean...
>> I don't know what you mean by say "a concrete type for each set of
>> types." What are the sets of types you are referring to?
>
> I was referring to generic instantiations for each set of argument
> types, meaning that if multiple modules (packages) refer to the same generic
> instantiation, let's say Stack<int>, the final executable will have the
> corresponding code generated only once.
Are you saying that you can avoid "instantiating templates for each type" by
"instantiating for each type" ?
Rémy.
it seems to me that this would be a hard thing to define well.
everywhere in Go, types are crystal clear - something is
either of a particular type or it's not. the types in
this proposal are not clear - they're a superposition
of the types in the case statement.
Right, but wouldn't a real generics solution cover what you need and
also be generally better? What I mean is, I don't think we should patch
the language to get around the fact that it doesn't have generics.
At least, we shouldn't do it unless we definitely decide that the
language is not going to get generics.
Ian
Nowadays all mainstream static languages support some form of generics,
with the exception of C, so going back to the hacks the C++ community had to
create in the late 80s and early 90s seems quite strange for a young language
that is still looking for its place in the computer landscape.
> Nowadays all mainstream static languages support some form of generics,
> with the exception of C, [....]
Even C has generics. They are implemented via the preprocessor. Here
is an example of a generic vector implementation in pure C:
http://gcc.gnu.org/viewcvs/trunk/gcc/vec.h?revision=181258&content-type=text%2Fplain&view=co
Ian
What do you mean by "definition" of generics?
Rémy.
That is not generics. The preprocessor is not part of the compiler.
Generics are type-safe and compiler-supported, as per the CS definition:
http://en.wikipedia.org/wiki/Generic_programming
> You can twist the definition as much as you want, but I don't see any
> reference to C there.
Obviously I was being slightly facetious in claiming that C supports
generic programming, but the fact is that the preprocessor is part of
the C language, and the preprocessor can be used in a way that supports
the "broad definition" from the wikipedia page. It's not an important
point in any discussion of generics, but I believe it is true
nonetheless.
Ian
I have sympathy with both sides of this argument. However as a coder what really matters to me is DRYness. Any Go code which seeks to expose the same API for a number of different numeric, map, function or slice types results in a commensurate expansion of duplicated code and that's a real long-term maintenance issue.
Now as Jan suggested this could be addressed by using an external templating tool to generate the code at compile-time, but personally I find this unsatisfying in the same way that C's macros are unsatisfying. Instead I'd much rather have some form of type composition mechanism built directly into the language that works at the same scale as the existing type switch idiom but addresses the core functional properties of type.
The reflection system can be used to address much of this but at a considerable performance cost and the resulting code can become somewhat dense. For the places where runtime introspection of code is necessary these are acceptable tradeoffs, but for the cases generics are concerned with this can become a significant anti-pattern.
Steven's suggestion of Union types would be another way of addressing this, and is rather elegantly symmetrical to interfaces (which could be considered a form of Intersection type :). In the past I've also raised the possibility of some form of Category typing. Interfaces are a form of this based on method signatures, but interfaces can't express the fundamental type axioms of callability, indexability etc. expressed by Go's standard types. If the language had idiomatic statements for dealing with these - and I concede that perhaps extra type-switch semantics aren't the right place for such an idiom - then most of the use cases for generics would be addressed.
One thing I think we can all agree on though is that there are well-known cases related to type in which Go is currently more verbose than it needs to be. Solving this elegantly would make the language a bit more productive to work with - especially when implementing generalised APIs - and somewhat simplify long-term maintenance of large codebases.
So whilst generics are not a panacea and the language can probably survive without them, enabling DRYness within the existing framework of Go's features would be a solid, pragmatic win for real-world development.
Ellie
i'm not sure what you mean by union type in this context.
C-style union types are not type safe, and can be discounted, i think.
discriminated union types wouldn't do the job either, as you'd
have to decide which branch to choose when using the type.
as i mentioned before, i find it hard to think of a coherent type model
that would allow x.Foo or x.Bar but not both. the only decent explanation
is macro expansion, and i'm not keen on that.
Currently writing a large program which is half c#.net and half go.
c# advantages:
generics - fast thanks to .net JIT
excellent IDE
easy dynamic typing
linq - would like to see something like this in go
extremely flexible collections library (mostly to make up for poor sync support)
go advantages:
no 180 character wide generic def hell
no cumbersome IDE
Much easier dep mgmt
no goofy manifests
much smaller codebase
no public/private static/nostatic visibility
no synchronization mess
thousands of threads
easier to understand memory model
better net support
most of my headaches in c# come about as a result of poorly planned generics and bad support for threading.
take your pick :)
i'm not sure what you mean by union type in this context.
C-style union types are not type safe, and can be discounted, i think.
discriminated union types wouldn't do the job either, as you'd
have to decide which branch to choose when using the type.
as i mentioned before, i find it hard to think of a coherent type model
that would allow x.Foo or x.Bar but not both.
the only decent explanation
is macro expansion, and i'm not keen on that.
yes. that's the hard bit.
for instance, Eleanor's original example was this:
switch f := sometype.(type) {
case func(int), func(interface{}):
f(100)
}
how do we define the intersection between
func(int) and func(interface{}) in such a way
that the above function call is legal?
what about the intersection between
[]string and map[uint64]string ?
if i've got a value of this intersection type, can
i pass it to another function? if so, how do i describe it?
>> as i mentioned before, i find it hard to think of a coherent type model
>>
>> that would allow x.Foo or x.Bar but not both.
>
>
> I'm not entirely sure what you mean by this. If you're asking how to
> restrict the possible fields (for example) to those shared by all
> component types in the union, then it seems the same as asking
> how to have interfaces restrict their method sets to less than that of
> a given implementing type. Otherwise, I can't see under what
> circumstances you'd need a type that has one of two fields but never
> both. Accessing one field should never impair your ability to access
> another.
i was referring to a previous example where the selected cases had
mutually distinct sets of fields.
> It's possible to keep field offsets in the type information. Other
> operations could be similarly abstracted, interfaces already do this
> with methods. A nice thing about Go is that types support a limited
> set of operations, and vice versa, which helps limit the scope of
> these abstractions, and several kinds of types are already
> generalized, which helps.
we've got interfaces. they seem sufficient to me. if you want
an intersection of operations, we can define an interface that
encapsulates that intersection.
defining intersection on other type properties seems like
a hard problem, and probably not one that's going to lead
towards a better language IMHO.
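For instance, a minimal sketch (Measurable, Square and Circle are invented names):

package main

import (
	"fmt"
	"math"
)

// Measurable names just the operation the two types share; anything that
// implements it can be passed around without a type switch.
type Measurable interface {
	Area() float64
}

type Square struct{ Side float64 }
type Circle struct{ Radius float64 }

func (s Square) Area() float64 { return s.Side * s.Side }
func (c Circle) Area() float64 { return math.Pi * c.Radius * c.Radius }

func report(m Measurable) { fmt.Println(m.Area()) }

func main() {
	report(Square{Side: 2})
	report(Circle{Radius: 1})
}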
It is in the general case, so let's not go down that rabbit-hole.
We already know a lot about Go's type categories from the design of the reflection system, so we should let that guide our thinking on this. Indeed consistency with the type guarantees provided by reflection would be essential to consistent type behaviour.
> for instance, Eleanor's original example was this:
>
> switch f := sometype.(type) {
> case func(int), func(interface{}):
> f(100)
> }
>
> how do we define the intersection between
> func(int) and func(interface{}) in such a way
> that the above function call is legal?
This one's fairly easy to describe. The intersection is that both functions are callable with a single parameter of one of a range of types, and the code which follows should be able to provide that one parameter in any of the types required by the specified functions.
> what about the intersection between
> []string and map[uint64]string ?
In the general case the two could be considered keyable and we could interpret that as a shared property, however reflection has already separated keyability and indexability so their intersection must be interpreted as producing an empty set.
> if i've got a value of this intersection type, can
> i pass it to another function? if so, how do i describe it?
I can see the benefit of being able to write functions which only work on a particular kind of type (I often do this in reflected code) but we would need to add additional meta-types to the core language which are equivalent to the various reflect.Kind values to make that possible in this sense. And if we did that then there's the question of user-defined types based on them: not necessarily a bad idea, but one I don't currently have time to think through in depth.
And in any event I think the real value of the type intersection would be in passing through the original type. Remember that this is a compile-time intersection and not type reflection, so we're using concealed code generation to improve run-time performance compared to a reflection-based approach, and at the same time seeking to increase both simplicity of expression and DRYness.
As a localised type-safe macro expansion this would be a coder win.
<snip/>
> we've got interfaces. they seem sufficient to me. if you want
> an intersection of operations, we can define an interface that
> encapsulates that intersection.
Only for user-defined types.
> defining intersection on other type properties seems like
> a hard problem, and probably not one that's going to lead
> towards a better language IMHO.
It's already been solved for reflection, so it is possible and much less of a deviation from Go's current design than a full-blown generics system would be.
does the intersection go deeper than the top level?
what about func()func(int) vs func()func(interface{}) ?
or func() interface{Foo() int; Bar(int)} vs func() interface{Foo()
interface{}; Bar(interface{})}?
if you can come up with a coherent one- or two-paragraph specification
of how this could work, i'll be surprised.
IMHO it would introduce lots of complexity with limited applicability.
type switches aren't that common. i think that by working a lot on
"wannabe generic" code, you perhaps have a slightly unusual
perspective on how useful this feature would be.
> And in any event I think the real value of the type intersection would
> be in passing through the original type.
how can you pass through the original type if you don't know what it is?
if i've got some code like:
switch y := x.(type) {
case A, B:
Foo(y)
}
how can i write Foo such that the above code can compile?
the only way i can see this working is if y has x's original
type, which is the current behaviour.
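As things stand, the only way to make that compile is for Foo to take the interface type and re-assert inside, e.g. (A and B are placeholder types):

type A struct{}
type B struct{}

// With today's semantics y keeps x's static interface type inside a
// multi-type case, so Foo has to accept that interface type and recover
// the concrete type itself if it needs it.
func Foo(v interface{}) {
	switch v.(type) {
	case A:
		// handle an A
	case B:
		// handle a B
	}
}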
it seems to me that that would mean that
case func(int), func(int32):
would give rise to nothing callable, which kinda loses everything
that Eleanor was aiming towards AFAICS.
> • T is an interface type, and for each method on T, x's type V has a method
> of the same name with the same number of parameters and return values such
> that each parameter of the method on V is assignable to the corresponding
> parameter of the method on T, and each return value of the method on T is
> assignable to the corresponding return value of the method on V.
> (this second one may be better handled in the definition of "implements" but
> I stated it this way for simplicity)
same kind of problem.
> If A and B are both compatible with some interface/abstraction T ⊈ V, where V is the type of x, then `func Foo(t T)`.
i remain unconvinced. go currently has no notion of "compatible" type.
i can't see this working
without large degrees of added complexity, which doesn't seem worth it to me.