I'm not sure what you mean. This is a solved problem in many existing "real" languages (e.g. Eiffel, Haskell, etc.). Have you looked at e.g. Eiffel's Void safety? In what way does that not make sense to you?

You simply have *two* "decorators" for types. One for "pointer to", which can never be null (except in well-defined scenarios, such as constructors, which end with a dynamic test to ensure the "not null" assumption is valid in the rest of the program). The other decorator is for "nullable", which makes any value (including pointers) potentially null.
On Fri, Nov 13, 2009 at 10:33 PM, Sebastian Sylvan <sebastia...@gmail.com> wrote:
> You simply have *two* "decorators" for types. One for "pointer to" which can never be null [...] The other decorator is for "nullable" which makes any value (including pointers) potentially null.
Well, you're talking about something different. The approach you showed is really language support for runtime checking that a pointer is non-null, and if you prohibit direct dereferencing of the "nullable" type, that would be good protection (gathered into one conversion instead of at each dereference). But I was talking about the general approach of using a "placeholder" instead of a real value: a null pointer is the easiest way to represent the absence of a value; all the other methods are much more cumbersome and error-prone.
Yes, you'd still have null, it just wouldn't be conflated with the concept of pointers - it would work for *any* type, not just pointers. These are really two totally different things ("nullable" vs "pointer"). It's an accident of history that they've been combined into one construct in so many languages, and there now seems to be a pretty broad consensus among language designers that it's a mistake that we unfortunately have to live with for many existing languages and VMs because it's so ingrained.
Go is a new language, though, so it seems a shame for yet another group of language designers to make the same mistake again (especially when they have a stated goal to avoid just this kind of thing!), when so many others have already lived to regret it.
On Fri, Nov 13, 2009 at 10:53 PM, Sebastian Sylvan <sebastia...@gmail.com> wrote:
> Yes, you'd still have null, it just wouldn't be conflated with the concept of pointers - it would work for *any* type, not just pointers. [...]
Saying it again: you insist that combining these two concepts was incorrect. I am trying to show that it was correct, because it is the simplest and best-supported way to represent a pointer without a value (comparing it to the literal 0 is just a psychological artifact). And this doesn't prevent us from also using another kind of pointer - non-nullable - as you showed here.

But, since it's incorrect to do the _checking_ without the assignment of the nullable pointer, as you show:
var pp *foo nullable;
var p *foo;
if pp != nil { p = pp; }
(it's quite easy to write code without such a check)
so you should combine the check and the assignment into one action:

try {
	p = pp;
	do_something(p);
}
On Fri, Nov 13, 2009 at 9:12 PM, Valentin Nechayev <net...@gmail.com> wrote:
> But, since it's incorrect to do the _checking_ without the assignment of the nullable pointer, as you show:
>
> var pp *foo nullable;
> var p *foo;
> if pp != nil { p = pp; }

This would be invalid, p would have to be initialized at the declaration site. Something like:

if pp != nil {
	var p *foo;
	p = pp;
	// rest of code using p in this clause
}
I'm sorry, I just don't understand what you mean here. Could you try to explain it differently?
On Fri, Nov 13, 2009 at 11:27 PM, Sebastian Sylvan <sebastia...@gmail.com> wrote:
> This would be invalid, p would have to be initialized at the declaration site. Something like:
>
> if pp != nil {
> 	var p *foo;
> 	p = pp;
> 	// rest of code using p in this clause
> }
Well, let's say the programmer makes a mistake:

if pq != nil { // pq is some other variable
	var p *foo;
	p = pp;
	// rest of code using p in this clause
}
You simply missed the real check and allowed `p' to be null. Congrats.
> I'm sorry, I just don't understand what you mean here. Could you try to explain it differently?
You shall NOT allow execution to continue if the conversion fails.
> [...] You simply missed the real check and allowed `p' to be null. Congrats.
This would simply fail to compile, with a type error on the assignment (p is non-nullable after all, and pp is nullable).
On Fri, Nov 13, 2009 at 11:49 PM, Sebastian Sylvan <sebastia...@gmail.com> wrote:
> This would simply fail to compile with type errors on the assignment (p is non-nullable after all, and pp is nullable).
The same applies to your example, exactly: the compiler doesn't know whether pp is null. If you doubt that, compare it with the following:
I made a mistake in my example that I subsequently corrected. The compiler would've failed on that one too. Sorry. You should always have to initialize any non-null pointers to a valid value.
So the compiler would always know that something is potentially null, or definitely not null, by virtue of regular old type checking (no magic needed!). It's just like how you can't assign an int to a string; you have to convert it to a string first. This is exactly the same thing. They're different types, so the compiler would stop you from doing any of that.
On Sat, Nov 14, 2009 at 12:06 AM, Sebastian Sylvan <sebastia...@gmail.com> wrote:
> I made a mistake in my example that I subsequently corrected. The compiler would've failed on that one too. Sorry. You should always have to initialize any non-null pointers to a valid value.
Well, this is closer. But what should the code do if the initializing value is null, so that the initialization fails? The code must not execute any further...
> So the compiler would always know that something is potentially null, or definitely not null, by virtue of regular old type checking (no magic needed!). It's just like how you can't assign an int to a string; you have to convert it to a string first. This is exactly the same thing. They're different types, so the compiler would stop you from doing any of that.
It can't stop me at compile time, because it doesn't know the real source pointer's value.
if p != nil {
	// inside this clause, the compiler knows p is not nil, so it strips
	// away the "nullness" from the type, giving it the type *int
	var q *int = p; // no type error!
}
var w *int = p; // TYPE ERROR. Here p is nullable.
Personally, I'm not a big fan of piggy-backing on the if statement to do this type promotion. It feels a bit too magic to me. I'd prefer a separate statement, maybe something like:

null_cast p {
	// we only end up here if p was non-null, and its type in this clause is non-nullable
}
On Sat, Nov 14, 2009 at 12:17 AM, Sebastian Sylvan <sebastia...@gmail.com> wrote:
> [...] var w *int = p; // TYPE ERROR. Here p is nullable.
All this imposes too strict a coding-style limitation, which isn't applicable to real tasks.
Sebastian Sylvan <sebastia...@gmail.com> writes:
> On Fri, Nov 13, 2009 at 9:06 PM, Ian Lance Taylor <ia...@google.com> wrote:
>
>> This leads us in the direction of the const type qualifier. In my
>> personal opinion, this kind of thing should not be part of the type.
>> I think this amounts to a language design choice. I think that
>> calling it a billion dollar mistake amounts to hyperbole.
>
> I'm sorry but this makes no sense to me. In what way does this have a
> relation to the const qualifier? You can convert between them at any time,
> you just have to make sure you handle the failure case in the unsafe
> direction.
>
> It's not about adding information to types, it's about having a less error
> prone view of what a pointer *is*. Your argument seems to me to be just as
> applicable to the distinction between ints and bools as well - why don't we
> just stick those under the same type? Answer: Because they're fundamentally
> different! Just like the concept of pointers is different to the concept of
> nullable values.
>
> It's his mistake, he can call it what he wants. I don't think he's wrong in
> that a "feature" that introduces potential runtime crashes all over the
> place has been an incredibly expensive mistake.

If you don't push nonnull through your program, then what benefit have
you really gained?

Go currently crashes if you dereference a nil pointer. Presumably
with a nonnull qualifier, it would crash if you assign a nil pointer
to a pointer type with the nonnull qualifier. What is the fundamental
difference? Either way your program is taking an invalid action, and
it crashes.