write once


Ryanne Dolan

Feb 11, 2010, 7:23:17 PM
to golang-nuts
Is there a reason that local variables can be assigned-to more than once in Go?

a := 1
a = 2

In a copy-by-value language, this feature often causes confusion:

for i,x := range a {
    i = 1
    x = 2
}

Why would it ever make sense to assign to i or x after they have been initialized?

func foo(a int, b int) {
    a = 1
    b = 2
}

Why would I ever re-use these variables like this?

Contrast with fields:

for _,ptr := range a {
    ptr.field = 1
}

func foo (ptr *obj) {
    ptr.field = 1
}

which make sense to assign-to multiple times.

Looking at my own code, I never use 'a = 1' anywhere.  I would consider it bad practice to do so, for several reasons.  Off-hand:
1. makes it harder to read code when the values change, especially if they change conditionally
2. makes compiler optimizations harder
3. usually means that you are re-using a variable for something entirely different, which is bad all around

I know many languages are designed either way.  I also know that Go has a lot of influence from C and other procedural languages that allow this.  But the question remains: why hold on to this "feature" when it is never good and always bad?

Thanks.
Ryanne

--
www.ryannedolan.info

Kevin Ballard

Feb 11, 2010, 7:33:34 PM
to Ryanne Dolan, golang-nuts
Saying that the ability to reassign a variable's value is never good
is awfully shortsighted. At the very least, it makes code modification
significantly easier. For example, if you want to change a large block
of code to use a new value that's based on the old value, you can just
reassign the variable. With your proposed immutability of local
variables, you would have to do a search&replace to find every
instance of the variable in the code and change it to the new
variable.
Using immutable variables works decently well in functional languages
like Erlang, which are generally built out of a lot of really small
blocks of code and rely on recursion for iteration and the like, but
procedural languages work rather differently.
Hell, the lack of mutable variables would make it rather difficult to
even assign a value to a variable based on a condition. Right now you
can do

var i int;
if b {
    i = 3;
} else {
    i = 5;
}

But this isn't possible with immutable variables (and the lack of a
ternary operator means we can't do this as an expression, at least not
without a rather ugly bit of syntax and a helper function).

I don't see any benefit to trying to cram this "feature" into a
procedural language. It's just not a good fit.

-Kevin Ballard

--
Kevin Ballard
http://kevin.sb.org
kbal...@gmail.com

Ryanne Dolan

Feb 11, 2010, 7:52:16 PM
to Kevin Ballard, golang-nuts
For example, if you want to change a large block of code to use a new value that's based on the old value, you can just reassign the variable. 

... or better yet: pull the block out into a function or closure.

I don't think your code:

   var i int;
   if b {
       i = 3;
   } else {
       i = 5;
   }

...is a fair counter-example, because the variable is still only assigned-to once.

I'd love to see a good argument for allowing multiple writes to the same _local_ variable.  If you can come up with one, I'll shut up.  I don't think I'm being shortsighted, and I know quite a bit about functional languages.  There are reasons why functional languages are designed that way.

It makes sense in C for only a few fathomable reasons:
1. shared memory communication/synchronization
2. writing to specific regions in memory (e.g. registers) that affect the state of the processor, kernel, etc. I think this is impossible in "safe" Go anyway.

Both of these can be done using fields and pointers, so even in C there isn't a great reason to allow it.

In Go I can only imagine using this "feature" to communicate between closures, but we have objects and channels for that.  If you use variables to communicate between closures, you are no better off than using shared memory in C.  You will only create race conditions... much better to use channels.
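
A sketch of the channel style I mean (the function and names are invented for illustration):

```go
package main

import "fmt"

// sum has the goroutine send its result over a channel instead of
// mutating a variable shared with the enclosing function.
func sum(values []int) int {
	ch := make(chan int)
	go func() {
		total := 0
		for _, v := range values {
			total += v
		}
		ch <- total // communicate the result; don't share memory
	}()
	return <-ch
}

func main() {
	fmt.Println(sum([]int{1, 2, 3, 4})) // prints 10
}
```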

Anyone have a good counter-example that is relevant to Go?

Thanks.
Ryanne

--
www.ryannedolan.info

Kevin Ballard

Feb 11, 2010, 8:07:49 PM
to Ryanne Dolan, golang-nuts
On Thu, Feb 11, 2010 at 4:52 PM, Ryanne Dolan <ryann...@gmail.com> wrote:
>> For example, if you want to change a large block of code to use a new
>> value that's based on the old value, you can just reassign the variable.
>
> ... or better yet: pull the block out into a function or closure.

It sounds like you're advocating for writing Go in a functional style.
Functional style has its benefits, but using it just for the sake of
using it is a bit silly.

> I don't think your code:
>    var i int;
>    if b {
>        i = 3;
>    } else {
>        i = 5;
>    }
> ...is a fair counter-example, because the variable is still only assigned-to
> once.

Untrue. The variable declaration also assigns it the zero value.

> I'd love to see a good argument for allowing multiple writes to the same
> _local_ variable.  If you can come up with one, I'll shut up.  I don't think
> I'm being shortsighted, and I know quite a bit about functional languages.

Like I already said, modifying variables inside of pre-existing blocks
of code. I should not be required to refactor the entire block of code
every time I want to change one of the values it uses.

Making variables immutable is a great way to end up with a large
number of oddly-named variables which are intermediate values. For
example, every time you need to have an error return value (e.g. foo,
err = blah()), you'd have to come up with a unique name for the error
variable. Reusing variables is also a great way to "return" values
from inside control structures, e.g.

var err os.Error
for i := 1; i < 10 && err == nil; i++ {
    var foo string
    foo, err = someMethodThatCouldPotentiallyError(i)
    _ = foo // stand-in for actually using foo
}
if err != nil {
    fmt.Println("Error:", err)
}

If variables were immutable we'd have to recreate all control
structures as functions that operate on closures, which would clutter
up programs needlessly.

Your arguments for why variables should not be reused don't seem
particularly relevant. Points 1 and 3 boil down to trying to protect
the programmer from writing bad code, and I don't think the language
should try to protect me, it should enable me to write what I want.
And you haven't given any justification for point 2.

-Kevin Ballard

Andrew Gerrand

Feb 11, 2010, 8:23:27 PM
to Ryanne Dolan, Kevin Ballard, golang-nuts
On 11 February 2010 16:52, Ryanne Dolan <ryann...@gmail.com> wrote:
> Anyone have a good counter-example that is relevant to Go?

How about a non-recursive sort implementation?
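
For instance, an insertion sort: both the indices and the slice elements are necessarily written to many times (a sketch):

```go
package main

import "fmt"

// insertionSort sorts a in place. The inner loop reassigns j on
// every pass and overwrites slice elements as it shifts them.
func insertionSort(a []int) {
	for i := 1; i < len(a); i++ {
		v := a[i]
		j := i - 1
		for j >= 0 && a[j] > v {
			a[j+1] = a[j] // shift the larger element right
			j--           // j is reassigned each time around
		}
		a[j+1] = v
	}
}

func main() {
	a := []int{23, 5, 7, 43, 6}
	insertionSort(a)
	fmt.Println(a) // prints [5 6 7 23 43]
}
```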

Ryanne Dolan

Feb 11, 2010, 8:31:31 PM
to Kevin Ballard, golang-nuts
The variable declaration also assigns it the zero value.

I guess that is a matter of definition.  I am defining "assignment" as an explicit operation which uses the '=' operator.  I wouldn't consider default-initialization to be an assignment, even though memory is certainly being set.

So yeah, I agree that you need to be able to conditionally assign to variables... but I'm not sure there is a particularly good reason to assign to variables multiple times.

Points 1 and 3 boil down to trying to protect the programmer from writing bad code

Indeed.  We know from C and C++ lore that languages that allow you to shoot yourself in the foot (whether with a small or big gun) are dangerous.  I understand your point, but whenever there is a way to make programming safer without loss of power, it is a good thing.  I'm still wondering if there would be a loss of power.

It sounds like you're advocating for writing Go in a functional style. Functional style has its benefits, but using it just for the sake of using it is a bit silly.

Go is already a multi-paradigm language.  Writing it in functional style is common and relevant.  I don't advocate for purely functional languages (ever).

Making variables immutable is a great way to end up with a large number of oddly-named variables which are intermediate values

This is a matter of taste.  Some people will tell you that using many oddly-named variables makes your code more self-documenting, more readable, and more easily maintained.  Rather than mentally track the state of some variable through several lines of code, a variable _is_ what its name implies.

So point taken, but I am not convinced your way is better.
 
Thanks.
Ryanne

--
www.ryannedolan.info

Ostsol

Feb 11, 2010, 8:35:37 PM
to golang-nuts
That would make something like the following illegal:

for i := 0; i < x; i++ {
    // stuff
}

-Ostsol

Steven Blenkinsop

Feb 11, 2010, 8:41:40 PM
to Kevin Ballard, Ryanne Dolan, golang-nuts
Any time you're iterating, you're incrementing, reslicing,
reassigning, updating, appending... I can't see how you can justify
your claim that there is no reason to update a local variable, or that
it makes code confusing.

Reusing the same variable for something completely different is
stupid, but updating a variable doesn't make code confusing.

Also, you shouldn't complain about other people's examples when your only
justification is an example that reassigns a variable before it's ever
been used. Of course that's stupid, but that's not how reassignment is
normally used anyway.

Ryanne Dolan

Feb 11, 2010, 8:42:37 PM
to Ostsol, golang-nuts
Ostsol,
Indeed.  So what is the middle ground?  Is there a way to prevent my original pitfalls, without condemning the assignment, increment, etc operators?

I think the typical solution is to allow variables to be read-only within certain scopes.  In particular, I should never need to reassign to x or i in

for i,x := range a {
    i = 1
    x = 2
}

Like I said earlier, I don't advocate pure-functional programming, though it might look like it from this conversation.  I'm more looking for a smarter compiler.

Thanks.
Ryanne

--
www.ryannedolan.info

Kevin Ballard

Feb 11, 2010, 8:43:44 PM
to Ryanne Dolan, Ostsol, golang-nuts
I still don't see why reassigning a variable is considered a problem.
In your little snippet there, what exactly is the problem posed by
modifying those variables?

-Kevin Ballard

--

Ryanne Dolan

Feb 11, 2010, 8:51:14 PM
to Steven Blenkinsop, Kevin Ballard, golang-nuts
Steven,

I guess I didn't consider that it would be confusing to disallow multiple assignments but allow increment, *=, etc operations.  I still think it would be possible to prevent multiple assignments using only the '=' operator, and I think such a rule would prevent my original pitfalls.  But I guess such a rule could be confusing (as evidenced by the fact that no one seems to agree with me!).

Thanks.
Ryanne

--
www.ryannedolan.info

Ryanne Dolan

Feb 11, 2010, 9:08:18 PM
to Kevin Ballard, Ostsol, golang-nuts
why reassigning a variable is considered a problem

From a readability standpoint, something like:

a = 4
...
a = 5

indicates to me that the variable 'a' has been poorly named and is being abused (used contrary to its original purpose).  Notice that there is a subtle difference here between _operating_ on a variable, and outright assigning it.  Logically, numberOfDogs++ means "now I have one more dog than I had before", whereas "numberOfDogs = 5" means "forget what I said before about the number of dogs".

(admittedly it doesn't make sense, retrospectively, to allow a++ but not a=a+1)

The other reason: in a pass-by-value language, it doesn't make a lot of sense to allow modification (even increment, decrement, etc) of parameters.

Consider:

func foo (a int) int {
    a++
....
    return a
}

Sure this works, but it indicates that 'a' is changing, but in reality we are just abusing the parameter as a local (temporary) variable.  Better to say:

func foo (a int) int {
   b := a + 1
....
   return b
}

Thanks.
Ryanne

--
www.ryannedolan.info

David Roundy

Feb 11, 2010, 9:15:52 PM
to Ryanne Dolan, golang-nuts
On Thu, Feb 11, 2010 at 6:08 PM, Ryanne Dolan <ryann...@gmail.com> wrote:
> The other reason: in a pass-by-value language, it doesn't make a lot of
> sense to allow modification (even increment, decrement, etc) of parameters.
> Consider:
> func foo (a int) int {
>     a++
> ....
>     return a
> }
> Sure this works, but it indicates that 'a' is changing, but in reality we
> are just abusing the parameter as a local (temporary) variable.  Better to
> say:
> func foo (a int) int {
>    b := a + 1
> ....
>    return b
> }

These will only be equivalent if you don't take into account
performance (or memory use, which comes down to the same thing), or if
you limit your arguments to builtin-types such as ints that happen to
fit into registers. Just consider what happens when the data type a
becomes big.

It's easy to reason about a language if you disregard the possibility
that it will ever be used to do something hard.

David

Ryanne Dolan

Feb 11, 2010, 9:22:57 PM
to David Roundy, golang-nuts
Just consider what happens when the data type a becomes big.

Instead of passing around large parameters on the stack, you should pass pointers and explicitly copy only the memory you need to copy:

func foo (a *BigDataType) {
    b := a.field + 1
....
}

This is all-around better than allowing me to modify the parameter.

Thanks.
Ryanne

--
www.ryannedolan.info

Steven

Feb 11, 2010, 10:50:11 PM
to Ryanne Dolan, David Roundy, golang-nuts
On 2010-02-11, at 9:22 PM, Ryanne Dolan <ryann...@gmail.com> wrote:

Just consider what happens when the data type a becomes big.

Instead of passing around large parameters on the stack, you should pass pointers and explicitly copy only the memory you need to copy:

func foo (a *BigDataType) {
    b := a.field + 1
....
}

This is all-around better than allowing me to modify the parameter.

Thanks.
Ryanne

--
www.ryannedolan.info

Why? You've shown you don't like modifying parameters, but why should it be disallowed? Again, a variable isn't just an alias for a value; it's a concept. If the concept embodied by the parameter fits with what you are doing, why should you have to create a duplicate you have no use for, and manage two names for the same thing? Of course, in many cases a duplicate just makes sense, in case you want to refer back to the initial value. But it's not so clear-cut as to be forced.

Also, in Go there are essentially three kinds of parameters: the receiver, input parameters, and return parameters. Where would you draw the line? Why would you create restrictions that cannot be consistently applied, generating a complex set of rules about what you can and can't do? At the moment, variables are variables, and the nature of a parameter only matters at the interface of a function, not in its body.

Some programming languages are full of well-intentioned but arbitrary rules that only make using the language more complex than it needs to be, with little to no actual gain, potentially ruling out perfectly valid use cases.

Ryanne Dolan

Feb 11, 2010, 11:20:11 PM
to Steven, David Roundy, golang-nuts
Some programming languages are full of well-intentioned but arbitrary rules that only make using the language more complex than it needs to be, with little to no actual gain, potentially ruling out perfectly valid use cases.

Yeah, I don't want to end up there; I love Go's simplicity and wouldn't want to add a bunch of confusing rules just to prevent pitfalls that only a novice would make.  It is clear from this discussion that the rules to prevent (what I consider to be) bad code are more complicated than learning not to write the bad code.

Thanks for the responses.

Thanks.
Ryanne

--
www.ryannedolan.info

Peter Williams

Feb 11, 2010, 11:48:20 PM
to Ryanne Dolan, Ostsol, golang-nuts
On 12/02/10 11:42, Ryanne Dolan wrote:
> Ostsol,
> Indeed. So what is the middle ground? Is there a way to prevent my
> original pitfalls, without condemning the assignment, increment, etc
> operators?

I think that you should weaken your proposal so that it only applies to
the local variables of packages other than "main". You'd have a
stronger argument then. But still weak, as by definition a variable is
mutable (and constants are their immutable sibling). Strictly speaking
"immutable variable" is an oxymoron.

Peter

chris dollin

Feb 12, 2010, 2:23:45 AM
to golan...@googlegroups.com
Oops, meant to send this to the list.


On 12 February 2010 00:33, Kevin Ballard <kbal...@gmail.com> wrote:
 
Using immutable variables works decently well in functional languages
like Erlang, which are generally built out of a lot of really small
blocks of code and rely on recursion for iteration and the like, but
procedural languages work rather differently.

Hardly. Almost all the local variables I use are assign-once. The ones
that are not are almost always bits of loop state. This holds (for me)
whether I'm coding in Java or C.
 
Hell, the lack of mutable variables would make it rather difficult to
even assign a value to a variable based on a condition. Right now you
can do

   var i int;
   if b {
       i = 3;
   } else {
       i = 5;
   }

But this isn't possible with immutable variables (and the lack of a
trinary operator means we can't do this as an expression, at least not
without a rather ugly bit of syntax and a helper function).

Indeed -- not having if-expressions is a mysterious hole in Go.  I wonder
what the rationale is?
 
--
Chris "allusive" Dollin



--
Chris "allusive" Dollin

roger peppe

Feb 12, 2010, 4:44:07 AM
to Ryanne Dolan, golang-nuts
var y Foo
switch x := x.(type) {
case T1:
    y = x.foo
case T2:
    y = x.fooey
case Foo:
    y = x
}
dosomething(y)

s := "3456"
n := 0
for _, c := range s {
    n = (n * 10) + int(c-'0')
}

a := []int{23, 5, 7, 43, 6}
max := 0
for _, n := range a {
    if n > max {
        max = n
    }
}

if p, ok := node.(*ParenExpr); ok {
    node = p.SubExpr
}

func bar(x interface{}) (a int) {
    switch x := x.(type) {
    case int:
        a = x
    case Foo:
        a = x.n
    }
    return
}

if i == len(buf) {
    b := make([]byte, len(buf)*2)
    copy(b, buf)
    buf = b
}
buf[i] = x
i++

chris dollin

Feb 12, 2010, 5:03:10 AM
to roger peppe, Ryanne Dolan, golang-nuts
On 12 February 2010 09:44, roger peppe <rogp...@gmail.com> wrote:
var y Foo
switch x := x.(type) {
case T1:
 y = x.foo
case T2:
 y = x.fooey
case Foo:
 y = x
}
dosomething(y)

Once upon a time, there was technology that would allow
switches to return results, either with explicit syntax
(valof - resultis, oh BCPL we remember you fondly) or
just because statements were a kind of expression (oh,
Algol68 we remember you less fondly, or Pop11 raptures!).

I don't suppose the Go authors would consider doing a little
rummaging in the Constructs attic? Sometimes what looks
like old rubbish turns out to be rather valuable ...
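
In the meantime you can fake a result-yielding switch with an immediately applied function literal, so the result is bound exactly once (a sketch; the names are invented):

```go
package main

import "fmt"

// describe binds y exactly once by wrapping the switch in an
// immediately applied function literal -- a poor man's valof/resultis.
func describe(x interface{}) string {
	y := func() string {
		switch v := x.(type) {
		case int:
			return fmt.Sprintf("int %d", v)
		case string:
			return "string " + v
		default:
			return "something else"
		}
	}()
	return y
}

func main() {
	fmt.Println(describe(42)) // prints: int 42
}
```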

--
Chris "antique" Dollin

Feb 12, 2010, 12:05:16 PM
to golang-nuts

You are wrong here. The term "immutable variable" makes perfect sense.
The only thing left is to explain the *context* in which it makes
perfects sense:

The lifetime of a variable V spans from timepoint T1 to T2. In between
T1 and T2, the variable V can be declared immutable for a sub-interval
of (T1,T2). This *is* how a constant in a computer comes to existence.
There is *no* other way of how to create constants (=immutable things)
in a computer! When you declare in a programming language that
variable V is a constant, such as "const int V = 10", you are in fact
declaring that the value of the memory cell containing V should be
*immutable* for the *lifetime* of the program. Before the program
started, the memory cell V could have had some other value, which
means that the program loader had to overwrite it with the value "10"
at program startup. After the program ends, the memory cell containing
V is mutable again.

... so, in my opinion, the "natural state of things" is that they are
mutable. Any kind of immutability is the result of applying certain
rules to *originally mutable* objects for a limited amount of time.

It is sad that there are no programming languages capable of modelling
this the way it actually is.

Ryanne Dolan

Feb 12, 2010, 1:29:54 PM
to ⚛, golang-nuts
0xe2.0x9a.0x9b,

Good points.  I admitted before that making Go safer is very often more confusing than leaving it dangerous.  But personally I'd love to see variables become immutable within certain scopes.  In particular, parameters should be immutable (within the function body) and iterator variables should be immutable (within the loop body).  This wouldn't require any new keywords or anything either.  I don't think it would be complicated, but everyone else here seems to disagree.

To be specific: in for-loops, any variable declared in the for-statement itself should be immutable:

for i := 0; i < N; i++ {
  i = anything is wrong
  i += anything is wrong
}

Sure everyone will give me examples of bad code and clamor about losing power; however, this way is much clearer:

inc := 1
for i := 0; i < N; i = i + inc {
  i = anything is wrong
  inc = something is okay
}

It isn't surprising that many standard libraries have undefined behavior when you modify iterator variables.  It isn't a good idea in general.  Same thing with modifying parameters, but again no one seems to agree on the grounds that parameters are just variables and variables can't be immutable.

Thanks.
Ryanne

--
www.ryannedolan.info

Jason Catena

Feb 12, 2010, 2:02:59 PM
to golang-nuts
> It is clear from this discussion that the rules to
> prevent (what I consider to be) bad code are more complicated than learning
> not to write the bad code.

I wish this showed up as the default text when someone starts to write
a message proposing a nanny addition to the language.

chris dollin

Feb 12, 2010, 2:11:24 PM
to Jason Catena, golang-nuts
Apropos of write-oncer and final and all that, is there any
chance of an addition to Go that disallows writing into a
variable or field of a struct outside of the declaring package?

(Not for /all/ variables/fields, he added hastily. Just for ones
declared, oh I don't know, final or readonly or protected or
pawsoff or something.)

"outside of the declaring package" seems to me a reasonable
boundary -- you don't have funny rules about when and where
you can update the thing in the package responsible for managing
it, only once you're outside the existing abstraction boundary.

Chris

--
Chris "wensleydale" Dollin

Jason Catena

Feb 12, 2010, 2:17:14 PM
to golang-nuts
> (Not for /all/ variables/fields, he added hastily. Just for ones
> declared, oh I don't know, final or readonly or protected or
> pawsoff or something.)

How about splitting your data into two different structs? Uppercase
one variable to export it outside the package (the one with fields
that it's okay to modify), and lowercase the other.

David Roundy

Feb 12, 2010, 2:17:25 PM
to chris dollin, Jason Catena, golang-nuts
On Fri, Feb 12, 2010 at 11:11 AM, chris dollin
<ehog....@googlemail.com> wrote:
> Apropos of write-oncer and final and all that, is there any
> chance of an addition to Go that disallows writing into a
> variable or field of a struct outside of the declaring package?
>
> (Not for /all/ variables/fields, he added hastily. Just for ones
> declared, oh I don't know, final or readonly or protected or
> pawsoff or something.)

Why not just start it with a lowercase letter and then provide an
accessor? It's slightly more clumsy, but doesn't seem *that* bad.
That said, I'd love to have something like the const keyword, only
done better. I know const is used in the golang talks as an example
of a good idea that ends up being a pain, but somehow it seems like
there's got to be a "good" way to do it... maybe by inferring
const/mutable for function parameters, when not declared explicitly?

David

Mike Zraly

Feb 12, 2010, 2:17:35 PM
to chris dollin, Jason Catena, golang-nuts
Can you not get that effect by declaring your variables or fields private (start with lowercase letter) then providing read-only access via functions and methods?

 

Steven Blenkinsop

Feb 12, 2010, 2:58:54 PM
to Mike Zraly, chris dollin, Jason Catena, golang-nuts
Yes, I'd say that's the way to go. It makes the intention clear without cluttering the namespace or adding complexity to types. The only thing I'd hope is, since Go doesn't have anything like a const reference, that the compiler is smart with memory when it comes to pass-by-value, i.e. only making a copy if/when the value is mutated.

Feb 12, 2010, 3:18:44 PM
to golang-nuts
On Feb 12, 7:29 pm, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> 0xe2.0x9a.0x9b,
>
> Good points.  I admitted before that making Go safer is very often more
> confusing than leaving it dangerous.

I think this is partially a consequence of the state of (mainstream)
programming languages. People don't expect/like them to be safer,
because it would mean harder work in terms of thinking and in terms of
the number of characters you have to type while programming. As a
sidenote, I personally don't quite understand why people are using
languages such as Python, Smalltalk, Bash or similar languages to
write complex software. If I want an object to be able to respond to a
certain set of messages, I want to declare it explicitly and I want
the compiler to check that "the flow of types in the program" makes
sense.

> But personally I'd love to see
> variables become immutable within certain scopes.  In particular, parameters
> should be immutable (within the function body) and iterator variables should
> be immutable (within the loop body).

I doubt Go will ever have something like this. The only way I see here
is to use an extensible programming language, or to design and implement
one's own programming language.

> This wouldn't require any new keywords
> or anything either.  I don't think it would be complicated, but everyone
> else here seems to disagree.

The truth is that it is more complicated than the current state.
Namely, currently the rules for using variables in Go are more-or-less
uniform - in the sense that those rules do not change depending on the
context. What you are proposing introduces a feature which breaks this
uniformity, which in effect requires the programmer to learn how the
rules for using variables change depending on what the variable
represents (e.g: parameter, iteration variable, normal variable,
output parameter, etc). Personally, I have nothing against it, but I
am sure many people will (including the Go creators).

> To be specific: in for-loops, any variable declared in the for-statement
> itself should be immutable:
>
> for i := 0; i < N; i++ {
>   i = anything is wrong
>   i += anything is wrong
>
> }
>
> Sure everyone will give me examples of bad code and clamor about losing
> power;

The resulting programming language would still be a universal Turing
machine, so technically, there is no loss in power.

> however, this way is much clearer:
>
> inc := 1
> for i := 0; i < N; i = i + inc {
>   i = anything is wrong
>   inc = something is okay
>
> }
>
> It isn't surprising that many standard libraries have undefined behavior
> when you modify iterator variables.

You mean for example something like java.lang.Iterator in Java? In
such a case, I agree. But I think you are *not* making a clear enough
distinction between the "name of an object" and "the object itself".
Those are different concepts. Assignment of a new value to a variable
is more like renaming, i. e. the expression "i=X" means that object X
will from now on be (also) accessible via the name "i". More
precisely, the expression "i=X" modifies the object "tuple of local
variables" (I am here assuming "i" is a local variable). So,
technically, we can reduce this whole immutability problem to
techniques about how a language should be able to enable and enforce
immutability of objects, under the assumption that "tuple of local
vars" is also an object albeit not explicitly visible in most
languages.

Considering what I have written in the above paragraph, it seems very
strange that you are *only* talking about controlling mutability of
variables but you are *not* talking about mutability of any other
objects in the language.

> It isn't a good idea in general.  Same
> thing with modifying parameters, but again no one seems to agree on the
> grounds that parameters are just variables and variables can't be immutable.

Well, I have my own programming language in which it is impossible to
assign to parameters. It works nicely, and there aren't many cases
where it poses a problem. So I agree that it can be done for
parameters.

Obviously, it means that parameters cannot be used as e.g. iteration
variables. As a result of this, the resulting code may consume
slightly more memory than an equivalent code in e.g. C++, because it
is necessary to introduce a new variable for doing the iteration while
the parameter sits idle and takes memory of its own. So, I can
understand that if someone wants not to allocate another few bytes of
memory, then immutable parameters are indeed an obstacle. But if
this should become a serious obstacle in solving some future problem,
I will no doubts allow parameters to be mutable. Since the default for
parameters should be "immutable" because it *is* the most common case
in real codes, the best way of introducing mutable parameters seems to
be by means of a keyword to explicitly mark a particular parameter as
mutable.

> Thanks.
> Ryanne
>
> --www.ryannedolan.info

chris dollin

Feb 13, 2010, 4:32:20 AM
to Mike Zraly, Jason Catena, golang-nuts
So to make field or variable protected (ie not writable outside this package)
I have to (a) not publish it directly (b) write a method encapsulating the
/harmless/ operation, and (c) make the users of the code have to use a
different syntax to do the variable access?

That doesn't seem to me to be "fun", nor to make intent visible.

--
Chris "allusive" Dollin

chris dollin

Feb 13, 2010, 11:02:25 AM
to David Roundy, Jason Catena, golang-nuts


On 12 February 2010 19:17, David Roundy <rou...@physics.oregonstate.edu> wrote:
On Fri, Feb 12, 2010 at 11:11 AM, chris dollin
<ehog....@googlemail.com> wrote:
> Apropos of write-oncer and final and all that, is there any
> chance of an addition to Go that disallows writing into a
> variable or field of a struct outside of the declaring package?
>
> (Not for /all/ variables/fields, he added hastily. Just for ones
> declared, oh I don't know, final or readonly or protected or
> pawsoff or something.)

Why not just start it with a lowercase letter and then provide an
accessor? It's slightly more clumsy, but doesn't seem *that* bad.

Addendum ... if that's to be the idiom, I'd like to be able to demand that the compiler inline calls to such accessors.

--
Chris "allusive" Dollin

Steven

Feb 13, 2010, 12:11:32 PM
to chris dollin, Mike Zraly, Jason Catena, golang-nuts
On Sat, Feb 13, 2010 at 4:32 AM, chris dollin <ehog....@googlemail.com> wrote:

So to make field or variable protected (ie not writable outside this package)
I have to (a) not publish it directly (b) write a method encapsulating the
/harmless/ operation, and (c) make the users of the code have to use a
different syntax to do the variable access?

That doesn't seem to me to be "fun", nor to make intent visible.

--
Chris "allusive" Dollin

I don't see what your issue is here. This is common practice for access control. And what different syntax? You just have to add the () to call the function, which makes it clear that it's not a variable you can assign to. I also don't see how this makes the intent any less visible; I'd say it's more visible than having a keyword, because it stays persistent in the usage and not just in the declaration. You can even use the Get naming convention if it makes you feel better.

var num int
func Num() int { return num }


It's hard to argue that this is any less clear, or less convenient, than a keyword that essentially does the same thing.

What do you need for it to be fun? I could try to make an editor that flashes fun colours whenever you use an accessor if you like...


On Sat, Feb 13, 2010 at 11:02 AM, chris dollin <ehog....@googlemail.com> wrote:

Addendum ... if that's to be the idiom, I'd like to be able to demand that the compiler inline calls to such accessors.
 

That would just be a smart compiler. It's unfortunate you aren't able to be demanding, but at least in this case it doesn't seem like something you need to demand. It seems to me like a standard optimization that would be there in a mature compiler, and it may be there already. Though I'm not an expert, so if there's some reason a simple accessor wouldn't be inlined, I wouldn't know it; I certainly can't imagine one.

chris dollin

Feb 13, 2010, 12:43:48 PM2/13/10
to Steven, Mike Zraly, Jason Catena, golang-nuts
On 13 February 2010 17:11, Steven <stev...@gmail.com> wrote:
On Sat, Feb 13, 2010 at 4:32 AM, chris dollin <ehog....@googlemail.com> wrote:

So to make field or variable protected (ie not writable outside this package)
I have to (a) not publish it directly (b) write a method encapsulating the
/harmless/ operation, and (c) make the users of the code have to use a
different syntax to do the variable access?

That doesn't seem to me to be "fun", nor to make intent visible.

I don't see what your issue is here. This is common practice for access control.

That there are other languages with the same inconvenience isn't an
argument for living with the inconvenience, only for the possibility of
living with it.
 
And what different syntax? You just have to add the () to call the function,

Exactly. It flags up in all the reads that you might not be able to write
to the field. (Might, because one can write setters as well as
accessors.)
 
which makes it clear that it's not a variable which you can assign to.

... in this language ...
 
I also don't see how this makes the intent any less visible.

Because an access x.Y() need not be there because Y is a read-only
entity but because Y is a generally computed entity. A keyword (or whatever
syntax) that says "this is read-only" says "this is read-only" more directly
than "this variable is not public and has a read accessor and ... scanning code or outline or whatever ... no, no setter -- it's read only!".
 
I'd say it's more visible than having a keyword because it stays present in the usage and not just in the declaration.

The visibility lies in the compiler saying "paws off". There's no need to make it obtrusive in the usage.
 
You can even use the Get naming convention if it makes you feel better.

Significantly worse, actually.
 
What do you need for it to be fun? I could try to make an editor that flashes fun colours whenever you use an accessor if you like...

From http://golang.org/ I exhibit:

… fun

Go has fast builds, clean syntax, garbage collection, methods for any type, and run-time reflection. It feels like a dynamic language but has the speed and safety of a static language. It's a joy to use.

On Sat, Feb 13, 2010 at 11:02 AM, chris dollin <ehog....@googlemail.com> wrote:

Addendum ... if that's to be the idiom, I'd like to be able to demand that the compiler inline calls to such accessors.

That would just be a smart compiler. It's unfortunate you aren't able to be demanding, but at least in this case it doesn't seem like something you need to demand. It seems to me like a standard optimization that would be there in a mature compiler, and it may be there already. Though I'm not an expert, so if there's some reason a simple accessor wouldn't be inlined, I wouldn't know it; I certainly can't imagine one.

The question isn't your imagination, or mine, but what the expectations
and requirements are on the compilers. If the position is that read-onlyness is to be achieved using accessor methods/functions, then some assurance that such methods won't have run-time cost would be valuable.

I would hope the compiler would inline such calls, perhaps with a little provocation on the command-line. But I don't /know/ that it can or will.
Maybe the designers would say, no, we do not inline those calls, because
then you can't put breakpoints on them in the debugger, and for us it
is more valuable to be able to debug than to squeeze out a couple of
instructions and a pipeline break. I'm not saying that they do say that,
just that it's a rational argument.

Programmers need to understand the performance profiles of the languages
they use, and it's the designers and implementors who can tell them what
those profiles are.

Chris

--
Chris "overflow" Dollin

Eleanor McHugh

Feb 13, 2010, 3:15:49 PM2/13/10
to golang-nuts
On 12 Feb 2010, at 20:18, ⚛ wrote:
> On Feb 12, 7:29 pm, Ryanne Dolan <ryannedo...@gmail.com> wrote:
>> 0xe2.0x9a.0x9b,
>>
>> Good points. I admitted before that making Go safer is very often more
>> confusing than leaving it dangerous.
>
> I think this is partially a consequence of the state of (mainstream)
> programming languages. People don't expect/like them to be safer,
> because it would mean harder work in terms of thinking and in terms of
> the number of characters you have to type while programming.

Actually people don't like languages that make them think about how the machine works or rely on a deep comfort with formal logic. What they really want are languages which help them elegantly express the problem domain they're working on.

> As a
> sidenote, I personally don't quite understand why are people using
> languages such as Python, Smalltalk, Bash or similar languages to
> write complex software. If I want an object to be able to respond to a
> certain set of messages, I want to declare it explicitly and I want
> the compiler to check that "the flow of types in the program" makes
> sense.

If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue. For one thing, shouldn't this be caught during testing? Go ships with a testing tool that's trivially simple to use and thanks to the current speed of compilation feels wonderfully interactive.

Incidentally, Ruby and Smalltalk are both strongly typed languages that just happen to also use dynamic typing. It is a programmer decision to actively support that dynamic typing by implementing method_missing() or doesNotUnderstand respectively. There are cases when dynamic typing is very cool (as your LISPer friends will verify) and I suggest you at least spend a few months trying "duck-typing" before adopting a position either way.

Some people it suits, others have panic attacks :)


Ellie

Eleanor McHugh
Games With Brains
http://code.games-with-brains.net
http://slides.games-with-brains.net
----
raise ArgumentError unless @reality.responds_to? :reason

Eleanor McHugh

Feb 13, 2010, 3:16:28 PM2/13/10
to golang-nuts
On 12 Feb 2010, at 10:03, chris dollin wrote:
> Once upon a time, there was technology that would allow
> switches to return results, either with explicit syntax
> (valof - resultis, oh BCPL we remember you fondly) or
> just because statements were a kind of expression (oh,
> Algol68 we remember you less fondly, or Pop11 raptures!).
>
> I don't suppose the Go authors would consider doing a little
> rummaging in the Constructs attic? Sometimes what looks
> like old rubbish turns out to be rather valuable ...

Well you can always use Ruby if you want every statement to be an expression. The question is, does a systems language really benefit from the extra overhead and complexity of supporting that? Personally I don't think so.

Ryanne Dolan

Feb 13, 2010, 3:54:40 PM2/13/10
to Eleanor McHugh, golang-nuts
Actually people don't like languages that make them think about how the machine works or rely on a deep comfort with formal logic. What they really want are languages which help them elegantly express the problem domain they're working on.

This is contrary to the view of all systems programmers, embedded programmers, computer/electrical engineers, and anyone with a keen understanding of real hardware. 
 
If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue

I'm strongly of the opinion that languages should be designed to be as safe as possible, in the sense that they should make it damn near impossible to make mistakes (at least common ones).

A lot of novice programmers or programmers that think they are rock-stars will complain in such an environment, because they will feel too constrained, will get too many compile errors, will think they can't be creative or elegant, etc.  I think that is all nonsense that comes from inexperience, personally.

 There are cases when dynamic typing is very cool

I totally dismiss any such language as unacceptable for real code.  Or at least systems programming.  Sometimes you can't run unit tests or just assume that "a + b" is integer addition and not string concatenation. 

  I suggest you at least spend a few months trying "duck-typing"

Go uses duck typing, but it is not dynamic.  I am confused why you are making an argument for dynamic programming by citing the convenience of duck typing.  Duck typing is great; dynamic programming is evil (at least in the domain of systems programming).

Thanks.
Ryanne

--
www.ryannedolan.info

befelemepeseveze

Feb 13, 2010, 6:19:56 PM2/13/10
to golang-nuts
On Feb 13, 21:54, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> Go uses duck typing, but it is not dynamic.

Entities in Go are statically typed only. I don't see any duck typing
in Go.

Eleanor McHugh

Feb 13, 2010, 7:12:00 PM2/13/10
to golang-nuts
On 13 Feb 2010, at 20:54, Ryanne Dolan wrote:
> > Actually people don't like languages that make them think about how the machine works or rely on a deep comfort with formal logic. What they really
> > want are languages which help them elegantly express the problem domain they're working on.
>
> This is contrary to the view of all systems programmers, embedded programmers, computer/electrical engineers, and anyone with a keen understanding of real hardware.

Well it's my view and I'm a systems programmer. Indeed for eight years of my career I developed embedded systems including aircraft autopilots in assembler and real-time automation systems in C for broadcast video. I've also worked on and maintained million-line codebases in both languages and written more device drivers than I care to remember. Therefore if I take a contrary view to received wisdom it's because I've tackled some very demanding projects and spent a lot of time reflecting on the experience - both successes and failures.

People who believe that language restrictions lead to better code forget that on any complex project the majority of developers will not be experts in the language chosen and that thanks to commercial pressures they will be motivated to use any feature they run across that seems to solve a problem regardless of whether it's appropriate or maintainable. Generally there is also a disconnect on such projects between specification and implementation which results in frequent misunderstandings.

Just look at C++ - originally a perfectly reasonable OO extension to C that is now sufficiently complex to require several thousand pages of documentation if you wish to understand a fair subset of its nuances. Or ANS-Forth, which no implementer ever buys the actual standard for due to its cost and niche market, meaning we live with "draft" ANS-94 as the de facto standard.

Forth is in fact a good example because it's a very flexible and dynamic language (almost an RPN Lisp I guess) and has a committed following precisely because of the power it offers and the enjoyment of working with it.

> > If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue
>
> I'm strongly of the opinion that languages should be designed to be as safe as possible, in the sense that they should make it damn near impossible to make mistakes (at least common ones).

If you write tests to exercise your code properly, design your system well (high cohesion, low coupling, expressive code which captures the requirements elegantly) and let every little black box be responsible for its own lifecycle then you generally develop robust code.

> > A lot of novice programmers or programmers that think they are rock-stars will complain in such an environment,
> > because they will feel too constrained,

Maybe, but a lot of programmers who really know what they're doing will find that the pain of developing in such a language outweighs its productivity and will consequently move on to pastures green. I won't touch Java or C++ anymore just because of the volume of ugly, ill-conceived, perfectly compilable code I've met in those communities. And I'm not alone. I know a lot of people with a similar background to me who now write most of their code in Ruby, Python, etc., and a large portion of the infrastructure of the internet is dependent on these languages. I agree it takes discipline, but programmers should have discipline anyway.

What's so exciting about Go is that instead of being another head-in-the-sand systems language which sees this trend as an abomination, it has amongst its expressed goals finding a way to marry the sheer joy of working in those languages with the additional degree of type precision which we sometimes need in systems coding.

Every suggestion which would add additional complexity to the language has to be considered in that light. Will it make Go less fun to work in? Will a developer have to learn additional rules which given commercial pressures they may not have time to fully comprehend? Will the change to existing idioms break the basic premise of least surprise which drives development of dynamic languages?

> > will get too many compile errors, will think they can't be creative or elegant, etc. I think that is all non-sense
> > that comes from inexperience, personally.

I disagree. My experience is that compilers churn out poorly worded and excessive error messages, usually as a result of a typo on my behalf, and that most of the real problems in code are identified as a result of good testing practices and a reasonable understanding of code design in the first place. When I'm coding interactively in C I rely primarily on printf() and a good debugger. In Go I find gotest saves me from doing that and makes me feel just as productive as when I'm using Ruby.

> >>There are cases when dynamic typing is very cool
>
> > I totally dismiss any such language as unacceptable for real code. Or at least systems programming. Sometimes you can't run
> > unit tests or just assume that "a + b" is integer addition and not string concatenation.

That's very true, hence why we have integration testing to ensure that when components interact they do what they're supposed to. However it really isn't our responsibility to second-guess how a potentially infinite number of other developers are going to use our code. As long as we publish our interfaces and stick to the contract they specify, those developers can then apply the same standard and everybody will be happy: we get to write more code solving our problems and that code will not be littered with unnecessary type-checking complicating future maintenance and/or enhancement.

Likewise when we use third-party code it's our responsibility to test it. Because a failure in someone else's library that we rely on is actually our failure to care sufficiently about our system. That might be acceptable for a hobby or academic project, but not for a commercial one.

> > I suggest you at least spend a few months trying "duck-typing"
>
> Go uses duck typing, but it is not dynamic. I am confused why are making an argument for dynamic programming by citing the convenience of duck typing. Duck typing is great; dynamic programming is evil (at least in the domain of systems programming).

Well for a dynamic systems language that supports duck-typing you'd be looking at Objective-C, which like Ruby and Smalltalk allows an object to decide for itself whether or not a message is understood. Go certainly doesn't feel like a duck-typed language in that sense (call it the 'strong' duck-typing proposition).

If anyone can enlighten me on how to replicate this feature without requiring a lot of complex boilerplate I'd be most obliged as I can conceive of several potential use cases in the design of GoLightly. However it's not a feature I need desperately enough to want to see the language changed - exceptions on the other hand...

Ryanne Dolan

Feb 13, 2010, 9:31:36 PM2/13/10
to Eleanor McHugh, golang-nuts
Very often people argue tho they agree.

on any complex project the majority of developers will not be experts in the language chosen and that thanks to commercial pressures they will be motivated to use any feature they run across that seems to solve a problem regardless of whether it's appropriate or maintainable. Generally there is also a disconnect on such projects between specification and implementation which results in frequent misunderstandings.

Looking just at that excerpt, I agree with you entirely.  But I see that as further evidence that languages should be more restrictive.  Doesn't a safe language help both novices and experts?

 Just look at C++ - originally a perfectly reasonable OO extension to C that is now sufficiently complex to require several thousand pages of documentation

Totally agree.  I'm definitely not saying that C++ should be a standard of excellence!  I would never recommend using C++ for real code; it is not safe, restrictive, or simple.  It is exactly the opposite!  I think Go is much safer than C++ so far, and I'm only suggesting that we make it safer, not more complicated.

As to your remarks about "enjoyable", Go already claims to be enjoyable, mostly based on the fact that there is a lot less typing, stuttering, specifying, etc.  That's why everyone says Go is like Python and C at the same time.  I for one don't find dynamic languages enjoyable at all.  I like Lua's syntax a whole lot, but I've been bitten way too many times by dynamic languages to use them for real code. Just because a language is pretty doesn't mean it is good, and just because a language is safe doesn't mean it must be a chore and bother to write.

I happen to like Go a lot, even tho it is not dynamic, is fairly restrictive, safe, etc.  Again, I'm only looking for ways to make it more safe, harder to abuse, easier to read... not more complicated and certainly not more dynamic.

Thanks.
Ryanne

--
www.ryannedolan.info

Ostsol

Feb 13, 2010, 10:49:50 PM2/13/10
to golang-nuts
Write-once seems to belong more in the domain of a style guideline, to
me. Just because one can do a thing does not mean that one should
always do it, yet at the same time this does not mean it should be
entirely forbidden. Take the '.' in Go's import declaration, for
example. Importing package contents into the global namespace can be
convenient and can in some cases reduce typing, but it creates
ambiguities and therefore should be kept to a minimum. The same thing
applies here: repeatedly writing to variables already initialized can
create ambiguities and should therefore be avoided; one should create
variables specific to the data they represent. However, that does not
justify disallowing it in the language as there are situations where
no ambiguity is created and re-writing makes perfect sense -- and
creating hard exceptions for those cases introduces unnecessary
complexity to the language and probably the compiler, too.

BTW: I have looked back at much of my own code and have discovered
that I rarely write to previously initialized variables. The major
exception is counters and other accumulators or trackers. I know that
some of these cases can even be eliminated via liberal use of
recursion... but I avoid recursion except in cases where an iterative
approach is more complex.
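The accumulator case I have in mind is the ordinary sort of loop:

```go
// sum reassigns total on every iteration -- one of the few
// places where writing to an already-initialized variable
// seems clearly justified.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x // reassignment as accumulation
	}
	return total
}
```
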

-Ostsol

Jessta

Feb 14, 2010, 12:01:20 AM2/14/10
to Ostsol, golang-nuts
On 14 February 2010 14:49, Ostsol <ost...@gmail.com> wrote:
> Write-once seems to belong more in the domain of a style guideline, to
> me.  Just because one can do a thing does not mean that one should
> always do it, yet at the same time this does not mean it should be
> entirely forbidden.

Yeah, I don't think this is something the compiler should be worrying about.
If it's valid syntax the compiler should compile it.

But a gofmt/golint kind of tool to alert people to this kind of thing might
be useful; there's no reason to make it part of the compiler, though.

- jessta


--
=====================
http://jessta.id.au

chris dollin

Feb 14, 2010, 3:15:48 AM2/14/10
to Eleanor McHugh, golang-nuts
On 13 February 2010 20:16, Eleanor McHugh <ele...@games-with-brains.com> wrote:
On 12 Feb 2010, at 10:03, chris dollin wrote:
> Once upon a time, there was technology that would allow
> switches to return results, either with explicit syntax
> (valof - resultis, oh BCPL we remember you fondly) or
> just because statements were a kind of expression (oh,
> Algol68 we remember you less fondly, or Pop11 raptures!).
>
> I don't suppose the Go authors would consider doing a little
> rummaging in the Constructs attic? Sometimes what looks
> like old rubbish turns out to be rather valuable ...

Well you can always use Ruby if you want every statement to be an expression.

I wasn't asking for every statement to be an expression. I was suggesting
that an existing imbalance in the language -- only statements can be
conditional -- was unnecessary, and there was existing, nay ancient,
technology to hand.
 
The question is, does a systems language really benefit from the extra overhead and complexity of supporting that? Personally I don't think so.

The "overhead and complexity" is negligible, even in a "systems language".
valof-resultis was in BCPL. Does it benefit? Well, I think so -- it provides
better orthogonality and simplicity. IMAO.
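(For the record, the nearest Go idiom today is to wrap the switch in a function so it can, in effect, yield a value -- workable, but clunkier than valof-resultis. A sketch, with an invented function name:)

```go
package main

import "fmt"

// sign wraps a switch in a function so the switch can, in
// effect, return a result -- the function-call boundary plays
// the role BCPL's valof-resultis played.
func sign(n int) string {
	switch {
	case n < 0:
		return "negative"
	case n > 0:
		return "positive"
	default:
		return "zero"
	}
}

func main() {
	fmt.Println(sign(-3)) // prints "negative"
}
```

One often writes this inline as a function literal applied on the spot, which is the same trick with the name removed.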

--
Chris "allusive" Dollin

Marcelo Cantos

Feb 14, 2010, 4:11:35 PM2/14/10
to golang-nuts
Any Go class that implements the methods declared by an interface is
automatically treated as an implementor of that interface. "If it
quacks like a duck..." This is the point Ryanne was making. Just
because it's resolved at compile-time, that doesn't mean it isn't duck
typing. C++ templates also work via a form of compile-time duck
typing.
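Concretely (types invented for the example): the type below never declares that it implements anything, yet it satisfies the interface purely by having the right method, and the check happens at compile time.

```go
package main

import "fmt"

// Quacker is satisfied by any type with a Quack method;
// no "implements" declaration is needed anywhere.
type Quacker interface {
	Quack() string
}

type Mallard struct{}

func (Mallard) Quack() string { return "quack" }

// describe accepts anything that structurally satisfies Quacker.
func describe(q Quacker) string { return q.Quack() }

func main() {
	fmt.Println(describe(Mallard{})) // prints "quack"
}
```
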

On Feb 14, 10:19 am, befelemepeseveze <befelemepesev...@gmail.com>
wrote:

David Roundy

Feb 14, 2010, 5:21:19 PM2/14/10
to Marcelo Cantos, golang-nuts
On Sun, Feb 14, 2010 at 1:11 PM, Marcelo Cantos
<marcelo...@gmail.com> wrote:
> Any Go class that implements the methods declared by an interface is
> automatically treated as an implementor of that interface. "If it
> quacks like a duck..." This is the point Ryanne was making. Just
> because it's resolved at compile-time, that doesn't mean it isn't duck
> typing. C++ templates also work via a form of compile-time duck
> typing.

This sounds sort of like duck typing, but it is actually very
different from duck typing in a very important way. You specify the
type of arguments or variables, so the compiler forces them to have
all those methods, even if you don't use them. This means that you've
got much greater safety moving forward than duck typing systems like
C++ templates or python, since you know you can safely use any methods
in the interface you specified without breaking the compile of any
code that would previously compile, even if you didn't previously use
them. It may sound like a trivial difference, but I think it makes a
huge difference.
--
David Roundy

Marcelo Cantos

Feb 14, 2010, 5:40:56 PM2/14/10
to David Roundy, golang-nuts
It may be a tighter form of duck-typing from the perspective of the interface consumer ("If it quacks, walks, swims and flies like a duck..."), but it's still duck-typing. You may not have designed a class to implement some interface; it just happens to conform; if you happen to change it so that it no longer conforms, you will no longer be able to pass it off as an instance of that interface. Something will break at compile time, even if code that consumes the interface doesn't.

The C++ community is working towards stronger specification of template parameters via the "concepts" proposal (which is more powerful than interfaces), but it didn't make the cut for the next iteration of the ISO standard. So I'll heartily agree that this is a good thing. But it's splitting hairs to say it isn't duck-typing.

Eleanor McHugh

Feb 14, 2010, 7:48:32 PM2/14/10
to golang-nuts
On 14 Feb 2010, at 21:11, Marcelo Cantos wrote:
> Any Go class that implements the methods declared by an interface is
> automatically treated as an implementor of that interface. "If it
> quacks like a duck..." This is the point Ryanne was making. Just
> because it's resolved at compile-time, that doesn't mean it isn't duck
> typing. C++ templates also work via a form of compile-time duck
> typing.


A basic premise of duck-typing is that the only way to truly determine whether the type of an entity is appropriate is empirically at runtime. For languages like Ruby, Python, Smalltalk and Objective-C this is a very natural fit with their message passing semantics so people often find it early on and get in an awful muddle to start with but experienced developers in those languages find it indispensable.

Viewed from this perspective interfaces have little to do with duck-typing and everything to do with type inference. As such they're a technology that make static typing much less cumbersome without the bluntness of C's void * type (interface{} is much more elegant and still makes me smile every time I use it).

Indeed now that I've given some thought to the combination of interface{} and speculative type assertions it's clear that together they could be used to write duck-typed programs in Go so I withdraw my previous contention that the language doesn't support the functionality. It's just not obvious until you actively look.
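Something along these lines, I mean (a rough sketch; the types are invented):

```go
package main

import "fmt"

type Duck struct{}

func (Duck) Quack() string { return "quack" }

// tryQuack asks an arbitrary value, at runtime, whether it can
// quack: a speculative type assertion against an anonymous
// interface, which is about as duck-typed as static Go gets.
func tryQuack(x interface{}) string {
	if q, ok := x.(interface {
		Quack() string
	}); ok {
		return q.Quack()
	}
	return "silence"
}

func main() {
	fmt.Println(tryQuack(Duck{})) // prints "quack"
	fmt.Println(tryQuack(42))     // prints "silence"
}
```
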

Now I have to find some time to hack with the concept and see where it leads :)

Eleanor McHugh

Feb 14, 2010, 10:49:14 PM2/14/10
to golang-nuts
On 14 Feb 2010, at 02:31, Ryanne Dolan wrote:
> > on any complex project the majority of developers will not be experts in the language chosen and that thanks to commercial
> > pressures they will be motivated to use any feature they run across that seems to solve a problem regardless of whether it's
> > appropriate or maintainable. Generally there is also a disconnect on such projects between specification and implementation
> > which results in frequent misunderstandings.
>
> Looking just at that excerpt, I agree with you entirely. But I see that as further evidence that languages should be more restrictive.
> Doesn't a safe language help both novices and experts?

To paraphrase Knuth, the only real proof of a program's validity is whether or not it behaves correctly at runtime.

Memory leaks and buffer overruns are historically the two most significant causes of runtime failure and in-process concurrency is rapidly achieving equal status now multicore architectures are in the mainstream. Having a language which sanitises these three problems is a huge productivity gain because it removes a whole class of wheel-reinvention problems from projects which amount to nothing more than basic (and very tedious) accounting. However once a language has garbage collection, bounds checking and a safe concurrency model there're diminishing returns in additional safety features that need to be weighed against the costs they impose on development.

Compilation time is one such cost that has to be considered, as is runtime performance, although the largest cost is actually code maintenance which with any complex software system will be ongoing for many years. Go already boasts good compilation speed, runtime performance will improve as the compilers mature, and judging from the code I've studied so far I have a good feeling about longterm maintainability.

> > Just look at C++ - originally a perfectly reasonable OO extension to C that is now sufficiently complex to require several
> > thousand pages of documentation
>
> Totally agree. I'm definitely not saying that C++ should be a standard of excellence! I would never recommend using C++ for real code; it is not safe, restrictive, or simple. It is exactly the opposite! I think Go is much safer than C++ so far, and I'm only suggesting that we make it safer, not more complicated.
> As to your remarks about "enjoyable", Go already claims to be enjoyable, mostly based on the fact that there is a lot less typing, stuttering, specifying, etc. That's why everyone says Go is like Python and C at the same time. I for one don't find dynamic languages enjoyable at all. I like Lua's syntax a whole lot, but I've been bitten way too many times by dynamic languages to use them for real code. Just because a language is pretty doesn't mean it is good, and just because a language is safe doesn't mean it must be a chore and bother to write.

A language is only 'pretty' if the code it encourages one to write is itself beautiful, elegant, hopefully even sublime. To put that in less emotive language: I believe a language should result in consistent, concise, accessible, maintainable and performant code. Lisp and Ruby both rate very well in this regard, Python too despite its aesthetics being more those of an engineer than an artist.

Go also has that potential, especially if it acquires a runtime code generation facility of some kind [0].

> I happen to like Go a lot, even tho it is not dynamic, is fairly restrictive, safe, etc. Again, I'm only looking for ways to make it more safe, harder to abuse, easier to read... not more complicated and certainly not more dynamic.

Alas the cause of most coding abuses is a combination of unreasonable commercial pressures, lack of programmer motivation, arrogance and good old-fashioned human stupidity. Each of these is basically a social problem, so whilst a technology fix may act as a bandaid I'm pessimistic of it leading to any real gains.

However there are a set of social fixes which work very well:

get programmers communicating with each other;
get them communicating with the people they're building software for;
refuse to implement any feature which doesn't have a strong use case;
refuse to accept arbitrary deadlines;
ensure everyone understands how to test code well;
check that programmers actually write their tests and validate them against the spec;
and never release software until it passes all the tests;
develop iteratively.

I'd say it's not rocket science, but actually my point is that in terms of process engineering it is very much like rocket science.


Ellie

[0] If you mail me off-list about your bad dynamic language experiences, I'll be happy to share ideas.

Feb 15, 2010, 3:58:01 AM2/15/10
to golang-nuts

On Feb 13, 9:15 pm, Eleanor McHugh <elea...@games-with-brains.com>
wrote:


> On 12 Feb 2010, at 20:18, ⚛ wrote:
>
> > I think this is partially a consequence of the state of (mainstream)
> > programming languages. People don't expect/like them to be safer,
> > because it would mean harder work in terms of thinking and in terms of
> > the number of characters you have to type while programming.
>
> Actually people don't like languages that make them think about how the machine works

You are "mixing apples with oranges". What does type safety have to do
with how the machine works?

> or rely on a deep comfort with formal logic. What they really want are languages which help them elegantly express the problem domain they're working on.

Which is exactly what a good type system is about.

> > As a
> > sidenote, I personally don't quite understand why are people using
> > languages such as Python, Smalltalk, Bash or similar languages to
> > write complex software. If I want an object to be able to respond to a
> > certain set of messages, I want to declare it explicitly and I want
> > the compiler to check that "the flow of types in the program" makes
> > sense.
>
> If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue.

OK. It is the programmer's failure. But if the compiler is unable to tell
the programmer that there is a problem although it could, then it is a
failure of language design.

> For one thing, shouldn't this be caught during testing?

The checks made during compilation aren't testing? I strongly think
they are.

> Go ships with a testing tool that's trivially simple to use and thanks to the current speed of compilation feels wonderfully interactive.
>
> Incidentally, Ruby and Smalltalk are both strongly typed languages that just happen to also use dynamic typing. It is a programmer decision to actively support that dynamic typing by implementing method_missing() or doesNotUnderstand respectively. There are cases when dynamic typing is very cool (as your LISPer friends will verify) and I suggest you at least spend a few months trying "duck-typing" before adopting a position either way.

The difference between say Java and Go is that [in order to make an
object respond to a particular set of messages] the Java language
enables you to do it only after you have explicitly declared it. There is no
other distinction. And since in Java the programmer is explicit about
his/her intentions and the compiler is performing compile-time checks,
it is impossible for Smalltalk's doesNotUnderstand to happen at run-
time. I think the distinction is the explicitness; from a technical
viewpoint there does not seem to be any other distinction.

Feb 15, 2010, 4:22:58 AM2/15/10
to golang-nuts
On Feb 13, 9:54 pm, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> I'm strongly of the opinion that languages should be designed to be as safe
> as possible, in the sense that they should make it damn near impossible to
> make mistakes (at least common ones).

What do you mean by "common ones"? Does this include proving that
every access to a list has a valid index? For example:

vector<char> v;
...
for(int i=0; i<100; i++)
print(v[i]);

> A lot of novice programmers or programmers that think they are rock-stars
> will complain in such an environment, because they will feel too
> constrained, will get too many compile errors, will think they can't be
> creative or elegant, etc.  I think that is all non-sense that comes from
> inexperience, personally.

I agree.

Some misconceptions about "highly advanced" type systems seem to be
rooted in not realizing that the types/classes/constraints/etc can be
used to capture the very nature of the program being built. In other
words, to capture the truth about what the program is. That said, it
should also be mentioned that the "type systems" found in many
mainstream languages (e.g. C) do *not* reflect the truth at all. For
example, subtracting two unsigned numbers (a-b) in the C language
yields an unsigned number - which is mathematically totally wrong of
course. C's type system is in effect *lying*. The result then is
that the whole application contains so MANY lies that any programmer's
attempt to use the type system to express truths about the problem
domain is completely ridiculous. And Go is not an ideal language in
this respect either.

chris dollin

Feb 15, 2010, 4:28:35 AM2/15/10
to ⚛, golang-nuts
On 15 February 2010 09:22, ⚛ <0xe2.0x...@gmail.com> wrote:
 
That said, it
should also be mentioned that the "type systems" found in many
mainstream languages (e.g. C) do *not* reflect the truth at all. For
example, subtracting two unsigned numbers (a-b) in the C language
yields an unsigned number - which is mathematically totally wrong of
course. C's type system is in effect *lying*.

It's mathematically correct -- it's subtraction mod 2-to-the-whatever. That
is /surprising/ when it's not what you're expecting, but it's not wrong.

It's a shame that Go's signed arithmetic doesn't have overflow detection,
but if it did, there would have to be some way for it to report exceptional
behaviour. (Orthogonality on its own isn't coherence; every language
construct is influenced by all the others.)

Chris

--
Chris "allusive" Dollin

Feb 15, 2010, 4:57:23 AM2/15/10
to golang-nuts
On Feb 14, 1:12 am, Eleanor McHugh <elea...@games-with-brains.com>
wrote:

> That's very true, hence why we have integration testing to ensure that when components interact they do what they're supposed to. However it really isn't our responsibility to second-guess how a potentially infinite number of other developers are going to use our code. As long as we publish our interfaces and stick to the contract they specify, those developers can then apply the same standard and everybody will be happy: we get to write more code solving our problems and that code will not be littered with unnecessary type-checking complicating future maintenance and/or enhancement.

What do you mean by "unnecessary type-checking"?

By definition, *if* the program is correct, then all type-checking
*is* unnecessary. Dynamically typed languages do not perform any
compile-time analyses, so in effect they are optimistically assuming
the program is correct. They solve the problem of answering the
question "Is this program correct?" by never asking it. Yes, it can be
circumvented by implementing unit tests that do some checks - the
problem is that the majority of those tests are only able to prove that
the program works for certain inputs, but not for all possible inputs.

Then there is also other use of type information: For example, type
information is required in places where the compiler needs to make
decisions about which particular method to invoke. For example, if I
want to compile "object.method()" and I want to resolve the method at
compile-time, I have to know the type of the object. Otherwise, the
compiler would have to treat the word "method" as plain text. If I
know which particular method I mean, why would I deny the
compiler that same knowledge? Seriously, why shouldn't the
compiler be allowed to know some of the things the programmer knows?
Because somebody was unintelligent when designing the language and
lazy when implementing the compiler?

chris dollin

Feb 15, 2010, 5:15:29 AM2/15/10
to ⚛, golang-nuts
On 15 February 2010 09:57, ⚛ <0xe2.0x...@gmail.com> wrote:
Dynamically typed languages do not perform any
compile-time analyses, so in effect they are optimistically assuming
the program is correct. They solve the problem of answering the
question "Is this program correct?" by never asking it.

They ask the question "is this program type-correct" all over the
place, dynamically. It all boils down eventually to "is this argument
of a type suitable to this function -- if not, BOOM."
 
Then there is also other use of type information: For example, type
information is required in places where the compiler needs to make
decisions about which particular method to invoke. For example, if I
want to compile "object.method()" and I want to resolve the method at
compile-time, I have to know the type of the object. Otherwise, the
compiler would have to treat the word "method" as plain text. If I
know which particular method I mean, why would I be denying the
compiler to also have that knowledge? Seriously, why shouldn't the
compiler be allowed to know some of the things the programmer knows?
Because somebody was unintelligent when designing the language and
lazy when implementing the compiler?

Because effort is limited and has to be spent effectively? Because "things
the programmer knows" include "things that are hard to compute with"?
Because compile-time type-checking is forced to reject perfectly sensible
programs because the halting problem is unsolvable?

Casting aspersions isn't making a case.

--
Chris "allusive" Dollin

Feb 15, 2010, 5:26:25 AM2/15/10
to golang-nuts
On Feb 15, 4:49 am, Eleanor McHugh <elea...@games-with-brains.com>
wrote:

> To paraphrase Knuth, the only real proof of a program's validity is whether or not it behaves correctly at runtime.

Yes. Everywhere I use the word "correctness" in my posts, I actually mean
"partial correctness". But the less partial it is, the better.

> Memory leaks and buffer overruns are historically the two most significant causes of runtime failure and in-process concurrency is rapidly achieving equal status now multicore architectures are in the mainstream. Having a language which sanitises these three problems is a huge productivity gain because it removes a whole class of wheel-reinvention problems from projects which amount to nothing more than basic (and very tedious) accounting. However once a language has garbage collection, bounds checking and a safe concurrency model there're diminishing returns in additional safety features that need to be weighed against the costs they impose on development.

1. Garbage collection does *not* entirely prevent memory leaks.

2. Putting Go's "bounds checking" and "safe concurrency" on the same
level is not correct. The bounds checking implementation ensures
absolute correctness (at run-time), in the sense that it is impossible
to create Go programs which would be accessing elements outside of an
array. On the other hand, the safety of a concurrent Go program is not
absolute, only optional. For example, Go is unable to ensure/express
that accesses to a shared composite structure should have a particular
ordering. In other words, it is possible to create valid and also
invalid concurrent programs in Go.

Feb 15, 2010, 5:33:30 AM2/15/10
to golang-nuts
On Feb 15, 10:28 am, chris dollin <ehog.he...@googlemail.com> wrote:

> On 15 February 2010 09:22, ⚛ <0xe2.0x9a.0...@gmail.com> wrote:
>
> > That said, it
> > should also be mentioned that the "type systems" found in many
> > mainstream languages (e.g. C) do *not* reflect the truth at all. For
> > example, substracting two unsigned numbers (a-b) in the C language
> > yields an unsigned number - which is mathematically totally wrong of
> > course. The C's type system is in effect *lying*.
>
> It's mathematically correct -- it's subtraction mod 2-to-the-whatever. That
> is /surprising/ when it's not what you're expecting, but it's not wrong.

Yes, I agree. But it is in contrast with how people are actually using
the operator "-". Most people are using it to do non-modulo
subtraction.

> It's a shame that Go's signed arithmetic doesn't have overflow detection,
> but if it did, there would have to be some way for it to report exceptional
> behaviour. (Orthogonality on its own isn't coherence; every language
> construct is influenced by all the others.)

It can be solved by having arbitrary precision numbers as the default
number type.

Feb 15, 2010, 5:37:22 AM2/15/10
to golang-nuts
On Feb 15, 11:15 am, chris dollin <ehog.he...@googlemail.com> wrote:

> On 15 February 2010 09:57, ⚛ <0xe2.0x9a.0...@gmail.com> wrote:
>
> > Then there is also other use of type information: For example, type
> > information is required in places where the compiler needs to make
> > decisions about which particular method to invoke. For example, if I
> > want to compile "object.method()" and I want to resolve the method at
> > compile-time, I have to know the type of the object. Otherwise, the
> > compiler would have to treat the word "method" as plain text. If I
> > know which particular method I mean, why would I be denying the
> > compiler to also have that knowledge? Seriously, why shouldn't the
> > compiler be allowed to know some of the things the programmer knows?
> > Because somebody was unintelligent when designing the language and
> > lazy when implementing the compiler?
>
> Because effort is limited and has to be spent effectively? Because "things
> the programmer knows" include "things that are hard to compute with"?
> Because compile-time type-checking is forced to reject perfectly sensible
> programs because the halting problem is unsolvable?

I don't understand why it should be forced to reject them. Why don't
you give an example?

chris dollin

Feb 15, 2010, 5:38:55 AM2/15/10
to ⚛, golang-nuts

I believe that would be contrary to Go's stated efficiency goals. (I could
be wrong.)

Further, it would not solve the problem for the fixed-precision numbers;
surely one wants to have overflow capture for those without having to
commit to arbitrary-precision number representations.

--
Chris "allusive" Dollin

chris dollin

Feb 15, 2010, 5:56:48 AM2/15/10
to ⚛, golang-nuts

It will be contrived but illustrative:

var x: 1..100 =  m(17)

def m(n:integer) = (n > 100 | n - 10 | m(m(n+11)))

How clever is the compiler? How expressive is the type system?
How long are you prepared to wait for the compilation to finish?

--
Chris "allusive" Dollin

Feb 15, 2010, 12:28:14 PM2/15/10
to golang-nuts
On Feb 15, 11:56 am, chris dollin <ehog.he...@googlemail.com> wrote:

> On 15 February 2010 10:37, ⚛ <0xe2.0x9a.0...@gmail.com> wrote:
>
> > On Feb 15, 11:15 am, chris dollin <ehog.he...@googlemail.com> wrote:
> > > On 15 February 2010 09:57, ⚛ <0xe2.0x9a.0...@gmail.com> wrote:
>
> > > Because effort is limited and has to be spent effectively? Because
> > "things
> > > the programmer knows" include "things that are hard to compute with"?
> > > Because compile-time type-checking is forced to reject perfectly sensible
> > > programs because the halting problem is unsolvable?
>
> > I don't understand why it should be forced to reject them. Why don't
> > you give an example?
>
> It will be contrived but illustrative:
>
> var x: 1..100 =  m(17)
>
> def m(n:integer) = (n > 100 | n - 10 | m(m(n+11)))

m 17
m m 28
m m m 39
m m m m 50
m m m m m 61
m m m m m m 72
m m m m m m m 83
m m m m m m m m 94
m m m m m m m m m 105
m m m m m m m m 95
m m m m m m m m m 106
m m m m m m m m 96
m m m m m m m m m 107
m m m m m m m m 97
m m m m m m m m m 108
m m m m m m m m 98
m m m m m m m m m 109
m m m m m m m m 99
m m m m m m m m m 110
m m m m m m m m 100
m m m m m m m m m 111
m m m m m m m m 101
m m m m m m m 91 (New rule: m m 105 --> 91)
m m m m m m m m 102
m m m m m m m 92
m m m m m m m m 103
m m m m m m m 93
m m m m m m m m 104
m m m m m m m 94 (New rule: m m 94 --> m 94)
m m m m m m 94
m m m m m 94
m m m m 94
m m m 94
m m 94
m 94
m m 105
91

That was fun!

> How clever is the compiler? How expressive is the type system?
> How long are you prepared to wait for the compilation to finish?

I agree those questions are important. I think the best way of making
a practical compiler is for it to defer to the human programmer in
the more complex cases. The programmer can then decide whether to
continue the proof, or leave it unsolved. If continuing, the human
should input the sequence of actions to be used by the compiler to
arrive at the desired conclusion.

What I find rather unfortunate is that all mainstream programming
languages seem to be built with the assumption that it is forbidden
for a compiler to ask the user any questions or to print the message
"I don't know what to do here". Consequently, the stuff performed by
such a compiler needs to be reduced to decidable and quickly
computable statements about the program being compiled.

Eleanor McHugh

Feb 15, 2010, 10:22:22 PM2/15/10
to golang-nuts
On 15 Feb 2010, at 08:58, ⚛ wrote:
> On Feb 13, 9:15 pm, Eleanor McHugh <elea...@games-with-brains.com>
> wrote:
>> Actually people don't like languages that make them think about how the machine works
>
> You are "mixing apples with oranges". What has type safety to do with
> how the machine works.

In recent years I've spent a lot of time mixing with web developers, many of whom consider a type system to be part of how the machine works and often have great difficulty grasping concepts that anyone used to working at a lower level takes for granted.

So whilst you might think I'm mixing apples with oranges, what I'm actually saying is that for those who don't come from a CompSci background a type system isn't necessarily an obvious artefact of abstract human thought but may well be perceived as a rather confusing mish-mash of rules related to how computers manipulate numbers.

>> or rely on a deep comfort with formal logic. What they really want are languages which help them elegantly express the problem domain they're working on.
>
> Which is exactly what a good type system is about.

I don't disagree with you.

>> If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue.
>
> OK. It is programmer's failure. But if the compiler is unable to tell
> the programmer that there is a problem although it could, then it is a
> failure of language design.

And at what point do you draw the line in adding rules to identify problems? With problems that occur once in every 100 LOC? Once in every 1 KLOC? Take the case which started this thread: how many times per KLOC will there be reassignment to a local variable? And how many of those reassignments will be a code smell as opposed to a valid and sensible use case?

Every decision in language design has an actual cost, both for the implementers and for those using the language. Adding rules out of a misguided need for completeness just ensures that cost will be higher for both parties whilst delivering little practical benefit.

>> For one thing, shouldn't this be caught during testing?
>
> The checks made during compilation aren't testing? I strongly think
> they are.

Checking syntax and type compatibility is merely a verification that code will compile, which says little about its validity. Therefore if you rely on that as a testing strategy you're building on insecure foundations. Testing in any meaningful sense means executing all code pathways, ensuring interfaces act both as advertised and as intended, and validating against whatever requirements the code needs to satisfy.

>> Go ships with a testing tool that's trivially simple to use and thanks to the current speed of compilation feels wonderfully interactive.
>>
>> Incidentally, Ruby and Smalltalk are both strongly typed languages that just happen to also use dynamic typing. It is a programmer decision to actively support that dynamic typing by implementing method_missing() or doesNotUnderstand respectively. There are cases when dynamic typing is very cool (as your LISPer friends will verify) and I suggest you at least spend a few months trying "duck-typing" before adopting a position either way.
>
> The difference between say Java and Go is that [in order make an
> object to respond to as particular set of messages] the Java language
> enables you to do it only after you explicitly allowed it. There is no
> other distinction. And since in Java the programmer is explicit about
> his/her intentions and the compiler is performing compile-time checks,
> it is impossible for Smalltalk's doesNotUnderstand to happen at run-
> time.

That would be mostly true if Java code existed in isolation. It runs on the JVM and as such may rely on code written in other languages (including assembler) which take a different approach to type management and therefore a Java program can indeed encounter runtime situations where it experiences an exception due to an unknown method. And because many Java developers arrogantly assume this isn't possible, in large part thanks to the same argument you've just made, those kind of errors can and do occur in production systems.

This is doubly indefensible as it also overlooks the ease with which objects can have their type identity altered in Java by casting them as Object or by loading bytecode via a custom class loader. It's five or six years since I've done this sort of thing myself but I know the JRuby team very well and have had some fascinating conversations about some of the tricks they've used as part of full Ruby integration into the Java ecosystem.

Eleanor McHugh

Feb 15, 2010, 11:41:07 PM2/15/10
to golang-nuts
On 15 Feb 2010, at 09:57, ⚛ wrote:
> On Feb 14, 1:12 am, Eleanor McHugh <elea...@games-with-brains.com>
> wrote:
>> That's very true, hence why we have integration testing to ensure that when components interact they do what they're supposed to. However it really isn't our responsibility to second-guess how a potentially infinite number of other developers are going to use our code. As long as we publish our interfaces and stick to the contract they specify, those developers can then apply the same standard and everybody will be happy: we get to write more code solving our problems and that code will not be littered with unnecessary type-checking complicating future maintenance and/or enhancement.
>
> What do you mean by "unnecessary type-checking"?
>
> By definition, *if* the program is correct, then all type-checking
> *is* unnecessary. Dynamically typed languages do not perform any
> compile-time analyses, so in effect they are optimistically assuming
> the program is correct. They solve the problem of answering the
> question "Is this program correct?" by never asking it.

As you clearly place a greater emphasis on the ability of type systems to prove or disprove the consistency of a program than is probably good for you, I suggest you study Gödel's incompleteness theorems for a while and then consider this: if such a type system were sufficiently powerful to express the natural numbers, it would be incapable of passing final judgement on whether or not any particular program expressed in terms of that type system were consistent with it.

Basically, the price of being able to do arithmetic is that you have to do actual empirical testing. This is a well accepted fact in most scientific disciplines and even CompSci has it in Turing's halting problem and Rice's Theorem.

Dynamic languages embrace the experimental principle and leave the question of whether or not a program is correct to whether or not it performs as desired. That is the only test which truly matters.

> Yes, it can be
> circumvented by implementing unit tests that do some checks - the
> problem is that majority of those tests are only able to prove that
> the program works for certain inputs, but not for all possible inputs.

The correct response to garbage is to return an error. Garbage can and will happen because machines generally operate in a dirty, noisy, EM-polluted environment in which random bit flips can and sometimes do happen and I/O ports can receive garbled transmissions. Static type checking gives you zero protection against these errors, so yet again you need to test properly.

And testing is about a lot more than just unit testing. Wikipedia has a portal which makes a reasonable starting place (http://en.wikipedia.org/wiki/Portal:Software_Testing) but the net is littered with useful resources. It's a pity that it's something that many programmers often do badly and as an afterthought rather than as a key part of their daily practice - it really is the best way to learn how code actually works.

> Then there is also other use of type information: For example, type
> information is required in places where the compiler needs to make
> decisions about which particular method to invoke. For example, if I
> want to compile "object.method()" and I want to resolve the method at
> compile-time, I have to know the type of the object. Otherwise, the
> compiler would have to treat the word "method" as plain text. If I
> know which particular method I mean, why would I be denying the
> compiler to also have that knowledge? Seriously, why shouldn't the
> compiler be allowed to know some of the things the programmer knows?
> Because somebody was unintelligent when designing the language and
> lazy when implementing the compiler?

There's a sufficiently broad literature available on dynamic language design that if you really want an answer to that question Google is your friend. But as a general rule, only naive implementations use a text string for dynamic method lookup. Outside of very constrained embedded systems or high-performance computing the difference in lookup technique is not generally sufficient to bother most people, hence why Rails is so popular - a complex and slow web framework hosted on a relatively slow dynamic language runtime is still fast enough for production use and is sufficiently better to code for than many of its competitors which lack both limitations to be more effective at delivering web applications on time and to budget.

Feb 16, 2010, 11:56:45 AM2/16/10
to golang-nuts
On Feb 16, 4:22 am, Eleanor McHugh <elea...@games-with-brains.com>
wrote:

> On 15 Feb 2010, at 08:58, ⚛ wrote:
>
> > On Feb 13, 9:15 pm, Eleanor McHugh <elea...@games-with-brains.com>
> > wrote:
> >> Actually people don't like languages that make them think about how the machine works
>
> > You are "mixing apples with oranges". What has type safety to do with
> > how the machine works.
>
> In recent years I've spent a lot of time mixing with web developers, many of whom consider a type system to be part of how the machine works and often have great difficulty grasping concepts that anyone used to working at a lower level takes for granted.
>
> So whilst you might think I'm mixing apples with oranges, what I'm actually saying is that for those who don't come from a CompSci background a type system isn't necessarily an obvious artefact of abstract human thought but may well be perceived as a rather confusing mish-mash of rules related to how computers manipulate numbers.

Let's consider an example of a construct which we can think of as a
bridge between "the world of numbers" and "the way web developers
think (whatever that means)": an enumeration. For example:

enumeration Fruit { Apple, Orange, Mango, etc }

No numbers there. Obviously, the compiler has to choose a (numeric)
representation for the individual fruits and has to decide how to
encode members of the type Fruit. Ideally, these matters should be
invisible to the programmer. A conversion between "int" and "a Fruit"
should not be possible because it does not make any sense.
Unfortunately, in certain languages (e.g: C), the implementation of
enumerations is not ideal because the type system is not very good.
For example, expressing the idea "a list of fruits" in C forces the
programmer to *somehow* convert a Fruit into an integer, to be able to
determine the size of the type Fruit, convert from integers to fruits
and vice versa. But what does this tell us? That if language X has a
non-ideal implementation of enumerations then we should conclude that
ideal implementations of enumerations are therefore impossible? I
don't think so.

> >> If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue.


>
> > OK. It is programmer's failure. But if the compiler is unable to tell
> > the programmer that there is a problem although it could, then it is a
> > failure of language design.
>
> And at what point do you draw the line in adding rules to identify problems?

Good question. The answer is: I don't know exactly. But what I do know
is that I want a language's capabilities to grow in concert with the
growth of my own (formal reasoning) abilities. Consequently, I am
*not* saying that children should start programming in e.g. Coq.

> >> For one thing, shouldn't this be caught during testing?
>
> > The checks made during compilation aren't testing? I strongly think
> > they are.
>
> Checking syntax and type compatibility is merely a verification that code will compile, which says little about its validity.

I disagree with that.

>Therefore if you rely on that as a testing strategy you're building on insecure foundations. Testing in any meaningful sense means executing all code pathways,

How are you proposing to determine that you actually covered all code paths?

> That would be mostly true if Java code existed in isolation. It runs on the JVM and as such may rely on code written in other languages (including assembler) which take a different approach to type management and therefore a Java program can indeed encounter runtime situations where it experiences an exception due to an unknown method.

1. OK. But that's "beyond Java".
2. Another question is whether assembler code can fail because it
called pure Java code.

>And because many Java developers arrogantly assume this isn't possible,

If you aren't trying to find method-objects via reflection, then of
course it is impossible. Once the system classloader loads the
bytecode *and* makes all checks which need to be made, then it is
totally impossible to encounter any kind of "message not understood".
The question is whether the system classloader performs all the
required checks or not. There is no fundamental reason which would
prevent an ideal classloader from successfully loading the code only after
it has made sure that the code invokes only existing methods.

Finding methods via reflection is a different matter. I wasn't talking
about that.

Feb 16, 2010, 12:06:10 PM2/16/10
to golang-nuts
On Feb 16, 5:41 am, Eleanor McHugh <elea...@games-with-brains.com>
wrote:

> On 15 Feb 2010, at 09:57, ⚛ wrote:
>
> > What do you mean by "unnecessary type-checking"?
>
> > By definition, *if* the program is correct, then all type-checking
> > *is* unnecessary. Dynamically typed languages do not perform any
> > compile-time analyses, so in effect they are optimistically assuming
> > the program is correct. They solve the problem of answering the
> > question "Is this program correct?" by never asking it.
>
> As you clearly place a greater emphasis on the ability of type systems to prove or disprove the consistency of a program than is probably good for you I suggest you study Gödel's incompleteness theorems for a while and then consider that if such a type system were to be sufficiently powerful to express the natural numbers it would be incapable of passing final judgement on whether or not any particular given program expressed in terms of that type system were consistent with it.
>
> Basically, the price of being able to do arithmetic is that you have to do actual empirical testing.

I am currently interpreting this problem in a different way. So,
currently, I do not agree with you.

>This is a well accepted fact in most scientific disciplines and even CompSci has it in Turing's halting problem and Rice's Theorem.
>

> The correct response to garbage is to return an error. Garbage can and will happen because machines generally operate in a dirty, noisy, EM-polluted environment in which random bit flips can and sometimes do happen and I/O ports can receive garbled transmissions. Static type checking gives you zero protection against these errors,

Why? Static type checking cannot be used to *ensure* that data
representations and computations contain a certain amount of
redundancy thus compensating for the noise? I am *not* saying that I
know how to implement such a type checking, but I find it very odd
that you *are* saying that such static type checks are impossible. Are
you sure?

Ryanne Dolan

Feb 16, 2010, 2:13:28 PM2/16/10
to Eleanor McHugh, golang-nuts
the correct response to garbage is to return an error. Garbage can and will happen because machines generally operate in a dirty, noisy, EM-polluted environment in which random bit flips can and sometimes do happen and I/O ports can receive garbled transmissions. Static type checking gives you zero protection against these errors, so yet again you need to test properly.

This is an utterly preposterous argument.  I could smash your laptop with a hammer and make your program fail, and testing offers zero protection against hammer-bugs.  Therefore, you should not waste time testing code in a hammer-polluted environment.  Non sequitur.

Testing doesn't prove anything.  Nor does type-checking.  Both are merely tools to detect code that does something other than what was intended.  To argue for testing and against type-checking is just ridiculous.

That said, I still hold that any minimally invasive compile-time sanity check that might detect a mistake is a good thing.  This is the basis of type checking _and_ unit tests; type checking can just detect some bugs earlier in the tool chain.  Like others have said here, it is debatable _where_ to draw the line, though; I think the "write only" concept discussed here could be justified, but not if it is potentially confusing.  The whole point of discussing improvements to Go (in my opinion) is to find what the community thinks is worth having.

Anyway, I think you stand alone, Eleanor, in thinking that testing is a cure-all and compilers should be as dumb as possible.  That is largely the difference between Developers (who use the tools they are given as best they can) and the Programming Language Theory crowd (who try to reason about the best way to describe algorithms, software, and hardware).  It is good that we have both here, since Go needs to appeal to both parties.  But your view that testing is the one-true-method for verification will meet a lot of resistance from PLT, who largely view testing as duct tape at the end of a weak tool chain.

Thanks.
Ryanne

--
www.ryannedolan.info

Russ Cox

unread,
Feb 16, 2010, 5:00:01 PM2/16/10
to Ryanne Dolan, Eleanor McHugh, golang-nuts
> Anyway, I think you stand alone, Eleanor, in thinking that testing is a
> cure-all and compilers should be as dumb as possible.

This seems like an unfair characterization.
I interpreted Eleanor's mail as saying, approximately,
that type systems can't do everything and testing must
therefore pick up the slack. That doesn't imply that
compilers should be as dumb as possible, just that
since you're not going to get to 100% no matter how
complex you make the compiler and type system,
there comes a point of diminishing returns where
it makes more sense to write a test.

If you disagree, please demonstrate a type system
that will eliminate the need to test the implementation
of strconv.Atof64 and strconv.Ftoa64.

Russ

Ryanne Dolan

unread,
Feb 16, 2010, 6:55:42 PM2/16/10
to r...@golang.org, Eleanor McHugh, golang-nuts
Perhaps I am reading too far into her comments, but:

shouldn't [type errors] be caught during testing?

To paraphrase Knuth, the only real proof of a program's validity is whether or not it behaves correctly at runtime.

whether or not it performs as desired. That is the only test which truly matters.

 [writing tests] really is the best way to learn how code actually works

you have to do actual empirical testing 

I am interpreting this all (plus her several allusions to dynamic languages) as :

that testing is a cure-all and compilers should be as dumb as possible

I'd love to hear other interpretations, but I am having a hard time reconciling these statements with my prejudices against dynamic languages.

Thanks.
Ryanne

--
www.ryannedolan.info

Ryanne Dolan

unread,
Feb 16, 2010, 7:02:05 PM2/16/10
to r...@golang.org, Eleanor McHugh, golang-nuts
Russ,

please demonstrate a type system that will eliminate the need to test the implementation of strconv.Atof64 and strconv.Ftoa64.
 
I didn't mean to argue against testing (did I say that?).  I'm just arguing for smarter compilers.  I don't see that the existence of a testing framework should be grounds for leaving intelligent sanity checks out of the compiler.  Again, it is debatable what is intelligent.  But Eleanor seems to be dismissing the idea based on the premise that dynamic languages are dumber but testing makes them just as good.

Thanks.
Ryanne

--
www.ryannedolan.info

Marcelo Cantos

unread,
Feb 16, 2010, 7:41:21 PM2/16/10
to golang-nuts
Depending on the language, a compiler may implement 'const int V = 10'
by simply substituting the value 10 in generated code wherever V is
invoked. In such cases, there isn't some cell containing the value 10,
to which all code refers when it needs the value of V, and it is
meaningless to talk about a 'variable' and its lifespan.

On Feb 13, 4:05 am, ⚛ <0xe2.0x9a.0...@gmail.com> wrote:
> On Feb 12, 5:48 am, Peter Williams <pwil3...@gmail.com> wrote:
>
> > On 12/02/10 11:42, Ryanne Dolan wrote:
>
> > > Ostsol,
> > > Indeed.  So what is the middle ground?  Is there a way to prevent my
> > > original pitfalls, without condemning the assignment, increment, etc
> > > operators?
>
> > I think that you should weaken your proposal so that it only applies to
> > the local variables of packages other than "main".  You'd have a
> > stronger argument then.  But still weak, as by definition a variable is
> > mutable (and constants are their immutable sibling).  Strictly speaking
> > "immutable variable" is an oxymoron.
>
> You are wrong here. The term "immutable variable" makes perfect sense.
> The only thing left is to explain the *context* in which it makes
> perfect sense:
>
> The lifetime of a variable V spans from timepoint T1 to T2. In between
> T1 and T2, the variable V can be declared immutable for a sub-interval
> of (T1,T2). This *is* how a constant in a computer comes to existence.
> There is *no* other way of how to create constants (=immutable things)
> in a computer! When you declare in a programming language that
> variable V is a constant, such as "const int V = 10", you are in fact
> declaring that the value of the memory cell containing V should be
> *immutable* for the *lifetime* of the program. Before the program
> started, the memory cell V could have had some other value, which
> means that the program loader had to overwrite it with the value "10"
> at program startup. After the program ends, the memory cell containing
> V is mutable again.
>
> ... so, in my opinion, the "natural state of things" is that they are
> mutable. Any kind of immutability is the result of applying certain
> rules to *originally mutable* objects for a limited amount of time.
>
> It is sad that there are no programming languages capable of modelling
> this the way it actually is.

Marcelo Cantos

unread,
Feb 16, 2010, 8:17:11 PM2/16/10
to golang-nuts
A subtle problem emerges in languages that let you mix imperative and
functional idioms (which is most languages these days). It generally
takes the form of a lambda function that makes use of an outer-scope
variable and expects the value of the variable to be frozen for its
purposes. For instance (in Python):

funcs = []
for i in range(10):
    funcs += [lambda: i*i]

for f in funcs:
    print f()
81
81
81
81
81
81
81
81
81
81

I've seen this behaviour in Python, JavaScript and C# programs (which
is interesting, since C# restricts the scope of the loop variable to
the inside of the block; the cell obviously outlives its scope). It is
particularly pernicious in Python, which lacks block-scope, so you
can't fix it by capturing the value in a block-scope variable for use
in the lambda. I.e., the following trick doesn't work in Python, while
it does in most other languages:

for i in range(10):
    local_i = i
    funcs += [lambda: local_i*local_i]

This hazard wouldn't exist if variables couldn't be reassigned.
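The same hazard shows up in Go whenever closures share one variable, independent of range semantics. A minimal sketch (the `makeFuncs` name is mine, for illustration):

```go
package main

import "fmt"

// makeFuncs builds ten closures that all capture the *same* variable i,
// so by the time any of them is called, every one sees the final value
// i == 10.
func makeFuncs() []func() int {
	var funcs []func() int
	i := 0 // one variable shared by every closure
	for ; i < 10; i++ {
		funcs = append(funcs, func() int { return i * i })
	}
	return funcs
}

func main() {
	for _, f := range makeFuncs() {
		fmt.Println(f()) // prints 100 ten times
	}
}
```

The fix in Go is exactly the block-scope trick that fails in Python: declare a fresh variable inside the loop body (`j := i`) and capture that instead.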

On Feb 12, 12:43 pm, Kevin Ballard <kball...@gmail.com> wrote:
> I still don't see why reassigning a variable is considered a problem.
> In your little snippet there, what exactly is the problem posed by
> modifying those variables?
>
> -Kevin Ballard


>
>
>
> On Thu, Feb 11, 2010 at 5:42 PM, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> > Ostsol,
> > Indeed.  So what is the middle ground?  Is there a way to prevent my
> > original pitfalls, without condemning the assignment, increment, etc
> > operators?

> > I think the typical solution is to allow variables to be read-only within
> > certain scopes.  In particular, I should never need to reassign to x or i in
>
> > for i,x := range a {
> >     i = 1
> >     x = 2
> > }
> > Like I said earlier, I don't advocate pure-functional programming, tho it
> > might look like it from this conversation.  I'm more looking for a smarter
> > compiler.
>
> > Thanks.
> > Ryanne
>
> > --
> >www.ryannedolan.info
>
> > On Thu, Feb 11, 2010 at 7:35 PM, Ostsol <ost...@gmail.com> wrote:
>
> >> That would make something like the following illegal:
>
> >> for i := 0; i < x; i++ {
> >>    // stuff
> >> }
>
> >> -Ostsol
>
> --
> Kevin Ballard
> http://kevin.sb.org
> kball...@gmail.com

Eleanor McHugh

unread,
Feb 16, 2010, 8:25:52 PM2/16/10
to golang-nuts
On 16 Feb 2010, at 23:55, Ryanne Dolan wrote:
> Perhaps I am reading too far into her comments, but:
>
> >> shouldn't [type errors] be caught during testing?
>
> > To paraphrase Knuth, the only real proof of a program's validity is whether or not it behaves correctly at runtime.
>
> > whether or not it performs as desired. That is the only test which truly matters.
>
> > [writing tests] really is the best way to learn how code actually works
>
> > you have to do actual empirical testing
>
> I am interpreting this all (plus her several allusions to dynamic languages) as :
>
> > that testing is a cure-all and compilers should be dumb as possible
>
> I'd love to hear other interpretations, but I am having a hard time reconciling these statements with my prejudices against dynamic languages.

Ryanne,

There's little need for interpretation. I've clearly stated a truth which is often overlooked by language theorists but which derives from fundamental theories of computation.

Rice's Theorem makes it impossible to prove conclusively whether any non-trivial program is correct by application of an algorithmic method (as in static type checking). So there is a fundamental need to test software if you need to know with a high degree of confidence that it is correct.

Does this fact invalidate the use of type systems? Of course not. Dynamic languages such as Smalltalk and Ruby are in fact strongly typed and use this to great advantage, however their design recognises that proving the correctness of a non-trivial program in those languages through algorithmic means is not necessarily possible. Therefore instead of allowing a compiler to impose arbitrary controls on how the language is used - which would prevent many valid programs from being compiled - they instead accept that runtime execution is the true test of conformity between code and programmer intent.

The other half of this discussion - as to whether or not a language should be 'safe' for some collection of properties which are considered unsafe - also falls foul of the same basic maths. However you select the rules to apply, unsafe code will still be possible and, worse than that, will be compatible with those rules. Belief in such tools leads to a fundamental misunderstanding of the actual risks to the stability of the programs which result from their use, and that can have consequences in direct opposition to the intent of applying the rules in the first place.

Speaking for myself I want a language to be expressive. I want it to enable me to think thoughts I might not otherwise have entertained and to use them to derive powerful, simple abstractions with which to build stable and maintainable software. Rules which aid this goal by giving me additional linguistic tools are good, rules which limit it by preventing me from saying certain things are bad. It's a simple philosophy and one I'd defend on exactly the same grounds as I would freedom of speech in other contexts.

The quest to enforce safety is also a fool's errand. Some metrics of code quality might be improved by placing restrictions on developers via compiler action but at the cost of removing the ability to solve certain classes of problem effectively. Also by preventing programmers from writing bad code you remove the opportunities to learn why those code patterns are bad which accompany that freedom. In any event the truly ignorant are capable of being both highly creative and highly motivated when trying to achieve a goal in the face of opposition and the only cure for their condition is education - whether formal or otherwise.

I guess to sum up my view of both discussions: reality is dirty, messy and no respecter of rules. By definition it is the sovereign arbiter of correctness and no matter how good an hypothesis, if it fails to comply with the experimental data it's not correct.

Programs when compiled are just such hypotheses. The compiler ensures they are consistent for some given set of rules and translates them into a form which can be enacted by some approximation of a Turing machine, but it's only when that form is finally executed inside the pulsing core of a processor with its limited memory and tenuous connection to the outside world that anyone can really see whether the hypothesis is potentially a well-formed theory or its propositions and axioms need to be reconsidered and possibly even abandoned.


Ellie

Eleanor McHugh
Games With Brains

http://feyeleanor.tel

Russ Cox

unread,
Feb 16, 2010, 8:42:03 PM2/16/10
to Marcelo Cantos, golang-nuts
If you want a language that disallows reassignment,
I suggest Erlang or Haskell. They both push that
idea very far and get interesting results.

Go is simply a different language.

Russ

Marcelo Cantos

unread,
Feb 16, 2010, 11:25:07 PM2/16/10
to golang-nuts
I quite agree. Preventing reassignment would have a major impact on the design of an imperative language. (I suspect that looping constructs would be the first to go.)

At the same time, it is important to realise that reassignment semantics are not a simple win-win when mixing it up with functional style, as Go permits. There are pitfalls that one must be aware of in order to avoid some very pernicious bugs.


Cheers,
Marcelo

P.S.: Apologies to Russ for the double send (forgot to reply-all, as I often do).

roger peppe

unread,
Feb 17, 2010, 7:13:02 AM2/17/10
to Marcelo Cantos, golang-nuts
On 17 February 2010 01:17, Marcelo Cantos <marcelo...@gmail.com> wrote:
> for i in range(10):
>    local_i = i
>    funcs += [lambda: local_i*local_i]
>
> This hazard wouldn't exist if variables couldn't be reassigned.

for the particular case of range, this problem could easily be solved
in Go by defining:

for i, v := range x {
	foo
}

to be equivalent to:

for i, v := range x {
	i := i
	v := v
	foo
}

there's no loss of generality, AFAICS, because assignments to
the index and/or value are lost each time around the loop anyway.

but if you do this, you've still got the same problem with
the classic loop:

for i := 0; i < N; i++ {
}

so defining the above case for range might seem like an unjustifiable
special case.
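For completeness, the proposed desugaring can already be applied by hand; a small sketch of mine (the `capture` name is made up) showing that per-iteration copies give closures the values most people expect:

```go
package main

import "fmt"

// capture returns one closure per element. The `i, v := i, v` line makes
// per-iteration copies -- exactly the desugaring proposed above -- so each
// closure sees its own index and value rather than the shared loop
// variables.
func capture(x []int) []func() int {
	var funcs []func() int
	for i, v := range x {
		i, v := i, v // per-iteration copies
		funcs = append(funcs, func() int { return i * v })
	}
	return funcs
}

func main() {
	for _, f := range capture([]int{10, 20, 30}) {
		fmt.Println(f()) // 0, then 20, then 60
	}
}
```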

unread,
Feb 18, 2010, 7:57:14 AM2/18/10
to golang-nuts
On Feb 16, 8:13 pm, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> Testing doesn't prove anything.

Testing does prove *something*.

The dispute here is that whether that "something" is enough or not
enough, for your own purposes and for your peace of mind.

> Nor does type-checking.

Type checking does prove *something*.

The dispute here is that whether that "something" is enough or not
enough, for your own purposes and for your peace of mind.

> Both are merely
> tools to detect code that does something other than what was intended.

I agree.

But on the other hand, it is a solid fact that "testing" differs from
"validation". My personal understanding is that "validation" is a
stronger term. For example, if I am talking about ensuring that an
array index is *always* within the bounds of an array, and I actually
succeed in showing that it holds, then I prefer to use a word like
"validation", "analysis" or "proof". On the other hand, I prefer to
use the word "testing" for showing that a program runs as expected for
a *particular* set of inputs.

Notice however that validation does *not* imply that all the checking
and computation is *static* and already happened at compile-time. It
is perfectly reasonable to differ some of the necessary checks until
*runtime* (if it is acceptable, and if it is possible to *prove* that
those checks actually ensure that the index is within bounds). What
does this yield us? The assurance that if the index happens to be
outside of bounds then the program will *always* be able to detect it
and somehow abort the computation.
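A small Go sketch of that idea (`at` is a made-up helper, not a standard function): the bounds proof is deferred to runtime, but the program is guaranteed to detect a bad index and report it rather than read garbage.

```go
package main

import (
	"errors"
	"fmt"
)

var errOutOfBounds = errors.New("index out of bounds")

// at defers the bounds check to runtime: a bad index is always
// detected and turned into an error instead of an unchecked read.
func at(a []int, i int) (int, error) {
	if i < 0 || i >= len(a) {
		return 0, errOutOfBounds
	}
	return a[i], nil
}

func main() {
	a := []int{1, 2, 3}
	if v, err := at(a, 1); err == nil {
		fmt.Println(v) // 2
	}
	if _, err := at(a, 7); err != nil {
		fmt.Println("caught:", err)
	}
}
```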

⚛

unread,
Feb 18, 2010, 8:53:41 AM2/18/10
to golang-nuts
On Feb 17, 2:25 am, Eleanor McHugh <elea...@games-with-brains.com>
wrote:

> On 16 Feb 2010, at 23:55, Ryanne Dolan wrote:
> Ryanne,
>
> There's little need for interpretation. I've clearly stated a truth which is often overlooked by language theorists but which derives from fundamental theories of computation.

I don't think that "language theorists are overlooking it".

> Rice's Theorem makes it impossible to prove conclusively whether any non-trivial program is correct by application of an algorithmic method (as in static type checking). So there is a fundamental need to test software if you need to know with a high degree of confidence that it is correct.

1. Isn't the test itself written in a programming language? So it is
also an algorithm then.

2. If the test is a real-world test, for example game testing by
letting a human play the game and checking whether the game "feels
right", then this kind of testing contains a non-mathematical
element. I am *not* talking about these kinds of tests here, since
they cannot be automated or formalized.

> Does this fact invalidate the use of type systems? Of course not. Dynamic languages such as Smalltalk and Ruby are in fact strongly typed and use this to great advantage, however their design recognises that proving the correctness of a non-trivial program in those languages through algorithmic means is not necessarily possible.

Judging from some historical articles I read and some videos I have
seen, I do *not* think the main reason why Smalltalk is untyped is
that its designers consciously recognized the thing you just
described.

> Therefore instead of allowing a compiler to impose arbitrary controls on how the language is used - which would prevent many valid programs from being compiled - they instead accept that runtime execution is the true test of conformity between code and programmer intent.

I agree that runtime execution can be viewed that way, but I am also
seriously suspecting - based on my understanding of what Smalltalk is
about - that you might be misunderstanding some citation or statement
about Smalltalk what you might have seen somewhere. But, I don't know,
maybe I am wrong.

> The other half of this discussion - as to whether or not a language should be 'safe' for some collection of properties which are considered unsafe - also falls foul of the same basic maths.

I agree that the "collection of properties" in itself is (or at least
can be viewed as) unsafe.

> However you select the rules to apply, unsafe code will still be possible and worse than that will be compatible with those rules.

I agree that it is possible.

On the other hand, you have to agree that this kind of flaw is also
possible in the realm of testing that you are advocating here to be
somehow superior to static analysis.

> Belief in such tools leads to a fundamental misunderstanding of the actual risks to the stability of the programs which result from their use and that can have consequences in direct opposition to the intent of applying the rules in the first place.

I never explicitly wrote I have such a belief.

> Speaking for myself I want a language to be expressive.

Well, but from my viewpoint, expressiveness includes the ability to
express [static statements about a program written in the language]
within the language itself.

> I want it to enable me to think thoughts I might not otherwise have entertained and to use them to derive powerful, simple abstractions with which to build stable and maintainable software. Rules which aid this goal by giving me additional linguistic tools are good, rules which limit it by preventing me from saying certain things are bad. It's a simple philosophy and one I'd defend on exactly the same grounds as I would freedom of speech in other contexts.
>
> The quest to enforce safety is also a fool's errand. Some metrics of code quality might be improved by placing restrictions on developers via compiler action but at the cost of removing the ability to solve certain classes of problem effectively.

Look, why don't you provide a concrete example of a problem where an
overly advanced compiler would prevent you from writing an effective
implementation.

>Also by preventing programmers from writing bad code you remove the opportunities to learn why those code patterns are bad which accompany that freedom.

I more disagree with this argument than I agree with it. So, in
summary, I don't agree with you here.

> I guess to sum up my view of both discussions: reality is dirty, messy and no respecter of rules. By definition it is the sovereign arbiter of correctness and no matter how good an hypothesis, if it fails to comply with the experimental data it's not correct.

Well, but if we take this kind of reasoning to the extreme, then there
would be no theories in e.g. physics because the total number of
possible experiments is always infinite. There would only be
hypotheses. Similarly, if a computer game passes the testing phase and
the tester says that it is perfect, even then it is just a hypothesis
that the game is perfect - for example because you did not test it on
your grandma, but if you did then it is possible that your grandma
would tell you that the game is total crap and it cannot be played -
so the statement that "the game is perfect" is a mere hypothesis.

> Programs when compiled are just such hypotheses.

How does testing remove the *possibility* that the program might be
wrong?

>The compiler ensures they are consistent for some given set of rules and translates them into a form which can be enacted by some approximation of a Turing machine, but it's only when that form is finally executed inside the pulsing core of a processor with its limited memory and tenuous connection to the outside world that anyone can really see whether the hypothesis is potentially a well-formed theory or its propositions and axioms need to be reconsidered and possibly even abandoned.

What are you saying? That it is impossible to prove static properties
of programs? I think you got it all totally mixed up. Here's why:

The program *actually* runs when the compiler is compiling it. No,
really, it *is* actually running within the compiler - the only
difference from a real run is that in a compiler it runs only
"partially". The compiler compiling the program and doing a bunch of
checks is running on *real* hardware - or did you forget about that?
In other words, those static type-checks you are criticizing all over
here are in fact partial *executions* of the program!!!
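Go itself offers a tiny illustration of compilation as partial execution (a sketch of mine, not from the thread): constant expressions are evaluated while compiling, so some faults that would otherwise occur at runtime become compile errors.

```go
package main

import "fmt"

// The compiler evaluates constant expressions during compilation --
// a limited partial execution of the program.
const ok = 10 / 2

// Uncommenting the next line makes the *compiler* report
// "division by zero": the faulty "run" happens at compile time.
// const bad = 1 / 0

func main() {
	fmt.Println(ok) // 5
}
```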

roger peppe

unread,
Feb 18, 2010, 9:35:32 AM2/18/10
to golang-nuts
On 18 February 2010 13:53, ⚛ <0xe2.0x...@gmail.com> wrote:
> Look, why don't you provide a concrete example of a problem where an
> overly advanced compiler would prevent you from writing an effective
> implementation.

i've had GHC go into an infinite loop on me when type checking before;
i failed to write an effective implementation until i stopped trying
to get the compiler to check so much.

it's pleasant to have a compiler that's guaranteed to halt.

jeremy...@gmail.com

unread,
Mar 11, 2010, 1:41:51 PM3/11/10
to golang-nuts

On Feb 11, 9:08 pm, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> > why reassigning a variable is considered a problem
>

> From a readability standpoint, something like:
>
> a = 4
> ...
> a = 5
>
> indicates to me that the variable 'a' has been poorly named and is being
> abused (used contrary to its original purpose).  Notice that there is a
> subtle difference here between _operating_ on a variable, and outright
> assigning it.  Logically, numberOfDogs++ means "now I have one more dog than
> I had before", whereas "numberOfDogs = 5" means "forget what I said before
> about the number of dogs".

I agree with this. This might sound crazy but what if assignments
using '=' were const by default and a keyword like temp is needed to
do the odd bits like iteration and such? Personally, I prefer
iteration over a list as opposed to iteration by incrementing some
weird variable. It seems messier with the temporary variable.

Part of the issue might be that people are too used to one paradigm of
coding but I personally think functional constructs will be more
commonplace in the future.

jeremy...@gmail.com

unread,
Mar 11, 2010, 1:45:38 PM3/11/10
to golang-nuts
Oh, and to be fair, I should mention that I'm a baby haskeller so
maybe I'm a little biased? I programmed in C for a while and found
that I am more productive in Haskell (after a few months of playing
with it).

Haskell is great but I find the IO cumbersome and more complicated
than it needs to be. There's a place to program functionally, and a
place to program iteratively/sequentially. Can't Go do both?

On Mar 11, 1:41 pm, "jeremy.c....@gmail.com" <jeremy.c....@gmail.com>
wrote:

> coding but I personally think functional constructs will be more
> commonplace in the future.

Reply all
Reply to author
Forward
0 new messages