var i int
if b {
    i = 3
} else {
    i = 5
}
But this isn't possible with immutable variables (and the lack of a
ternary operator means we can't do this as an expression, at least not
without a rather ugly bit of syntax and a helper function).
I don't see any benefit to trying to cram this "feature" into a
procedural language. It's just not a good fit.
-Kevin Ballard
--
Kevin Ballard
http://kevin.sb.org
kbal...@gmail.com
For example, if you want to change a large block of code to use a new value that's based on the old value, you can just reassign the variable.
It sounds like you're advocating for writing Go in a functional style.
Functional style has its benefits, but using it just for the sake of
using it is a bit silly.
> I don't think your code:
> var i int;
> if b {
> i = 3;
> } else {
> i = 5;
> }
> ...is a fair counter-example, because the variable is still only assigned-to
> once.
Untrue. The variable declaration also assigns it the zero value.
> I'd love to see a good argument for allowing multiple writes to the same
> _local_ variable. If you can come up with one, I'll shut up. I don't think
> I'm being shortsighted, and I know quite a bit about functional languages.
Like I already said, modifying variables inside of pre-existing blocks
of code. I should not be required to refactor the entire block of code
every time I want to change one of the values it uses.
Making variables immutable is a great way to end up with a large
number of oddly-named variables which are intermediate values. For
example, every time you need to have an error return value (e.g. foo,
err = blah()), you'd have to come up with a unique name for the error
variable. Reusing variables is also a great way to "return" values
from inside control structures, e.g.
var err os.Error
for i := 1; i < 10 && err == nil; i++ {
    var foo string
    foo, err = someMethodThatCouldPotentiallyError(i)
    // ... do something with foo ...
}
if err != nil {
    fmt.Println("Error:", err)
}
If variables were immutable we'd have to recreate all control
structures as functions that operate on closures, which would clutter
up programs needlessly.
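To make that concrete, here is a sketch (the loop helper is invented, not anyone's actual proposal) of what a for-loop might look like rebuilt as a function over a closure, with the iteration state threaded through parameters instead of being reassigned:

```go
package main

import "fmt"

// loop sketches a for-loop under write-once locals: the counter is
// never mutated; each iteration gets a fresh i via recursion.
func loop(i, n int, body func(int)) {
	if i < n {
		body(i)
		loop(i+1, n, body)
	}
}

func main() {
	loop(0, 3, func(i int) { fmt.Println(i) })
}
```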
Your arguments for why variables should not be reused don't seem
particularly relevant. Points 1 and 3 boil down to trying to protect
the programmer from writing bad code, and I don't think the language
should try to protect me; it should enable me to write what I want.
And you haven't given any justification for point 2.
-Kevin Ballard
How about a non-recursive sort implementation?
for i := 0; i < x; i++ {
    // stuff
}
-Ostsol
Reusing the same variable for something completely different is
stupid, but updating a variable doesn't make code confusing.
Also, you shouldn't complain about other people's examples when your only
justification is reassigning a variable before it's been used. Of
course that's stupid, but that's not how reassignment is normally used
anyway.
-Kevin Ballard
--
> why reassigning a variable is considered a problem
These will only be equivalent if you don't take into account
performance (or memory use, which comes down to the same thing), or if
you limit your arguments to built-in types such as ints that happen to
fit into registers. Just consider what happens when the data type of a
becomes big.
It's easy to reason about a language if you disregard the possibility
that it will ever be used to do something hard.
David
> Just consider what happens when the data type of a becomes big.

Instead of passing around large parameters on the stack, you should
pass pointers and explicitly copy only the memory you need to copy:

func foo(a *BigDataType) {
    b := a.field + 1
    // ...
}

This is all-around better than allowing me to modify the parameter.
Some programming languages are full of well-intentioned but arbitrary rules that only make using the language more complex than it needs to be, with little to no actual gain, while potentially ruling out perfectly valid use cases.
I think that you should weaken your proposal so that it only applies to
the local variables of packages other than "main". You'd have a
stronger argument then. But still weak, as by definition a variable is
mutable (and constants are their immutable siblings). Strictly
speaking, "immutable variable" is an oxymoron.
Peter
Using immutable variables works decently well in functional languages
like Erlang, which are generally built out of a lot of really small
blocks of code and rely on recursion for iteration and the like, but
procedural languages work rather differently.
Hell, the lack of mutable variables would make it rather difficult to
even assign a value to a variable based on a condition. Right now you
can do
var i int
if b {
    i = 3
} else {
    i = 5
}
But this isn't possible with immutable variables (and the lack of a
ternary operator means we can't do this as an expression, at least not
without a rather ugly bit of syntax and a helper function).
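To illustrate, the helper-function workaround would look something like this sketch (ifInt is an invented name, standing in for the missing ternary operator):

```go
package main

import "fmt"

// ifInt is a hypothetical helper that fakes a ternary expression,
// so the result variable can be assigned exactly once.
func ifInt(cond bool, a, b int) int {
	if cond {
		return a
	}
	return b
}

func main() {
	b := true
	i := ifInt(b, 3, 5) // i is assigned exactly once
	fmt.Println(i)
}
```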
s := "3456"
n := 0
for _, c := range s {
    n = n*10 + int(c-'0')
}
a := []int{23, 5, 7, 43, 6}
max := 0
for _, n := range a {
    if n > max {
        max = n
    }
}
if x, ok := node.(ParenExpr); ok {
    node = x.SubExpr
}
func bar(x interface{}) (a int) {
    switch x := x.(type) {
    case int:
        a = x
    case Foo:
        a = x.n
    }
    return
}
if i == len(buf) {
    b := make([]byte, len(buf)*2)
    copy(b, buf)
    buf = b
}
buf[i] = x
i++
var y Foo
switch x := x.(type) {
case T1:
    y = x.foo
case T2:
    y = x.fooey
case Foo:
    y = x
}
dosomething(y)
You are wrong here. The term "immutable variable" makes perfect sense.
The only thing left is to explain the *context* in which it makes
perfect sense:
The lifetime of a variable V spans from timepoint T1 to T2. Between
T1 and T2, the variable V can be declared immutable for a sub-interval
of (T1,T2). This *is* how a constant in a computer comes into existence.
There is *no* other way of how to create constants (=immutable things)
in a computer! When you declare in a programming language that
variable V is a constant, such as "const int V = 10", you are in fact
declaring that the value of the memory cell containing V should be
*immutable* for the *lifetime* of the program. Before the program
started, the memory cell V could have had some other value, which
means that the program loader had to overwrite it with the value "10"
at program startup. After the program ends, the memory cell containing
V is mutable again.
... so, in my opinion, the "natural state of things" is that they are
mutable. Any kind of immutability is the result of applying certain
rules to *originally mutable* objects for a limited amount of time.
It is sad that there are no programming languages capable of modelling
this the way it actually is.
I wish this showed up as the default text when someone starts to write
a message proposing a nanny addition to the language.
How about splitting your data into two different structs? Uppercase
one variable to export it outside the package (the one with fields
that it's okay to modify), and lowercase the other.
Why not just start it with a lowercase letter and then provide an
accessor? It's slightly more clumsy, but doesn't seem *that* bad.
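A sketch of that idiom (counter is an invented example; in a real program the type would live in its own package, so the field is hidden from callers):

```go
package main

import "fmt"

// counter sketches the unexported-field-plus-accessor idiom. From
// outside the declaring package, c.n would not compile, while c.N()
// works fine -- reads are allowed, direct writes are not.
type counter struct {
	n int
}

func (c *counter) N() int { return c.n }
func (c *counter) Inc()   { c.n++ }

func main() {
	var c counter
	c.Inc()
	c.Inc()
	fmt.Println(c.N())
}
```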
That said, I'd love to have something like the const keyword, only
done better. I know const is used in the golang talks as an example
of a good idea that ends up being a pain, but somehow it seems like
there's got to be a "good" way to do it... maybe by inferring
const/mutable for function parameters, when not declared explicitly?
David
I think this is partially a consequence of the state of (mainstream)
programming languages. People don't expect/like them to be safer,
because it would mean harder work in terms of thinking and in terms of
the number of characters you have to type while programming. As a
sidenote, I personally don't quite understand why people are using
languages such as Python, Smalltalk, Bash or similar languages to
write complex software. If I want an object to be able to respond to a
certain set of messages, I want to declare it explicitly and I want
the compiler to check that "the flow of types in the program" makes
sense.
> But personally I'd love to see
> variables become immutable within certain scopes. In particular, parameters
> should be immutable (within the function body) and iterator variables should
> be immutable (within the loop body).
I doubt Go will ever have something like this. The only way I see here
is to use an extensible programming language, or to design and
implement one's own programming language.
> This wouldn't require any new keywords
> or anything either. I don't think it would be complicated, but everyone
> else here seems to disagree.
The truth is that it is more complicated than the current state.
Namely, currently the rules for using variables in Go are more-or-less
uniform - in the sense that those rules do not change depending on the
context. What you are proposing introduces a feature which breaks this
uniformity, which in effect requires the programmer to learn how the
rules for using variables change depending on what the variable
represents (e.g: parameter, iteration variable, normal variable,
output parameter, etc). Personally, I have nothing against it, but I
am sure many people will (including the Go creators).
> To be specific: in for-loops, any variable declared in the for-statement
> itself should be immutable:
>
> for i := 0; i < N; i++ {
> i = anything is wrong
> i += anything is wrong
>
> }
>
> Sure everyone will give me examples of bad code and clamor about losing
> power;
The resulting programming language would still be a universal Turing
machine, so technically, there is no loss in power.
> however, this way is much clearer:
>
> inc := 1
> for i := 0; i < N; i = i + inc {
> i = anything is wrong
> inc = something is okay
>
> }
>
> It isn't surprising that many standard libraries have undefined behavior
> when you modify iterator variables.
You mean for example something like java.lang.Iterator in Java? In
such a case, I agree. But I think you are *not* making a clear enough
distinction between the "name of an object" and "the object itself".
Those are different concepts. Assignment of a new value to a variable
is more like renaming, i.e. the expression "i=X" means that object X
will from now on be (also) accessible via the name "i". More
precisely, the expression "i=X" modifies the object "tuple of local
variables" (I am here assuming "i" is a local variable). So,
technically, we can reduce this whole immutability problem to
techniques about how a language should be able to enable and enforce
immutability of objects, under the assumption that "tuple of local
vars" is also an object albeit not explicitly visible in most
languages.
Considering what I have written in the above paragraph, it seems very
strange that you are *only* talking about controlling mutability of
variables but you are *not* talking about mutability of any other
objects in the language.
> It isn't a good idea in general. Same
> thing with modifying parameters, but again no one seems to agree on the
> grounds that parameters are just variables and variables can't be immutable.
Well, I have my own programming language in which it is impossible to
assign to parameters. It works nicely, and there aren't many cases
where it poses a problem. So I agree that it can be done for
parameters.
Obviously, it means that parameters cannot be used as e.g. iteration
variables. As a result of this, the resulting code may consume
slightly more memory than an equivalent code in e.g. C++, because it
is necessary to introduce a new variable for doing the iteration while
the parameter sits idle and takes memory of its own. So, I can
understand that if someone wants to avoid allocating another few bytes
of memory, then immutable parameters are indeed an obstacle. Well, if
this should become a serious obstacle in solving some future problem,
I will no doubt allow parameters to be mutable. Since the default for
parameters should be "immutable", because it *is* the most common case
in real code, the best way of introducing mutable parameters seems to
be by means of a keyword to explicitly mark a particular parameter as
mutable.
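In Go terms, the cost being described is just one extra local (a sketch; sum and its parameter names are invented for illustration):

```go
package main

import "fmt"

// sum iterates on a local copy of its parameter, leaving start
// untouched -- the pattern an immutable-parameter rule would force.
func sum(start, n int) int {
	i := start // the extra variable; the parameter itself is never written
	total := 0
	for ; i < n; i++ {
		total += i
	}
	return total
}

func main() {
	fmt.Println(sum(0, 5))
}
```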
> Thanks.
> Ryanne
>
> -- www.ryannedolan.info
On Fri, Feb 12, 2010 at 11:11 AM, chris dollin
<ehog....@googlemail.com> wrote:
> Apropos of write-oncer and final and all that, is there any
> chance of an addition to Go that disallows writing into a
> variable or field of a struct outside of the declaring package?
>
> (Not for /all/ variables/fields, he added hastily. Just for ones
> declared, oh I don't know, final or readonly or protected or
> pawsoff or something.)

Why not just start it with a lowercase letter and then provide an
accessor? It's slightly more clumsy, but doesn't seem *that* bad.
So to make a field or variable protected (i.e. not writable outside this package)
I have to (a) not publish it directly, (b) write a method encapsulating the
/harmless/ operation, and (c) make the users of the code use a
different syntax to do the variable access?
That doesn't seem to me to be "fun", nor to make intent visible.
--
Chris "allusive" Dollin
Addendum ... if that's to be the idiom, I'd like to be able to demand that the compiler inline calls to such accessors.
On Sat, Feb 13, 2010 at 4:32 AM, chris dollin <ehog....@googlemail.com> wrote:
> So to make field or variable protected (ie not writable outside this package)
> I have to (a) not publish it directly (b) write a method encapsulating the
> /harmless/ operation, and (c) make the users of the code have to use a
> different syntax to do the variable access?
> That doesn't seem to me to be "fun", nor to make intent visible.
I don't see what your issue is here. This is common practice for access control.
And what different syntax? You just have to add the () to call the function,
which makes it clear that it's not a variable which you can assign to.
I also don't see how this makes the intent any less visible.
I'd say it's more visible than having a keyword, because it stays persistent in the usage and not just in the declaration.
You can even use the Get naming convention if it makes you feel better.
What do you need for it to be fun? I could try to make an editor that flashes fun colours whenever you use an accessor if you like...
On Sat, Feb 13, 2010 at 11:02 AM, chris dollin <ehog....@googlemail.com> wrote:
> Addendum ... if that's to be the idiom, I'd like to be able to demand the compiler inline calls to such accessors.

That would just be a smart compiler. It's unfortunate you aren't able to be demanding, but at least in this case it seems that it's not exactly something you need to demand. It seems to me like a standard optimization that would be there in a mature compiler, and may be there already. Though I'm not an expert, so if there's some reason a simple accessor wouldn't be inlined I wouldn't know it, but I certainly can't imagine one.
Actually people don't like languages that make them think about how the machine works or rely on a deep comfort with formal logic. What they really want are languages which help them elegantly express the problem domain they're working on.
> As a
> sidenote, I personally don't quite understand why are people using
> languages such as Python, Smalltalk, Bash or similar languages to
> write complex software. If I want an object to be able to respond to a
> certain set of messages, I want to declare it explicitly and I want
> the compiler to check that "the flow of types in the program" makes
> sense.
If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue. For one thing, shouldn't this be caught during testing? Go ships with a testing tool that's trivially simple to use and thanks to the current speed of compilation feels wonderfully interactive.
Incidentally, Ruby and Smalltalk are both strongly typed languages that just happen to also use dynamic typing. It is a programmer decision to actively support that dynamic typing by implementing method_missing() or doesNotUnderstand respectively. There are cases when dynamic typing is very cool (as your LISPer friends will verify) and I suggest you at least spend a few months trying "duck-typing" before adopting a position either way.
Some people it suits, others have panic attacks :)
Ellie
Eleanor McHugh
Games With Brains
http://code.games-with-brains.net
http://slides.games-with-brains.net
----
raise ArgumentError unless @reality.responds_to? :reason
> I suggest you at least spend a few months trying "duck-typing"
Entities in Go are statically typed only. I don't see any duck typing
in Go.
Well it's my view and I'm a systems programmer. Indeed for eight years of my career I developed embedded systems including aircraft autopilots in assembler and real-time automation systems in C for broadcast video. I've also worked on and maintained million-line codebases in both languages and written more device drivers than I care to remember. Therefore if I take a contrary view to received wisdom it's because I've tackled some very demanding projects and spent a lot of time reflecting on the experience - both successes and failures.
People who believe that language restrictions lead to better code forget that on any complex project the majority of developers will not be experts in the language chosen and that thanks to commercial pressures they will be motivated to use any feature they run across that seems to solve a problem regardless of whether it's appropriate or maintainable. Generally there is also a disconnect on such projects between specification and implementation which results in frequent misunderstandings.
Just look at C++ - originally a perfectly reasonable OO extension to C that is now sufficiently complex to require several thousand pages of documentation if you wish to understand a fair subset of its nuances. Or ANS Forth, whose actual standard no implementer ever buys due to its cost and niche market, meaning we live with the "draft" ANS-94 as the de facto standard.
Forth is in fact a good example because it's a very flexible and dynamic language (almost an RPN Lisp I guess) and has a committed following precisely because of the power it offers and the enjoyment of working with it.
> > If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue
>
> I'm strongly of the opinion that languages should be designed to be as safe as possible, in the sense that they should make it damn near impossible to make mistakes (at least common ones).
If you write tests to exercise your code properly, design your system well (high cohesion, low coupling, expressive code which captures the requirements elegantly) and let every little black box be responsible for its own lifecycle then you generally develop robust code.
> > A lot of novice programmers or programmers that think they are rock-stars will complain in such an environment,
> > because they will feel too constrained,
Maybe, but a lot of programmers who really know what they're doing will find that the pain of developing in such a language outweighs its productivity and will consequently move on to pastures green. I won't touch Java or C++ anymore just because of the volume of ugly, ill-conceived, perfectly compilable code I've met in those communities. And I'm not alone. I know a lot of people with a similar background to me who now write most of their code in Ruby, Python, etc., and a large portion of the infrastructure of the internet depends on these languages. I agree it takes discipline, but programmers should have discipline anyway.
What's so exciting about Go is that instead of being another head-in-the-sand systems language which sees this trend as an abomination, it has amongst its expressed goals finding a way to marry the sheer joy of working in those languages with the additional degree of type precision which we sometimes need in systems coding.
Every suggestion which would add additional complexity to the language has to be considered in that light. Will it make Go less fun to work in? Will a developer have to learn additional rules which given commercial pressures they may not have time to fully comprehend? Will the change to existing idioms break the basic premise of least surprise which drives development of dynamic languages?
> > will get too many compile errors, will think they can't be creative or elegant, etc. I think that is all non-sense
> > that comes from inexperience, personally.
I disagree. My experience is that compilers churn out poorly worded and excessive error messages, usually as a result of a typo on my behalf, and that most of the real problems in code are identified as a result of good testing practices and a reasonable understanding of code design in the first place. When I'm coding interactively in C I rely primarily on printf() and a good debugger. In Go I find gotest saves me from doing that and makes me feel just as productive as when I'm using Ruby.
> >>There are cases when dynamic typing is very cool
>
> > I totally dismiss any such language as unacceptable for real code. Or at least systems programming. Sometimes you can't run
> > unit tests or just assume that "a + b" is integer addition and not string concatenation.
That's very true, hence why we have integration testing to ensure that when components interact they do what they're supposed to. However it really isn't our responsibility to second-guess how a potentially infinite number of other developers are going to use our code. As long as we publish our interfaces and stick to the contract they specify, those developers can then apply the same standard and everybody will be happy: we get to write more code solving our problems and that code will not be littered with unnecessary type-checking complicating future maintenance and/or enhancement.
Likewise when we use third-party code it's our responsibility to test it. Because a failure in someone else's library that we rely on is actually our failure to care sufficiently about our system. That might be acceptable for a hobby or academic project, but not for a commercial one.
> > I suggest you at least spend a few months trying "duck-typing"
>
> Go uses duck typing, but it is not dynamic. I am confused why you are making an argument for dynamic programming by citing the convenience of duck typing. Duck typing is great; dynamic programming is evil (at least in the domain of systems programming).
Well for a dynamic systems language that supports duck-typing you'd be looking at Objective-C, which like Ruby and Smalltalk allows an object to decide for itself whether or not a message is understood. Go certainly doesn't feel like a duck-typed language in that sense (call it the 'strong' duck-typing proposition).
If anyone can enlighten me on how to replicate this feature without requiring a lot of complex boilerplate I'd be most obliged as I can conceive of several potential use cases in the design of GoLightly. However it's not a feature I need desperately enough to want to see the language changed - exceptions on the other hand...
BTW: I have looked back at much of my own code and have discovered
that I rarely write to previously initialized variables. The major
exception is counters and other accumulators or trackers. I know that
some of these cases can even be eliminated via liberal use of
recursion... but I avoid recursion except in cases where an iterative
approach is more complex.
-Ostsol
Yeah, I don't think this is something the compiler should be worrying about.
If it's valid syntax the compiler should compile it.
But a gofmt/golint kind of tool to alert people to these kinds of things
might be useful; there's no reason to make it part of the compiler.
- jessta
--
=====================
http://jessta.id.au
On 12 Feb 2010, at 10:03, chris dollin wrote:
> Once upon a time, there was technology that would allow
> switches to return results, either with explicit syntax
> (valof - resultis, oh BCPL we remember you fondly) or
> just because statements were a kind of expression (oh,
> Algol68 we remember you less fondly, or Pop11 raptures!).
>
> I don't suppose the Go authors would consider doing a little
> rummaging in the Constructs attic? Sometimes what looks
> like old rubbish turns out to be rather valuable ...
Well you can always use Ruby if you want every statement to be an expression. The question is, does a systems language really benefit from the extra overhead and complexity of supporting that? Personally I don't think so.
On Feb 14, 10:19 am, befelemepeseveze <befelemepesev...@gmail.com>
wrote:
This sounds sort of like duck typing, but it is actually very
different from duck typing in a very important way. You specify the
type of arguments or variables, so the compiler forces them to have
all those methods, even if you don't use them. This means that you've
got much greater safety moving forward than duck typing systems like
C++ templates or python, since you know you can safely use any methods
in the interface you specified with out breaking the compile of any
code that would previously compile, even if you didn't previously use
them. It may sound like a trivial difference, but I think it makes a
huge difference.
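A sketch of the difference (Stringer and Point here are invented for illustration): the compiler checks the whole method set at the call boundary, before any code runs, rather than discovering a missing method at runtime:

```go
package main

import "fmt"

// Stringer is satisfied implicitly by any type with a String() method;
// no explicit declaration of conformance is needed.
type Stringer interface {
	String() string
}

type Point struct{ X, Y int }

func (p Point) String() string { return fmt.Sprintf("(%d,%d)", p.X, p.Y) }

// describe requires a Stringer, so the compiler verifies the method
// set up front -- unlike template- or runtime-based duck typing.
func describe(s Stringer) string {
	return "value: " + s.String()
}

func main() {
	fmt.Println(describe(Point{1, 2}))
}
```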
--
David Roundy
A basic premise of duck-typing is that the only way to truly determine whether the type of an entity is appropriate is empirically at runtime. For languages like Ruby, Python, Smalltalk and Objective-C this is a very natural fit with their message passing semantics so people often find it early on and get in an awful muddle to start with but experienced developers in those languages find it indispensable.
Viewed from this perspective interfaces have little to do with duck-typing and everything to do with type inference. As such they're a technology that make static typing much less cumbersome without the bluntness of C's void * type (interface{} is much more elegant and still makes me smile every time I use it).
Indeed now that I've given some thought to the combination of interface{} and speculative type assertions it's clear that together they could be used to write duck-typed programs in Go so I withdraw my previous contention that the language doesn't support the functionality. It's just not obvious until you actively look.
Now I have to find some time to hack with the concept and see where it leads :)
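As a sketch of that idea (Quacker and tryQuack are invented names), a speculative type assertion probes at runtime whether a value happens to implement an interface, much like respond_to? or doesNotUnderstand in the dynamic languages discussed above:

```go
package main

import "fmt"

type Quacker interface {
	Quack() string
}

type Duck struct{}

func (Duck) Quack() string { return "quack" }

// tryQuack asks at runtime whether v implements Quacker, falling back
// gracefully when it doesn't -- duck typing via interface{} plus a
// speculative type assertion.
func tryQuack(v interface{}) string {
	if q, ok := v.(Quacker); ok {
		return q.Quack()
	}
	return "does not quack"
}

func main() {
	fmt.Println(tryQuack(Duck{}))
	fmt.Println(tryQuack(42))
}
```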
To paraphrase Knuth, the only real proof of a program's validity is whether or not it behaves correctly at runtime.
Memory leaks and buffer overruns are historically the two most significant causes of runtime failure, and in-process concurrency is rapidly achieving equal status now that multicore architectures are in the mainstream. Having a language which sanitises these three problems is a huge productivity gain, because it removes a whole class of wheel-reinvention problems from projects which amount to nothing more than basic (and very tedious) accounting. However, once a language has garbage collection, bounds checking and a safe concurrency model, there are diminishing returns in additional safety features that need to be weighed against the costs they impose on development.
Compilation time is one such cost that has to be considered, as is runtime performance, although the largest cost is actually code maintenance which with any complex software system will be ongoing for many years. Go already boasts good compilation speed, runtime performance will improve as the compilers mature, and judging from the code I've studied so far I have a good feeling about longterm maintainability.
> > Just look at C++ - originally a perfectly reasonable OO extension to C that is now sufficiently complex to require several
> > thousand pages of documentation
>
> Totally agree. I'm definitely not saying that C++ should be a standard of excellence! I would never recommend using C++ for real code; it is not safe, restrictive, or simple. It is exactly the opposite! I think Go is much safer than C++ so far, and I'm only suggesting that we make it safer, not more complicated.
> As to your remarks about "enjoyable", Go already claims to be enjoyable, mostly based on the fact that there is a lot less typing, stuttering, specifying, etc. That's why everyone says Go is like Python and C at the same time. I for one don't find dynamic languages enjoyable at all. I like Lua's syntax a whole lot, but I've been bitten way too many times by dynamic languages to use them for real code. Just because a language is pretty doesn't mean it is good, and just because a language is safe doesn't mean it must be a chore and bother to write.
A language is only 'pretty' if the code it encourages one to write is itself beautiful, elegant, hopefully even sublime. To put that in less emotive language: I believe a language should result in consistent, concise, accessible, maintainable and performant code. Lisp and Ruby both rate very well in this regard, Python too despite its aesthetics being more those of an engineer than an artist.
Go also has that potential, especially if it acquires a runtime code generation facility of some kind [0].
> I happen to like Go a lot, even tho it is not dynamic, is fairly restrictive, safe, etc. Again, I'm only looking for ways to make it more safe, harder to abuse, easier to read... not more complicated and certainly not more dynamic.
Alas the cause of most coding abuses is a combination of unreasonable commercial pressures, lack of programmer motivation, arrogance and good old-fashioned human stupidity. Each of these is basically a social problem, so whilst a technology fix may act as a bandaid I'm pessimistic of it leading to any real gains.
However there are a set of social fixes which work very well:
get programmers communicating with each other;
get them communicating with the people they're building software for;
refuse to implement any feature which doesn't have a strong use case;
refuse to accept arbitrary deadlines;
ensure everyone understands how to test code well;
check that programmers actually write their tests and validate them against the spec;
and never release software until it passes all the tests;
develop iteratively.
I'd say it's not rocket science, but actually my point is that in terms of process engineering it is very much like rocket science.
Ellie
[0] If you mail me off-list about your bad dynamic language experiences, I'll be happy to share ideas.
On Feb 13, 9:15 pm, Eleanor McHugh <elea...@games-with-brains.com>
wrote:
> On 12 Feb 2010, at 20:18, ⚛ wrote:
>
> > I think this is partially a consequence of the state of (mainstream)
> > programming languages. People don't expect/like them to be safer,
> > because it would mean harder work in terms of thinking and in terms of
> > the number of characters you have to type while programming.
>
> Actually people don't like languages that make them think about how the machine works
You are "mixing apples with oranges". What does type safety have to do with
how the machine works?
> or rely on a deep comfort with formal logic. What they really want are languages which help them elegantly express the problem domain they're working on.
Which is exactly what a good type system is about.
> > As a
> > sidenote, I personally don't quite understand why are people using
> > languages such as Python, Smalltalk, Bash or similar languages to
> > write complex software. If I want an object to be able to respond to a
> > certain set of messages, I want to declare it explicitly and I want
> > the compiler to check that "the flow of types in the program" makes
> > sense.
>
> If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue.
OK, it is the programmer's failure. But if the compiler is unable to tell
the programmer that there is a problem, even though it could, then that is
a failure of language design.
> For one thing, shouldn't this be caught during testing?
The checks made during compilation aren't testing? I strongly think
they are.
> Go ships with a testing tool that's trivially simple to use and thanks to the current speed of compilation feels wonderfully interactive.
>
> Incidentally, Ruby and Smalltalk are both strongly typed languages that just happen to also use dynamic typing. It is a programmer decision to actively support that dynamic typing by implementing method_missing() or doesNotUnderstand respectively. There are cases when dynamic typing is very cool (as your LISPer friends will verify) and I suggest you at least spend a few months trying "duck-typing" before adopting a position either way.
The difference between, say, Java and Go is that [in order to make an
object respond to a particular set of messages] the Java language
lets you do it only after you have explicitly allowed it. There is no
other distinction. And since in Java the programmer is explicit about
his/her intentions and the compiler performs compile-time checks,
it is impossible for Smalltalk's doesNotUnderstand to happen at run-
time. I think the distinction is the explicitness; from a technical
viewpoint there does not seem to be any other distinction.
What do you mean by "common ones"? Does this include proving that
every access to a list has a valid index? For example:
vector<char> v;
...
for (int i = 0; i < 100; i++)
    print(v[i]);
> A lot of novice programmers or programmers that think they are rock-stars
> will complain in such an environment, because they will feel too
> constrained, will get too many compile errors, will think they can't be
> creative or elegant, etc. I think that is all non-sense that comes from
> inexperience, personally.
I agree.
Some misconceptions about "highly advanced" type systems seem to be
rooted in not realizing that the types/classes/constraints/etc can be
used to capture the very nature of the program being built. In other
words, to capture the truth about what the program is. That said, it
should also be mentioned that the "type systems" found in many
mainstream languages (e.g. C) do *not* reflect the truth at all. For
example, subtracting two unsigned numbers (a-b) in the C language
yields an unsigned number - which is mathematically totally wrong of
course. C's type system is in effect *lying*. The result is that
the whole application contains so MANY lies that any programmer's
attempt to use the type system to express truths about the problem
domain is completely ridiculous. And Go is not an ideal language in
this respect either.
What do you mean by "unnecessary type-checking"?
By definition, *if* the program is correct, then all type-checking
*is* unnecessary. Dynamically typed languages do not perform any
compile-time analyses, so in effect they are optimistically assuming
the program is correct. They solve the problem of answering the
question "Is this program correct?" by never asking it. Yes, it can be
circumvented by implementing unit tests that do some checks - the
problem is that the majority of those tests are only able to prove
that the program works for certain inputs, but not for all possible inputs.
Then there is also another use of type information: for example, type
information is required in places where the compiler needs to decide
which particular method to invoke. For example, if I want to compile
"object.method()" and I want to resolve the method at compile-time, I
have to know the type of the object. Otherwise, the compiler would
have to treat the word "method" as plain text. If I know which
particular method I mean, why would I deny the compiler that same
knowledge? Seriously, why shouldn't the compiler be allowed to know
some of the things the programmer knows? Because somebody was
unintelligent when designing the language and lazy when implementing
the compiler?
Yes. Everywhere I use the word "correctness" in my posts, I actually mean
"partial correctness". But the less partial it is, the better.
> Memory leaks and buffer overruns are historically the two most significant causes of runtime failure and in-process concurrency is rapidly achieving equal status now multicore architectures are in the mainstream. Having a language which sanitises these three problems is a huge productivity gain because it removes a whole class of wheel-reinvention problems from projects which amount to nothing more than basic (and very tedious) accounting. However once a language has garbage collection, bounds checking and a safe concurrency model there're diminishing returns in additional safety features that need to be weighed against the costs they impose on development.
1. Garbage collection does *not* entirely prevent memory leaks.
2. Putting Go's "bounds checking" and "safe concurrency" on the same
level is not correct. The bounds checking implementation ensures
absolute correctness (at run-time), in the sense that it is impossible
to create Go programs which would be accessing elements outside of an
array. On the other hand, the safety of a concurrent Go program is not
absolute, only optional. For example, Go is unable to ensure/express
that accesses to a shared composite structure should have a particular
ordering. In other words, it is possible to create valid and also
invalid concurrent programs in Go.
Yes, I agree. But that is in contrast with how people actually use
the "-" operator. Most people use it to do non-modulo subtraction.
> It's a shame that Go's signed arithmetic doesn't have overflow detection,
> but if it did, there would have to be some way for it to report exceptional
> behaviour. (Orthogonality on its own isn't coherence; every language
> construct is influenced by all the others.)
It can be solved by having arbitrary precision numbers as the default
number type.
I don't understand why it should be forced to reject them. Why don't
you give an example?
m 17
m m 28
m m m 39
m m m m 50
m m m m m 61
m m m m m m 72
m m m m m m m 83
m m m m m m m m 94
m m m m m m m m m 105
m m m m m m m m 95
m m m m m m m m m 106
m m m m m m m m 96
m m m m m m m m m 107
m m m m m m m m 97
m m m m m m m m m 108
m m m m m m m m 98
m m m m m m m m m 109
m m m m m m m m 99
m m m m m m m m m 110
m m m m m m m m 100
m m m m m m m m m 111
m m m m m m m m 101
m m m m m m m 91 (New rule: m m 105 --> 91)
m m m m m m m m 102
m m m m m m m 92
m m m m m m m m 103
m m m m m m m 93
m m m m m m m m 104
m m m m m m m 94 (New rule: m m 94 --> m 94)
m m m m m m 94
m m m m m 94
m m m m 94
m m m 94
m m 94
m 94
m m 105
91
That was fun!
> How clever is the compiler? How expressive is the type system?
> How long are you prepared to wait for the compilation to finish?
I agree those questions are important. I think the best way of making
a practical compiler is for it to defer to the human programmer in
the more complex cases. The programmer can then decide whether to
continue the proof or leave it unsolved. If continuing, the human
should supply the sequence of actions the compiler is to use to
arrive at the desired conclusion.
What I find rather unfortunate is that all mainstream programming
languages seem to be built with the assumption that it is forbidden
for a compiler to ask the user any questions or to print the message
"I don't know what to do here". Consequently, the stuff performed by
such a compiler needs to be reduced to decidable and quickly
computable statements about the program being compiled.
In recent years I've spent a lot of time mixing with web developers, many of whom consider a type system to be part of how the machine works and often have great difficulty grasping concepts that anyone used to working at a lower level takes for granted.
So whilst you might think I'm mixing apples with oranges, what I'm actually saying is that for those who don't come from a CompSci background a type system isn't necessarily an obvious artefact of abstract human thought but may well be perceived as a rather confusing mish-mash of rules related to how computers manipulate numbers.
>> or rely on a deep comfort with formal logic. What they really want are languages which help them elegantly express the problem domain they're working on.
>
> Which is exactly what a good type system is about.
I don't disagree with you.
>> If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue.
>
> OK. It is programmer's failure. But if the compiler is unable to tell
> the programmer that there is a problem although it could, then it is a
> failure of language design.
And at what point do you draw the line in adding rules to identify problems? With problems that occur once in every 100 LOC? Once in every 1 KLOC? Take the case which started this thread: how many times per KLOC will there be reassignment to a local variable? And how many of those reassignments will be a code smell as opposed to a valid and sensible use case?
Every decision in language design has an actual cost, both for the implementers and for those using the language. Adding rules out of a misguided need for completeness just ensures that cost will be higher for both parties whilst delivering little practical benefit.
>> For one thing, shouldn't this be caught during testing?
>
> The checks made during compilation aren't testing? I strongly think
> they are.
Checking syntax and type compatibility is merely a verification that code will compile, which says little about its validity. Therefore if you rely on that as a testing strategy you're building on insecure foundations. Testing in any meaningful sense means executing all code pathways, ensuring interfaces act both as advertised and as intended, and validating against whatever requirements the code needs to satisfy.
>> Go ships with a testing tool that's trivially simple to use and thanks to the current speed of compilation feels wonderfully interactive.
>>
>> Incidentally, Ruby and Smalltalk are both strongly typed languages that just happen to also use dynamic typing. It is a programmer decision to actively support that dynamic typing by implementing method_missing() or doesNotUnderstand respectively. There are cases when dynamic typing is very cool (as your LISPer friends will verify) and I suggest you at least spend a few months trying "duck-typing" before adopting a position either way.
>
> The difference between say Java and Go is that [in order make an
> object to respond to as particular set of messages] the Java language
> enables you to do it only after you explicitly allowed it. There is no
> other distinction. And since in Java the programmer is explicit about
> his/her intentions and the compiler is performing compile-time checks,
> it is impossible for Smalltalk's doesNotUnderstand to happen at run-
> time.
That would be mostly true if Java code existed in isolation. It runs on the JVM and as such may rely on code written in other languages (including assembler) which take a different approach to type management and therefore a Java program can indeed encounter runtime situations where it experiences an exception due to an unknown method. And because many Java developers arrogantly assume this isn't possible, in large part thanks to the same argument you've just made, those kind of errors can and do occur in production systems.
This is doubly indefensible as it also overlooks the ease with which objects can have their type identity altered in Java by casting them as Object or by loading bytecode via a custom class loader. It's five or six years since I've done this sort of thing myself but I know the JRuby team very well and have had some fascinating conversations about some of the tricks they've used as part of full Ruby integration into the Java ecosystem.
As you clearly place a greater emphasis on the ability of type systems to prove or disprove the consistency of a program than is probably good for you I suggest you study Gödel's incompleteness theorems for a while and then consider that if such a type system were to be sufficiently powerful to express the natural numbers it would be incapable of passing final judgement on whether or not any particular given program expressed in terms of that type system were consistent with it.
Basically, the price of being able to do arithmetic is that you have to do actual empirical testing. This is a well accepted fact in most scientific disciplines and even CompSci has it in Turing's halting problem and Rice's Theorem.
Dynamic languages embrace the experimental principle and leave the question of whether or not a program is correct to whether or not it performs as desired. That is the only test which truly matters.
> Yes, it can be
> circumvented by implementing unit tests that do some checks - the
> problem is that majority of those tests are only able to prove that
> the program works for certain inputs, but not for all possible inputs.
The correct response to garbage is to return an error. Garbage can and will happen because machines generally operate in a dirty, noisy, EM-polluted environment in which random bit flips can and sometimes do happen and I/O ports can receive garbled transmissions. Static type checking gives you zero protection against these errors, so yet again you need to test properly.
And testing is about a lot more than just unit testing. Wikipedia has a portal which makes a reasonable starting place (http://en.wikipedia.org/wiki/Portal:Software_Testing) but the net is littered with useful resources. It's a pity that it's something that many programmers often do badly and as an afterthought rather than as a key part of their daily practice - it really is the best way to learn how code actually works.
> Then there is also other use of type information: For example, type
> information is required in places where the compiler needs to make
> decisions about which particular method to invoke. For example, if I
> want to compile "object.method()" and I want to resolve the method at
> compile-time, I have to know the type of the object. Otherwise, the
> compiler would have to treat the word "method" as plain text. If I
> know which particular method I mean, why would I be denying the
> compiler to also have that knowledge? Seriously, why shouldn't the
> compiler be allowed to know some of the things the programmer knows?
> Because somebody was unintelligent when designing the language and
> lazy when implementing the compiler?
There's a sufficiently broad literature available on dynamic language design that if you really want an answer to that question Google is your friend. But as a general rule, only naive implementations use a text string for dynamic method lookup. Outside of very constrained embedded systems or high-performance computing the difference in lookup technique is not generally sufficient to bother most people, hence why Rails is so popular - a complex and slow web framework hosted on a relatively slow dynamic language runtime is still fast enough for production use and is sufficiently better to code for than many of its competitors which lack both limitations to be more effective at delivering web applications on time and to budget.
Let's consider an example of a construct which we can think of as a
bridge between "the world of numbers" and "the way how web developers
think (whatever that means)": an enumeration. For example:
enumeration Fruit { Apple, Orange, Mango, etc }
No numbers there. Obviously, the compiler has to choose a (numeric)
representation for the individual fruits and has to decide how to
encode members of the type Fruit. Ideally, these matters should be
invisible to the programmer. A conversion between "int" and "a Fruit"
should not be possible because it does not make any sense.
Unfortunately, in certain languages (e.g: C), the implementation of
enumerations is not ideal because the type system is not very good.
For example, expressing the idea "a list of fruits" in C forces the
programmer to *somehow* convert a Fruit into an integer, to be able to
determine the size of the type Fruit, convert from integers to fruits
and vice versa. But what does this tell us? That if language X has a
non-ideal implementation of enumerations then we should conclude that
ideal implementations of enumerations are therefore impossible? I
don't think so.
> >> If the flow of types in a program doesn't make sense, that's a programmer fail and not a language issue.
>
> > OK. It is programmer's failure. But if the compiler is unable to tell
> > the programmer that there is a problem although it could, then it is a
> > failure of language design.
>
> And at what point do you draw the line in adding rules to identify problems?
Good question. The answer is: I don't know exactly. But what I do know
is that I want a language's capabilities to grow in concert with the
growth of my own (formal reasoning) abilities. Consequently, I am
*not* saying that children should start programming in, e.g., Coq.
> >> For one thing, shouldn't this be caught during testing?
>
> > The checks made during compilation aren't testing? I strongly think
> > they are.
>
> Checking syntax and type compatibility is merely a verification that code will compile, which says little about its validity.
I disagree with that.
>Therefore if you rely on that as a testing strategy you're building on insecure foundations. Testing in any meaningful sense means executing all code pathways,
How are you proposing to determine that you have actually covered all
code paths?
> That would be mostly true if Java code existed in isolation. It runs on the JVM and as such may rely on code written in other languages (including assembler) which take a different approach to type management and therefore a Java program can indeed encounter runtime situations where it experiences an exception due to an unknown method.
1. OK. But that's "beyond Java".
2. Another question is whether assembler code can fail because it
called pure Java code.
>And because many Java developers arrogantly assume this isn't possible,
If you aren't trying to find method-objects via reflection, then of
course it is impossible. Once the system classloader loads the
bytecode *and* makes all the checks which need to be made, it is
totally impossible to encounter any kind of "message not understood".
The question is whether the system classloader performs all the
required checks or not. There is no fundamental reason which would
prevent an ideal classloader from loading the code only after
it has made sure that the code invokes only existing methods.
Finding methods via reflection is a different matter. I wasn't talking
about that.
I am currently interpreting this problem in a different way. So,
currently, I do not agree with you.
>This is a well accepted fact in most scientific disciplines and even CompSci has it in Turing's halting problem and Rice's Theorem.
>
> The correct response to garbage is to return an error. Garbage can and will happen because machines generally operate in a dirty, noisy, EM-polluted environment in which random bit flips can and sometimes do happen and I/O ports can receive garbled transmissions. Static type checking gives you zero protection against these errors,
Why? Static type checking cannot be used to *ensure* that data
representations and computations contain a certain amount of
redundancy thus compensating for the noise? I am *not* saying that I
know how to implement such a type checking, but I find it very odd
that you *are* saying that such static type checks are impossible. Are
you sure?
the correct response to garbage is to return an error. Garbage can and will happen because machines generally operate in a dirty, noisy, EM-polluted environment in which random bit flips can and sometimes do happen and I/O ports can receive garbled transmissions. Static type checking gives you zero protection against these errors, so yet again you need to test properly.
This seems like an unfair characterization.
I interpreted Eleanor's mail as saying, approximately,
that type systems can't do everything and testing must
therefore pick up the slack. That doesn't imply that
compilers should be as dumb as possible, just that
since you're not going to get to 100% no matter how
complex you make the compiler and type system,
there comes a point of diminishing returns where
it makes more sense to write a test.
If you disagree, please demonstrate a type system
that will eliminate the need to test the implementation
of strconv.Atof64 and strconv.Ftoa64.
Russ
shouldn't [type errors] be caught during testing?
To paraphrase Knuth, the only real proof of a program's validity is whether or not it behaves correctly at runtime.
whether or not it performs as desired. That is the only test which truly matters.
[writing tests] really is the best way to learn how code actually works
you have to do actual empirical testing
that testing is a cure-all and compilers should be as dumb as possible
please demonstrate a type system that will eliminate the need to test the implementation of strconv.Atof64 and strconv.Ftoa64.
On Feb 13, 4:05 am, ⚛ <0xe2.0x9a.0...@gmail.com> wrote:
> On Feb 12, 5:48 am, Peter Williams <pwil3...@gmail.com> wrote:
>
> > On 12/02/10 11:42, Ryanne Dolan wrote:
>
> > > Ostsol,
> > > Indeed. So what is the middle ground? Is there a way to prevent my
> > > original pitfalls, without condemning the assignment, increment, etc
> > > operators?
>
> > I think that you should weaken your proposal so that it only applies to
> > the local variables of packages other than "main". You'd have a
> > stronger argument then. But still weak, as by definition a variable is
> > mutable (and constants are their immutable sibling). Strictly speaking
> > "immutable variable" is an oxymoron.
>
> You are wrong here. The term "immutable variable" makes perfect sense.
> The only thing left is to explain the *context* in which it makes
> perfect sense:
>
> The lifetime of a variable V spans from timepoint T1 to T2. In between
> T1 and T2, the variable V can be declared immutable for a sub-interval
> of (T1,T2). This *is* how a constant in a computer comes to existence.
> There is *no* other way of how to create constants (=immutable things)
> in a computer! When you declare in a programming language that
> variable V is a constant, such as "const int V = 10", you are in fact
> declaring that the value of the memory cell containing V should be
> *immutable* for the *lifetime* of the program. Before the program
> started, the memory cell V could have had some other value, which
> means that the program loader had to overwrite it with the value "10"
> at program startup. After the program ends, the memory cell containing
> V is mutable again.
>
> ... so, in my opinion, the "natural state of things" is that they are
> mutable. Any kind of immutability is the result of applying certain
> rules to *originally mutable* objects for a limited amount of time.
>
> It is sad that there are no programming languages capable of modelling
> this the way it actually is.
funcs = []
for i in range(10):
    funcs += [lambda: i*i]
for f in funcs:
    print f()
81
81
81
81
81
81
81
81
81
81
I've seen this behaviour in Python, JavaScript and C# programs (which
is interesting, since C# restricts the scope of the loop variable to
the inside of the block; the cell obviously outlives its scope). It is
particularly pernicious in Python, which lacks block-scope, so you
can't fix it by capturing the value in a block-scope variable for use
in the lambda. I.e., the following trick doesn't work in Python, while
it does in most other languages:
for i in range(10):
    local_i = i
    funcs += [lambda: local_i*local_i]
This hazard wouldn't exist if variables couldn't be reassigned.
On Feb 12, 12:43 pm, Kevin Ballard <kball...@gmail.com> wrote:
> I still don't see why reassigning a variable is considered a problem.
> In your little snippet there, what exactly is the problem posed by
> modifying those variables?
>
> -Kevin Ballard
>
>
>
> On Thu, Feb 11, 2010 at 5:42 PM, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> > Ostsol,
> > Indeed. So what is the middle ground? Is there a way to prevent my
> > original pitfalls, without condemning the assignment, increment, etc
> > operators?
> > I think the typical solution is to allow variables to be read-only within
> > certain scopes. In particular, I should never need to reassign to x or i in
>
> > for i,x := range a {
> >     i = 1
> >     x = 2
> > }
> > Like I said earlier, I don't advocate pure-functional programming, tho it
> > might look like it from this conversation. I'm more looking for a smarter
> > compiler.
>
> > Thanks.
> > Ryanne
>
> > --
> >www.ryannedolan.info
>
> > On Thu, Feb 11, 2010 at 7:35 PM, Ostsol <ost...@gmail.com> wrote:
>
> >> That would make something like the following illegal:
>
> >> for i := 0; i < x; i++ {
> >>     // stuff
> >> }
>
> >> -Ostsol
>
> --
> Kevin Ballardhttp://kevin.sb.org
> kball...@gmail.com
Ryanne,
There's little need for interpretation. I've clearly stated a truth which is often overlooked by language theorists but which derives from fundamental theories of computation.
Rice's Theorem makes it impossible to prove conclusively whether any non-trivial program is correct by application of an algorithmic method (as in static type checking). So there is a fundamental need to test software if you need to know with a high degree of confidence that it is correct.
Does this fact invalidate the use of type systems? Of course not. Dynamic languages such as Smalltalk and Ruby are in fact strongly typed and use this to great advantage, however their design recognises that proving the correctness of a non-trivial program in those languages through algorithmic means is not necessarily possible. Therefore instead of allowing a compiler to impose arbitrary controls on how the language is used - which would prevent many valid programs from being compiled - they instead accept that runtime execution is the true test of conformity between code and programmer intent.
The other half of this discussion - as to whether or not a language should be 'safe' for some collection of properties which are considered unsafe - also falls foul of the same basic maths. However you select the rules to apply, unsafe code will still be possible and worse than that will be compatible with those rules. Belief in such tools leads to a fundamental misunderstanding of the actual risks to the stability of the programs which result from their use and that can have consequences in direct opposition to the intent of applying the rules in the first place.
Speaking for myself I want a language to be expressive. I want it to enable me to think thoughts I might not otherwise have entertained and to use them to derive powerful, simple abstractions with which to build stable and maintainable software. Rules which aid this goal by giving me additional linguistic tools are good, rules which limit it by preventing me from saying certain things are bad. It's a simple philosophy and one I'd defend on exactly the same grounds as I would freedom of speech in other contexts.
The quest to enforce safety is also a fool's errand. Some metrics of code quality might be improved by placing restrictions on developers via compiler action but at the cost of removing the ability to solve certain classes of problem effectively. Also by preventing programmers from writing bad code you remove the opportunities to learn why those code patterns are bad which accompany that freedom. In any event the truly ignorant are capable of being both highly creative and highly motivated when trying to achieve a goal in the face of opposition and the only cure for their condition is education - whether formal or otherwise.
I guess to sum up my view of both discussions: reality is dirty, messy and no respecter of rules. By definition it is the sovereign arbiter of correctness and no matter how good an hypothesis, if it fails to comply with the experimental data it's not correct.
Programs when compiled are just such hypotheses. The compiler ensures they are consistent for some given set of rules and translates them into a form which can be enacted by some approximation of a Turing machine, but it's only when that form is finally executed inside the pulsing core of a processor with its limited memory and tenuous connection to the outside world that anyone can really see whether the hypothesis is potentially a well-formed theory or its propositions and axioms need to be reconsidered and possibly even abandoned.
Ellie
Eleanor McHugh
Games With Brains
Go is simply a different language.
Russ
for the particular case of range, this problem could easily be solved
in Go by defining:
for i, v := range x {
    foo
}
to be equivalent to:
for i, v := range x {
    i := i
    v := v
    foo
}
there's no loss of generality, AFAICS, because assignments to
the index and/or value are lost each time around the loop anyway.
but if you do this, you've still got the same problem with
the classic loop:
for i := 0; i < N; i++ {
}
so defining the above case for range might seem like an unjustifiable
special case.
Testing does prove *something*.
The dispute here is whether that "something" is enough,
for your own purposes and for your peace of mind.
> Nor does type-checking.
Type checking does prove *something*.
The dispute here is whether that "something" is enough,
for your own purposes and for your peace of mind.
> Both are merely
> tools to detect code that does something other than what was intended.
I agree.
But on the other hand, it is a solid fact that "testing" differs from
"validation". My personal understanding is that "validation" is a
stronger term. For example, if I am talking about ensuring that an
array index is *always* within the bounds of an array, and I actually
succeed in showing that it holds, then I prefer to use a word like
"validation", "analysis" or "proof". On the other hand, I prefer to
use the word "testing" for showing that a program runs as expected for
a *particular* set of inputs.
Notice however that validation does *not* imply that all the checking
and computation is *static* and already happened at compile-time. It
is perfectly reasonable to defer some of the necessary checks until
*runtime* (if it is acceptable, and if it is possible to *prove* that
those checks actually ensure that the index is within bounds). What
does this yield us? The assurance that if the index happens to be
outside of bounds then the program will *always* be able to detect it
and somehow abort the computation.
I don't think that "language theorists are overlooking it".
> Rice's Theorem makes it impossible to prove conclusively whether any non-trivial program is correct by application of an algorithmic method (as in static type checking). So there is a fundamental need to test software if you need to know with a high degree of confidence that it is correct.
1. Isn't the test itself written in a programming language? So it is
also an algorithm then.
2. If the test is a real-world test, for example game testing by
letting a human play the game and checking whether the "game feels
right", then this kind of testing contains a non-mathematical
element. I am *not* talking about these kinds of tests here, since
they cannot be automated or formalized.
> Does this fact invalidate the use of type systems? Of course not. Dynamic languages such as Smalltalk and Ruby are in fact strongly typed and use this to great advantage, however their design recognises that proving the correctness of a non-trivial program in those languages through algorithmic means is not necessarily possible.
Judging from some historical articles I have read and some videos I
have seen, I do *not* think the main reason why Smalltalk is
dynamically typed is that its designers consciously recognized the
point you just described.
> Therefore instead of allowing a compiler to impose arbitrary controls on how the language is used - which would prevent many valid programs from being compiled - they instead accept that runtime execution is the true test of conformity between code and programmer intent.
I agree that runtime execution can be viewed that way, but I also
seriously suspect - based on my understanding of what Smalltalk is
about - that you might be misunderstanding some citation or statement
about Smalltalk that you have seen somewhere. But I don't know;
maybe I am wrong.
> The other half of this discussion - as to whether or not a language should be 'safe' for some collection of properties which are considered unsafe - also falls foul of the same basic maths.
I agree that the "collection of properties" in itself is (or at least
can be viewed as) unsafe.
> However you select the rules to apply, unsafe code will still be possible and worse than that will be compatible with those rules.
I agree that it is possible.
On the other hand, you have to agree that this kind of flaw is also
possible in the realm of testing, which you are advocating here as
somehow superior to static analysis.
> Belief in such tools leads to a fundamental misunderstanding of the actual risks to the stability of the programs which result from their use and that can have consequences in direct opposition to the intent of applying the rules in the first place.
I never explicitly wrote I have such a belief.
> Speaking for myself I want a language to be expressive.
Well, but from my viewpoint, expressiveness includes the ability to
express [static statements about a program written in the language]
within the language itself.
> I want it to enable me to think thoughts I might not otherwise of entertained and to use them to derive powerful, simple abstractions with which to build stable and maintainable software. Rules which aid this goal by giving me additional linguistic tools are good, rules which limit it by preventing me from saying certain things are bad. It's a simple philosophy and one I'd defend on exactly the same grounds as I would freedom of speech in other contexts.
>
> The quest to enforce safety is also a fool's errand. Some metrics of code quality might be improved by placing restrictions on developers via compiler action but at the cost of removing the ability to solve certain classes of problem effectively.
Look, why don't you provide a concrete example of a problem where an
overly advanced compiler would prevent you from writing an effective
implementation?
>Also by preventing programmers from writing bad code you remove the opportunities to learn why those code patterns are bad which accompany that freedom.
I disagree with this argument more than I agree with it. So, in
summary, I don't agree with you here.
> I guess to sum up my view of both discussions: reality is dirty, messy and no respecter of rules. By definition it is the sovereign arbiter of correctness and no matter how good an hypothesis, if it fails to comply with the experimental data it's not correct.
Well, but if we take this kind of reasoning to the extreme, then there
would be no theories in e.g. physics, because the total number of
possible experiments is always infinite. There would only be
hypotheses. Similarly, if a computer game passes the testing phase and
the tester says that it is perfect, even then it is just a hypothesis
that the game is perfect - for example because you did not test it on
your grandma, and if you did, it is possible that your grandma
would tell you that the game is total crap and cannot be played -
so the statement that "the game is perfect" is a mere hypothesis.
> Programs when compiled are just such hypotheses.
How does testing remove the *possibility* that the program might be
wrong?
>The compiler ensures they are consistent for some given set of rules and translates them into a form which can be enacted by some approximation of a Turing machine, but it's only when that form is finally executed inside the pulsing core of a processor with it's limited memory and tenuous connection to the outside world that anyone can really see whether the hypothesis is potentially a well-formed theory or its propositions and axioms need to be reconsidered and possibly even abandoned.
What are you saying? That it is impossible to prove static properties
of programs? I think you got it all totally mixed up. Here's why:
The program *actually* runs while the compiler is compiling it. No,
really, it *is* actually running within the compiler - the only
difference from a real run is that in the compiler it runs only
"partially". The compiler compiling the program and doing a bunch of
checks is running on *real* hardware - or did you forget about that?
In other words, the static type-checks you are criticizing all over
this thread are in fact partial *executions* of the program!
i've had GHC go into an infinite loop on me when type checking before;
i failed to write an effective implementation until i stopped trying
to get the compiler to check so much.
it's pleasant to have a compiler that's guaranteed to halt.
On Feb 11, 9:08 pm, Ryanne Dolan <ryannedo...@gmail.com> wrote:
> > why reassigning a variable is considered a problem
>
> From a readability standpoint, something like:
>
> a = 4
> ...
> a = 5
>
> indicates to me that the variable 'a' has been poorly named and is being
> abused (used contrary to its original purpose). Notice that there is a
> subtle difference here between _operating_ on a variable, and outright
> assigning it. Logically, numberOfDogs++ means "now I have one more dog than
> I had before", whereas "numberOfDogs = 5" means "forget what I said before
> about the number of dogs".
I agree with this. This might sound crazy, but what if assignments
using '=' were const by default and a keyword like temp were needed to
do the odd bits like iteration and such? Personally, I prefer
iteration over a list as opposed to iteration by incrementing some
weird variable; it seems messier with the temporary variable.
Part of the issue might be that people are too used to one paradigm of
coding, but I personally think functional constructs will be more
commonplace in the future.
Haskell is great but I find the IO cumbersome and more complicated
than it needs to be. There's a place to program functionally, and a
place to program iteratively/sequentially. Can't Go do both?
On Mar 11, 1:41 pm, "jeremy.c....@gmail.com" <jeremy.c....@gmail.com>
wrote:
> coding, but I personally think functional constructs will be more
> commonplace in the future.