Java as a first language "considered harmful"


Didier Verna

Jan 8, 2008, 8:09:45 AM

Guys,

this article might be of interest to you:

http://www.stsc.hill.af.mil/CrossTalk/2008/01/0801DewarSchonberg.html

--
Resistance is futile. You will be jazzimilated.

Didier Verna, did...@lrde.epita.fr, http://www.lrde.epita.fr/~didier

EPITA / LRDE, 14-16 rue Voltaire Tel.+33 (0)1 44 08 01 85
94276 Le Kremlin-Bicêtre, France Fax.+33 (0)1 53 14 59 22 did...@xemacs.org

Jon Harrop

Jan 8, 2008, 10:01:03 AM

Didier Verna wrote:
> Guys,
>
> this article might be of interest to you:
>
> http://www.stsc.hill.af.mil/CrossTalk/2008/01/0801DewarSchonberg.html

I'm more worried about the future of programming language implementations
myself. Who's going to create the industrial-strength FPL implementations
that we're currently missing?

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u

Robert Uhl

Jan 8, 2008, 2:31:23 PM

My first thought on reading the article is that Lisp has a lot more to
offer than functional programming. Lisp offers encapsulation with
classes; it offers namespaces (called packages, but they're the same
thing); it offers information hiding if you build it in; it offers
programming by extension. It also offers generic functions and
programmable syntax.

In other words, Lisp is a lot more valuable than the authors seem to
know.

--
Robert Uhl <http://public.xdi.org/=ruhl>
If Ace Books ever came out with an edition of The Bible, both books
would be edited down to 40,000 words, and they'd be renamed 'Master of
Chaos' and 'The Thing With Three Souls.' --Terry Carr

Edi Weitz

Jan 8, 2008, 3:03:19 PM

On Tue, 08 Jan 2008 12:31:23 -0700, Robert Uhl <eadm...@NOSPAMgmail.com> wrote:

> In other words, Lisp is a lot more valuable than the authors seem to
> know.

Not very surprising. Almost always, when you read something about Lisp
in the "mainstream media", the authors seem to have some preconceived
notions of what Lisp is or of what makes Lisp different, and almost
always they're totally wrong or way off the mark.

Sigh...

Edi.

--

European Common Lisp Meeting, Amsterdam, April 19/20, 2008

http://weitz.de/eclm2008/

Real email: (replace (subseq "spam...@agharta.de" 5) "edi")

Xah Lee

Jan 8, 2008, 3:36:01 PM

Robert Uhl <eadmun...@NOSPAMgmail.com> wrote:
「In other words, Lisp is a lot more valuable than the authors seem to
know.」

Edi Weitz <spamt...@agharta.de> wrote:

「Not very surprising. Almost always if you read something about Lisp
in the "mainstream media", the authors seem to have some preconceived
notions of what Lisp is or of what makes Lisp different, and almost
always they're totally wrong or way off the mark.

Sigh...」


Sigh?

Sigh ur Common Lisper's ass.

Do you think that any of the Perl and Java morons would sigh the same
sigh you sigh? I mean, suppose you were a Perl expert the way you are a
Lisp expert. Then, every time you read something in the media, you'd
sigh, because they don't understand sigils, or the difference between
arrays and lists, or the notion of contexts.

Try cross-posting your sigh to the Perl and Java newsgroups, so that
you can see that other human animals don't concur with you. In fact,
the majority won't concur. You are the odd ball by definition.

Your sigh is a masturbatory gesture. A subconscious gesture that
pleases your own kind.

Now, what if I tell you that you and the mainstream media are both
morons, because you have absolutely no understanding, not even
awareness, of the most powerful language on earth: Mathematica?

Sigh...

Xah
x...@xahlee.org
http://xahlee.org/

2008-01-08

George Neuner

Jan 8, 2008, 10:50:18 PM

On Tue, 8 Jan 2008 12:36:01 -0800 (PST), Xah Lee <x...@xahlee.org>
wrote:

>Now, what if i tell you, that you and the mainstream media are both
>morons, because you have absolutely no understanding, not even
>awareness, of the most powerful language on earth: Mathematica?

After we see your super-duper text editor implemented in Mathematica,
then you can talk to us about how powerful it is. Or better yet, why
don't you implement Jon Harrop's dream FPL compiler.

George
--
for email reply remove "/" from address

Ray Dillinger

Jan 9, 2008, 4:04:09 AM

Jon Harrop wrote:

> I'm more worried about the future of programming language implementations
> myself. Who's going to create the industrial-strength FPL implementations
> that we're currently missing?

What, in your opinion, are the characteristics of these implementations?

My list includes:

* Creation of native-format executables requiring no linkables
or libraries unlikely to be found on end-user systems
(user expectation, deliverability)

* Profiling and profile-based optimization tools.

* Code analysis tools integrated with source control
and identity-based package management and keyword
control. ie, when setting up source control, the lead
engineer should be able to specify that the "graphics"
group should not be able to modify code in the "math"
library and the newbie programmers should not be able
to check in code that uses anything off a list of
"dangerous or ill-conceived" constructs (meta-macros,
data not subject to GC, finalizers that can bring
things back to life, call/cc, goto, routines with
nonstandard argument evaluation disciplines, etc) until
they've had the relevant training, and certain constructs
are absolutely forbidden in the "air-traffic control"
project (which is subject to a legal requirement of
proofs of certain properties such as termination, hard
realtime constraints, and finite memory usage).
The source control system should include calls to
code analysis tools to help enforce these policies.

* Constructs (possibly including "dangerous or
ill-conceived" ones, see above) that can serve as the
direct translation target for all or nearly-all extant
computer languages. IOW, it should be simple to
translate a particular set of routines (perhaps
originally written in smalltalk) preserving their
call-by-need semantics, and still have them
interoperate nicely with the rest of the system,
even if mixing call-by-need with call-by-value
semantics is considered confusing or dangerous.

* If we have analysis tools that recognize dangerous or
ill-conceived constructs, and means of transforming or
eliminating these constructs are known, then we should
have code transformation tools that do exactly that,
automatically, insofar as possible. For example,
eliminating call/cc by using continuation-passing style.

* System settings that choose between whole-program
optimization, fully separate file-based static
compilation, and an interactive runtime.

Bear

Paul Rubin

Jan 9, 2008, 4:32:41 AM

Ray Dillinger <be...@sonic.net> writes:
> are absolutely forbidden in the "air-traffic control"
> project (which is subject to a legal requirement of
> proofs of certain properties such as termination, hard
> realtime constraints, and finite memory usage).

I've seen some papers mentioning termination proofs but the methods
described didn't seem of much use for air traffic control. For
example, David Turner's "Total Functional Programming" describes a
language where all programs terminate, but the running time can be
anything describable in Peano arithmetic, i.e. it could be a tower of
exponentials of arbitrary height. So proofs of termination without
practical complexity bounds seem not so useful.
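That gap between "provably terminates" and "terminates in practice" is easy to make concrete. A sketch (in Python, purely for illustration; the name is invented): the function below is structurally recursive, so a Turner-style totality checker would happily accept it, yet its values — and hence its running time — grow as a tower of exponentials.

```python
# Structurally recursive, hence trivially provably terminating -- but
# tower(5) is 2^65536 (nearly 20,000 digits), and tower(6) cannot be
# written out at all. A termination proof alone says nothing about
# feasibility.

def tower(n):
    """tower(n) = 2^2^...^2, a tower of n twos."""
    return 1 if n == 0 else 2 ** tower(n - 1)

print(tower(3))  # → 16
print(tower(4))  # → 65536
```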

Abdulaziz Ghuloum

Jan 9, 2008, 6:38:24 AM

Ray Dillinger wrote:
> Jon Harrop wrote:
>
>> I'm more worried about the future of programming language implementations
>> myself. Who's going to create the industrial-strength FPL implementations
>> that we're currently missing?
>
> What, in your opinion, are the characteristics of these implementations?
>
> My list includes:

You had a list, so let me ask this:
Who's going to create the tools that you said you are missing if it's
not you? You're not waiting for kids/programmers of the future to do it
for you, right?

Jon Harrop

Jan 9, 2008, 7:12:45 AM

Ray Dillinger wrote:
> Jon Harrop wrote:
>> I'm more worried about the future of programming language implementations
>> myself. Who's going to create the industrial-strength FPL implementations
>> that we're currently missing?
>
> What, in your opinion, are the characteristics of these implementations?
>
> My list includes:
>
> * Creation of native-format executables requiring no linkables
> or libraries unlikely to be found on end-user systems
> (user expectation, deliverability)

I would like a cross-platform open source common language run-time with good
support for functional programming (e.g. tail calls and fast allocation)
and commerce (so I can sell shared run-time DLLs to its users).

That is essentially the .NET CLR but open source and cross platform.

> * Profiling and profile-based optimization tools.

Yes.

> * Code analysis tools integrated with source control
> and identity-based package management and keyword
> control. ie, when setting up source control, the lead
> engineer should be able to specify that the "graphics"
> group should not be able to modify code in the "math"
> library and the newbie programmers should not be able
> to check in code that uses anything off a list of
> "dangerous or ill-conceived" constructs (meta-macros,
> data not subject to GC, finalizers that can bring
> things back to life, call/cc, goto, routines with
> nonstandard argument evaluation disciplines, etc) until
> they've had the relevant training, and certain constructs
> are absolutely forbidden in the "air-traffic control"
> project (which is subject to a legal requirement of
> proofs of certain properties such as termination, hard
> realtime constraints, and finite memory usage).
> The source control system should include calls to
> code analysis tools to help enforce these policies.

Interesting.

> * System settings that choose between whole-program
> optimization, fully separate file-based static
> compilation, and an interactive runtime.

Yes.

I would add:

. High-performance numerics, like OCaml and C++.
. High-performance symbolics, like MLton.
. Tested bindings to core libraries like OpenGL 2.
. Documentation.

Ray Dillinger

Jan 9, 2008, 2:44:42 PM

Abdulaziz Ghuloum wrote:

> Ray Dillinger wrote:

>> My list includes:
>
> You had a list, so let me ask this:
> Who's going to create the tools that you said you are missing if it's
> not you? You're not waiting for kids/programmers of the future to do it
> for you, right?

Heh. Got me in one.

Yes, I have a lisp implementation. No, it's not finished yet.
Yes, I'm planning and doing the stuff on my list with it and was
hoping to see what other people's lists include.

The ideas involved have been evolving for a long time, starting with
some theoretical frustration with the absence of callable objects
that are both first-class (like functions in Lispy languages) and
first-order (like macros in Lispy languages).

It took me several years and a bunch of iterations to get to this,
and I repeated a lot of work (fexprs, etc.) that smarter people had
already found dead ends in first. But eventually I came up with a
theory, both sound and semi-practical, about how to keep environments
and their arguments straight through multiple layers of calls
and code transformations.

And the immediate result was that all traces of "macros" vanished.

Once I'd finally figured out a way to support first-class "macros",
they weren't macros anymore: they were functions with argument
evaluation under control of the called function, like fexprs, but
with an environment (both static and dynamic) packaged with each
argument expression. The calling function's responsibility is
just to make sure that the called function has environment info
for each and every argument and for the call site. And the same
calling discipline could handle functions, whether call-by-need,
call-by-value, call-by-name (as in denotational semantics)
or whatever else.
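That calling discipline can be caricatured in a few lines. This is a sketch only — the names and representation here are invented for illustration, not taken from Bear's implementation: the caller hands over (expression, environment) pairs, and the callee decides what to evaluate, when, and in which environment.

```python
# Minimal "fexpr-style" evaluator: every operator receives its arguments
# unevaluated, each packaged with the environment of the call site.

def evaluate(expr, env):
    """Numbers are self-evaluating, strings are variable lookups,
    tuples are calls whose operator controls argument evaluation."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    f = evaluate(op, env)
    # The caller's only duty: hand over (expression, environment) pairs.
    return f([(a, env) for a in args])

def call_by_value(f):
    """Wrap a plain Python function as an ordinary call-by-value callee."""
    return lambda promises: f(*[evaluate(e, e_env) for e, e_env in promises])

def if_(promises):
    """'if' as an ordinary function: it evaluates only the branch it needs."""
    (test, t_env), (then, then_env), (alt, alt_env) = promises
    return evaluate(then, then_env) if evaluate(test, t_env) else evaluate(alt, alt_env)

env = {'+': call_by_value(lambda a, b: a + b), 'if': if_, 'x': 10}
print(evaluate(('if', ('+', 'x', -10), 1, 2), env))  # → 2
```

Under this discipline, `if` needs no special form status, and call-by-need or call-by-name callees are just other wrappers over the same (expression, environment) pairs.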

And that in turn gave me the idea that what I was working on here
was sort of fundamental in some way; a language wherein every calling
discipline you've ever heard of can coexist, plus you can effectively
"roll your own" calling discipline on an individual basis for things
you'd normally use macrology for in Lisp -- means it can be a very
easy, straightforward translation target for code written in every
computer language extant. A nice compiler intermediate language,
for those who don't want to use it directly.

My main interests are semantics and theory; and these cool semantics
have come with a substantial performance and memory cost, since I
wind up basically keeping the whole source in memory at runtime.
Also, I'm much more interested in "correct" numerics than "fast"
numerics: I like MATLAB's format for keeping exact representations
wherever possible, Scheme's distinction between exact and inexact
values, COBOL's requirement for exact base-ten mathematics, and
interval arithmetic for documenting possible error ranges. I'm not
so much into the idea of blazing speed as I'm into the idea of
mathematical results that you can prove are exact, or at least
correct within a known error limit.
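The exact/inexact distinction can be shown in a few lines (a Python sketch, with `Fraction` standing in for Scheme's exact rationals):

```python
# Machine-precision floats silently lose exactness; exact rationals don't.
from fractions import Fraction

inexact = 0.1 + 0.2                        # binary floating point
exact = Fraction(1, 10) + Fraction(2, 10)  # exact rational arithmetic

print(inexact == 0.3)            # → False (it is 0.30000000000000004)
print(exact == Fraction(3, 10))  # → True
```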

I think that finding ways to compile it so it runs as efficiently
as other dialects (at least when you don't do things impossible in
those dialects) will occupy years to come.

Bear

Xah Lee

Jan 9, 2008, 3:20:47 PM

Adding to Ray Dillinger's and Jon Harrop's posts about an ideal
functional lang system, my wishes are:

★ The language will be absolutely high-level, meaning in particular:

☆ The language will not have cons.

☆ The language's documentation will not need to mention any of the
following words: pointer, reference, memory allocation, stack, hash.

☆ The language's variables must not have types. Its values can have
types. And these types must be mathematical, not computer-engineering
inventions. Specifically: the language will not have any concepts or
terms of float, double, int, long, etc. (It can, however, have
concepts like exact number, or n-bit approximate number, aka machine-
precision number. See below on optimization declaration.)

☆ The numbers in the language can have the following types: integer,
rational, real, complex number (and possibly extensions of these,
such as algebraic/root numbers, etc.).

☆ The language's computational model should be simple, high-level,
mathematics based as possible, as opposed to being a model of some
abstract machine, implementation, or computational model (e.g. OOP,
Lisp Machine). In particular: the language will not have concepts of
“reference” or “object” (as in Java), or “object” (as in lisp). (Here,
few of today's language that qualify includes: Mathematica, PHP,
Javascript)

☆ The language will not use extraneous computer-science-geeking
moronic jargons. (Which in part exist as a side effect of the human
animal's power struggle in academia.) More specifically, the language
will not have anything named “lambda”. (Name it “Function” or simply
“Subroutine”.) It should not have anything named or discussed as
“Currying”. (When appropriate, explain with precision, such as:
applying a function to a function, or decomposing a multi-valued
function, etc. If necessary, invent new terms that are
communicative, e.g. Function Arity Reduction.) The lang should not
have a name or doc about tail-fucking-recursion. (If necessary, use
something like this: “the compiler does automatic optimization of
recursions when ...”) ... etc.

☆ The language will not have concept of binary bits, bit operator,
bytes, etc. (see the optimization section below)

★ The language can have many implementation/compiling-necessitated
concepts. (Which are exemplified in the above, occurring in other
langs, and which we forbid here.) However, these concepts must be
designed as special declaration constructs, whose purpose to the
programmer is _clearly_ understood as instructions/hints for the
compiler to create fast code, yet which have no purpose whatsoever
as far as his program is concerned. So, for example, take the
typical concept of “int” in most langs. In our ideal lang, it would
be like this:

myNumber= 123
DeclareForOptimization(myNumber, (number, (8-bits,no-decimals)))

For another example, suppose you need a hash table. But in our high
level concept there's no such stupid computer-engineering concept, nor
such a stupid term “hash”. However, because we are great
mathematicians, we know that compilers cannot in theory 100% determine
user's use of lists for the purpose of optimization. e.g. hash table.
So, the lang will have a usage-declaration for the purpose of
compiling/optimization, something like this:

myKeydList= ((k1 v1) (k2 v2) ...)
DeclareForOptimization(myKeydList, (list, ((fastLookup true),
(beginNumOfSlots 4000), (extendBy 500), ...)))

----------------------

The above criteria are basically all satisfied by Mathematica,
except the last item about complete, systematic, declarative
statements for the purpose of optimization/compiling.

In general, the above criteria are more and more satisfied by modern
high-level languages, in particular PHP and Javascript. (Except in
places where they can't help, such as the widely established int,
long, double fuck created (or propagated) by the unix/C fuckheads.)

Languages in the past 2 decades have observed the above criteria
more and more. Roughly in order of history and high-level-ness:
C, shell, Perl, Java, Python, Ruby, Javascript, PHP.

2 major exceptions to this progression are lisp and Mathematica. Lisp
is extremely high-level; however, due to its 40 years of age, today
it inevitably has much socially un-fixable baggage (such as the cons
business). Mathematica, which was born about the same time Perl was,
is simply well designed, by a certified genius, who happens to be a
rich kid to boot, and happens not to be a tech-geeking moron, and in
fact is a gifted entrepreneur who started an exceedingly successful
company, whose social contributions to the world have made a
revolutionary impact (besides his scientific contributions, e.g. in
physics).

Some lispers conceive that langs are just more and more converging to
lisp. No, they are not converging to no fucking lisp. They are simply
getting higher-level; lisp is the grand-daddy of high-level,
intelligent computing and design.

Note: I had some ambiguity about compiled langs like Haskell (or F#,
etc.). They are extremely high-level, but meanwhile have the
compilation/types thingy... I don't have much experience with
functional compiled languages (don't have much experience with
compiled langs, period), so I don't have much idea how they fit into
the high-level measure for real-world ease-of-use practicality.

----------------------

In the above writing, I was in a bit of a frenzy to fuck the tech-
geeking morons that are bountiful here. But in hindsight, I think my
criteria for an ideal functional lang, two of the rather unusual
ideas of high-level-ness, can be summarized thus:

★ Do not use any computer science or computer engineering terms whose
names do not convey their meaning. (e.g. float, int, lambda, tail-
recursion, currying)

★ Separate compiler/optimization-needed constructs or concepts
clearly out of the language. (No more int, float, double, hash,
vector, array types. Perhaps not even types for variables.)

----------------------

Many functional programmers are acquainted with the idea that
language influences thought. (Tech-geeking morons will invariably
invoke the “Sapir-Whorf” lingo fuck.) However, these average people
(albeit elite) do not have the independent thinking to realize that
terminology, or the naming of things, has a major impact on the thing
and on the social aspects of the thing. More concretely, how a thing/
concept is named has major impact on education, communication,
popularity/spread, and actual use of the thing/concept. As an actual
example, many high-level functional languages are invented by
academicians who, in general, do not have much social-science-related
knowledge or understanding of its practicality. And, being
high-powered computer engineers, they are basically conditioned in
their thought patterns around dense, stupid computer jargons. And
thus in their languages these jargons permeate throughout. The
practical effect is that these languages perpetually swirl around
their academic communities, with a viral transmission so powerful
that it sucks anyone who tries to touch/study/use the language into
mumbo jumbo. (Most exemplary: Scheme, Haskell.) This basically seals
the fate of these languages.

(Of course, the one notable exception is Mathematica. It is still to
be seen if Microsoft's F# team will avoid stupid jargons.)

----------------------

Further readings:

★ Math Terminology and Naming of Things
http://xahlee.org/cmaci/notation/math_namings.html

★ What are OOP's Jargons and Complexities
http://xahlee.org/Periodic_dosage_dir/t2/oop.html

★ Jargons of Info Tech industry
http://xahlee.org/UnixResource_dir/writ/jargons.html

★ Politics and the English Language
http://xahlee.org/p/george_orwell_english.html

★ Lisp's List Problem
http://xahlee.org/emacs/lisp_list_problem.html

Xah
x...@xahlee.org
http://xahlee.org/


David B. Benson

Jan 10, 2008, 7:40:24 PM

On Jan 9, 12:20 pm, Xah Lee <x...@xahlee.org> wrote:
(four letter word, repeatedly, I suppose to make up for his inability
to comprehend, let alone understand.)

Yawn.

Ray Dillinger

Jan 10, 2008, 9:01:09 PM

What we hate isn't relevant. If I thought that what I hated were
relevant, I'd have left out most loop syntax. If I thought that what
you (or any random person on the Internet) hated was relevant, I'd
quit using the internet and go pet the cat some more (the cat likes
that).

Good languages are designed by making it easy to do things right and
do them in ways that leave their meanings well-defined, not by avoiding
things that someone hates.

Bear

Ray Blaak

Jan 11, 2008, 2:21:37 AM

> The ideas involved have been evolving for a long time, starting with
> some theoretical frustration with the absence of callable objects
> that are both first-class (like functions in Lispy languages) and
> first-order (like macros in Lispy languages).
>
> It took me several years and a bunch of iterations to get to this,
> and I repeated a lot of work (fexprs, etc) that smarter people had
> already found dead-ends in first. But eventually I came up with
> theory both sound and semi-practical about how to keep environments
> and their arguments straight through multiple layers of calls
> and code transformations.
[...]

> And that in turn gave me the idea that what I was working on here
> was sort of fundamental in some way; a language wherein every calling
> discipline you've ever heard of can coexist, plus you can effectively
> "roll your own" calling discipline on an individual basis for things
> you'd normally use macrology for in Lisp -- means it can be a very
> easy, straightforward translation target for code written in every
> computer language extant. A nice compiler intermediate language,
> for those who don't want to use it directly.

Have you written this up somewhere, or do you plan to? This is the kind of
thing that would make a good paper.

[Followups not obeyed -- I read this in c.l.s and want to continue to do
so. What you're talking about transcends scheme, lisp, and impacts any
functional language. This discussion should be interesting to all the
newsgroups mentioned.]

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
rAYb...@STRIPCAPStelus.net The Rhythm has my soul.

maximinus

Jan 11, 2008, 4:49:40 AM

> ★ Do not use any computer science or computer engineering terms whose
> names do not convey their meaning. (e.g. float, int, lambda, tail-
> recursion, currying)

You know what? All of those terms convey meaning to me, and to most
people here, I expect. I could say 'a number composed of a mantissa
with integer range 0 - 2^x, an exponent of integer range 0 - 2^y, and
a flag S to indicate the sign of the value, the whole taking up
(x+y)+1 bits of computer memory', or I could just say 'float'. Should
I also drop math jargon whilst doing math? 'Cos the term 'C-infinity
functions' really tells me what's going on there...
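That long-hand description maps directly onto the IEEE 754 double layout (1 sign bit, 11 exponent bits, 52 mantissa bits); a quick sketch in Python, for illustration only:

```python
# Pull apart the sign / exponent / mantissa fields of an IEEE 754 double.
import struct

def decompose(x):
    """Return (sign, biased exponent, mantissa) of a Python float."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # 11-bit biased exponent
    mantissa = bits & ((1 << 52) - 1)     # 52-bit fraction field
    return sign, exponent, mantissa

print(decompose(-2.0))  # → (1, 1024, 0): -1.0 * 2^(1024-1023)
```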

I'm sorry. I'll stop feeding the trolls right now.


Ulf Wiger

Jan 11, 2008, 6:38:48 AM

Jon Harrop wrote:

> Didier Verna wrote:
>> Guys,
>>
>> this article might be of interest to you:
>>
>> http://www.stsc.hill.af.mil/CrossTalk/2008/01/0801DewarSchonberg.html
>
> I'm more worried about the future of programming language implementations
> myself. Who's going to create the industrial-strength FPL
> implementations that we're currently missing?

While this is in many respects a very valid point, I think
it's important to stress that there /are/ FPL implementations
that are industrial-strength within a certain niche. Even
so, these implementations face roughly the same resistance
in their niche, and many people decide to hold out until
their own favorite language reaches reasonable industrial-
strength.

We experienced this with Erlang, which obviously meets
the requirements for industrial strength within telecoms
(obviously, since competitive telecom products based on
Erlang have been shipped to customers for 10 years now,
and have also exhibited excellent life-cycle economy).

Java implementations really haven't been serious
contenders, from a technical point of view, until
recently - and then mainly when there is an expressed
requirement to support J2EE for customer-specific
adaptation. Even so, we've seen several projects choose
Java (and face the consequences) rather than look at
Erlang, even in the face of studies showing that doing
so would be at least 5x more expensive, and would
result in worse performance and less flexibility(*).

So there is a big measure of opportunism in the
argumentation against FPLs: if there happen to be
reasonable mainstream alternatives that can be described
as "industrial-strength", then this becomes the main
argument; if not, then the "mainstream" argument is used,
to roughly the same effect.

I think there seems to be a trend now, that FP concepts
make their way into "conventional" programming languages
(Python, C#, ...). This may serve to create a broader
interest in higher-order functions, strong type systems,
etc., which in turn, may feed an interest in actually
asking for industrial-strength tools that have strong FP
support.

But as long as there is no broad consensus that FP is
even useful, no one will care if your FP implementation
is industrial-strength or not.

BR,
Ulf W

(*) In this particular case, surprisingly many readers
seemed to interpret the report as "it's not impossible
to do this in Java", and that was seen as the good news.
That doing it in Erlang would be much better was not
interesting, since it wasn't seen as a viable alternative
for other reasons (proof or clarification not needed).

Joachim Durchholz

Jan 12, 2008, 2:52:34 PM

Ray Dillinger schrieb:
> My list includes:

>
> * Code analysis tools integrated with source control

What's source control?
Revision control? In that case: why integrated?

> and identity-based package management

Borderline case: bugfix updates.

> and keyword control.

What's that?

> ie, when setting up source control, the lead
> engineer should be able to specify that the "graphics"
> group should not be able to modify code in the "math"
> library and the newbie programmers should not be able
> to check in code that uses anything off a list of
> "dangerous or ill-conceived" constructs (meta-macros,
> data not subject to GC, finalizers that can bring
> things back to life, call/cc, goto, routines with
> nonstandard argument evaluation disciplines, etc) until
> they've had the relevant training,

This kind of stuff is better controlled via policy.
Software-enforced policy is usually far too rigid to be really useful.

> and certain constructs
> are absolutely forbidden in the "air-traffic control"
> project (which is subject to a legal requirement of
> proofs of certain properties such as termination, hard
> realtime constraints, and finite memory usage).

It's better to enable the compiler to check this kind of property directly.
Forbidding constructs is suppressing symptoms instead of applying a cure
IMNSHO.

> * Constructs (possibly including "dangerous or
> ill-concieved" ones, see above) that can serve as the
> direct translation target for all or nearly-all extant
> computer languages. IOW, it should be simple to
> translate a particular set of routines (perhaps
> originally written in smalltalk) preserving their
> call-by-need semantics, and still have them
> interoperate nicely with the rest of the system,
> even if mixing call-by-need with call-by-value
> semantics is considered confusing or dangerous.

What's dangerous is actually the combination of nonstrict evaluation
strategies and impurity.
I'd also expect that mixing evaluation strategies makes the actual flow
of control even more nonobvious than with lazy evaluation alone.

> * If we have analysis tools that recognize dangerous or
> ill-conceived constructs, and means of transforming or
> eliminating these constructs are known, then we should
> have code transformation tools that do exactly that,
> automatically, insofar as possible. For example,
> eliminating call/cc by using continuation-passing style.

If call/cc is used in a way that can be replaced with CPS, then you
don't need to transform it because it is already safe (assuming for the
moment that CPS is safe).
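Jo's point can be illustrated with the classic example, sketched here in Python (the Scheme call/cc version would look analogous): after the CPS transformation, the continuation is just an ordinary function argument, and "escaping" is simply not calling it.

```python
# Multiply a list, aborting early on a zero -- the textbook call/cc
# example, written in explicit continuation-passing style.

def product_cps(xs, k):
    """Multiply xs, delivering the result to continuation k;
    on a zero, bypass k entirely (the 'escape')."""
    if not xs:
        return k(1)
    if xs[0] == 0:
        return 0  # never calls k: no pending multiplications run
    return product_cps(xs[1:], lambda rest: k(xs[0] * rest))

print(product_cps([1, 2, 3, 4], lambda v: v))  # → 24
print(product_cps([1, 0, 3, 4], lambda v: v))  # → 0
```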

Regards,
Jo

Ray Blaak

Jan 12, 2008, 11:01:38 PM

Joachim Durchholz <j...@durchholz.org> writes:
> Ray Dillinger schrieb:

> > ie, when setting up source control, the lead engineer should be able to
> > specify that the "graphics" group should not be able to modify code in
> > the "math" library and the newbie programmers should not be able to
> > check in code that uses anything off a list of "dangerous or
> ill-conceived" constructs (meta-macros, data not subject to GC,
> > finalizers that can bring things back to life, call/cc, goto, routines
> > with nonstandard argument evaluation disciplines, etc) until they've had
> > the relevant training,
>
> This kind of stuff is better controlled via policy.
> Software-enforced policy is usually far too rigid to be really useful.

I agree, strongly so, in fact. Just defining what is dangerous is a moving
target, let alone implementing any tools to enforce it. "Common sense" with
decent code reviews is the cheapest way to achieve sanity at a reasonable
cost. How would a tool measure one's trained ability?

How would you write such a tool set?
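One crude starting point — a sketch only, with an invented policy list, in Python for concreteness — is a pre-commit hook that tokenizes each checked-in file and rejects it if it mentions a symbol on a forbidden list. Anything subtler than name-spotting is exactly where code review would have to take over.

```python
# Reject check-ins that mention forbidden constructs by name.
import re

FORBIDDEN = {'call/cc', 'goto', 'rplaca', 'rplacd'}  # hypothetical policy list

def violations(source):
    """Return the forbidden symbols appearing in a Lisp source string."""
    tokens = re.findall(r"[^\s()'\"]+", source)
    return sorted({t.lower() for t in tokens} & FORBIDDEN)

print(violations("(define (f x) (call/cc (lambda (k) (k x))))"))  # → ['call/cc']
```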

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,

rAYb...@STRIPCAPStelus.net The Rhythm has my soul.

Ray Dillinger

Jan 13, 2008, 2:44:08 PM

Joachim Durchholz wrote:

> Ray Dillinger schrieb:
>> My list includes:
>>
>> * Code analysis tools integrated with source control
>
> What's source control?
> Revision control? In that case: why integrated?

Yes, revision control. Integrated to give additional tools
for code policy enforcement to organizations with policies
and coding standards.

Integrated to prevent projects that are supposed to have
specific coding goals or runtime models from using or depending
on code that does not.

Integrated to prevent people from ignoring a requirement of
code review when using constructs they may or may not understand.
And so on.

Whether it's harmful or not depends on whether it's done with
understanding of the coding process; a pointy-haired boss could
use this tool the way pointy-haired bosses use any software
engineering management tool. There is no help for that but
the eventual bankruptcy of their companies.

But assuming for a moment or two that the people we care about
in software development are mostly smart, and set up their
software-enforced policies with some understanding of the process
and the work it creates, it should be at worst a speed bump to
someone checking it in himself ("oh yeah, the stupid tool's
worried about infinite precision rationals exploding in memory,
but I don't need that level of precision so I should go back and
use float values instead..."), and at best a useful tool to prompt
a code review ("Hey Jack? I want to check in a meta-macro, but my
key doesn't have that level of authority in the Foonly project.
Can you look it over and vet it with your key?").

> > and keyword control.
>
> What's that?

Sorry. That's me using words together that don't mean anything
together except to me. I've developed, and have been using,
this idea as a guide in trying to design a lisp dialect whose
primitives have very clean design.

I have a perception that in a language of clean design, most
specific concepts, ideas, or extensions are associated with
particular keywords. For example, if you want to do a project
using immutable CONS cells (which allows the compiler to do all
sorts of optimizations it otherwise could not do, pretty much
including eliminating CONS cells as a distinct entity in memory
altogether) then you need to scan the code for uses of RPLACA
and RPLACD. Because nothing else mutates cons cells, these
keywords control the idea or extension of mutable cons cells.

Code that doesn't use them (including the transitive closure of
code it calls) will not violate a constraint requiring immutable
CONS cells. Thus RPLACA and RPLACD are the "keywords" you need to
control if your policy is that a particular project should benefit
from the optimizations available only to code that uses immutable
CONS cells. Likewise if you want your project to benefit from
optimizations that can happen only in the absence of reified
continuations (like, say, using a hardware stack discipline) then
you need to control the keyword CALL/CC (or
call-with-current-continuation if you're using the spectacularly
verbose version of the keyword from Scheme).
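A rough sketch of what such keyword control could look like in code (all
names invented for illustration; a real tool would also have to catch
(SETF (CAR ...)) forms and expansions of user macros, which is where the
actual difficulty lives):

```lisp
;; Hypothetical sketch: walk a source form and flag any mention of
;; the cons-mutating operators RPLACA/RPLACD.
(defparameter *cons-mutators* '(rplaca rplacd))

(defun uses-cons-mutators-p (form)
  "Return true if FORM, treated as a source tree, mentions a mutator."
  (cond ((member form *cons-mutators*) t)
        ((consp form) (or (uses-cons-mutators-p (car form))
                          (uses-cons-mutators-p (cdr form))))
        (t nil)))

;; A checked-in definition that would trip the policy:
;;   (uses-cons-mutators-p '(defun clobber (c) (rplaca c 0)))  => T
;; while purely consing code passes:
;;   (uses-cons-mutators-p '(defun fresh (c) (cons 0 (cdr c))))  => NIL
```

That's only the lexical scan, of course; extending it over the transitive
closure of called code is the part the revision-control hook would have to
orchestrate.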

> This kind of stuff is better controlled via policy.
> Software-enforced policy is usually far too rigid to be really useful.

Speed limits are enforced by policy. That's no reason to shirk
the manufacture of mechanical speedometers. A speedometer can
tell you when you're in compliance with the policy even in cases
where you otherwise might have to guess. And hey, if I could get
a device that used GPS to determine exactly where I was, looked
up the local speed limit in a table, and notified me every time
I was exceeding the speed limit, I'd happily buy it. If there
were a hardware switch I had to specifically flip ON in order to
exceed the speed limit, I think I'd have left it OFF for at
least the last three years.

> It's better to enable the compiler to check this kind of property
> directly. Forbidding constructs is suppressing symptoms instead of
> applying a cure IMNSHO.

Yeah. In the model/plan/system I'm trying to build, the serious hardcore
analysis of code is mostly done by the compiler. It checks this kind of
property and outputs a file of analysis results that client programs can
read and check. Other code analysis tools can append expressions to the
analysis results files too, but mostly if you can do it programmatically
I'm thinking you should be doing it in the compiler. The client
programs, in turn, would be called at need by the revision-control system.

Bear

Ray Blaak

unread,
Jan 14, 2008, 2:58:28 AM1/14/08
to
Ray Dillinger <be...@sonic.net> writes:
> But assuming for a moment or two that the people we care about in software
> development are mostly smart, and set up their software-enforced policies
> with some understanding of the process and the work it creates, it should be
> at worst a speed bump to someone checking it in himself ("oh yeah, the
> stupid tool's worried about infinite precision rationals exploding in
> memory, but I don't need that level of precision so I should go back and use
> float values instead..."), and at best a useful tool to prompt a code review
> ("Hey Jack? I want to check in a meta-macro, but my key doesn't have that
> level of authority in the Foonly project. Can you look it over and vet it
> with your key?").

I stand against this. First of all, if your people are mostly smart, that solves
most of the problems right there.

Second, I don't think such tools can be practically written. Fiercely
difficult, they would be.

Third, such tools cannot properly understand the reason for any violation, so
in the end it equates to getting a human involved to vet things. And if that's
the case, it is far far simpler and easier to enforce that all commits simply
get reviewed, period.

Fourth, it is far more effective and efficient to track things as opposed to
preventing possible bad things. In a proper revision system you can always
recover and correct mistakes. Halting commits will be guaranteed to cause
problems and unexpected delays. If Jack gets hit by a bus, you still need to
proceed.

[Followups *not* obeyed, again. Please don't do that. This is a meta level
discussion, and I submit, interesting to all the groups]

> For example, if you want to do a project using immutable CONS cells...then


> you need to scan the code for uses of RPLACA and RPLACD. Because nothing
> else mutates cons cells, these keywords control the idea or extension of
> mutable cons cells.

Just fix your language/compiler instead to not have them present. Then no
scanning is needed.

Would you have to fix libraries as well? If you can't, then you have no
guarantees anyway.

gavino

unread,
Jan 14, 2008, 3:34:47 AM1/14/08
to

I thought SBCL and haskell are here?

gavino

unread,
Jan 14, 2008, 3:35:49 AM1/14/08
to
On Jan 8, 12:03 pm, Edi Weitz <spamt...@agharta.de> wrote:

> On Tue, 08 Jan 2008 12:31:23 -0700, Robert Uhl <eadmun...@NOSPAMgmail.com> wrote:
> > In other words, Lisp is a lot more valuable than the authors seem to
> > know.
>
> Not very surprising. Almost always if you read something about Lisp
> in the "mainstream media", the authors seem to have some preconceived
> notions of what Lisp is or of what makes Lisp different, and almost
> always they're totally wrong or way off the mark.
>
> Sigh...
>
> Edi.
>
> --
>
> European Common Lisp Meeting, Amsterdam, April 19/20, 2008
>
> http://weitz.de/eclm2008/
>
> Real email: (replace (subseq "spamt...@agharta.de" 5) "edi")

I think lisp needs to be shown to kick some ass. Seeing is
believing. Demosthenes demo-nstrated things

gavino

unread,
Jan 14, 2008, 3:36:37 AM1/14/08
to
mathematica?
wtf?

gavino

unread,
Jan 14, 2008, 3:37:53 AM1/14/08
to

a kick butt appserver/webserver

Joachim Durchholz

unread,
Jan 14, 2008, 7:13:56 AM1/14/08
to
Ray Dillinger schrieb:

> Joachim Durchholz wrote:
>
>> Ray Dillinger schrieb:
>>> My list includes:
>>>
>>> * Code analysis tools integrated with source control
>> What's source control?
>> Revision control? In that case: why integrated?
>
> Yes, revision control. Integrated to give additional tools
> for code policy enforcement to organizations with policies
> and coding standards.
>
> Integrated to prevent projects that are supposed to have
> specific coding goals or runtime models from using or depending
> on code that does not.
>
> Integrated to prevent people from ignoring a requirement of
> code review when using constructs they may or may not understand.
> And so on.

I have to agree that this is not harmful in itself.
However, it makes pointy-haired decisions easy and does almost nothing
for smart decisions.

Simply make sure that everything is under revision control, with a working
blame functionality. If anybody makes a mistake, it's easy to attribute
it to him, and he won't do it again if proper policies are in place.

One thing where revision control could become smarter is about making
diffs. Most diff programs are smart enough to ignore indentation and
other whitespace differences, but they could improve if they were able
to ignore language-specific irrelevancies (changed code formatting, for
example, or for a language like Perl a change from if-then to do-unless).
However, you don't need to integrate revision control into the
environment to do that; a plugin interface for the revision control
system would be more appropriate.

There's another thing: if you do your own revision control, your
language will collide with established revision control policies. You'll
have to answer the question what to do with files that belong to the
project but are written in other languages (documentation, XML files,
configuration data etc.).

> Sorry. That's me using words together that don't mean anything
> together except to me. I've developed, and have been using,
> this idea as a guide in trying to design a lisp dialect whose
> primitives have very clean design.
>
> I have a perception that in a language of clean design, most
> specific concepts, ideas, or extensions are associated with
> particular keywords.

Actually, I see that concepts and keywords tend to get dissociated.

> For example, if you want to do a project
> using immutable CONS cells (which allows the compiler to do all
> sorts of optimizations it otherwise could not do, pretty much
> including eliminating CONS cells as a distinct entity in memory
> all together) then you need to scan the code for uses of RPLACA
> and RPLACD. Because nothing else mutates cons cells, these
> keywords control the idea or extension of mutable cons cells.

RPLACA and RPLACD may be the only functions that mutate cons cells, but
to optimize, you need to know which functions mutate *any* data.

> Code that doesn't use them (including the transitive closure of
> code it calls) will not violate a constraint requiring immutable
> CONS cells.

The transitive closure is usually a problem. You usually find that most
functions somehow, somewhere call a mutating operator.
Did you consider non-mutating higher-order functions that get a mutating
function as a parameter?
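A minimal illustration of that loophole (function names made up for the
example): the higher-order function's own definition, and everything it
calls, mentions no mutator, so a keyword scan of its transitive closure
comes up clean, yet a caller can still smuggle the mutation in through
the functional parameter:

```lisp
;; APPLY-TO-CELLS contains no RPLACA/RPLACD anywhere in the code it
;; calls -- scanning it and its callees finds nothing suspicious.
(defun apply-to-cells (fn cells)
  (mapc fn cells))

;; Yet the mutation arrives via the parameter at the call site:
(let ((cells (list (cons 1 'a) (cons 2 'b))))
  (apply-to-cells (lambda (c) (rplaca c 0)) cells)
  (mapcar #'car cells))   ; => (0 0)
```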

>> This kind of stuff is better controlled via policy.
>> Software-enforced policy is usually far too rigid to be really useful.
>
> Speed limits are enforced by policy. That's no reason to shirk
> the manufacture of mechanical speedometers. A speedometer can
> tell you when you're in compliance with the policy even in cases
> where you otherwise might have to guess.

A speedometer can be useful, yes. However, what you're aiming at is a
device that throttles the car whenever it exceeds the speed limit,
something that isn't installed in cars for excellent reasons.

Regards,
Jo
(please don't restrict followup-tos, I'm not reading c.l.l.)

Ray Dillinger

unread,
Jan 14, 2008, 12:09:22 PM1/14/08
to
Ray Blaak wrote:


> I stand against this. First of all, if your people are mostly smart, that
> solves most of the problems right there.

'deed it does. :-)

> Second, I don't think such tools can be practically written. Fiercely
> difficult, they would be.

I'm not seeing it, really. The fundamental level of difficulty is
in doing the code analysis and proving properties, and your compiler
has to do that anyway if it's going to be any good. The rest is
a bunch of details, with no particularly crunchy bits to stop you.
Good compilers are fiercely difficult; the rest of this, I believe,
will just take patient and competent work of no particularly great
complexity.



> Fourth, it is far more effective and efficient to track things as opposed
> to preventing possible bad things. In a proper revision system you can always
> recover and correct mistakes. Halting commits will be guaranteed to cause
> problems and unexpected delays. If Jack gets hit by a bus, you need to
> still proceed.

Generally, the problem is that the modern generation of programmers
has vastly varying levels and areas of core competence. I think it
makes sense for revision control to be configurable by someone aware
of the differences, so that different programmers are "trusted" for
different things. And yes, when Jack gets hit by a bus you still
have to proceed. That's when you make your own fork of the project
with different privilege assignments, check in your meta-macro, and
go - with all responsibility for the new branch on your own head.

Also, I think the logical conclusion to revision control is that
it should be possible to mark some branch as "deployed" relative
to a particular server so that a check-in to that branch can
instantly change a running program on a server that faces
tens of thousands of customers. In that case tracking changes
is not enough, and you have to be pretty careful about who can
check in changes to that particular branch.

> [Followups *not* obeyed, again. Please don't do that. This is a meta level
> discussion, and I submit, interesting to all the groups]

Ergh. Sorry, I've just switched to KNode and it appears to have some kind
of bug where it does not follow-up to all groups unless you specifically
notice them and tell it to by adding your own followup-to line. I didn't
know it was doing it the first time, forgot to check the second time. I
will be watching out for it in the future.

Bear

Didier Verna

unread,
Jan 14, 2008, 12:16:16 PM1/14/08
to
Robert Uhl <eadm...@NOSPAMgmail.com> wrote:

> My first thought on reading the article is that Lisp has a lot more to
> offer than functional programming. Lisp offers encapsulation with
> classes; it offers namespaces (called packages, but they're the same
> thing); it does offer information hiding if you make it; it offers
> programming by extension. It also offers generic functions and
> programmable syntax.
>
> In other words, Lisp is a lot more valuable than the authors seem to
> know.

That was my impression too. Even more, the choice of Lisp as a typical
example of a functional language is somewhat weird (and talk about
referential transparency!). I would have chosen something else,
probably something pure (Haskell or so), closer to a mathematical
formalism, with native currying etc.

Actually, Lisp is probably not typical of anything because it is so
permissive that it can be typical of everything :-) But at least,
there's a couple of professors that haven't completely forgotten all
about Lisp :-)

A couple of other weird things too, like C++ as a good illustration of
information hiding through protected and private data?!? wtf?!?

--
Resistance is futile. You will be jazzimilated.

Didier Verna, did...@lrde.epita.fr, http://www.lrde.epita.fr/~didier

EPITA / LRDE, 14-16 rue Voltaire Tel.+33 (0)1 44 08 01 85
94276 Le Kremlin-Bicętre, France Fax.+33 (0)1 53 14 59 22 did...@xemacs.org

Ray Blaak

unread,
Jan 14, 2008, 1:04:38 PM1/14/08
to
Ray Dillinger <be...@sonic.net> writes:

> Ray Blaak wrote:
> > Second, I don't think such tools can be practically written. Fiercely
> > difficult, they would be.
>
> I'm not seeing it, really. The fundamental level of difficulty is
> in doing the code analysis and proving properties, and your compiler
> has to do that anyway if it's going to be any good. The rest is
> a bunch of details, with no particularly crunchy bits to stop you.
> Good compilers are fiercely difficult; the rest of this, I believe,
> will just take patient and competent work of no particularly great
> complexity.

I should clarify that my objection here is one of practicality. It might very
well be possible to make tools like this. I believe though that they are
simply not worth the effort. I think that you can get the same benefits for
much cheaper simply by having people pay attention, possibly with some (far
easier) tool support to help people pay attention.

Paul Donnelly

unread,
Jan 15, 2008, 12:26:54 AM1/15/08
to
On Mon, 14 Jan 2008 18:16:16 +0100, Didier Verna wrote:

> Robert Uhl <eadm...@NOSPAMgmail.com> wrote:
>
>> My first thought on reading the article is that Lisp has a lot more to
>> offer than functional programming. Lisp offers encapsulation with
>> classes; it offers namespaces (called packages, but they're the same
>> thing); it does offer information hiding if you make it; it offers
>> programming by extension. It also offers generic functions and
>> programmable syntax.
>>
>> In other words, Lisp is a lot more valuable than the authors seem to
>> know.
>
> That was my impression too. Even more, the choice of Lisp as a typical
> example of functional language is somewhat weird (and talk about
> referential transparency !). I would have chosen something else,
> probably pure (Haskell or so) closer to a mathematical formalism, with
> native currying etc.

Now there's a pet peeve of mine! I mention writing something in Lisp, and
I get, "Why would you write that in a functional language?" It's like
they're trying to push my lecture button or something.

Sohail Somani

unread,
Jan 15, 2008, 1:02:20 AM1/15/08
to
On Mon, 14 Jan 2008 23:26:54 -0600, Paul Donnelly wrote:

> Now there's a pet peeve of mine! I mention writing something in Lisp,
> and I get, "Why would you write that in a functional language?" It's
> like they're trying to push my lecture button or something.

How can you expect someone who is ignorant to say the right thing? You
should look at it as an opportunity to educate rather than lecture. True
education would require a remapping (+ extension) of things that they
understand. A favourite of mine is CLOS where the discussion goes like
this (paraphrasing a real encounter):

Person: Lisp? You must love parentheses! Anyway, isn't it a purely
academic language b/c it is functional?
Me: Well even though I haven't been using it that long, I would say it is
one of the most powerful of the modern languages. For example, CL's
support for object-oriented programming (CLOS) is much better than the
mainstream languages.
Person: Really? How come?
Me: Well, are you familiar with the visitor pattern? This is a result of
systems that only support single-dispatch (i.e., the method called
depends only on one type at run-time.) By design, CLOS is
multiple-dispatch, so all those ugly hacks go out the window.
Person: Wow.
Me: Yeah.
Person: But it is slow, isn't it?

And it goes on...
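The CLOS point in that exchange is easy to demonstrate. A toy sketch
(class and method names invented for the example): the applicable method
is chosen by the classes of *both* required arguments, which is exactly
what the visitor pattern simulates by hand in single-dispatch languages:

```lisp
(defclass circle () ())
(defclass square () ())

;; Dispatch considers the classes of BOTH arguments.
(defgeneric collide (a b))
(defmethod collide ((a circle) (b circle)) :circle-circle)
(defmethod collide ((a circle) (b square)) :circle-square)
(defmethod collide ((a square) (b square)) :square-square)

;; (collide (make-instance 'circle) (make-instance 'square))
;; => :CIRCLE-SQUARE -- no accept/visit double-dispatch needed.
```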

You want to engage, not shut them down. So lecturing is the wrong way to
think about it.

</lecture> ;-)

--
Sohail Somani
http://uint32t.blogspot.com

Didier Verna

unread,
Jan 15, 2008, 3:13:25 AM1/15/08
to
Sohail Somani <soh...@taggedtype.net> wrote:

> How can you expect someone who is ignorant to say the right thing?

Yup. And we're all ignorant (I think the authors of the article I
mentioned are mostly ignorant about half the languages they cite). I
personally wish I had more time to learn the languages I know better,
and also more time to learn the languages I don't know at all.


> You want to engage, not shut them down. So lecturing is the wrong way
> to think about it.
>
> </lecture> ;-)

Amen ! :-)

Sohail Somani

unread,
Jan 15, 2008, 1:02:55 PM1/15/08
to
On Tue, 15 Jan 2008 09:13:25 +0100, Didier Verna wrote:

> Sohail Somani <soh...@taggedtype.net> wrote:
>
>> How can you expect someone who is ignorant to say the right thing?
>
> Yup. And we're all ignorant (I think the authors of the article I
> mentioned are mostly ignorant about half the languages they cite). I
> personally wish I had more time to learn the languages I know better,
> and also more time to learn the languages I don't know at all.

Well, the sentence was meant to be read as: "How can you expect someone
who is ignorant about Lisp to say the right things about what features it
supports?" I am ignorant about lots of things which gives me the courage
to try stupid things!

>> You want to engage, not shut them down. So lecturing is the wrong way
>> to think about it.
>>
>> </lecture> ;-)
>
> Amen ! :-)

:-D

Paul Donnelly

unread,
Jan 15, 2008, 5:16:32 PM1/15/08
to
On Tue, 15 Jan 2008 06:02:20 +0000, Sohail Somani wrote:

> On Mon, 14 Jan 2008 23:26:54 -0600, Paul Donnelly wrote:
>
>> Now there's a pet peeve of mine! I mention writing something in Lisp,
>> and I get, "Why would you write that in a functional language?" It's
>> like they're trying to push my lecture button or something.
>
> How can you expect someone who is ignorant to say the right thing? You
> should look at it as an opportunity to educate rather than lecture. True
> education would require a remapping (+ extension) of things that they
> understand. A favourite of mine is CLOS where the discussion goes like
> this (paraphrasing a real encounter):
>

> <snip>


>
> You want to engage, not shut them down. So lecturing is the wrong way to
> think about it.

Well, I think we just disagree on the educational value of a lecture. I
meant "possibly lengthy explanation" mode. Maybe I've had better
experiences with lectures than you have. :)

Sohail Somani

unread,
Jan 15, 2008, 5:38:41 PM1/15/08
to
On Tue, 15 Jan 2008 16:16:32 -0600, Paul Donnelly wrote:

> Well, I think we just disagree on the educational value of a lecture. I
> meant "possibly lengthy explanation" mode. Maybe I've had better
> experiences with lectures than you have.

There is a difference between an educator and a know-it-all.

Sohail Somani

unread,
Jan 15, 2008, 5:39:12 PM1/15/08
to
On Tue, 15 Jan 2008 22:38:41 +0000, Sohail Somani wrote:

> On Tue, 15 Jan 2008 16:16:32 -0600, Paul Donnelly wrote:
>
>> Well, I think we just disagree on the educational value of a lecture. I
>> meant "possibly lengthy explanation" mode. Maybe I've had better
>> experiences with lectures than you have.
>
> There is a difference between an educator and a know-it-all.

Yes, I recognize the irony :-)

Paul Donnelly

unread,
Jan 16, 2008, 12:23:21 AM1/16/08
to
On Tue, 15 Jan 2008 22:38:41 +0000, Sohail Somani wrote:

> On Tue, 15 Jan 2008 16:16:32 -0600, Paul Donnelly wrote:
>
>> Well, I think we just disagree on the educational value of a lecture. I
>> meant "possibly lengthy explanation" mode. Maybe I've had better
>> experiences with lectures than you have.
>
> There is a difference between an educator and a know-it-all.

Indeed. My experience is that people enjoy (or at least tolerate without
visible signs of discomfort) hearing me talk... so take that to mean what
you will. Perhaps you would have liked it better if I had originally said,
"It's like they're trying to push my caring personal Lisp tutor with a
knack for involving the educatee button." Indeed, that is what I
originally typed, but it got cut down in editing.

Sohail Somani

unread,
Jan 16, 2008, 12:27:16 AM1/16/08
to
On Wed, 16 Jan 2008 05:23:21 +0000, Paul Donnelly wrote:

[snip]


> Indeed. My experience is that people enjoy (or at least tolerate without
> visible signs of discomfort) hearing me talk...

[snip]

That is great! Fight the good fight.

Damien Kick

unread,
Jan 20, 2008, 6:18:03 PM1/20/08
to
Ray Blaak wrote:
> Ray Dillinger <be...@sonic.net> writes:
>> Ray Blaak wrote:
>>> Second, I don't think such tools can be practically written. Fiercely
>>> difficult, they would be.
>> I'm not seeing it, really. The fundamental level of difficulty is
>> in doing the code analysis and proving properties, and your compiler
>> has to do that anyway if it's going to be any good. [...]

>
> I should clarify that my objection here is one of practicality. It might very
> well be possible to make tools like this. [...]

I'm jumping into this a bit mid-thread so I'm not sure if I am making
the proper association but when I read about "tools like this" it gets
me to start thinking about tools like Klocwork, which does source code
analysis of C/C++ code. I used to work for a company which had lots of
heavy weight process enforcement. We used ClearCase for SCM and had an
entire team of people devoted to writing scripts for hooks on SCM
actions. Such and such script was run when a change was checked in.
Such and such script was run when a revision on one branch was merged to
another branch. We also had teams of people studying tools like
Klocwork and advocating its being included as a mandatory part of the
development process, i.e. everybody had to run Klocwork on their code.
I hated Klocwork. It generated so much garbage, warnings about things
which might have been problems but never were, that nobody
ever even bothered looking at the reports. In all the time I was there,
I don't think I ever once heard of somebody using Klocwork (or lint or
whatever) to actually find a bug. But we did have somebody who had to
work on writing scripts to make sure that the reports were generated
even though nobody ever looked at them. People did spend a lot of time
making pointless casts in the code to shut the tool up and to get the QA
folks off their backs; the QA folks would keep track of the number of
diagnostics generated by the tool, a practice which actually decreased
code quality, in my opinion.

Even better was when the tool was buggy. For example, I used a version
of one such tool that still seemed to think that the C++ new operator
returned a null pointer when it failed to allocate memory, and so it
would complain about someone possibly using a null pointer because they
hadn't checked for it. We also had a lot of coders who had not bothered
to keep current with C++ and so they too thought that the new operator
might return a null pointer. Because the tool generated a diagnostic,
they would write an "if (!p)", even though this was completely pointless.

We also had this wonderful team of people who collected metrics on code
reviews, indicating average numbers of bugs found per kLOC reviewed,
etc. The SCM scripts would actually check the results of a code review,
the results of which had to be entered into a database, and it would
require an SQA sign off if one failed to find the expected number of
issues for the amount of code reviewed, etc. What it typically meant
was that you had to go find the SQA person to ask them to sign it off.
As they had absolutely no clue as to what the code was that you had just
inspected, the only information they had to use to decide whether or
not to allow the change to be accepted was what the people in the code
review told them about it. So it was a completely useless waste of
time, as opposed to a mostly useless waste of time or even a somewhat
fruitful waste of time.

I've come to hate the term "manage by data". My academic adviser in
college specialized in the mathematical modeling of biological systems.
I remember him saying once that it is very easy to get numbers but
very hard to get numbers that mean anything. Number of diagnostics
issues by Klockwork. Number of defects per thousand lines of code.
This is all data. These are all numbers. But that don't really mean
very much. To use these numbers as data points to inform a decision is
a meaningful exercise. But to try and formulate something like this
into an automated process enforced by triggers and controls is a
mistake, in my opinion.

vanekl

unread,
Jan 20, 2008, 7:03:30 PM1/20/08
to
Damien Kick wrote:
> Ray Blaak wrote:
...

> Even better was when the tool was buggy. For example, I used a version
> of one such tool that still seemed to think that the C++ new operator
> returned a null pointer when it failed to allocate memory, and so it
> would complain about someone possibly using a null pointer because they
> hadn't checked for it. We also had a lot of coders who had not bothered
> to keep current with C++ and so they too thought that the new operator
> might return a null pointer. Because the tool generated a diagnostic,
> they would write an "if (!p)", even though this was completely pointless.

I agree with all you've said, but count me in as one of those clueless bastards
who still checks for null pointers. What is malloc, et al, supposed to return
if it cannot allocate memory? Is it supposed to throw an error? Can that be
trusted to work on all of the different malloc libraries? I guess you can
wrap malloc in a #define to ensure a throw. (If you wanna just dump a link,
that would be fine.)
--
Nuke it from orbit. It's the only way to be sure.

Sohail Somani

unread,
Jan 20, 2008, 7:06:25 PM1/20/08
to
On Mon, 21 Jan 2008 00:03:30 +0000, vanekl wrote:

> I agree with all you've said, but count me in as one of those clueless
> bastards who still checks for null pointers. What is malloc, et al,
> supposed to return if it cannot allocate memory? Is it supposed to throw
> an error? Can that be trusted to work on all of the different malloc
> libraries? I guess you can wrap malloc in a DEFINE to ensure a throw.
> (If you wanna just dump a link, that would be fine.)

He said *new*, not *malloc*. Operator new throws a bad_alloc exception on
standard-conforming compilers (when it is appropriate to do so,
obviously!)

tim Josling

unread,
Jan 20, 2008, 10:24:19 PM1/20/08
to
On Sun, 20 Jan 2008 17:18:03 -0600, Damien Kick wrote:

> Ray Blaak wrote:
> I've come to hate the term "manage by data". My academic adviser in
> college specialized in the mathematical modeling of biological systems.
> I remember him saying once that it is very easy to get numbers but
> very hard to get numbers that mean anything. Number of diagnostics
> issues by Klockwork. Number of defects per thousand lines of code. This
> is all data. These are all numbers. But that don't really mean very
> much. To use these numbers as data points to inform a decision is a
> meaningful exercise. But to try and formulate something like this into
> an automated process enforced by triggers and controls is a mistake, in
> my opinion.

People who are incompetent tend to migrate into areas such as process
management, as in your example. I suspect this is because it is a good way
to avoid accountability for anything.

As a result the processes in a large organisation are often managed by
idiots. Typically they have no interest in results, outcomes, the degree
to which their metrics actually mean anything, or anything else that is
relevant. One of John Cleese's training videos was about this exact
phenomenon. "My job is to ensure that the correct processes are followed".
Cleese's question was where does the customer, or profit, fit into this
view of success.

Metrics can be useful at times, but not when slavishly followed. They are
often valid only to a zero-th or first approximation. Example: lines
of code as a measure of work done. Valid to the 1 1/2th approximation. But
what about the day you spend cleaning up the code and reduce the LOC count
by 400?

There is a well-known story about this told by Bill Gates about IBM. At
the end of the project the MSFT people were cleaning up the code, which
had the effect of reducing LOC. The IBM people were horrified because it
would impact on their productivity stats.

Tim Josling

George Neuner

unread,
Jan 20, 2008, 11:00:45 PM1/20/08
to
On Sun, 20 Jan 2008 17:18:03 -0600, Damien Kick <dk...@earthlink.net>
wrote:

>Even better was when the tool was buggy. For example, I used a version
>of one such tool that still seemed to think that the C++ new operator
>returned a null pointer when it failed to allocate memory, and so it
>would complain about someone possibly using a null pointer because they
>hadn't checked for it. We also had a lot of coders who had not bothered
>to keep current with C++ and so they too thought that the new operator
>might return a null pointer. Because the tool generated a diagnostic,
>they would write an "if (!p)", even though this was completely pointless.

You're correct that the normal behavior of new() is to throw an
exception, but it can be overridden by using the "new(nothrow)"
syntax. See:

http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=170

Granted, the tool is still buggy because it should recognize the
non-throwing form. There isn't any standard way to set "nothrow"
globally, but a macro could be used to transparently redefine new() to
new(nothrow) everywhere.

George
--
for email reply remove "/" from address

Slobodan Blazeski

unread,
Jan 21, 2008, 4:36:13 AM1/21/08
to
Nothing useful to add to this discussion, just remembering the good
ol' days with c++
http://www.xkcd.com/371/

cheers
Slobodan

Sohail Somani

unread,
Jan 21, 2008, 4:47:40 AM1/21/08
to
On Mon, 21 Jan 2008 01:36:13 -0800, Slobodan Blazeski wrote:

> Nothing useful to add to this discussion, just remembering the good
> ol' days with c++
> http://www.xkcd.com/371/

I truly enjoy C++ at times. My real aha moment was when I realized that
templates use pattern matching. That was pretty awesome.

Sigh...
