For example, a simple but well known idiom is the "swap" idiom:
    temp = a          a ^= b
    a = b       or    b ^= a
    b = temp          a ^= b
Obviously this is a very simple idiom; however, if you have other
idioms that you find you use a lot, I would be very interested in hearing
about them.
Note that this is not about generic programming (STL), programming
patterns (Gang of Four etc.), or even methodologies like Object Oriented
Design. What I'm really interested in are small implementation
idioms that you have used several times in the past. They don't have
to be complicated, just chunks of code that you often find useful.
If you would like to share any programming idioms with me, feel free to
post here in the news group, or reply directly to me at mga...@nortelnetworks.com.
If you are interested in finding out more about the research I'm involved with,
or would like to know how it turns out, just let me know.
Thanks for your time!
mike.
--
Tell my tale to those who ask. Tell it truly - the ill deeds, along with
the good -- and let me be judged accordingly. The rest ... is silence.
-- Dinobot
. . . . . . . . . . . . . . . . . . . . . . . .
Guru Tools|Team Leader
ESN 393-8309|mga...@nortelnetworks.com
Your investigation is not topical in any of these newsgroups. It is not
appropriate to solicit for test subjects this way.
>As part of that, I'm looking at what some common programming idioms are. In
>an effort to bring some practical (real) programming idioms into the list of
>idioms I'm looking at, I was wondering if you have any personal programming
>idioms you would like to share.
>
>For example, a simple but well known idiom is the "swap" idiom:
>
>     temp = a          a ^= b
>     a = b       or    b ^= a
>     b = temp          a ^= b
The one on the right conjures up the word ``idiot'' rather than ``idiom''.
Consider that it does not work if a and b are the same object. Assuming
we are talking about C or C++ here, it is only maximally portable if
a and b are unsigned types (that do not promote to int).
--
Any hyperlinks appearing in this article were inserted by the unscrupulous
operators of a Usenet-to-web gateway, without obtaining the proper permission
of the author, who does not endorse any of the linked-to products or services.
Off-Topic for news:comp.lang.c
> As part of that, I'm looking at what some common programming idioms are. In
> an effort to bring some practical (real) programming idioms into the list of
> idioms I'm looking at, I was wondering if you have any personal programming
> idioms you would like to share.
>
> For example, a simple but well known idiom is the "swap" idiom:
>
>     temp = a          a ^= b
>     a = b       or    b ^= a
>     b = temp          a ^= b
This particular idiom is brain-damaged. In the case that a == b, it's
el-destructo time.
Some other equally useless idiot ideas -- I mean idiom ideas are:
XOR EAX,EAX
to zero a register, etc. It was cute in its day, but that day is thankfully
behind us.
There are gag constructs like:
9["This is really idiotic"]
which are useful for things like IOCCC but little else.
Some things are actually useful for the terminally lazy such as myself.
For a pointer p:
if (p) {d_stuff();}
is often used in lieu of:
if (p != NULL) {d_stuff();}
Depending upon how you define idiom, either everything is an idiom or
nothing is.
Every construct will be recognized by an expert of the language, and all
will seem confusing to a complete novice. When something is more mainstream
than the alternatives, is it an idiom or not?
> Obviously this a very simple idiom, however if you have other
> idioms that you find you use a lot, I would be very interested in hearing
> about them.
>
> Note that this is not about generic programming (STL), programming
> patterns (Gang of Four etc.), or even methodologies like Object Oriented
> Design. What I'm really interested in, are small implementation
> idioms that you have used several times in the past. They don't have
> to be complicated, just chunks of code that you often find usefull.
Usually idioms are just annoying, unless they are easily recognizable.
> If you would like to share any programming idioms with me, feel free to
> post here in the news gruop, or reply directly to me mga...@nortelnetworks.com.
> If you are interested in finding out more about the research I'm involved with,
> or would like to know how it turns out, just let me know.
I suspect that people should post most answers via email, as this thread
will quickly become annoying in the arenas where it is not topical (which is
probably all of them).
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm
-- Adam
"Garvin, Michael (EXCHANGE:WDLN:C28G)" wrote:
> Hello, I'm investigating some questions around the psychology of programming.
> As part of that, I'm looking at what some common programming idioms are. In
> an effort to bring some practical (real) programming idioms into the list of
> idioms I'm looking at, I was wondering if you have any personal programming
> idioms you would like to share.
>
> For example, a simple but well known idiom is the "swap" idiom:
>
>     temp = a          a ^= b
>     a = b       or    b ^= a
>     b = temp          a ^= b
>
> Obviously this a very simple idiom, however if you have other
> idioms that you find you use a lot, I would be very interested in hearing
> about them.
>
> Note that this is not about generic programming (STL), programming
> patterns (Gang of Four etc.), or even methodologies like Object Oriented
> Design. What I'm really interested in, are small implementation
> idioms that you have used several times in the past. They don't have
> to be complicated, just chunks of code that you often find usefull.
>
> If you would like to share any programming idioms with me, feel free to
> post here in the news gruop, or reply directly to me mga...@nortelnetworks.com.
> If you are interested in finding out more about the research I'm involved with,
> or would like to know how it turns out, just let me know.
>
"Garvin, Michael (EXCHANGE:WDLN:C28G)" wrote:
>
> Hello, I'm investigating some questions around the psychology of programming.
> As part of that, I'm looking at what some common programming idioms are. In
> an effort to bring some practical (real) programming idioms into the list of
> idioms I'm looking at, I was wondering if you have any personal programming
> idioms you would like to share.
>
> For example, a simple but well known idiom is the "swap" idiom:
>
>     temp = a          a ^= b
>     a = b       or    b ^= a
>     b = temp          a ^= b
It's odd that the code on the right shows up so often as a sample of an
"idiom". As far as I can tell, it's an extreme technique whose
applicability is limited to the following combination of circumstances:
1. Temporary storage is VERY expensive, inhibiting use of the obvious
and usually more efficient code on the left.
2. a and b are definitely different variables. If they were aliases for
each other, the very first step would destroy all the data.
3. Either there is no possibility of another thread or process looking
at a or b during the swap, or it is acceptable for such a thread to see
the variable containing a value that is neither its old value nor its
new value.
Although I've been aware of the xor trick for about 20 years, I have yet
to encounter a situation in which all these conditions were met
simultaneously, so I've never used it. In my experience it is more a
curiosity than an idiom. Calling it an idiom might mislead people into
thinking they should use it when they shouldn't, which is almost always.
Patricia
--
"Kaz Kylheku" <k...@ashi.footprints.net> wrote in message
news:slrn8npc1...@ashi.FootPrints.net...
> On Mon, 24 Jul 2000 16:43:14 -0400, Garvin, Michael (EXCHANGE:WDLN:C28G)
> <mga...@nortelnetworks.com> wrote:
> >Hello, I'm investigating some questions around the psychology of programming.
>
> Some other equally useless idiot ideas -- I mean idiom ideas are:
>
> XOR EAX,EAX
>
> to zero a register, etc. It was cute in its day, but that day is thankfully
> behind us.
I looked at the assembler produced by my gcc; it still uses
XOR to zero a register when the optimization switch is set...
Régis
Definitely off-topic, but on some processors, the XOR is used instead of
a CLR instruction because XOR will set the condition flags and CLR
doesn't.
--
--- Dale King
Resume: http://www.cs.iupui.edu/~dking/resume.htm
Recruiters: I am currently not willing to relocate!
> Dann Corbit wrote:
>
> > Some other equally useless idiot ideas -- I mean idiom ideas are:
> >
> > XOR EAX,EAX
> >
> > to zero a register, etc. It was cute in its day, but that day is thankfully
> > behind us.
>
> I looked at the assembler produced by my gcc,
> It still uses the xor to zero a register
> when optimization switch is set...
Sure. Compilers are supposed to micro-optimise for us. We're not
supposed to do it ourselves unless we know our compilers are crap.
Usually, one can trust the compiler writer.
Richard
That's to be expected.
While it is no better than the more obvious [to non-assembly programmers]:
MOV EAX, 0
That 'idiom' is so ingrained in assembly programmers that they will probably
never give it up.
Actually I suspect XOR EAX,EAX is fewer bytes and faster. I don't have a
reference on x86, which this example is using, but on some CISC
processors MOV would require an immediate operand while xor does not.
(68000 appears to be an exception because it uses 16-bit instructions
and has the MOVEQ opcode).
Actually, on 486+, XORing a register with itself is the fastest way; it
pairs in both pipes and involves no data bus or cache activity. It's also
less than half the size of MOV EAX,0, so it keeps the instruction pipelines dense.
for (int lcv = 0; lcv < 10; lcv++); //whatever, whatever . . .
And my second is always mcv, and then I just proceed down the
alphabet. LCV stands for "Loop Control Variable". . . .
I also always indent with three spaces in all my code except for in
C/C++ header files where I use two hard spaces (not tabs).
Whenever I program Java, I refuse to use the double-slash comments
(//). I always use the C-style comments ( /* */ ).
Whenever I program BASIC (which is rare these days) I capitalize a
variable the first time I type it and type it in lower case the second
time, and on the third time I capitalize it again, and so on, and so on.
And the comments I put at the beginning of my source are always silly
and usually not related to the program itself at all.
In article <397CAA62...@nortelnetworks.com>,
"Garvin, Michael (EXCHANGE:WDLN:C28G)" <mga...@nortelnetworks.com>
wrote:
> Hello, I'm investigating some questions around the psychology of programming.
> As part of that, I'm looking at what some common programming idioms are. In
> an effort to bring some practical (real) programming idioms into the list of
> idioms I'm looking at, I was wondering if you have any personal programming
> idioms you would like to share.
>
> For example, a simple but well known idiom is the "swap" idiom:
>
>     temp = a          a ^= b
>     a = b       or    b ^= a
>     b = temp          a ^= b
>
> Obviously this a very simple idiom, however if you have other
> idioms that you find you use a lot, I would be very interested in hearing
> about them.
>
> Note that this is not about generic programming (STL), programming
> patterns (Gang of Four etc.), or even methodologies like Object Oriented
> Design. What I'm really interested in, are small implementation
> idioms that you have used several times in the past. They don't have
> to be complicated, just chunks of code that you often find usefull.
>
> If you would like to share any programming idioms with me, feel free to
> post here in the news gruop, or reply directly to me mga...@nortelnetworks.com.
> If you are interested in finding out more about the research I'm involved with,
> or would like to know how it turns out, just let me know.
>
> Thanks for your time!
> mike.
I tHink your proggramming style makes sense, but mine is even
more loggical. My first fore loop in a proggram looks like:
for (int zcv = 0; zcv < 10; zcv++); //whateva, whateva . . .
then my next one is like tHis:
for (int ycv = 0; ycv < 11; ycv++); //whateva, whateva . . .
My style is obbviously better since I donut runout of alphabet
letters as fast :+). If I do runout of letters, then I just break
the code into more subrouttins. This is all what I learned good
about structured proggramming.
I hope you don't mind too much, but I have to put a flametHrower
to your bogositty indentattion stylee. I prefer too much
indenting 5 spaces (sommetimes 7 if the programm is short). All
proggramers should be like me.
Hope this helps you too much.
See:
ftp://download.intel.com/design/pro/MANUALS/24281603.pdf
1. It's not faster (at least on a Pentium p6 type or above CPU it takes one
cycle to move zero and one cycle to xor).
2. His point is that HLL programmers needn't worry about it.
The reason that xor EAX,EAX is an idiom, is because you are not doing the
"natural" operation of moving a zero into the register, but using the
mathematical properties of xor. While this operation will be familiar to
anyone who has programmed in assembly language, it will look strange to
someone who has never had to do it.
Long ago, mov was slow and xor was fast. There are still reasons an
assembly language programmer might want to use xor (save the space of the
zero, I suppose) but it's not nearly as important as it used to be.
80% of the cost of software is maintenance. If we make the software harder
to understand to shave a few cycles, it's probably a mistake. Now, there
are times (e.g. you are writing an optimizing compiler for instance) when
you need to shave every hair of speed out of something. But most of the
time, that sort of microoptimization is not just a waste of time, it's a
waste of money.
Suggestion:
Read Mike Lee's optimization page. It's rather a good one:
http://www.ontek.com/mikey/optimization.html
For those interested in squeezing the last cycle out of an Intel chip, look
here:
http://www.agner.org/assem/pentopt.htm
You might be surprised at the following quotation from Agner Fog's page:
"A common way of setting a register to zero is XOR EAX,EAX or SUB EAX,EAX.
These instructions are not recognized as independent of the previous value
of the register. If you want to remove the dependency on slow preceding
instructions then use MOV EAX,0."
It's an example. It used to be a great idea, but now maybe so and maybe
not. The point is that this kind of cycle shaving is usually idiotic. The
way to make something go faster is to choose a better algorithm.
I don't see any assembly programming listings in the newsgroups. The reason
that I point that out is that while such a construct might be easily
understandable to assembly programmers, the general technique of
microoptimization for higher level languages is actually counter-productive
most of the time.
You might be amazed (horrified, whatever) at how many questions we get in
news:comp.lang.c that are of the format:
Which is faster,
i++;
++i;
i = i + 1;
 i += 1;
?
With the intention that the programmer intends to use the "fastest
construct" when programming.
Instead, write code that is clear and understandable. Use optimal
algorithms for the domain of the problem. Use profilers and identify the
hot spots. If something still does not meet specifications, then (and only
then) look at icky[tm] tinkering tricks to squeeze that last little bit of
speed out of it.
[snip]
In article <397D7359...@info.unicaen.fr>, re...@info.unicaen.fr writes:
>Dann Corbit wrote:
>>Some other equally useless idiot ideas -- I mean idiom ideas are:
>> XOR EAX,EAX
>> to zero a register, etc. It was cute in its day, but that day is
>> thankfully behind us.
>
> I looked at the assembler produced by my gcc, It still uses the
> xor to zero a register when optimization switch is set...
Well, sure. But what does that tell us about the question of
whether the average programmer should need to know or use that
idiot^Hm today?
Steve Summit
s...@eskimo.com
--
Programming Challenge #6: Don't just fix the bug.
See http://www.eskimo.com/~scs/challenge/.
I use the third one. Cool?
>>>Some other equally useless idiot ideas -- I mean idiom ideas are:
>>> XOR EAX,EAX
>>> to zero a register, etc. It was cute in its day, but that day is
>>> thankfully behind us.
>> I looked at the assembler produced by my gcc, It still uses the
>> xor to zero a register when optimization switch is set...
>Well, sure. But what does that tell us about the question of
>whether the average programmer should need to know or use that
>idiot^Hm today?
If you do assembly programming, chances are you are doing it for reasons
pertaining to speed, so when this idiom is faster, why not use it?
--
A still tongue makes a happy life.
>> >>>Some other equally useless idiot ideas -- I mean idiom ideas are:
>> >>> XOR EAX,EAX
>> >>> to zero a register, etc. It was cute in its day, but that day is
>> >>> thankfully behind us.
>>
>> >> I looked at the assembler produced by my gcc, It still uses the
>> >> xor to zero a register when optimization switch is set...
>>
>> >Well, sure. But what does that tell us about the question of
>> >whether the average programmer should need to know or use that
>> >idiot^Hm today?
>>
>> If you do assembly programming, chances are you are doing it for reasons
>> pertaining to speed, so when this idiom is faster then why not use it?
>See:
>ftp://download.intel.com/design/pro/MANUALS/24281603.pdf
>
>1. It's not faster (at least on a Pentium p6 type or above CPU it takes one
>cycle to move zero and one cycle to xor).
Cycle counts can be deceiving.
>2. His point is that HLL programmers needn't worry about it.
The average HLL programmer, no, but assembly programmers, yes. The average
HLL programmer usually doesn't have to touch assembly, and as such doesn't
have to worry about XOR EAX, EAX being an idiom or not.
>The reason that xor EAX,EAX is an idiom, is because you are not doing the
>"natural" operation of moving a zero into the register, but using the
>mathematical properties of xor. While this operation will be familiar to
>anyone who has programmed in assembly language, it will look strange to
>someone who has never had to do it.
The first time you see it, yes. If you can't recognize what it does pretty
much instantly you shouldn't be doing assembly language programming.
>80% of the cost of software is maintenance. If we make the software harder
>to understand to shave a few cycles, it's probably a mistake. Now, there
>are times (e.g. you are writing an optimizing compiler for instance) when
>you need to shave every hair of speed out of something. But most of the
>time, that sort of microoptimization is not just a waste of time, it's a
>waste of money.
Granted there are fewer and fewer reasons where you have to sit down and
actually think about optimizing, it doesn't hurt to do it when it doesn't
cost you any time and it doesn't make it harder to understand. XOR EAX, EAX
shouldn't make a good programmer stop dead in his tracks and scratch his
head.
>You might be surprised at the following quotation from Agner Fog's page:
>
>"A common way of setting a register to zero is XOR EAX,EAX or SUB EAX,EAX.
>These instructions are not recognized as independent of the previous value
>of the register. If you want to remove the dependency on slow preceding
>instructions then use MOV EAX,0."
Like I said, cycle counts can be deceiving.
>> Granted there are less and less reasons where you have to sit down and
>> actually think about optimizing, it doesn't hurt to do it when it doesn't
>> cost you any time and it doesn't make it harder to understand. XOR EAX, EAX
>> shouldn't make a good programmer stop dead in his tracks and scratch his
>> head.
>It's an example. It used to be a great idea, but now maybe so and maybe
>not. The point is that this kind of cycle shaving is usually idiotic. The
>way to make something go faster is to choose a better algorithm.
That depends on what you're doing. If you're doing computer graphics, like I
do, shaving off some cycles for each pixel is _very_ noticeable.
But I agree with what you're saying, that in 99.9% of cases it is
not necessary, and many optimizations that used to be very smart are not at
all smart on newer processors.
>I don't see any assembly programming listings in the newsgroups. The reason
>that I point that out is that such a construct might be easily
>understandable to assembly programmers, the general technique of
>microoptimization for higher level languages is actually counter-productive
>most of the time.
>
>You might be amazed (horrified, whatever) at how many questions we get in
>news:comp.lang.c that are of the format:
>
>Which is faster,
> i++;
> ++i;
> i = i + 1;
> i + = 1;
>?
>With the intention that the programmer intends to use the "fastest
>construct" when programming.
Cute. :)
>Instead, write code that is clear and understandable. Use optimal
>algorithms for the domain of the problem. Use profilers and identify the
>hot spots. If something still does not meet specifications, then (and only
>then) look at icky[tm] tinkering tricks to squeeze that last little bit of
>speed out of it.
I agree completely, I'm just saying that there is a time and a place for
everything -- even low level optimizations. Most of the time it's not
necessary, but saying that an idiom such as XOR EAX, EAX is not needed
anymore (presumably because processors are becoming so fast that
optimization is not necessary) is wrong. Idioms like that are needed, where
they can be used to improve speed and it is necessary to do so. If it is the
wrong thing to do for the processor you're working on, don't use it.
What does your wife think of that?
--
-hs- Tabs out, spaces in.
CLC-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
ISO-C Library: http://www.dinkum.com/htm_cl
FAQ de FCLC : http://www.isty-info.uvsq.fr/~rumeau/fclc
C-tips: http://jackklein.home.att.net
Sorry, but this isn't quite true.
"xor reg, reg" and "sub reg, reg" are recognized by PII/PIII
processors in order to avoid partial register stalls, which can occur
when reading a larger part of a register after a write to a smaller part of it,
and vice versa.
That means they are specially handled to prevent the hardware from
assuming a dependence between instructions that operate on the
current register; that is certainly not the case for the mov instruction.
Secondly, on early Pentiums the xor instruction was used to avoid the slow
movzx instruction, which zero-extends a short int, and it was useful to
hoist the zero-extending part in front of the loop. That was quite handy
because the xor instruction was pairable with other instructions while
movzx was not. But on newer Pentiums that no longer matters,
since the movzx instruction is now recommended.
You can find this information on the Intel site, I guess.
Greetings, Bane.
There is one idiom which I positively detest, although many people
defend it.
When doing an equality test (using ==) between two values, one of which
may have just previously changed and the other of which is either
constant or has been unchanged for a relatively long time, there is
(IMHO) a definite comprehensibility advantage in having the just-
changed one on the left and the relatively-constant one on the right.
Just think how you would say it in English (or most human languages)
"if x is equal to three" not "if three is equal to x".
But, time and again I see:
x = SomeFunction();
if(3 == x) // as opposed to if(x == 3)
{
// blah
}
Now I know that the justification for this is that if you accidentally
type '=' instead of '==' the hideous form will give a compile error,
but surely that is not enough justification for the reduction in
readability.
Anyway, even the less-readable form is better than the form used by
programmers whose main objective in life is to type as few characters
as possible and who write
if(3==(x=SomeFunction()))
{
// blah
}
Another commonly-used idiom (I approve of this one BTW), if you can
call it that, involves putting place-holder comments in a for
statement, if any of the 3 portions is not required e.g.
for(i = 0; i < 256; /* do nothing */)
{
// blah
}
where i is modified in the loop
or
for(i = 0; /* no limit */; i++)
{
// blah
}
where the loop is terminated by 'break'
> On 26 Jul 2000 02:28:46 GMT, s...@eskimo.com (Steve Summit) wrote:
>
> >>>Some other equally useless idiot ideas -- I mean idiom ideas are:
> >>> XOR EAX,EAX
> >>> to zero a register, etc. It was cute in its day, but that day is
> >>> thankfully behind us.
>
> >> I looked at the assembler produced by my gcc, It still uses the
> >> xor to zero a register when optimization switch is set...
>
> >Well, sure. But what does that tell us about the question of
> >whether the average programmer should need to know or use that
> >idiot^Hm today?
>
> If you do assembly programming, chances are you are doing it for reasons
> pertaining to speed, so when this idiom is faster then why not use it?
Well, yeah, _if_. But none of the groups this is cross-posted to is an
assembly group. That kind of bit-management should be left to assembler
writers or compilers.
Richard
Nicely corresponds to OED definition of
idiom n. 1. a group of words established by usage and having a meaning not
deducible from those of the individual words...
Yes, an idiom.
Dann Corbit wrote:
....
> You might be surprised at the following quotation from Agner Fog's page:
>
> "A common way of setting a register to zero is XOR EAX,EAX or SUB EAX,EAX.
> These instructions are not recognized as independent of the previous value
> of the register. If you want to remove the dependency on slow preceding
> instructions then use MOV EAX,0."
...
This is a problem with many of the so-called idioms. They take a local
performance consideration, applicable to some particular situation at a
particular time, and enshrine it in source code, or worse still in the
minds of programmers who will then propagate it into still more source
code. As processors and compilers get more sophisticated they can often
make the dumb, simple, obvious code run faster, but not the tricky code.
In the short term, a trick may be faster, or take less space. In the
long term, it will probably be slower.
If what is going on isn't absolutely obvious to a programmer who knows
the relevant language but no "idioms", it won't be obvious to the
compiler or processor either, hindering their large scale efforts to
make the code faster.
If code is going to use these tricks, it is not enough to evaluate them
at the time of initial coding. Maintenance of the program should
include regular re-evaluation, or you may end up with unnecessarily slow
code in a particularly critical block.
Patricia
for (int Vcv=0;Vcv > -3;Vcv++ )
burnToDeath(heretic);
In article <05f96f28...@usw-ex0107-055.remarq.com>,
--
I want my lemmrick, damnit!
It's not that much harder to understand. It's an arbitrary detail. Do
you like your coffee with or without cream?
>Thore B. Karlsen a écrit dans le message
><39857351....@news.cis.dfn.de>...
>>
>>--
>>A still tongue makes a happy life.
>
>What does your wife think of that?
s/wife/SO/
not conclusions to jumping be.
Actually, the justification for this style (which I don't usually use
myself) is to improve readability, sometimes. For example,
if (IDYES == MessageBox(/* yadda yadda yadda */))
helps people see the purpose of the conditional more clearly than if the
function call came first and the test-against-value came at the end.
Back in the really old days (1963) we came up with 17 different ways to
clear a register to zero on the 360.
--
Gary, MCT, MCP, MCSD
home site http://www.enter.net/~garyl
Mark Wilden <ma...@pagm.com> wrote in message
news:snugipm...@news.supernews.com...
In particular, it doesn't make sense in Java, where it would only apply
to booleans - and I personally write
if (!condition)
or
if (condition)
rather than
if (condition==false)
or
if (condition==true)
anyway, so there's no advantage to my mind.
--
Jon Skeet - sk...@pobox.com
http://www.pobox.com/~skeet/
"Garvin, Michael (EXCHANGE:WDLN:C28G)" wrote:
>
> Hello, I'm investigating some questions around the psychology of programming.
> As part of that, I'm looking at what some common programming idioms are.
How about the tendency of programmers to name their first loop control
variable i, and then the subsequent ones j, k, etc. For example:
for (i = 0; i < 10; i++) {
for (j = 0; j < 20; j++) {
...
}
}
AFAIK this practice originated in FORTRAN, where variables were implicitly
declared and automatically assigned a type based on the first letter of
their name. Any variable starting with the letters I, J, K, L, M, or N was
implicitly declared as integer. All other variables were assumed to be
real. Since most loop control variables are simple integral counters, it
was natural to use integer variables for them.
This idiom is not limited to C or FORTRAN -- it's popular in BASIC, C++,
Java, and nearly every other popular imperative language with which I've
had experience.
Regards,
Tristan
> How about the tendency of programmers to name their first loop control
> variable i, and then the subsequent ones j, k, etc. For example:
>
> for (i = 0; i < 10; i++) {
> for (j = 0; j < 20; j++) {
> ...
> }
> }
>
> AFAIK this practice originated in FORTRAN,
Not _just_ Fortran. It's pretty common in maths, too, to name variables
with one-letter somewhat-mnemonic names (l is length; n is number; i is
integer). And it does make it easier to remember what you were using
this one-letter name for.
I frequently use i for single-character input (i=getc()).
Richard
Actually, I believe that started with mathematicians. Mathematicians
typically use i through n for naming index or summation variables and
letters like a, b, c, t, v, x, y, and z for quantities that can be any
value. Its widespread use by mathematicians was probably the basis for
the implicit variable typing rule, in order to make it easier for
mathematicians to translate their formulas (FORTRAN = FORmula
TRANslation).
Dale King wrote:
> Actually, I believe that started with mathematicians. Mathematicians
> typically use i through n for naming index or summation variables and
> letters like a, b, c, t, v, x, y, and z for quantities that can be any
> value.
Yes, if you go far back enough, the i..n naming convention does come from
mathematics. One point about what you wrote, though -- in my experience,
a, b, c, etc. tend to be used for constants (constant, at least, relative
to the formula under discussion), while x, y, and z usually represent
variables.
To keep this halfway topical, let's contrast this to a popular C and C++
idiom in which constant identifiers (either const variables or preprocessor
#defines) are spelled entirely in uppercase, while variable identifiers are
in lower or mixed case. Furthermore, many style guides suggest beginning
local-scope variables with a lowercase letter, reserving uppercase-initial
names for global variables.
Regards,
Tristan
Yes, but constant or not a, b, and c are usually not constrained to be
integers, which was the point.
> To keep this halfway topical, let's contrast this to a popular C and C++
> idiom in which constant identifiers (either const variables or preprocessor
> #defines) are spelled entirely in uppercase, while variable identifiers are
> in lower or mixed case. Furthermore, many style guides suggest beginning
> local-scope variables with a lowercase letter, reserving uppercase-initial
> names for global variables.
I won't go there since I have not found valid justification for this
convention particularly in Java (I am reading this widely crossposted
thread in a Java group).
...and originates, no doubt, in the common practice in mathematics to
use i, j and k as dummy variables in summations and the like.
Followups set; watch where you post!
--
Hong Ooi | Centre for Maths and its Applications/
hong...@maths.anu.edu.au | Research School of Inf. Science & Engn.
Ph: (02) 6267 4140 | Australian National University
| ACT 0200 Australia
Hence the old FORTRAN saw: "God is real (unless implicitly declared or
explicitly defined as integer)."
<snip>
--
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
65 K&R Answers: http://users.powernet.co.uk/eton/kandr2/index.html (32
to go)
God would be real if implicitly declared. So shouldn't that be:
God is real (unless explicitly defined as integer).
It seemed so to me. Not being a FORTRAN programmer, however, I chose to
reproduce the exact phrasing as I first encountered it. I am quite happy
to accept the correction.
Didn't originally think this was relevant to the newsgroups but probably
shouldn't let people get misconceptions (even in jokes).
So
implicit integer(a-z)
God = 1
has God implicitly integer -- (and I thank Him that there is still
one language which is case-insensitive :)
but
integer God
God = 1
is an explicit declaration.
Sigh... Too many pedant points today.
Tom McGlynn
t...@lheapop.gsfc.nasa.gov
You can say something like "implicit integer g" (I forget the exact
syntax) and then all variables beginning with `g' will be integers, just
as for i-n.
I am equally happy to reject the correction - I don't mind either way.
:-)
Don't be so sure. I see this idiom used in PIC microcontroller code
surprisingly often -- considering that the PIC has a "clear register"
opcode, which executes in 1 cycle just like the XOR does! After
reading this thread, I thought maybe one affects condition flags that
the other doesn't, but no. As far as I can tell, this is purely a
matter of cruft accumulation in the brains of assembly programmers.
--ben
Yes, but FORTRAN (the FORmula TRANslator) got this from someplace, itself:
mathematics. In expressions involving summation or integration, it seems
to be traditional to use "i" for the index variable. I always figured
this was taken from the first letter of "index", or perhaps "iterate".
Then when more indices are needed for multiple dimensions or whatever, they
just continued with "j", "k", etc.. My guess is that the FORmula TRANslator
language picked this up in its implicit typing to make it "easier" for
mathematicians to type in their FORmulae. Easier, like having a laser
sight makes it easier to hit your foot squarely in the middle with your
pistol.
Remember, God is real (unless declared integer).
--ben (IMPLICIT NONE, use strict, -ansi -pedantic -Wall, etc....)
+ Yes, but FORTRAN (the FORmula TRANslator) got this from someplace, itself:
+ mathematics. In expressions involving summation or integration, it seems
+ to be traditional to use "i" for the index variable.
If I am iterating over "whatever things I happen to have at hand" it makes
a hell of a lot more sense to me to do a "for i=0 to n ..." (or start at
1 if you are a FORTRAN junkie! :-)) than to invent some preposterously
"appropriate" name like "for what_I_am_counting_now_is_quatloos=0 to n..."
The _point_ of mathematical idioms like "i from 0 to 10" (or infinity or
whatever...) or "let x = 3.1415926535..." is that one emphasizes that
what is going on is a counting (or a substitution, or whatever). The
_structure_ of the code becomes infinitely clearer than if it is
obfuscated with long and pointless (because completely constrained within an
obvious local context) names. Now, passing such an "i" to a proc in which
it is then referred to as "n" or similar moves can destroy the clarity of
idiom on which mathematics relies (or tries to, in fits and starts.) If
such abstract "names" escape from a clear idiomatic (and local) context,
there may be trouble brewing, and compromise names may be wanted...
Of course, with Tcl, the idiom is more likely the entirely perspicuous
"foreach item $list { ... }" -- better even than the mathematical counting
or indexing idiom (indeed roughly the same level as the mathematical logic
use of universal quantifiers.)
--
Michael L. Siemon We must know the truth, and we must
m...@panix.com love the truth we know, and we must act
according to the measure of our love.
-- Thomas Merton
In comp.lang.tcl Dale King <Ki...@tce.com> wrote:
> Actually I suspect XOR EAX,EAX is fewer bytes and faster. I don't
> have a reference on x86, which this example is using, but on some
> CISC processors MOV would require an immediate operand while xor
> does not. (68000 appears to be an exception because it uses 16-bit
> instructions and has the MOVEQ opcode).
the "AMD-K6(R) Processor Code Optimization Application Note"
(http://www.amd.com/K6/k6docs/pdf/21924.pdf) says on page 64:
--- snip ---
Clear registers using MOV reg, 0 instead of XOR reg, reg.
Executing XOR reg, reg requires additional overhead due to register
dependency checking and flag generation. Using MOV reg, 0 produces a
limm (load immediate) RISC86 operation that is completed when placed
in the scheduler and does not consume execution resources.
--- snap ---
cu
Reinhard
--
If you put garbage in a computer nothing comes out but garbage.
But this garbage, having passed through a very expensive machine,
is somehow ennobled and none dare criticize it.
But not really, since it is harder to optimise on pipelined and
superscalar processors unless someone feels like explicitly putting
silicon in there to deal with the case (effectively turning it
internally into an assignment of zero to the register.) This is because
you can get a dependency on the register which can really bite if you've
got an assignment to the register somewhere further on in the
pipeline(s).
>pairs in both pipes and involves no data bus or cache activity. It's also
>250% smaller than mov eax,0 so keeps the instruction pipelines dense.
That last bit is certainly true. *However*, if you're optimising for
that you had better be completely desperate to get code density up; most
of the time it is likely to be measurably cheaper to get a faster
computer than to employ a programmer to make the optimisation...
Donal.
--
Donal K. Fellows (at home)
--
FOOLED you! Absorb EGO SHATTERING impulse rays, polyester poltroon!!
(WARNING: There is precisely one error in this message.)
If you are writing mass-market software, it is not cheaper to get
_everyone_ a faster computer.
Hugo
But MS seems to take this attitude all the time! :^)
Mass-market software has a different problem; time-to-market. If your
competitors ship something sooner and get a lock on the market, your
tighter code is in danger of sinking like a lead balloon. Plus,
mass-market hardware is so widely variable that tuning for it is HARD.
(How many different Pentium/etc. revisions are there out there? :^)
Donal.
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
-- I may seem more arrogant, but I think that's just because you didn't
realize how arrogant I was before. :^)
-- Jeffrey Hobbs <jeffre...@ajubasolutions.com>
> Yes, but FORTRAN (the FORmula TRANslator) got this from someplace,
> itself: mathematics. In expressions involving summation or
> integration, it seems to be traditional to use "i" for the index
> variable. I always figured this was taken from the first letter of
> "index", or perhaps "iterate".
> Then when more indices are needed for multi dimensions or whatever,
> they just continued with "j", "k", etc.. My guess is that the FORmula
> TRANslator language picked this up in its implicit typing to make it
> "easier" for mathematicians
I suspect physicists
> to type in their FORmulae. Easier, like having a laser sight makes
> it easier to hit your foot squarely in the middle with your pistol.
how, at the time, were people shooting themselves in the foot with
FORTRAN? The idea of an HLL was brilliant. Even the implementation
wasn't that bad given the lack of prior art. Do we sneer at the Wright
Brothers? The question as to why we are still using it is another
issue...
--
Software, regardless of the language or OS, is being used to handle
real-world, life-or-death problems. THAT should cause fear, except
that the alternative is for every single emergency to be handled
entirely by humans...
language of choice for VMS still (VMS has some of the best Fortran compilers as
far as I know). Also, I know it is still somewhat commonly used by scientists
who probably like some of the default mathematical basis (I've never learned the
language myself).
-Jim
being a physicist myself, I followed discussions about Fortran here to a point where
I cannot refrain from posting. ;-)
Fortran is still used in science today, that is true. Where I studied (not long ago,
actually), Fortran is still a compulsory course. The main reasons, as I see them are
two-fold:
1) Backward compatibility with existing simulation packages. A certain simulation
in, say, solid state physics may be very tricky to set up. Some physical
observables are simply out of reach in terms of processing power, so the
simulation, or key algorithms thereof, is kept until computers have improved a
couple of years later.
That way a number of rather huge computation packages have been created over time.
The user interfaces are crude of course; it is the formulae themselves that have
aesthetic appeal ;-)
2) Fortran mathematical algorithms are trusted. The physicist does not want to cope
with bugs of any kind for fear that it may cause an otherwise perfect work to
produce entirely false results. Think of the loss of reputation...
The only thing trusted even more, I guess, are pen and paper, but I reckon that
the C standard library has gained some confidence over the past few years.
Hope this was worthy input for discussing the psychology of programming.
Stefan
--
________________________________________________
|_______________________________________________|
DI Stefan Weichselbaum
Software Engineer
AUSTRIAN AEROSPACE GmbH
Stachegasse 16
A-1120 Vienna, AUSTRIA / EUROPE
Internet: http://www.space.at
Tel. : +43-1-80199/5594
Fax : +43-1-80199/5577
E-Mail: stefan.we...@space.at
________________________________________________
This email is for information only!
________________________________________________
|_______________________________________________|
Actual multidimensional arrays, not just arrays of arrays.
Complex as a fully supported built-in arithmetic type. Because it is a
language type, it is easier to do compiler optimization of complex than
it would be if complex were merely a user-defined type. This makes it
even better than C++, which can have complex only as a user-defined type.
Good scope for optimization.
Good optimization in practice. There is a circular effect. Fortran users
care about performance, so Fortran optimization gets lots of attention,
so Fortran tends to have good performance, so people who are strongly
affected by performance tend to use Fortran, so Fortran users care about
performance...
Existing code, including very extensive libraries.
Patricia
> <snip> how, at the time, were people shooting themselves in the foot with
>
> FORTRAN? The idea of an HLL was brilliant. Even the implementation
> wasn't that bad given the lack of prior art. Do we sneer at the Wright
> Brothers? The question as to why we are still using it is another
> issue...
>
<snip>
I've been programming for the Physics Dept. here for the last 1-1/2 years,
and we make extensive use of existing FORTRAN code. There are many reasons
for this, most of which will be obvious, but one of the more compelling
reasons is that these libraries are *fast*! The curve-fitting libraries we
use, for example, were written for slow machines with very little memory
and were heavily optimized. It also makes the code all but unreadable, but
that's ok.
However, we don't write any new routines in FORTRAN; we use C and C++.
-- Adam
I would say, quite a bit more than "somewhat commonly used".
Next time you fly, you can rest assured that all calculations and
simulations and finite element analysis were done in mammoth size
Fortran programs. Same goes for space craft, automobiles, boats and
probably trains.
Why is it still being used ? Because it works, and it works well, and
over many years Fortran has accumulated a very impressive library. But
then again, it started out as a mathematical language and for a long
time concentrated on that, and that only.
Java on the other hand, seems to me to be concentrating on serving the
commerce community, so maybe those two are complementary to each other.
I think I must have left something out, maybe C, C++ ?:)
[Concerning why people use Fortran...]
> Actual multidimensional arrays, not just arrays of arrays.
In practice, there is little difference between Fortran's
multidimensional arrays and the arrays of arrays in C, C++, Pascal,
Modula, Ada...
In C and C++, of course, you have the problem that arrays are
implicitly converted into pointers at the drop of a hat, after which,
potential aliasing removes most of the optimization potential. (This
affects one dimensional arrays as well.) In theory, a compiler could
track pointer assignments and determine that certain pointers were
actually aliases for arrays, but in practice, the problem is complex
enough that no commercial compilers do it for anything more than local
variables.
> Complex as a fully supported built-in arithmetic type. Because it is
> a language type, it is easier to do compiler optimization of complex
> than it would be if complex were merely a user defined type. This
> makes it even better than C++, which can have complex as user
> arithmetic type.
In C++, complex is a standard template class. In C, it is a fully
supported built-in arithmetic type, but only since the last revision
of the standard. In both cases, the type is fully defined by the
standard. (The effects of instantiating the complex template
class for types other than float and double are undefined.) This
means that a compiler does know the full semantics, and could optimize
accordingly.
In both languages, the type is recent enough that this hasn't happened
yet. In C, given the stability of current compilers, and the limited
number of new features to implement, I expect that we will see such
optimization in the next few years. In C++, most compiler
implementers are still struggling to get the required features, and do
not yet have time to worry about optimization.
--
James Kanze mailto:ka...@gabi-soft.de
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany Tel. +49(069)63198627
> Why is it still being used ? Because it works, and it works well,
> and over many years Fortran has accumulated a very impressive
> library. But then again, it started out as a mathematical language
> and for a long time concentrated on that, and that only.
> Java on the other hand, seems to me to be concentrating on serving
> the commerce community, so may be those two are complementary to
> each other. I think I must have left something out, maybe C, C++
> ?:)
I think that the C family (including Java) are more oriented to
certain types of programming, rather than certain application
domains. C, for example, (and to a lesser degree, C++) insists on the
possibilities of low level access and programmer control, regardless
of the application domain where it is used. (About the only domain
where this is absolutely needed is in operating systems, but C/C++ are
used in many other domains.) Java has always put an accent on
distributed computing -- portability, security, etc. Again,
regardless of the application domain.
Fortran is not so bad. The main problem I had was that my compiler
didn't verify the types of the parameters (everything is passed by
reference, so there is no problem), and when I swapped two parameters
I had to spend a lot of time with the debugger to locate the error.
A classical problem of Fortran is that it (in my memory) doesn't allow
recursive functions, unlike languages which pass their arguments by value,
but since most recursive functions can be rewritten as tail-recursive
or iterative functions it is no big deal. Another is dynamic allocation,
but it can be solved by linking with C or some other way.
For those who don't know, there are now Fortran 90 and 95, which support
OOP. I have never used them but they looked fine.
James Stapleton wrote:
>
> > how, at the time, were people shooting themselves in the foot with
> > FORTRAN? The idea of an HLL was brilliant. Even the implementation
> > wasn't that bad given the lack of prior art. Do we sneer at the Wright
> > Brothers? The question as to why we are still using it is another
> > issue...
>
> language of choice for VMS still (VMS has some of the best Fortran compilers as
> far as I know). Also, I know it is still somewhat commonly used by scientists
> who probably like some of the default mathematical basis (I've never learned the
> language my self).
>
> -Jim
--
Michael DOUBEZ
A publisher support programme?
Donal.
--
"Understanding leads to tolerance, which in turn leads to acceptance. And from
there, it's just a quick hop to speeding in Ohio, chewing peyote, and
frottage in the woods with a family of moose. And I just want to claim my
part of the credit." -- bunnythor <bunn...@uswest.net>
In article <39A53564...@aston.ac.uk>, M DOUBEZ
<dou...@aston.ac.uk> writes
>Fortran is not so bad. The main problem I had was that my compiler
>didn't verify the type of the parameters (everything is passed by
>reference so
>there is no problem) and when I swapped two parameters
>I had to spend a lot of time with the debugger to locate the error.
This reminds me of the nastiest piece of code I've ever seen. When I
was an undergraduate, I can remember hearing a plaintive call of "My
code doesn't work!" from someone in the same room I was working in, and
being young and brash, I blundered in to try and fix it. The FORTRAN
function[*] was adapted from some code out of a standard example in a
handout, and had the value 0 as a *formal* parameter. At least one
compiler that we tried it with choked utterly, and another generated
*something* but I'd hesitate to call it object code. And goodness knows
if a value was assigned to it or not in the calling code; some details
are better not recorded...
All in all, no C code I've ever seen has been so synapse-bustingly bad.
That usually compiles and runs (with some degree of correctness) on at
least one machine.
Donal.
[* Or is the preferred term "procedure"? So many years ago. ]
Back to the question, how were people shooting themselves in the foot: the
same ways as today: reference vs value arguments, over- and under-indexing
arrays, off-by-one indexes, functions with side effects... it wasn't
pointer math (no pointers in 66 or 77 versions). Those were the days! Von
Neumann rules!
ed