int funcname (int arg1, ...)
{
...
}
This is often used...
int
funcname (int arg1, ...)
{
...
}
Haven't seen this on the DOS/Windows side of things. I'm guessing it's for
search purposes, perhaps in vi... ??
/^funcname
... or is there another reason?
Thanks in advance!
-MW-
I'm pretty sure that's the reason - I remember seeing it
recommended for exactly that. Not one I use, or like, myself.
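To see why it works: with the name in column zero, the pattern matches
only the definition, never the (indented) call sites or prototypes. A
quick sketch:

/* "/^funcname" in vi, or grep '^funcname' *.c, matches only the
** definition below, never calls like "x = funcname(1);" */
int
funcname (int arg1)
{
    return arg1;
}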
--
Online waterways route planner: http://canalplan.org.uk
development version: http://canalplan.eu
Some of us prefer the function name to start in column zero, especially
if the return type has a long name.
--
Ian Collins
I place the return type on a separate line because that is where I
document the return value:
int                         /* number of newts found */
newtcount (
    const t_newt *newtlist  /* list of newts to count, terminated by
                            ** nextptr=NULL */
);
--
Thad
Thanks all. It's a style I've adopted, as I tend toward vi.
OT: I've seen C go through a lot of changes, some that weren't even style
issues, but "lack of standards" issues. When I first learned C, one
typically wrote a fn like this...
/* FtoC - accepts a Fahrenheit value and
   returns the Celsius equivalent */
double FtoC (fah)
double fah;
{
    return (fah - 32.0) * 5.0 / 9.0;
}
... and I still have a DOS compiler that'll accept it (Borland's
Turbo C v1.0 --- c. 1987, IIRC).
And for int and void functions, types were usually omitted altogether.
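For comparison, here's FtoC in today's prototype style, plus (a sketch
from memory) the omitted-type form for an int function:

/* modern prototype style: */
double FtoC (double fah)
{
    return (fah - 32.0) * 5.0 / 9.0;
}

/* old omitted-type style: the return type defaults to int
** (accepted by C89 compilers, removed in C99) */
isneg (x)
int x;
{
    return x < 0;
}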
-MW-
Contemporary compilers still accept it, because it's still
valid C. The style has been marked as "obsolescent," though,
and I for one can see no advantage to using it.
> And for int and void functions, types were usually omitted altogether.
That became non-standard ten years ago, but was permitted
under the rules of the first standard, now twenty years old.
Quite a lot of compilers still follow the twenty-year-old rules,
or have not yet made a complete transition to the more recent
standard. Again, I can see no advantage to omitting the return
type.
--
Eric Sosman
eso...@ieee-dot-org.invalid
> Contemporary compilers still accept it, because it's still
> valid C. The style has been marked as "obsolescent," though, and I for
> one can see no advantage to using it.
> ... Again, I can see no advantage to omitting the return type.
Agreed on both counts. Interesting to see the subtle changes in style
that have taken place over the decades, though.
-MW-
Why not just re-factor your code so there are fewer functions per
file. Limiting yourself to one exported function per file is handy to
help there. Of course there are static functions, but in reality,
most source files should be less than 500 lines at most anyways
[common exception being machine generated code].
I find the
return_type
function_name(...)
{
...code ...
}
style annoying, but not because it's harder to read [it's not] but
just because it uses more lines and is not what I'm used to [I know I
know ... I'm not the centre of the universe hehehe]. :-)
Tom
> Why not just re-factor your code so there are fewer functions per
> file. Limiting yourself to one exported function per file is handy to
> help there. Of course there are static functions, but in reality,
> most source files should be less than 500 lines at most anyways
Why? Isn't having up to ten times as many source files lying around (even
forgetting the problems with private functions, variables and namespaces),
going to be more hassle than longer source files (and half the time you're
hardly aware of the size of the file).
--
Bartc
Tom appears to be stuck in the ark with regard to his toolsets. The size
of a file should not be a consideration within normal limits on today's
hardware, unless it severely impacts compilation, for example. When
navigating around code I rarely bother noticing which file it's in.
--
"Avoid hyperbole at all costs, its the most destructive argument on
the planet" - Mark McIntyre in comp.lang.c
Generally you want fewer functions per file, for numerous reasons:
1. It makes it easier to work with others in a version control system,
as you lock a smaller percentage of the code at any given time.
2. It speeds up build/rebuild times while testing new code.
3. It makes it easier to "smart" link code, as not all linkers can do
per-function linking; they usually link per object file.
4. I find it generally easier to work on smaller files, especially
when what I'm looking for isn't hidden in the middle of a 3,000 line
file... but that's just MHO.
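To make points 1 and 3 concrete, here's the shape I mean (a sketch,
names made up): one exported function per file, helpers kept static.

/* newt_count.c -- exports exactly one function; a linker that links
** per object file can drop this whole unit if newt_count() is unused.
*/
#include <stddef.h>

struct newt {
    struct newt *next;
};

/* file-private helper, invisible to other translation units */
static const struct newt *newt_next (const struct newt *n)
{
    return n->next;
}

/* the single exported function */
int newt_count (const struct newt *list)
{
    int count = 0;

    while (list != NULL) {
        count++;
        list = newt_next(list);
    }
    return count;
}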
Usually, the smart thing to do is sort your source tree with these
things called directories. So finding files should be easy.
Tom
Spoken like someone who either works alone or without a content/
version control system. Suppose you have 20 people on a team and all
of your source is locked in 2 files. What do the 18 other people
do?
Also, I work on a quad-core AMD box with 4GB of ram. I still
appreciate faster turn-around on build/rebuild cycles. You'd be an
idiot not to.
Tom
> On Dec 15, 6:55 am, Richard <rgrd...@gmail.com> wrote:
>> "bartc" <ba...@freeuk.com> writes:
>> > Tom St Denis wrote:
>> >> On Dec 12, 3:31 pm, Nick <3-nos...@temporary-address.org.uk> wrote:
>>
>> >> Why not just re-factor your code so there are fewer functions per
>> >> file. Limiting yourself to one exported function per file is handy to
>> >> help there. Of course there are static functions, but in reality,
>> >> most source files should be less than 500 lines at most anyways
>>
>> > Why? Isn't having up to ten times as many source files lying around (even
>> > forgetting the problems with private functions, variables and namespaces),
>> > going to be more hassle than longer source files (and half the time you're
>> > hardly aware of the size of the file).
>>
>> Tom appears to be stuck in the ark with regard to his toolsets. The size
>> of a file should not be a consideration within normal limits on today's
>> hardware, unless it severely impacts compilation, for example. When
>> navigating around code I rarely bother noticing which file it's in.
>
> Spoken like someone who either works alone or without a content/
> version control system. Suppose you have 20 people on a team and all
> of your source is locked in 2 files. What do the 18 other people
> do?
Yet again your assumptions are totally wrong.
And who said anything about ALL functions locked up in 2 files? Also,
did you never bother to investigate more modern RCS which can handle
hunks from within a file? A file is nothing more than a user view of
data anyway in more advanced set ups....
>
> Also, I work on a quad-core AMD box with 4GB of ram. I still
That's nice.
> appreciate faster turn-around on build/rebuild cycles. You'd be an
> idiot not to.
Yes, but it's another one of your straw men. My point is that your
arbitrary 500 line limit is bullshit.
>
> Tom
I've worked with git, svn, cvs, and even clearcase. Collisions happen
all the time and they're nasty. That's why file locks exist. The
fewer resources you lock the better.
> > appreciate faster turn-around on build/rebuild cycles. You'd be an
> > idiot not to.
>
> Yes, but it's another one of your straw men. My point is that your
> arbitrary 500 line limit is bullshit.
First, let me do my impression of you. "ZOMG TOM SAID SOMETHING THAT
I CAN DISAGREE WITH, *drool*, *wipe face*, I SIMPLY HAVE TO POST A
REPLY!!!!"
Then, I never said it was a hard written in stone limit. I have hand
written files that span into 6-7-8 hundred lines long. As a general
rule though if you're writing something [by hand] that gets over 500
lines, there is very likely [but not always] a chance to re-factor the
code to make it easier to work with [and/or a chance for code re-
use].
That's the difference between people like me [with experience] and
people like you [think they know everything]. We can say things like
"most files shouldn't be longer than 500 lines" and understand that it
means "most files shouldn't be super long because you'll probably be
able to factor the code better and achieve code reuse." Whereas you,
with little experience didn't know about that sort of development
strategy and just assumed that I meant "all files must be less than
500 lines because compilers can't handle 501 lines."
tl;dr, sometimes you just have to know when to shut up.
Tom
Advocate switching to git.
-s
--
Copyright 2009, all wrongs reversed. Peter Seebach / usenet...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
No matter the tool, if two people are working on the same bit of code
nothing good will result. If you have 50 functions in one file and
you're not all working on the same functions, sure collisions might
not happen...
Tom
There is a cost associated with what you're advocating. Shorter files
imply more files, which adds complexity. At the very least, you'll
need more "glue" in the form of extern declarations and prototypes,
which makes code harder for your coworkers to follow and maintain.
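For instance, a split-out newt_count() (borrowing the newt example from
upthread; names are made up) drags along header glue like this:

/* newt.h -- glue so other files can call newt_count() */
#ifndef NEWT_H
#define NEWT_H

struct newt {
    struct newt *next;
};

int newt_count (const struct newt *list);  /* defined in newt_count.c */

#endif

In real code newt_count.c would include this header rather than define
struct newt itself, and every extra file means another round of this.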
Maybe my experience is unique, but I have found that "judicious
refactoring" of code that a large teams works on causes more problems
than it solves, as it forces everyone to become reacquainted with the
design du jour. Obviously, code reuse is good, but it is only
achieved if the other members of your team know where the code is and
what the functions are called. Getting (re)acquainted with code takes
time and effort.
When I'm writing something alone, I do find that the benefits of heavy
refactoring outweigh the added complexity.
What does "tl;dr" mean?
Well presumably your exported functions will have corresponding
entries in header files anyways. So yeah there are more entries in
your makefile but that's more than made up for in the quicker builds
and easier content management.
I actually find it easier to have separate functions in separate files
because I tend to think of files as ideas or algorithms. One file may
perform elliptic curve point addition, while another does doubling.
That way if I'm in the mood to review the point addition code I just
look at that. As opposed to opening a 3000 line file and finding the
function I need and having to see the other code in the process...
Even when writing applications [I tend to work on libraries more] I
separate out the re-usable code into libraries inside the source tree
for the application. So in essence the application is merely a user
interface or driver for the functionality provided by the libraries.
Like we recently wrote an app with crypto and DB functionality. So we
had a libcert.a for crypto and libdb.a for our DB retrieve/store
functions. The actual app customers used was a relatively short piece
of code that parses command line options and makes use of the two
libraries. In this case we re-used the crypto lib for another part of
the project.
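As a sketch of what that driver looks like (cert_sign() and db_store()
are made-up names standing in for whatever libcert.a and libdb.a
actually export):

/* app.c -- thin driver; all the real work happens in the libraries */
#include <stdio.h>

int cert_sign (const char *record);  /* provided by libcert.a (hypothetical) */
int db_store (const char *record);   /* provided by libdb.a (hypothetical) */

int main (int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <record>\n", argv[0]);
        return 1;
    }
    /* parse options, then delegate the real work to the libraries: */
    if (db_store(argv[1]) != 0 || cert_sign(argv[1]) != 0)
        return 1;
    return 0;
}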
The trick is to have a solid design to start with, then document as
you go. In our group, we're all expected to contribute and READ the
SDK user guides. So "keeping up" with development is just part of the
job. Of course if you don't document anything I can see how your
coworkers can get lost...
> What does "tl;dr" mean?
"too long, didn't read." It's used for when you ramble on and want to
sum it up for the impatient.
Tom
>On Dec 15, 6:55 am, Richard <rgrd...@gmail.com> wrote:
>> "bartc" <ba...@freeuk.com> writes:
>> > Tom St Denis wrote:
>> >> On Dec 12, 3:31 pm, Nick <3-nos...@temporary-address.org.uk> wrote:
>>
>> >> Why not just re-factor your code so there are fewer functions per
>> >> file. Limiting yourself to one exported function per file is handy to
>> >> help there. Of course there are static functions, but in reality,
>> >> most source files should be less than 500 lines at most anyways
>>
>> > Why? Isn't having up to ten times as many source files lying around (even
>> > forgetting the problems with private functions, variables and namespaces),
>> > going to be more hassle than longer source files (and half the time you're
>> > hardly aware of the size of the file).
>>
>> Tom appears to be stuck in the ark with regard to his toolsets. The size
>> of a file should not be a consideration within normal limits on today's
>> hardware, unless it severely impacts compilation, for example. When
>> navigating around code I rarely bother noticing which file it's in.
>
>Spoken like someone who either works alone or without a content/
>version control system. Suppose you have 20 people on a team and all
>of your source is locked in 2 files. What do the 18 other people
>do?
That would be a very peculiar project. The code base for a 20
person team would be on the order of 100,000 lines of code. Few
projects keep their code in two 50,000 line files.
More to the point, many small files, few larger files is a
natural consequence of orthogonality of file purpose. That is,
in good software practice (IMNSHO) each file implements a well
defined functionality and each such functionality goes in its own
file.
Decent version control systems let you generate reports that tell
you how often each file is altered. The results are fairly
consistent across a wide variety of projects - most of the change
action is in a small minority of files. Sometimes, of course, a
file may be revised many times because it is badly written.
However, most projects seem to have hot spot functionality, i.e.,
places that are strongly impacted as the project grows and
changes shape.
What it comes down to is that maxims like "most source files
should be less than 500 lines" have it the wrong way around,
rather like trying to steer a donkey by grabbing its tail. Take
care of your software structure and your file sizes will take
care of themselves.
>
>Also, I work on a quad-core AMD box with 4GB of ram. I still
>appreciate faster turn-around on build/rebuild cycles. You'd be an
>idiot not to.
>
>Tom
Richard Harter, c...@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Infinity is one of those things that keep philosophers busy when they
could be more profitably spending their time weeding their garden.
Does it really take that much longer to compile 5000 lines instead of 500?
(My files are typically 5000 lines and take a fraction of a second to
compile.)
And how does it affect building other than making it take longer because of
having to deal with hundreds of files instead of dozens?
(I don't know how these things work in teams; maybe only one member has
build/run privileges? Or can anyone build and test a project that includes
half-finished modules from other team members? Or does each person just
test, independently, a small portion of the project in a test setup. That
still doesn't explain this arbitrary file line-limit.)
--
Bartc
Most modern version control systems don't lock files; they allow
two different people to work on the same file at the same time.
But typically the last person to check in the file has to merge
the changes. If the changes are isolated from each other, this is
straightforward; if not, you'd have the same merging problem with
one function per file.
> 2. It speeds up build/rebuild times while testing new code.
> 3. It makes it easier to "smart" link code, as not all linkers can do
> per-function linking; they usually link per object file.
> 4. I find it generally easier to work on smaller files, especially
> when what I'm looking for isn't hidden in the middle of a 3,000 line
> file... but that's just MHO.
Some text editors (Emacs in particular) have a mode in which you
can temporarily narrow the visible portion of a file, working on
a subset as if it were an entire file. (And I just learned
that there's a vim plugin that does the same thing.)
> Usually, the smart thing to do is sort your source tree with these
> things called directories. So finding files should be easy.
Unless you've got thousands of them.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
A little like your views on macros, your views here seem based on nothing
more than your own gut feeling, which in turn is based on your own
limited experience.
I have worked on huge projects where certain key files might be
branched 4 or 5 times.
Please, learn about the tools before commenting.
> On Dec 15, 9:12 am, Richard <rgrd...@gmail.com> wrote:
>> Yet again your assumptions are totally wrong.
>>
>> And who said anything about ALL functions locked up in 2 files? Also,
>> did you never bother to investigate more modern RCS which can handle
>> hunks from within a file? A file is nothing more than a user view of
>> data anyway in more advanced set ups....
>
> I've worked with git, svn, cvs, and even clearcase. Collisions happen
> all the time and they're nasty. That's why file locks exist. The
> fewer resources you lock the better.
Yet you don't seem to know how they work. It is quite usual for many
people to work on the same files. It's why these systems exist.
>
>> > appreciate faster turn-around on build/rebuild cycles. You'd be an
>> > idiot not to.
>>
>> Yes, but it's another one of your straw men. My point is that your
>> arbitrary 500 line limit is bullshit.
>
> First, let me do my impression of you. "ZOMG TOM SAID SOMETHING THAT
> I CAN DISAGREE WITH, *drool*, *wipe face*, I SIMPLY HAVE TO POST A
> REPLY!!!!"
Well, it's a poor impression. Yes, I disagree with you. Because you're
wrong.
>
> Then, I never said it was a hard written in stone limit. I have hand
But you still came out with it.
> written files that span into 6-7-8 hundred lines long. As a general
That's nice.
> rule though if you're writing something [by hand] that gets over 500
> lines, there is very likely [but not always] a chance to re-factor the
> code to make it easier to work with [and/or a chance for code re-
> use].
Extra files can also introduce extra complexity.
>
> That's the difference between people like me [with experience] and
> people like you [think they know everything]. We can say things like
It wasn't me making all sorts of rules: it was you. I've been a programmer
for a long time and have worked on a lot of systems. But it's you
showing off about your experience and you making the rules, I see. First
we have you condemning inline functions, and now large files. Yeah, I am
playing devil's advocate a bit - but primarily because I find your
arguments weak and similar to those of small-minded people who insist only
their coding style will suffice.
> "most files shouldn't be longer than 500 lines" and understand that it
> means "most files shouldn't be super long because you'll probably be
> able to factor the code better and achieve code reuse." Whereas you,
> with little experience didn't know about that sort of development
You don't know anything about my experience. It is, needless to say,
considerably more than you assume.
> strategy and just assumed that I meant "all files must be less than
> 500 lines because compilers can't handle 501 lines."
I never said compilers couldn't handle 501. You produce yet another straw
man.
Files can be split based on functional partitioning. To split BECAUSE
it's more than 500 lines is ridiculous with modern RCS and navigation
tools.
>
> tl;dr, sometimes you just have to know when to shut up.
I suspect you should take your own advice as you are clearly starting to
shout and yell and stomp your oh so experienced little feet.
I knew this would be your base reason. And it's ridiculous. A function
name is more than enough partitioning. How you think of functions is
silly. I suspect it comes from some antiquated notion of how RCS should
work.
What next? C++ member functions in separate files in subdirs
representing their class.
Like most of Tom's arguments it is based on shouting loudly and
exaggeration.
>
> More to the point, many small files, few larger files is a
> natural consequence of orthogonality of file purpose. That is,
> in good software practice (IMNSHO) each file implements a well
> defined functionality and each such functionality goes in its own
> file.
>
> Decent version control systems let you generate reports that tell
> you how often each file is altered. The results are fairly
> consistent across a wide variety of projects - most of the change
> action is in a small minority of files. Sometimes, of course, a
> file may be revised many times because it is badly written.
> However, most projects seem to have hot spot functionality, i.e.,
> places that are strongly impacted as the project grows and
> changes shape.
>
> What it comes down to is that maxims like "most source files
> should be less than 500 lines" have it the wrong way around,
> rather like trying to steer a donkey by grabbing its tail. Take
> care of your software structure and your file sizes will take
> care of themselves.
Exactly. Note that I am not suggesting all files should be larger than
500 lines ... that is yet another straw man from Tom to defend his
rather antiquated stance.
>
>>
>>Also, I work on a quad-core AMD box with 4GB of ram. I still
>>appreciate faster turn-around on build/rebuild cycles. You'd be an
>>idiot not to.
>>
>>Tom
>
> Richard Harter, c...@tiac.net
> http://home.tiac.net/~cri, http://www.varinoma.com
> Infinity is one of those things that keep philosophers busy when they
> could be more profitably spending their time weeding their garden.
--
> Spoken like someone who either works alone or without a content/
> version control system. Suppose you have 20 people on a team and all
> of your source is locked in 2 files. What do the 18 other people
> do?
The other 18 people have plenty of time to learn how to break up
a program into modules and to use and maintain a version control
system.
--
Ben Pfaff
http://benpfaff.org
Provided they don't touch overlapping sections of code, yeah, I agree
there won't be problems. But if people are doing a code review and
touching up things here and there it's easy to collide. That's why
locking a file is easier: it prevents this problem. Now if you lock a
file holding huge segments of your project, you're boned.
Not to mention the build-time speedups, which are usually
invaluable.
> > Then, I never said it was a hard written in stone limit. I have hand
>
> But you still came out with it.
It's HOW you reacted to it that is important. You threw away any
possible reasonable interpretation and went directly for "he must mean
the compiler can't handle large files." And you didn't do that
because you're being difficult, you did that because you don't know
better.
> Extra files can also introduce extra complexity.
Not really. It takes me all of 5 seconds to add a source file to a
makefile. Takes another 2 seconds to import it to CVS. Adding files
to a well maintained source tree is really easy.
> It wasn't me making all sorts of rules: it was you. I've been a programmer
> for a long time and have worked on a lot of systems. But it's you
> showing off about your experience and you making the rules, I see. First
> we have you condemning inline functions, and now large files. Yeah, I am
> playing devil's advocate a bit - but primarily because I find your
> arguments weak and similar to those of small-minded people who insist only
> their coding style will suffice.
Well I'm not a "programmer." I'm a developer. So it's my job to not
only write software but produce maintainable and manageable source
trees that stand the test of time. That includes proper tree layout,
documentation, API design rules, etc. I don't just sit and write for
loops all day long like your typical code monkey.
> > strategy and just assumed that I meant "all files must be less than
> > 500 lines because compilers can't handle 501 lines."
>
> I never said compilers couldn't handle 501. You produce yet another straw
> man.
>
> Files can be split based on functional partitioning. To split BECAUSE
> it's more than 500 lines is ridiculous with modern RCS and navigation
> tools.
It's a rule of thumb. Stop being so obtuse. I said that if you're
writing a SINGLE function that approaches 500 lines, chances are good
you can factor functionality out of it. That doesn't
mean there aren't exceptions. But it's a very common rookie mistake
to put all your code in one basket. 500 was just a number I pulled
out of thin air too. You can obviously factor smaller functions.
But you're being obtuse for argument's sake...
Tom
Depends on the length and complexity. You seem to be of the school
where longer compilation times == greater success. That sort of
thinking leads to people bragging about "2 million lines of code" and
"8 hour build times" ...
I consider it a design flaw if changing a single line of code results
in a turn-around time [after an initial build] of longer than 5
seconds [+/- a few for network traffic].
I also don't code in C++ because I've never found a use for it. But
that's another topic for another usenet group for another day.
Tom
Branching is another topic altogether. People can commit changes to
different branches without fear of collisions; that's what branches
are for. But I'd like to hear how, if people commit changes to the
same lines of code in the same file on the same branch, there are
NOT collisions.
Tom
Yeah, what I'm commenting on is if I ask someone to review a subsystem
of a library, we usually are fairly tight knit on who's working on
what. So if they say "I'm working on point addition" then I avoid
that file altogether, etc, whatever. Whereas if I put all my ECC code
in "ecc.c" he might be making changes outside the function in question
[say he notices a typo or bug elsewhere].
> > 2. It speeds up build/rebuild times while testing new code.
> > 3. It makes it easier to "smart" link code, as not all linkers can do
> > per-function linking; they usually link per object file.
> > 4. I find it generally easier to work on smaller files, especially
> > when what I'm looking for isn't hidden in the middle of a 3,000 line
> > file... but that's just MHO.
>
> Some text editors (Emacs in particular) have a mode in which you
> can temporarily narrow the visible portion of a file, working on
> a subset as if it were an entire file. (And I just learned
> that there's a vim plugin that does the same thing.)
Cool. You can also do that by not putting all of your code in one
file.
> > Usually, the smart thing to do is sort your source tree with these
> > things called directories. So finding files should be easy.
>
> Unless you've got thousands of them.
Start early. Trying to take a bad project and make it good is more
work than if you just started properly in the first place. Get the
build infrastructure in from the ground up, etc...
Tom
My reaction to your reply wasn't that I thought you thought all files
should be larger than 500 lines; instead, I took from your initial
reply that you thought I didn't think compilers could handle files
larger than 500 lines (because I use antiquated tools...).
As I've said a half dozen times already it's just a rule of thumb. If
you're writing a function and it's starting to get long, chances are
there are ways to factor it. Not always, but it's something to look
for.
Tom
> On Dec 15, 12:40 pm, Richard <rgrd...@gmail.com> wrote:
>> Yet you don't seem to know how they work. It is quite usual for many
>> people to work on the same files. It's why these systems exist.
>
> Provided they don't touch overlapping sections of code.
No. Even if they do. And this happens a lot on core files. It's why
merging is a skill.
You cannot possibly separate the code out in advance so that only
one person will ever work on one area at a time.
If you have a set of smaller files, a decent build system will compile
them in parallel on appropriate hardware. I tend to favour more smaller
files over fewer, bigger files for that reason.
> And how does it affect building other than making it take longer because
> of having to deal with hundreds of files instead of dozens?
Improved parallelism.
> (I don't know how these things work in teams; maybe only one member has
> build/run privileges? Or can anyone build and test a project that
> includes half-finished modules from other team members? Or does each
> person just test, independently, a small portion of the project in a
> test setup. That still doesn't explain this arbitrary file line-limit.)
In a well run team, people only check in tested code to a shared stream.
--
Ian Collins
<snip>
> The code base for a 20
> person team would be on the order of 100,000 lines of code.
For a month, tops, if my experience is anything to go by. If the
project is mature and can justify 20 programmers for maintenance and
developing new features, you're at least an order of magnitude out.
I agree with your general point, however.
As for collisions, CVS is very, very good at managing them.
<snip>
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
> In <4b27b4b3...@text.giganews.com>, Richard Harter wrote:
>
> <snip>
>
>> The code base for a 20
>> person team would be on the order of 100,000 lines of code.
>
> For a month, tops, if my experience is anything to go by. If the
> project is mature and can justify 20 programmers for maintenance and
> developing new features, you're at least an order of magnitude out.
>
> I agree with your general point, however.
>
> As for collisions, CVS is very, very good at managing them.
>
> <snip>
CVS is crap at handling collisions. You really need to go to a
"hunk"-based RCS. Using CVS would give me the heebie-jeebies after
using Git.
My point (I won't speculate on what Richard's point might have been)
is that, with a decent source control system, putting each function
in its own file isn't very helpful in avoiding collisions. If two
people work on the same file at the same time, and they modify
disjoint portions of the file, then merging the changes after the
fact is usually quite straightforward. If they modify the same
portion of the same file, then they would have run into the same
problem even if each function were in its own source file.
This isn't an argument *against* putting each function in its own
source file; I'm just saying that doing so doesn't buy as much as
you seem to be saying it does.
Again, my experience is that modern source control systems don't
lock files, so any concerns about problems caused by file locking
are largely irrelevant. (This wasn't always so; I've used shared
RCS systems in the past.)
I fully agree with this. I guess my point here though is it's easy to
stay focused on your own segment of code if you have your own file to
work on. While the people in my group interact often, we're still
individuals and we don't always wait for the word before moving on to
the next piece of code to review.
So far we haven't had any collision problems yet.
> This isn't an argument *against* putting each function in its own
> source file; I'm just saying that doing so doesn't buy as much as
> you seem to be saying it does.
I guess it's a question of discipline. But there are more reasons
than just this to keep files specialized.
> Again, my experience is that modern source control systems don't
> lock files, so any concerns about problems caused by file locking
> are largely irrelevant. (This wasn't always so; I've used shared
> RCS systems in the past.)
You can lock files via CVS. We tend not to here because of the non-
overlap of work, but it is possible.
Tom
I agree with Richard nosurname. I've worked on a number of projects with
a number of sizes of team. Even ones where everyone was logged in to the
same account on the Vax editing files, so you actually could edit the
copy someone else was working on if you wanted! Collisions in all
instances happened rarely because members of the team talk to each other
so they know what each other is doing, and the project/product manager
also knows what everyone is doing. Yes, using CVS and Subversion we've
had the occasional conflict, but they've not been hard to resolve.
I group a set of logically related functions into a file, and if that
makes the file big then it is big. It means I can use static file scope
variables when that makes sense, have shared static helper functions,
and in general keep the interface as tight as possible.
>>> appreciate faster turn-around on build/rebuild cycles. You'd be an
>>> idiot not to.
>> Yes, but it's another one of your straw men. My point is that your
>> arbitrary 500 line limit is bullshit.
>
> First, let me do my impression of you. "ZOMG TOM SAID SOMETHING THAT
> I CAN DISAGREE WITH, *drool*, *wipe face*, I SIMPLY HAVE TO POST A
> REPLY!!!!"
>
> Then, I never said it was a hard written in stone limit. I have hand
> written files that span into 6-7-8 hundred lines long. As a general
> rule though if you're writing something [by hand] that gets over 500
> lines, there is very likely [but not always] a chance to re-factor the
> code to make it easier to work with [and/or a chance for code re-
> use].
I don't find the larger files cause a problem with code reuse, since the
larger file has a load of logically related functions so when you are
reusing one you generally want most of the others anyway, and if not it
is still not large enough for a few extra functions to be a big deal.
> That's the difference between people like me [with experience] and
> people like you [think they know everything].
I disagree with you and I have a fair bit of experience, and have used
everything from manual version control (physically passing files between
people on floppy disk), through having a configuration control system
which was a *person*, through rcs (it was very rare to find a file
locked, since we actually talked to each other in the team), through
CVS, Subversion, MS VCC and maybe a few others I've forgotten.
> We can say things like
> "most files shouldn't be longer than 500 lines" and understand that it
> means "most files shouldn't be super long because you'll probably be
> able to factor the code better and achieve code reuse." Whereas you,
> with little experience didn't know about that sort of development
> strategy and just assumed that I meant "all files must be less than
> 500 lines because compilers can't handle 501 lines."
I disagree. I don't find that a file of over a thousand lines is hard to
control: developers should know what each other are working on, so they
know where to expect changes (and so what they might find breaks);
commits should be frequent; all of which means rare conflicts which are
easily dealt with.
> tl;dr, sometimes you just have to know when to shut up.
Sometimes you should admit that other peoples experience is also valid.
--
Flash Gordon
Well, OK. I'd never really thought of a development cycle as being
time-consuming enough to warrant parallel processing (perhaps because in my
case, even working with microprocessors and floppy disks in the 1980s, I
made sure my development cycles hardly ever took more than a second or so,
no matter how complex the project).
Anyway unless everything really is in just one giant file, surely a typical
project will have enough files in it to keep any multi-processor system
happy, even with thousands of lines per file?
--
Bartc
In my book, a code review involves having the author of the code and a
number of other people sitting down together to discuss it before *any*
changes are made, so the author can learn from anything which is pointed
out, or can correct the corrections if the reviewers are wrong. The
reviewers will *read* the code in advance, just not change it.
> So far we haven't had any collision problems yet.
I've had collisions, but never real problems.
>> This isn't an argument *against* putting each function in its own
>> source file; I'm just saying that doing so doesn't buy as much as
>> you seem to be saying it does.
>
> I guess it's a question of discipline. But there are more reasons
> than just this to keep files specialized.
There are also reasons for keeping several related functions all of
which are externally visible in the same file. It's not black-and-white.
>> Again, my experience is that modern source control systems don't
>> lock files, so any concerns about problems caused by file locking
>> are largely irrelevant. (This wasn't always so; I've used shared
>> RCS systems in the past.)
>
> You can lock files via CVS. We tend not to here because of the non-
> overlap of work, but it is possible.
In which case the code being in larger files would not cause any problems
either.
--
Flash Gordon
If you change the same line of code there is a collision: the person
trying to do the commit gets notified, knows that someone else has
modified it (can check who and why from the log), and can then work out
the appropriate corrective action. This is not a problem any more than
the file being locked preventing such modification is a problem. In
either case people have to look at the different changes which are
either being done, or which they want to do, and work out the correct
way to incorporate both changes. The key is that we are dealing with
people, and people are able to talk to each other.
--
Flash Gordon
Really? Maybe you should give the Linux or OpenSolaris kernel folks
some advice!
I tend to compile and run tests very frequently, so build times are an
issue.
> Anyway unless everything really is in just one giant file, surely a
> typical project will have enough files in it to keep any multi-processor
> system happy, even with thousands of line per file?
It probably will. There might be issues with individual compiles of
very large files hogging a disproportionate amount of memory, but these
tend to come out in the wash as well.
--
Ian Collins
> > Yet again your assumptions are totally wrong.
It would have been handy if you'd quoted him, though...
> > And who said anything about ALL functions locked up in 2 files? Also,
> > did you never bother to investigate more modern RCS which can handle
> > hunks from within a file? A file is nothing more than a user view of
> > data anyway in more advanced set ups....
>
> I've worked with git, svn, cvs, and even clearcase.
Ah ClearCase. I feel your pain.
> Collisions happen all the time and they're nasty.
Why doesn't ClearCase handle these collisions? If people are working
in different parts of the file (e.g. different functions) then there is
no collision problem. If they are working on the same function then
they collide no matter how many micro files you have.
> That's why file locks exist. The
> fewer resources you lock the better.
so don't lock files
> > > appreciate faster turn-around on build/rebuild cycles. You'd be an
> > > idiot not to.
>
> > Yes, but it's another one of your straw men. My point is that your
> > arbitrary 500 line limit is bullshit.
>
> First, let me do my impression of you.
<snip rudeness>
> Then, I never said it was a hard written in stone limit. I have hand
> written files that span into 6-7-8 hundred lines long. As a general
> rule though if you're writing something [by hand] that gets over 500
> lines, there is very likely [but not always] a chance to re-factor the
> code to make it easier to work with [and/or a chance for code re-
> use].
Why does large file size indicate a need to refactor? Why does file
size affect code reuse?
> That's the difference between people like me [with experience]
> and people like you
I'd do some googling if I were you
> [think they know everything].
He's annoying, isn't he? He's also right more often than we'd sometimes
like.
> We can say things like
> "most files shouldn't be longer than 500 lines" and understand that it
> means "most files shouldn't be super long because you'll probably be
> able to factor the code better and achieve code reuse." Whereas you,
> with little experience didn't know about that sort of development
> strategy and just assumed that I meant "all files must be less than
> 500 lines because compilers can't handle 501 lines."
>
> tl;dr, sometimes you just have to know when to shut up.
oh yes
<snip>
> > Files can be split based on functional partitioning. To split BECAUSE
> > it's more than 500 lines is ridiculous with modern RCS and navigation
> > tools.
>
> It's a rule of thumb. Stop being so obtuse. I said that if you're
> writing a SINGLE function that approaches 500 lines, chances are good
> you can factor functionality out of it.
no you didn't. If you'd said *that* I wouldn't be arguing (much). This
is an exact quote "most source files should be less than 500 lines at
most anyways". The sentence was rather longer but I haven't distorted
what you said by taking you out of context.
You can run a single instance of Emacs with windows displayed on
different computers, so it's possible for two people to edit the same
file at the same time and see each others' changes as they are typed.
-- Richard
--
Please remember to mention me / in tapes you leave behind.
We did not have tools that sophisticated; my point was that it is easy
to manage even with large files and no fancy technology, as long as you
follow sensible development practice.
--
Flash Gordon