Errors when cross-compiling the kernel

David Taylor

Dec 14, 2013, 9:46:11 AM
Trying to cross-compile the Raspberry Pi kernel on a Debian 7 virtual
PC. I've got quite a way through the process, and it seems to start
compiling, but I'm getting the following error:

HOSTLD scripts/genksyms/genksyms
CC scripts/mod/empty.o
/home/david/kern/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/bin/../lib/gcc/arm-bcm2708-linux-gnueabi/4.7.1/../../../../arm-bcm2708-linux-gnueabi/bin/as:
error while loading shared libraries: libz.so.1: cannot open shared
object file: No such file or directory

Any ideas? I can't find libz.so anywhere....

Thanks,
David
--
Cheers,
David
Web: http://www.satsignal.eu
Message has been deleted

gregor herrmann

Dec 14, 2013, 10:20:08 AM
On Sat, 14 Dec 2013 14:46:11 +0000, David Taylor wrote:

> HOSTLD scripts/genksyms/genksyms
> CC scripts/mod/empty.o
> /home/david/kern/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/bin/../lib/gcc/arm-bcm2708-linux-gnueabi/4.7.1/../../../../arm-bcm2708-linux-gnueabi/bin/as:
> error while loading shared libraries: libz.so.1: cannot open shared
> object file: No such file or directory
>
> Any ideas? I can't find libz.so anywhere...

It's in the zlib1g package, maybe this one isn't installed?

gregor
--
.''`. Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06
: :' : Debian GNU/Linux user, admin, and developer - http://www.debian.org/
`. `' Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe
`- NP: Bettina Wegner: Auf der Wiese

David Taylor

Dec 14, 2013, 11:10:39 AM
On 14/12/2013 14:58, Paul Berger wrote:
> David Taylor wrote:
>
>> Any ideas? I can't find libz.so anywhere....
>
> Maybe this page can help:
> http://packages.debian.org/search?searchon=contents&keywords=libz.so

Thanks, Paul. I wouldn't have known about that page, as I'm more of a
beginner with Linux. Now to see why it's not in the RPi download. The
git fetch/checkout "download" was faulty, so I had to use the .tar
download. But, the exact same .tar download /did/ compile on the RPi.

Right, more progress. From searching with Google (yes, I should have
done this first, but I thought it was just me being ham-fisted) it seems
that the problem is that libz.so is actually a host library, and not a
Raspberry Pi one. Further, as I'm using 64-bit Linux on a virtual PC, I
need to install the 32-bit version of certain libraries, so the next
part of the magic spell (it seems like that at times!) is:

sudo dpkg --add-architecture i386 # enable multi-arch
sudo apt-get update

Then run:
sudo apt-get install ia32-libs

It's taken many days to get this far!

David Taylor

Dec 14, 2013, 11:52:32 AM
On 14/12/2013 15:20, gregor herrmann wrote:
> On Sat, 14 Dec 2013 14:46:11 +0000, David Taylor wrote:
>
>> HOSTLD scripts/genksyms/genksyms
>> CC scripts/mod/empty.o
>> /home/david/kern/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/bin/../lib/gcc/arm-bcm2708-linux-gnueabi/4.7.1/../../../../arm-bcm2708-linux-gnueabi/bin/as:
>> error while loading shared libraries: libz.so.1: cannot open shared
>> object file: No such file or directory
>>
>> Any ideas? I can't find libz.so anywhere...
>
> It's in the zlib1g package, maybe this one isn't installed?
>
> gregor

Thanks, Gregor. As I mentioned to Paul, installing the 32-bit host
libraries on the 64-bit Linux I was using fixed the compile problem. It
now remains to be seen whether I am brave enough to try my own
cross-compiled kernel on a real system. Yes, I will be using a spare SD
card imaged from the existing working one! Very nice to be able to
"backup" and "restore" cards on a house PC.

The Natural Philosopher

Dec 14, 2013, 1:18:16 PM
On 14/12/13 16:10, David Taylor wrote:

>
> It's taken many days to get this far!

When I were a lad you had to write the compiler...

--
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to
lead are elected by the least capable of producing, and where the
members of society least likely to sustain themselves or succeed, are
rewarded with goods and services paid for by the confiscated wealth of a
diminishing number of producers.

Guesser

Dec 14, 2013, 1:59:27 PM
On 14/12/2013 18:18, The Natural Philosopher wrote:
> On 14/12/13 16:10, David Taylor wrote:
>
>>
>> It's taken many days to get this far!
>
> When I were a lad you had to write the compiler...
>
didn't we already do all that a couple of months ago?

David Taylor

Dec 14, 2013, 2:17:51 PM
On 14/12/2013 18:18, The Natural Philosopher wrote:
> On 14/12/13 16:10, David Taylor wrote:
>
>>
>> It's taken many days to get this far!
>
> When I were a lad you had to write the compiler...

My first programming task was updating the Assembler on an IBM 1130 to
accept free-format input, to suit the paper tape the department was
using rather than punched cards.

It was, IIRC, easier than dealing with Linux!

Mel Wilson

Dec 14, 2013, 4:14:32 PM
David Taylor wrote:

> On 14/12/2013 18:18, The Natural Philosopher wrote:
>> On 14/12/13 16:10, David Taylor wrote:
>>
>>>
>>> It's taken many days to get this far!
>>
>> When I were a lad you had to write the compiler...
>
> My first programming task was updating the Assembler on an IBM 1130 to
> accept free-format input, to suit the paper tape the department was
> using rather than punched cards.
>
> It was, IIRC, easier than dealing with Linux!

Well yeah. IBM 1130. In those days machines were too simple and stupid to
look out for themselves. GE-415 the same. Do whatever it was told. Had no
choice.

Mel.

Michael J. Mahon

Dec 14, 2013, 11:23:42 PM
Anything you're used to is easy. Anything you're not used to is hard. ;-)
--
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon

David Taylor

Dec 15, 2013, 9:30:31 AM
On 15/12/2013 04:23, Michael J. Mahon wrote:
> Mel Wilson <mwi...@the-wire.com> wrote:
[]
>> Well yeah. IBM 1130. In those days machines were too simple and stupid to
>> look out for themselves. GE-415 the same. Do whatever it was told. Had no
>> choice.
>>
>> Mel.
>
> Anything you're used to is easy. Anything you're not used to is hard. ;-)

Yes, there's an element (or 14) of truth in that. But waiting 15
minutes (or 5 hours if you compile on the RPi) to find that something is
wrong is definitely not as productive as using e.g. Delphi on the PC.
Virtually instant, and you can check out each change step by step.

I wrote up what I've found so far:
http://www.satsignal.eu/raspberry-pi/kernel-cross-compile.html

BTW: On the 1130 the only error message you got back was "Error". Not
even a line number....

The Natural Philosopher

Dec 15, 2013, 10:31:29 AM
always compile first on a native target if only to check the code for
syntax errors...

Always use Make to ensure that you didn't recompile more than you need
to at any given stage..

then when you have the code working in a simulator/emulator, then burn
your ROM or Flash..

David Taylor

Dec 15, 2013, 12:43:47 PM
On 15/12/2013 15:31, The Natural Philosopher wrote:
[]
> always compile first on a native target if only to check the code for
> syntax errors...
>
> Always use Make to ensure that you didn't recompile more than you need
> to at any given stage..
>
> then when you have the code working in a simulator/emulator, then burn
> your ROM or Flash..

Good in theory, but....

When a compile takes a significant part of the day (as with compiling
the kernel on the RPi), making multiple runs is extremely time
consuming! Unfortunately, even if you want to change just one option,
if it's your first compile it still takes almost all the working day.

What simulator would you recommend for the Raspberry Pi kernel?

BTW: the problem arises because the supplied kernel was compiled with
tickless, which makes the kernel-mode GPIO/PPS work very poorly.
Changing this one flag makes a worthwhile improvement bringing the
averaged NTP jitter down from 3.9 microseconds to 1.2 microseconds, with
similar improvements in offset, and correcting an NTP reporting error.

Cheers,
David
Message has been deleted

The Natural Philosopher

Dec 15, 2013, 9:37:25 PM
On 16/12/13 01:03, Dennis Lee Bieber wrote:
> On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
> <david-...@blueyonder.co.uk.invalid> declaimed the following:
>
>>
>> When a compile takes a significant part of the day (as with compiling
>> the kernel on the RPi), making multiple runs is extremely time
>> consuming! Unfortunately, even if you want to change just one option,
>> if it's your first compile it still takes almost all the working day.
>>
> I spent nearly 6 months in the early 80s in an environment where two
> builds a day (for a single application) was a good day. Worse was having to
> message the sysop to "kill the rabble" (RABL, for ReenABLe -- a batch job
> I'd written designed to clear out "connection" state from a "database"); it
> meant I'd concluded the entire database needed to be rebuilt (not an
> operational database -- though the application itself wasn't considered a
> database app; it was a requirements traceability tool, lacking dynamic
> table creation -- a few added capabilities would have given it relational
> algebra).
>
> We were porting a FORTRAN-IV application to something I call
> FORTRAN-minus-2. The ported code ended up filled with variables named: inx,
> linx, jinx, minx, etc. as
>
> call xyz(a-1, b+2, a+b)
> had to be converted to
>
> inx = a-1
> linx = b+2
> jinx = a+b
> call xyz(inx, linx, jinx)
>
> as the compiler could not handle expressions as arguments in a subroutine
> (or function) call.
>
try coding in C for a 6809 then ...with 256k of memory in paged
banks...all the library code was 'select which ROM bank to use, call the
function, get something back in the registers restore ROM bank that
called you and return;'

We had a DSP co-processor too. A 400MHz digital scope, that was. I'd have
KILLED for a Pi.

Rob

Dec 16, 2013, 4:15:05 AM
Dennis Lee Bieber <wlf...@ix.netcom.com> wrote:
> On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
> <david-...@blueyonder.co.uk.invalid> declaimed the following:
>
>>
>>When a compile takes a significant part of the day (as with compiling
>>the kernel on the RPi), making multiple runs is extremely time
>>consuming! Unfortunately, even if you want to change just one option,
>>if it's your first compile it still takes almost all the working day.
>>
> I spent nearly 6 months in the early 80s in an environment where two
> builds a day (for a single application) was a good day.

I agree with you. The kids today can only develop in an IDE where they
can just compile&run with a keypress and have results in a second.
We used to have to wait for hours before the project was compiled and
new tests could be done.

In fact my first experience with programming was in an RJE environment
where you had to submit your source (on cards) and have them back with
a listing (with run results or a listing with syntax errors) the next
working day.

I can tell you this makes you think twice before you code something.
My first program (of course a trivial one) in fact compiled OK on the
first try! But that was after spending most of the afternoon to check
and double-check (and more) to make sure it was OK, and after the
teacher assured me that it would be impossible to get it OK the first
time.

Having quick turnaround for compile&run IMHO leads to poor software
quality, because the tendency is to get functionality OK by trial and
error (running it until it no longer fails with the test cases at hand)
instead of by carefully looking at the algorithm and its implementation.

Michael J. Mahon

Dec 16, 2013, 12:07:47 PM
Glad you said this, because I was about to. ;-)

With one turnaround per day, plus a core dump (yes, it was core memory) on
execution errors, *every* data structure in memory was painstakingly
examined to find multiple problems per compile-execute cycle. Of course the
stack--and the stack "residue" beyond the current top-of-stack--was one of
the first data structures examined forensically.

Any detail that was not exactly as expected resulted in either finding a
latent error or revising my understanding of the program's behavior, or
(often) both.

The result was that after several cycles, my understanding of the
implications of the code I had written and its interactions with the
hardware/software environment was richly improved. My confidence in the
code that worked was substantiated and my corrections to code that failed
were well thought out.

I sometimes found both compiler and OS bugs as well as my own, many of
which did not actually prevent my code from getting correct answers!

When computer cycles are precious, brain cycles are required to wring the
maximum amount of information from each trial run. The effects on both code
quality and programmer confidence (and humility) are remarkable.

My experience managing today's programmers is that they frequently have no
idea what their code actually does during execution. They are often amazed
when they discover that their use of dynamic storage allocation is wasting
90% of the allocated memory, or that a procedure is being executed two
orders of magnitude more frequently than they expected! And their tools
and tests, combined with their inaccurate understanding of their code's
behavior, prevent them from finding out.

They are very poorly prepared to program for performance, since, for
example, they have no practical grasp that a cache miss costs an order of
magnitude more than a hit, and a page miss, perhaps four orders of
magnitude.

Interactive programming does not preclude the development of craft, but it
apparently significantly impedes it.

All this becomes practically hopeless in modern application environments
where one's code constantly invokes libraries, that call libraries, etc.,
etc., until "Hello, world" requires thirty million instructions and has a
working set of a hundred megabytes!

Such progress in the name of eye candy...

David Taylor

Dec 16, 2013, 12:15:41 PM
On 16/12/2013 09:15, Rob wrote:
[]
> Having quick turnaround for compile&run IMHO leads to poor software
> quality, because the tendency is to get functionality OK by trial and
> error (running it until it no longer fails with the test cases at hand)
> instead of by carefully looking at the algorithm and its implementation.

Yes, I also remember the days of queuing, or waiting overnight for a
run's output to be returned.

I disagree with you about today's development, though. My experience
with C/C++ suggests that it's too slow. Having to wait a few minutes to
see the effect of a change encourages developers to change too much at
once, rather than a line at a time. I find that with Delphi - where it
really is the instant compile and run you criticise - I make much
smaller changes and can be sure that each change has worked before
introducing the next.

I hope the Raspberry Pi encourages similar developments.

(And I think that algorithms are very important. Many people seem to
want to do (or to get the compiler to do) minor optimisations of code
which may work well only on one processor family, whereas my own
experience suggests that using a profiler to find out where the delays
are /really/ happening has most often pointed to regions of the program
where I was not expecting there to be delays, pointing either to less
than optimum algorithm design or, in one case, some debug code which had
been left in.)

gregor herrmann

Dec 16, 2013, 4:46:54 PM
On Sun, 15 Dec 2013 14:30:31 +0000, David Taylor wrote:

> I wrote up what I've found so far:
> http://www.satsignal.eu/raspberry-pi/kernel-cross-compile.html

Short addition to your script:

With something like

#v+
# arch/arm/configs/bcmrpi_defconfig
export PLATFORM=bcmrpi
ARCH=arm CROSS_COMPILE=${CCPREFIX} make ${PLATFORM}_defconfig
#v-

you can use the default config instead of an existing one or going
through menuconfig manually.

(Useful if you want to switch to e.g. the rpi-3.10.y branch and don't
have an existing config as a starting point.)

gregor
--
.''`. Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06
: :' : Debian GNU/Linux user, admin, and developer - http://www.debian.org/
`. `' Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe
`- NP: Nick Cave And The Bad Seeds: Fable Of The Brown Ape

Martin Gregorie

Dec 16, 2013, 5:49:45 PM
On Mon, 16 Dec 2013 02:37:25 +0000, The Natural Philosopher wrote:

> try coding in C for a 6809 then ...with 256k of memory in paged
> banks...all the library code was 'select which ROM bank to use, call the
> function, get something back in the registers restore ROM bank that
> called you and return;'
>
Out of curiosity, which OS were you using?

I've used uniFlex on SWTPc boxes but don't remember jumping through those
hoops (though we were writing in the Sculptor 4GL, which compiled to an
intermediate interpreted form (and bloody fast too) rather than all the
way to binary).

I've also got considerable time with OS-9, though on a 68000 rather than
as level 1 or 2 on a 6809, but am certain that, as level 2 managed memory
in 4K chunks, it was nothing like as convoluted as the stuff you're
describing. In fact, once I'd replaced the Microware shell with the EFFO
one, it was a real pleasure to use.


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |

Martin Gregorie

Dec 16, 2013, 6:27:46 PM
On Mon, 16 Dec 2013 17:15:41 +0000, David Taylor wrote:

> I disagree with you about today's development, though. My experience
> with C/C++ suggests that it's too slow. Having to wait a few minutes to
> see the effect of a change encourages developers to change too much at
> once, rather than a line at a time. I find that with Delphi - where it
> really is the instant compile and run you criticise - I make much
> smaller changes and can be sure that each change has worked before
> introducing the next
>
What are you running on?

My fairly average rig (dual core 3.2 GHz Athlon, 4GB RAM, running Fedora
18 and using the GNU C compiler) compiles and links 2100 statements /
600K of code in 1.1 seconds. A complete regression test suite (so far
amounting to 21 test scripts) runs in 0.38 seconds. All run from a
console with make for the compile and bash handling regression tests,
natch, natch.

Put it this way: the build runs way too fast to see what's happening
while it's running. The regression tests are the same, though, as you
might hope, they only display script names and any deviations from
expected results.

> I hope the Raspberry Pi encourages similar developments.
>
It does since it has the same toolset. Just don't expect it to be quite
as nippy, though intelligent use of make to minimise the amount of work
involved in a build makes a heap of difference. However, it's quite a bit
faster than my old OS-9/68000 system ever was, but then again that was
cranked by a 25MHz 68020 rather than an 800MHz ARM.

I really cut my teeth on an ICL 1902S running a UDAS exec or George 2 and,
like others have said, never expected more than one test shot per day per
project: the machine was running customers' work during the day, so we
basically had an overnight development slot and, if we were dead lucky,
sometimes a second lunchtime slot while the ops had lunch - if we were
prepared to run the beast ourselves.

You haven't really programmed unless you've punched your own cards and
corrected them on a 12 key manual card punch....

but tell that to the kids of today....

> (And I think that algorithms are very important.
>
Yes.

> some debug code which had been left in.)
>
I always leave that in, controlled by a command-line option or the
program's configuration file. Properly managed, the run-time overheads
are small but the payoff over the years from having well thought-out
debugging code in production programs is immense.

David Taylor

Dec 17, 2013, 3:54:19 AM
Martin,

I'm running on a quad-core Windows 7/64 system, and judging the time
taken to compile the 9 programs in the NTP suite using Visual Studio
2010. These are almost always a compile from scratch, and not a
recompile where little will have changed. Your 1.1 second figure would
be more than acceptable, and very similar to what I see when using
Embarcadero's Delphi which is my prime development environment.

On the RPi I have used Lazarus which is similar, and allows almost
common code between Windows and Linux programs.

Cards were used by the Computer Department at university when they
bought an IBM 360, and a room full of card punches was rather noisy! I
can't recall now whether it was noisier than the room full of 8-track
paper tape Flexowriters we at the Engineering Department were using, and
yes, we did patch those by hand at times. Almost all of the access to
the IBM 1130 we had was hands-on by the researchers and some undergraduates.

Leaving debug code in is a good idea, except when it accounts for 90% of
the program's execution time as seen by a real-time profiler. I do
still try and make my own code as compact as possible, but particularly
as fast as possible, and the profiler has been a big help there. I
haven't done any serious debugging on the RPi, though - it's been more
struggling with things like a GNU Radio build taking 19 hours and then
failing!

Paul

Dec 17, 2013, 5:01:33 AM
In article <l8ncft$4p2$1...@dont-email.me>, david-
tay...@blueyonder.co.uk.invalid says...
>
> On 16/12/2013 09:15, Rob wrote:
> []
> > Having quick turnaround for compile&run IMHO leads to poor software
> > quality, because the tendency is to get functionality OK by trial and
> > error (running it until it no longer fails with the test cases at hand)
> > instead of by carefully looking at the algorithm and its implementation.
>
> Yes, I also remember the days of queuing, or waiting overnight for a
> run's output to be returned.
>
> I disagree with you about today's development, though. My experience
> with C/C++ suggests that it's too slow. Having to wait a few minutes to
> see the effect of a change encourages developers to change too much at
> once, rather than a line at a time. I find that with Delphi - where it
> really is the instant compile and run you criticise - I make much
> smaller changes and can be sure that each change has worked before
> introducing the next

I find that an instant graphical interface (make a change, compile and run)
encourages the youngsters to try ANYTHING to fix a problem and not use
any form of version control. Then they go off fixing everything else
they have now broken, because they did not acquire data first to find
out where the problem may be, then use debugs or other data to prove
the area of fault, then prove what the fault is, if necessary using
pencil, paper and a bit of grey matter.

> I hope the Raspberry Pi encourages similar developments.
>
> (And I think that algorithms are very important. Many people seem to
> want to do (or to get the compiler to do) minor optimisations of code
> which may work well only on one processor family, whereas my own
> experience suggests that using a profiler to find out where the delays
> are /really/ happening has most often pointed to regions of the program
> where I was not expecting there to be delays, pointing either to less
> than optimum algorithm design or, in one case, some debug code which had
> been left in.)

Most people want to put any old code down first, not interested in
algorithm or design etc..


--
Paul Carpenter | pa...@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/pi/> Raspberry Pi Add-ons
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.gnuh8.org.uk/> GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/> For those web sites you hate

David Taylor

Dec 17, 2013, 6:31:24 AM
On 17/12/2013 10:01, Paul wrote:
[]
> I find that instant grahpical interface make a change, compile and run
> ebcourages the youngsters to try ANYTHING to fix problem and not use
> any form of version control. Then they go off fixing everything else
> they have now broken because they did not acquire data first, to find
> out where the problem maybe then use debugs or other data to prove
> the area of fault, then prove what the fault is if necessary using
> pencil, paper and a bit of grey matter.
[]

If that's the case, surely they should be better trained in using the
tools, rather than deliberately making the tools slower and more
difficult to use? Give points for algorithm design!

(That originally came out as "give pints" - might be something in that!)

Michael J. Mahon

Dec 17, 2013, 10:59:58 AM
Certainly training would help, but the critical missing
ingredient--necessitated by cumbersome tools--is the development of
engineering discipline...and that is always in short supply.

Martin Gregorie

Dec 17, 2013, 5:55:04 PM
On Tue, 17 Dec 2013 08:54:19 +0000, David Taylor wrote:

> On the RPi I have used Lazarus which is similar, and allows almost
> common code between Windows and Linux programs.
>
I don't know about Lazarus: but the C source is identical on the RPi
since it uses the same GNU C compiler and make that all Linux systems use.

> I can't recall now whether it was noisier than the room full of 8-track
> paper tape Flexowriters we at the Engineering Department were using, and
> yes, we did patch those by hand at times.
>
I used those at Uni, but they were feeding an Elliott 503, a set of huge
grey boxes housing solid state electronics but made entirely with
discrete transistors. It compiled Algol 60 direct from paper tape and,
embarrassingly, no matter what I tried on the 1902S, I was never able to
come near the Elliott's compile times: just shows the inherent superiority
of 50 microsecond core backing store over 2800 rpm disk drives.

> Leaving debug code is a good idea, except when it accounts for 90% of
> the program's execution time as seen by a real-time profiler.
>
In that case it was done very badly. The trick of minimising overhead is
to be able to use something like:

if (debug)
{
    /* debug tests and displays */
}

rather than leaving, e.g. assertions, inline in live code or, worse,
having debugging code so interwoven with the logic that it can't be
disabled during normal operation. I agree that the overheads of that
approach are high, whereas the overheads of several "if (debug)..."
statements are about as low as it's possible to get.
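
For illustration, a minimal sketch of that style in C (the flag name, the
--debug option and process() are all invented for the example; the flag
could equally be set from a configuration file):

#include <stdio.h>
#include <string.h>

static int debug = 0;

static void process(int value)
{
    if (debug)
    {
        fprintf(stderr, "debug: processing value=%d\n", value);
    }
    /* normal work here */
    printf("result: %d\n", value * 2);
}

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "--debug") == 0)
        debug = 1;

    process(21);
    return 0;
}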

mm0fmf

Dec 17, 2013, 7:03:10 PM
On 17/12/2013 22:55, Martin Gregorie wrote:
> The trick of minimising overhead is
> the be able to use something like:
>
> if (debug)
> {
> /* debug tests and displays */
> }
>

I think you mean
if (unlikely(debug))
{
debug stuff
}

If you want low impact, then tell the compiler it isn't likely so it can
twiddle the branch prediction stuff.
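
Outside the kernel, unlikely() isn't standard C, but with GCC or Clang it
can be defined via __builtin_expect(); a minimal sketch under that
assumption:

#include <stdio.h>

/* the kernel's unlikely() hint boils down to this builtin */
#define unlikely(x) __builtin_expect(!!(x), 0)

static int debug = 0;   /* e.g. set from a command-line option */

int main(void)
{
    if (unlikely(debug))
    {
        fprintf(stderr, "debug: extra diagnostics here\n");
    }
    puts("normal work here");
    return 0;
}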

> rather than leaving, e.g. assertions, inline in live code or

I don't know which compiler you use, but in mine assert is only compiled
into code in debug builds. There's nothing left in a non-debug build.
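
That is the standard assert.h behaviour: defining NDEBUG for the release
build compiles the assertions away entirely. A minimal sketch:

#include <assert.h>   /* with -DNDEBUG, assert(x) expands to ((void)0) */
#include <stdio.h>

static int divide(int a, int b)
{
    assert(b != 0);   /* checked in debug builds, gone in release builds */
    return a / b;
}

int main(void)
{
    printf("%d\n", divide(10, 2));
    return 0;
}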

Andy


Rob

Dec 18, 2013, 3:38:29 AM
Martin Gregorie <mar...@address-in-sig.invalid> wrote:
> In that case it was done very badly. The trick of minimising overhead is
> to be able to use something like:
>
> if (debug)
> {
> /* debug tests and displays */
> }
>
> rather than leaving, e.g. assertions, inline in live code or, worse,
> having debugging code so interwoven with the logic that it can't be
> disabled during normal operation.

Normally in C you use the preprocessor to eliminate all debug code at
compile time when it is no longer required, so even the overhead of
the if (debug) and the size of the code in the if statement is no
longer there.
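
A minimal sketch of that approach (the DEBUG symbol and the DBG() macro are
made up for the example); build with -DDEBUG to keep the diagnostics, and
without it both the test and the arguments vanish from the object code:

#include <stdio.h>

#ifdef DEBUG
#define DBG(...) fprintf(stderr, __VA_ARGS__)
#else
#define DBG(...) ((void)0)
#endif

int main(void)
{
    DBG("debug: starting up\n");
    puts("normal work here");
    DBG("debug: shutting down\n");
    return 0;
}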

David Taylor

Dec 18, 2013, 10:07:38 AM
On 17/12/2013 22:55, Martin Gregorie wrote:
> On Tue, 17 Dec 2013 08:54:19 +0000, David Taylor wrote:
[]
>> Leaving debug code is a good idea, except when it accounts for 90% of
>> the program's execution time as seen by a real-time profiler.
>>
> In that case it was done very badly. The trick of minimising overhead is
> to be able to use something like:
>
> if (debug)
> {
> /* debug tests and displays */
> }
>
> rather than leaving, e.g. assertions, inline in live code or, worse,
> having debugging code so interwoven with the logic that it can't be
> disabled during normal operation. I agree that the overheads of that
> approach are high, whereas the overheads of several "if (debug)..."
> statements are about as low as it's possible to get.

Not necessarily bad, just doing a lot of stuff not necessary to the
production version. But now it's as you recommend - optional - using
conditional compile or boolean variables as you show.

Martin Gregorie

Dec 18, 2013, 5:17:56 PM
On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:

> I don't know which compiler you use, but in mine assert is only compiled
> into code in debug builds. There's nothing left in a non-debug build.
>
This build of which you speak is the problem with that approach: you'll
have to recompile the program before you can start debugging the problem
while I can simply ask the user to set the debug flag, do it again
and unset the debug flag.

Your recompile to turn assertions back on can take days in a real life
situation because you may need to do full release tests and get
management buy-in before you can let your user run it on live data.
Alternatively, it can take at least as long to work out what combo of
data and user action is needed to duplicate the bug and then make it
happen on a testing system. Bear in mind that Murphy will make sure this
happens on sensitive data and that as a consequence you'll have hell's
delight getting enough access to the live system to work out what
happened, let alone being able to get hold of sufficient relevant data to
reproduce the problem.

Two real world examples. In both cases we left debugging code in the
production system:

(1) The BBC's Orpheus system dealt with very complex musical data and was
used by extremely bright music planning people. I provided a debug
control screen for them so they could instantly turn on debug, repeat the
action and turn debug off: probably took 15-20 seconds to do and I'd get
the diagnostic output the next day. A significant number of times the
problem was finger trouble, easy to spot because I had their input and
easy to talk them through it too. If it was a genuine bug or something
that needed enhancement, such as searching for classical music works by
name, I had absolutely all the information we needed to design and
implement the change: input, program flow, DB access, and output.

(2) We also left debugging in a very high volume system that handled call
detail records for a UK telco. This used exactly the debug enabling
method I showed earlier and yet it still managed to process 8000 CDRs/sec
(or 35,000 phone number lookups/sec if you prefer) and that was back in
2001 running on a DEC Alpha box. As I said, the overheads of even a few
tens of "if (debug)" tests per program cycle where invisible in the
actual scheme of things.

My conclusion is that recompiling to remove well designed debugging code,
without measuring the effectiveness of doing it, is yet another example of
premature optimization.

Martin Gregorie

Dec 18, 2013, 5:21:50 PM
Indeed, but why bother unless you have actual measurements that let you
quantify the trade-off between the performance increase of removing it
and improved problem resolution in the live environment?

Dr J R Stockton

Dec 18, 2013, 2:36:47 PM
In comp.sys.raspberry-pi message <l8qko8$8do$2...@dont-email.me>, Tue, 17
Dec 2013 22:55:04, Martin Gregorie <mar...@address-in-sig.invalid>
posted:

>I used those at Uni, but they were feeding an Elliott 503, a set of huge
>grey boxes housing solid state electronics but made entirely with
>discrete transistors. It compiled Algol 60 direct from paper tape and,
>embarrassingly, no matter what I tried on the 1902S, I was never able to
>come near the Elliott's compile times: just shows the inherent superiority
>of 50 microsecond core backing store over 2800 rpm disk drives.

At one stage, I used an Elliott 905, with only paper tape - a 250
char/sec reader, and a punch (and console TTY, maybe?).

By sticking a short program onto the end of the Algol compiler, the
compiler could be persuaded to read from a BS4421 interface, initially
with a 1000 char/sec reader. By instead connecting the BS4421 to the
site Network, a speed of (IIRC) about 6000 char/sec could be obtained.


Earlier, I used an ICT/ICL 1905. Its CPU had two features not commonly
found in modern machines:

(1) A machine-code instruction "OBEY",
(2) A compartment which in ours stored the site engineer's lunch.

--
(c) John Stockton, nr London, UK. Mail via homepage. Turnpike v6.05 MIME.
Web <http://www.merlyn.demon.co.uk/> - FAQqish topics, acronyms and links;
Astro stuff via astron-1.htm, gravity0.htm ; quotings.htm, pascal.htm, etc.

mm0fmf

Dec 18, 2013, 6:45:02 PM
On 18/12/2013 22:17, Martin Gregorie wrote:
> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>
>> I don't know which compiler you use, but in mine assert is only compiled
>> into code in debug builds. There's nothing left in a non-debug build.
>>
> you'll
> have to recompile the program before you can start debugging

You may have bugs, I don't! :-)
Message has been deleted

Paul

Dec 18, 2013, 7:22:53 PM
In article <l8pcmc$2kv$1...@dont-email.me>, david-
tay...@blueyonder.co.uk.invalid says...
>
> On 17/12/2013 10:01, Paul wrote:
> []
> > I find that an instant graphical interface (make a change, compile and run)
> > encourages the youngsters to try ANYTHING to fix a problem and not use
> > any form of version control. Then they go off fixing everything else
> > they have now broken, because they did not acquire data first to find
> > out where the problem may be, then use debugs or other data to prove
> > the area of fault, then prove what the fault is, if necessary using
> > pencil, paper and a bit of grey matter.
> []
>
> If that's the case, surely they should be better trained in using the
> tools, rather than deliberately making the tools slower and more
> difficult to use? Give points for algorithm design!

In exams they do, and for documentation, but most coders and the like,
especially students, are lazy with that and want to play with code, not
writing things down.

It is not the tools but the tool using them, no matter what training.

> (That originally came out as "give pints" - might be something in
> that!)



--

Rob

Dec 19, 2013, 3:56:25 AM
David Taylor <david-...@blueyonder.co.uk.invalid> wrote:
> On 17/12/2013 10:01, Paul wrote:
> []
>> I find that an instant graphical interface (make a change, compile and run)
>> encourages the youngsters to try ANYTHING to fix a problem and not use
>> any form of version control. Then they go off fixing everything else
>> they have now broken, because they did not acquire data first to find
>> out where the problem may be, then use debugs or other data to prove
>> the area of fault, then prove what the fault is, if necessary using
>> pencil, paper and a bit of grey matter.
> []
>
> If that's the case, surely they should be better trained in using the
> tools, rather than deliberately making the tools slower and more
> difficult to use? Give points for algorithm design!

I don't propose to make tools slower, maybe a bit more difficult to
use yes. What I don't like is singlestepping etc. That encourages
fixing boundary errors by just adding a check or an offset, and also
makes developers believe that they can get a correct algorithm by
just trying test cases until it looks ok.

The Natural Philosopher

Dec 19, 2013, 4:43:32 AM
The 'IPCC' approach to coding...
..I'll get my coat..

David Taylor

Dec 19, 2013, 2:40:54 PM
On 16/12/2013 21:46, gregor herrmann wrote:
> #v+
> # arch/arm/configs/bcmrpi_defconfig
> export PLATFORM=bcmrpi
> ARCH=arm CROSS_COMPILE=${CCPREFIX} make ${PLATFORM}_defconfig
> #v-

Many thanks for that, Gregor. I'll have a play. I did see that
3.10.23+ was now the current version - and that it has drivers for DVB-T
sticks. Apart from that, anything worthwhile in 3.10? Would I need to
recompile my customised NTP?

Martin Gregorie

Dec 19, 2013, 3:56:52 PM
On Thu, 19 Dec 2013 08:56:25 +0000, Rob wrote:

> I don't propose to make tools slower, maybe a bit more difficult to use
> yes. What I don't like is singlestepping etc.
>
Why not insist on them writing proper test cases before writing or
compiling any code? 'Proper' involves specifying both inputs and outputs
(if textual output, to the letter) and including corner cases and
erroneous inputs as well as straightforward clean-path tests.

I routinely do that for my own code: write a test harness and scripts for
it. These scripts include expected results either as comments or as
expected results fields which the test harness checks.
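
For illustration only, a tiny table-driven harness in that spirit (clamp()
and the cases are invented for the example): each entry pairs inputs with
the expected result, including the corner cases.

#include <stddef.h>
#include <stdio.h>

static int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : v > hi ? hi : v;
}

struct test { int in, lo, hi, expected; };

int main(void)
{
    static const struct test tests[] = {
        {  5, 0, 10,  5 },   /* clean path         */
        { -1, 0, 10,  0 },   /* below lower bound  */
        { 11, 0, 10, 10 },   /* above upper bound  */
        {  0, 0, 10,  0 },   /* corner: exactly lo */
        { 10, 0, 10, 10 },   /* corner: exactly hi */
    };
    int failures = 0;

    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        int got = clamp(tests[i].in, tests[i].lo, tests[i].hi);
        if (got != tests[i].expected) {
            fprintf(stderr, "case %zu: expected %d, got %d\n",
                    i, tests[i].expected, got);
            failures++;
        }
    }
    printf("%s\n", failures ? "FAIL" : "all tests passed");
    return failures != 0;
}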

Martin Gregorie

Dec 19, 2013, 3:58:35 PM
Either that's pure bullshit or you don't test your code properly.

Rob

Dec 19, 2013, 4:09:01 PM
Martin Gregorie <mar...@address-in-sig.invalid> wrote:
> On Thu, 19 Dec 2013 08:56:25 +0000, Rob wrote:
>
>> I don't propose to make tools slower, maybe a bit more difficult to use
>> yes. What I don't like is singlestepping etc.
>>
> Why not insist on them writing proper test cases before writing or
> compiling any code. 'Proper' involves specifying both inputs and outputs
> (if trextual output, to the letter) and including corner cases and
> erroneous inputs as well as straight forward clean path tests.

Those that cannot devise a properly working algorithm and write the
code that implements it usually cannot write proper testcases either.

Clear examples are code to sort an array or to search a value in a
sorted array using binary search. Remember "sorting and searching"
by Donald Knuth?

It will take the typical singlestep-modify-test-again programmer many
many iterations before he will be satisfied that the code works OK,
and it will fail within an hour of first release.

The more theoretical approach will require some study but will pay
off in reliability.

gregor herrmann

Dec 19, 2013, 4:30:21 PM
On Thu, 19 Dec 2013 19:40:54 +0000, David Taylor wrote:

> On 16/12/2013 21:46, gregor herrmann wrote:
>> #v+
>> # arch/arm/configs/bcmrpi_defconfig
>> export PLATFORM=bcmrpi
>> ARCH=arm CROSS_COMPILE=${CCPREFIX} make ${PLATFORM}_defconfig
>> #v-
>
> Many thanks for that, Gregor. I'll have a play.

you're welcome, and I hope you're successful as well.

> I did see that
> 3.10.23+ was now the current version - and that it has drivers for DVB-T
> sticks. Apart from that, anything worthwhile in 3.10? Would I need to
> recompile my customised NTP?

that's something I can't answer; it's just that I prefer more recent
kernel versions out of principle :)


gregor
--
.''`. Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06
: :' : Debian GNU/Linux user, admin, and developer - http://www.debian.org/
`. `' Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe
`- NP: Misha Alperin: Psalm No.1

mm0fmf

Dec 19, 2013, 6:10:11 PM
On 19/12/2013 20:58, Martin Gregorie wrote:
> On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:
>
>> On 18/12/2013 22:17, Martin Gregorie wrote:
>>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>>
>>>> I don't know which compiler you use, but in mine assert is only
>>>> compiled into code in debug builds. There's nothing left in a
>>>> non-debug build.
>>>>
>>> you'll have to recompile the program before you can start debugging
>>
>> You may have bugs, I don't! :-)
>
> Either that's pure bullshit or you don't test your code properly.
>
>
Mmmmm.... maybe it's time you considered drinking decaf!

;-)

Martin Gregorie

Dec 19, 2013, 8:20:19 PM
On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:

> Those that cannot devise a properly working algorithm and write the code
> that implements it usually cannot write proper testcases either.
>
Probably true, but I'd strongly suggest it is a skill that can be taught
but that nobody ever bothers to teach it. Evidence: I've *NEVER* seen a
hint of trying to teach it in any course I've been on or in any
programming book I've read.

If you've seen this approach to testing taught, then please tell us about
it.

> Clear examples are code to sort an array or to search a value in a
> sorted array using binary search. Remember "sorting and searching" by
> Donald Knuth?
>
I've not read Knuth, but I own and have read copies of Sedgewick's
"Algorithms" and Wirth's "Algorithms + Data Structures = Programs", which
I suspect is a fair approximation to having all four volumes of Knuth,
and with the added advantage that these use Pascal rather than idealized
assembler as example code. Sedgewick's code is particularly easy to
transcribe directly into C. Been there, done that.

> It will take the typical singlestep-modify-test-again programmer many
> many iterations before he will be satisfied that the code works OK, and
> it will fail within an hour of first release.
>
Very true.

> The more theoretical approach will require some study but will pay off
> in reliability.
>
Dunno about 'theoretical', but if you start cutting code before thinking
through what you must achieve, preferably by iterating it on paper or at
least as a test file, until you understand what you're doing and can
explain why it is the best approach to another programmer, then you're
heading up a blind alley at full throttle.

On top of that there are probably issues with structuring the code that
you didn't think of and that will bite your bum unless dealt with. IME
Wirth's "top-down incremental development" approach helps a lot here.
Look it up if you've not heard of it.

This approach solves many of the code structuring problems that bottom-up
development can cause. Use it and be prepared to redesign/restructure/
replace existing code as soon as you realize that the code organization
you started with is becoming harder to work with. These difficulties are
only highlighting issues you should have fixed before starting to cut
code. The only good way out is to admit that the code you've ended up
with is crap and do something about it, i.e. rewrite/refactor the ugly
bits and try to never make that mistake again.

Martin Gregorie

Dec 19, 2013, 8:24:52 PM
Pure experience over a few decades, dear boy.

Anybody who claims to have written bugfree code that is more complex than
"Hello World" is talking out his arse.

Martin Gregorie

Dec 19, 2013, 9:39:33 PM
I should have added that I've met so-called programmers[*] who couldn't
write even that without introducing bugs.

* One particularly memorable example cut COBOL I had to fix on the
infamous GNS Naval Dockyard project. This clown didn't know that COBOL
code drops through from one paragraph to the next by default and
consequently wrote code like this:

PARA-1.
NOTE sentences doing stuff.
GO TO PARA-2.
PARA-2.
NOTE more sentences doing stuff.
...

Other contractors knew him from previous projects and said that they'd
never seen him write a working program. He always managed to leave with
his last paycheck just before the deadline for his program to be
delivered. He was always known for turning up late, doing sod all during
the day, staying late and claiming overtime.

Guesser

Dec 19, 2013, 9:42:22 PM
On 20/12/2013 02:39, Martin Gregorie wrote:
> Other contractors knew him from previous projects and said that they'd
> never seen him write a working program. He always managed to leave with
> his last paycheck just before the deadline for his program to be
> delivered. He was always known for turning up late, doing sod all during
> the day, staying late and claiming overtime.
>

Sounds like the guy was a genius to me!

The Natural Philosopher

Dec 20, 2013, 2:51:19 AM
On 20/12/13 01:20, Martin Gregorie wrote:
> On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:
>
>> Those that cannot devise a properly working algorithm and write the code
>> that implements it usually cannot write proper testcases either.
>>
> Probably true, but I'd strongly suggest it is a skill that can be taught
> but that nobody ever bothers to teach it. Evidence: I've *NEVER* seen a
> hint of trying to teach it in any course I've been on or in any
> programming book I've read.
>

That's probably because you only read books or attended courses on
'programming' or 'computer science'.

Try reading books on 'software engineering' which cover all of this in
far more detail.



> If you've seen this approach to testing taught, then please tell us about
> it.
>
>> Clear examples are code to sort an array or to search a value in a
>> sorted array using binary search. Remember "sorting and searching" by
>> Donald Knuth?
>>
> I've not read Knuth, but I own and have read copies of Sedgewick's
> "Algorithms" and Wirth's "Algorithms + Data Structures = Programs", which
> I suspect is a fair approximation to having all four volumes of Knuth,
> and with the added advantage that these use Pascal rather than idealized
> assembler as example code. Sedgewick's code is particularly easy to
> transcribe directly into C. Been there, done that.
>
>> It will take the typical singlestep-modify-test-again programmer many
>> many iterations before he will be satisfied that the code works OK, and
>> it will fail within an hour of first release.
>>
> Very true.
>
>> The more theoretical approach will require some study but will pay off
>> in reliability.
>>
> Dunno about 'theoretical', but if you start cutting code before thinking
> through what you must achieve, preferably by iterating it on paper or at
> least as a test file, until you understand what you're doing and can
> explain why it is the best approach to another programmer, then you're
> heading up a blind alley at full throttle.
>

indeed.

Been on projects run exactly like that.

> On top of that there are probably issues with structuring the code that
> you didn't think of and that will bite your bum unless dealt with. IME
> Wirth's "top-down incremental development" approach helps a lot here.
> Look it up if you've not heard of it.
>

or bottom up...

> This approach solves many of the code structuring problems that bottom-up
> development can cause. Use it and be prepared to redesign/restructure/
> replace existing code as soon as you realize that the code organization
> you started with is becoming harder to work with. These difficulties are
> only highlighting issues you should have fixed before starting to cut
> code. The only good way out is to admit that the code you've ended up
> with is crap and do something about it, i.e. rewrite/refactor the ugly
> bits and try to never make that mistake again.
>
>
You have to do both. At the bottom end you have to build the sort of
library of useful objects to deal with the hardware or operating system
interface. At the top you need a structured approach to map the needs of
the design into one or more user interfaces, and in between is an unholy
mess that is not perfectly addressed by either method. In essence
you have to think about it until you see a way to do it.

In general this takes about three iterations, because that's how long it
takes to actually fully understand the problem.

Whether those iterations are on paper or in code is scarcely germane,
the work is the same.

What is not possible is to arrive at a result that is problem free
without actually understanding the problem fully. That is the mistake we
are talking about. Top down or bottom up are just places to start. In
the end you need top to bottom and all places in between.

The Natural Philosopher

Dec 20, 2013, 2:53:45 AM
No, it can be done, just not at the first pass.

its not hard to write and to test for bug free code for all the
eventualities you thought of, its what happens when an eventuality you
didn't think of comes along....

The Natural Philosopher

Dec 20, 2013, 2:54:35 AM
And then he went into politics?

Rob

Dec 20, 2013, 4:14:31 AM
Martin Gregorie <mar...@address-in-sig.invalid> wrote:
> On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:
>
>> Those that cannot devise a properly working algorithm and write the code
>> that implements it usually cannot write proper testcases either.
>>
> Probably true, but I'd strongly suggest it is a skill that can be taught
> but that nobody ever bothers to teach it. Evidence: I've *NEVER* seen a
> hint of trying to teach it in any course I've been on or in any
> programming book I've read.
>
> If you've seen this approach to testing taught, then please tell us about
> it.

My informatics teacher was focussing a lot on algorithms and proof
of correctness, and explained a lot about the types of errors you have
to watch out for.
However, that was over 30 years ago. We also learned generic principles
of compilers, operating systems, machine code, etc.
I hear that today they only train you how to work in specific MS tools
and if you are lucky present some info about Linux.

About books: what I found is that many books that explain programming
do not cover the topic of error handling. It is left as an exercise
for the reader, or as a more complicated topic not covered now.

In practice it is quite important to think about error handling before
starting to write code. When it is added as an afterthought it will
be quite tricky to get it right.
(especially when it involves recovery, not only bombing out when
something unexpected happens, and when some kind of configurable logging
of problems that does not overflow during normal operation is desired)

My experience with larger projects is that a lot of time is spent
discussing an error handling strategy and the result still is not
satisfactory and often has a lot of variation depending on who wrote
the specific module.

Michael J. Mahon

Dec 20, 2013, 4:19:34 AM
One of my maxims is: "Our most important design tool is the wastebasket,
and it is much underused."

Until you have considered several different approaches to writing a program
(in enough detail to see the advantages and disadvantages of each), you
have no idea whether you are proceeding appropriately.

An empty wastebasket is a sign of trouble unless you've done it before and
know exactly how to proceed. (And yes, pencil and paper are the right tools
at the outset. ;-)

One should never get too wrapped around the axle on the issue of bottom-up
vs. top-down. Virtually every real programming effort will involve both.

Proper high-level structure is a result of top-down thinking, while
efficient use of machines and libraries requires bottom-up thinking. When
insightful top-down design and careful bottom-up design meet elegantly in
the middle, a beautiful and efficient program is the result.
--
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon

Michael J. Mahon

Dec 20, 2013, 4:21:37 AM
Hear, hear!

I'm in complete agreement. That's what I get for reading and responding in
order of posting. ;-)

Guesser

Dec 20, 2013, 4:43:05 AM
On 20/12/2013 09:14, Rob wrote:
> In practice it is quite important to think about error handling before
> starting to write code. When it is added as an afterthought it will
> be quite tricky to get it right.
> (especially when it involves recovery, not only bombing out when
> something unexpected happens, and when some kind of configurable logging
> of problems that does not overflow during normal operation is desired)
>

My functions always check that functions they called completed
correctly[1] and return appropriate error codes if they didn't.

That's where I get stuck though, the actual main program loop tends to
just execute some "print error message and terminate" code or if I'm
really feeling generous to users "indicate that operation failed and go
back to main input loop" ;)


[1] of course the exception is the functions that "can never fail" [2].
No need to check those ;)
[2] unless they do of course, but you'll never know because I didn't
return any status.

Rob

Dec 20, 2013, 5:24:33 AM
Guesser <alis...@alistairsserver.no-ip.org> wrote:
> On 20/12/2013 09:14, Rob wrote:
>> In practice it is quite important to think about error handling before
>> starting to write code. When it is added as an afterthought it will
>> be quite tricky to get it right.
>> (especially when it involves recovery, not only bombing out when
>> something unexpected happens, and when some kind of configurable logging
>> of problems that does not overflow during normal operation is desired)
>>
>
> My functions always check that functions they called completed
> correctly[1] and return appropriate error codes if they didn't.
>
> That's where I get stuck though, the actual main program loop tends to
> just execute some "print error message and terminate" code or if I'm
> really feeling generous to users "indicate that operation failed and go
> back to main input loop" ;)

That suffices for simple programs. I normally use that method as well.
But in a more complicated environment you may want to have logging
of the complete stack of error reasons. Your function fails because
it received an error from a lower level function, attempted some
recovery using another function but that also failed. The two lower
level functions each returned errors because they got errors from
even lower levels.
But you don't want every function to log any error it encounters.
Sometimes errors are expected and you are prepared to handle them
using an alternative method or other recovery. So the logging has
to be deferred until the upper level decides there is a problem,
yet you want the details of the errors occurring in the lower level
functions.

This makes it more complicated than just returning error numbers.

Guesser

Dec 20, 2013, 5:31:21 AM
All my code tends to be for the Sinclair Spectrum so logging anything is
a bit of a problem - my main project at the moment is in fact a
filesystem implementation so if the function that's failing is OPEN or
WRITE dumping a log is not an option :D

The Natural Philosopher

Dec 20, 2013, 6:05:08 AM
One comms program I wrote was the prime example of when you really do
NOT want to do that.

At the bottom was a PCI card that was essentially a 50 baud current loop
telex interface.

On top of that were ten layers of routines that handled valid or invalid
codes sent and received from the other end, each one at a different level.

If the 'wire broke' - a not infrequent event when telexing darkest
Africa - you really didn't want to pass all that lot up the stack and
handle it at a high level.

The solutions was simple. An area of memory with an error code.

Then before even attempting a connection, or answering a call, the error
was cleared and setjmp was performed. If the return from that showed an
error in the code, then a diagnostic was printed and the program
returned to its main loop, or tried again.

Any error anywhere down the stack wrote a unique code in the error
memory and called longjmp.

So error handling was of the form at every level

if(error)
abort(my_unique_error_code)

and that was all you had to do.

Upstairs at the exit from the longjmp, there was a switch statement, each
one of whose cases corresponded to a unique error code, and then
performed whatever response was appropriate for that error code. Up to
and including resetting the hardware completely, setting retry counters
and so on.

This made handling new errors a cinch.
Add a new entry to myerrors.h,
add a new case to the error handler,
and check for that error wherever most appropriate and call abort.


By having error handling as a completely separate module, the program
flow for normal operations was not obscured by error handling and vice
versa.

By breaking all the rules of 'structured programming' I achieved a
cleaner, neater and more structured program.

And it was a lot easier to debug.

I mention this to illustrate that, like the pirates' code, structured
programming techniques are 'only a sort of guideline'.
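
For anyone who hasn't met the pattern, a stripped-down sketch of it (the
error names and the single layer are invented for the example; the helper
is called fail() here only to avoid clashing with the standard abort()):

#include <setjmp.h>
#include <stdio.h>

enum err { ERR_NONE = 0, ERR_WIRE_BROKE, ERR_BAD_CODE };  /* "myerrors.h" */

static jmp_buf recover;
static enum err error_code = ERR_NONE;

static void fail(enum err e)          /* the abort(code) described above  */
{
    error_code = e;
    longjmp(recover, 1);
}

static void low_level_io(void)
{
    int wire_ok = 0;                  /* pretend the line just dropped    */
    if (!wire_ok)
        fail(ERR_WIRE_BROKE);
}

int main(void)
{
    error_code = ERR_NONE;
    if (setjmp(recover) != 0) {
        switch (error_code) {         /* the one central error handler    */
        case ERR_WIRE_BROKE:
            fprintf(stderr, "line dropped: reset hardware, retry\n");
            break;
        case ERR_BAD_CODE:
            fprintf(stderr, "invalid code received: retry\n");
            break;
        default:
            break;
        }
        return 1;                     /* real code would loop and retry   */
    }
    low_level_io();                   /* ten layers of these in real life */
    puts("call completed");
    return 0;
}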

Rob Morley

Dec 20, 2013, 7:25:09 AM
On Fri, 20 Dec 2013 11:05:08 +0000
The Natural Philosopher <t...@invalid.invalid> wrote:

> By breaking all the rules of 'structured programming' I achieved a
> cleaner, neater and more structured program.

But didn't using GOTO make you feel dirty?? :-)

Rob

Dec 20, 2013, 7:57:02 AM
Sure, but even in an environment where you can only display errors
to the user this ugly problem shows up all the time.

E.g. you have a function to read configuration, it calls other functions
that finally open some files. One of the files cannot be opened and
a "cannot open file" error is returned to the higher level.
The upper level gets "cannot open file" error as the reason for the
whole function block to fail.
But you don't (unless you are Microsoft) want to display useless
alerts like "Cannot open file [OK]" or revert to "internal error [OK]",
you want to display a helpful message that tells the user (or admin)
WHICH file cannot be opened. Yet you don't want to alert the user
about any file that cannot be opened, there may be files in the system
that are optional or there may be alternatives for the same file.

To solve this, a slightly more sophisticated error handling philosophy
is required.
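
One minimal way of doing that, with every name invented for the example:
lower levels record the detail instead of logging it, and only the top
level decides whether the failure ever reaches the user.

#include <stdio.h>

static char err_detail[256];          /* detail of the last failure       */

static FILE *open_config(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f)                           /* record the detail, don't log it  */
        snprintf(err_detail, sizeof err_detail, "cannot open file %s", path);
    return f;
}

static int read_configuration(void)
{
    FILE *f = open_config("/etc/myapp/main.conf");   /* hypothetical path */
    if (!f)
        f = open_config("./myapp.conf");  /* optional fallback: the first
                                             failure is not an error yet  */
    if (!f)
        return -1;                        /* now it really has failed     */
    fclose(f);
    return 0;
}

int main(void)
{
    if (read_configuration() != 0) {
        /* only the top level decides the failure matters, and it can say
           WHICH file could not be opened                                 */
        fprintf(stderr, "configuration failed: %s\n", err_detail);
        return 1;
    }
    puts("configuration loaded");
    return 0;
}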

Guesser

Dec 20, 2013, 8:34:29 AM
On 20/12/2013 12:57, Rob wrote:
> To solve this, a slightly more sophisticated error handling philosophy
> is required.
>

To implement that, a slightly more sophisticated programmer is required :D
Message has been deleted

The Natural Philosopher

Dec 20, 2013, 1:01:48 PM
No, immensely relieved, like when you have a bloody great crap and say
'there, I did it'...

it enabled me to pull the project forward at least two weeks and deliver
on time and on budget, and it was a lot easier to understand.

Sometimes 'go to jail, go directly to jail, do not pass go, do not
collect £200' is actually a simpler way to get the job done.

The Natural Philosopher

Dec 20, 2013, 1:02:31 PM
+1

and ROFL

Martin Gregorie

Dec 20, 2013, 3:36:34 PM