
Will too many paradigms addle my brain?


wooks

Oct 11, 2005, 5:42:16 PM
I've just started a comp sci degree. The main language used to teach is
Java - I certainly don't like it but that's the way it is. In the 1st
year we do Java and Prolog (compulsory).

Functional Programming is a 3rd year elective. I reckon I know enough
Scheme to convince the department that I will be able to cope with it
in my 1st year (although I think they use Miranda) and I think I'll be
allowed to choose it.

I am wondering though whether trying to study all 3 paradigms at the
same time is advisable.

Any views?

David Hopwood

Oct 11, 2005, 7:37:34 PM
wooks wrote:
> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Yes it is, definitely. Also get yourself a copy of CTM:
<http://www2.info.ucl.ac.be/people/PVR/book.html>.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Paul Rubin

Oct 11, 2005, 9:20:23 PM
David Hopwood <david.nosp...@blueyonder.co.uk> writes:
> Yes it is, definitely. Also get yourself a copy of CTM:
> <http://www2.info.ucl.ac.be/people/PVR/book.html>.

Is the final version a lot different than the draft that was online in
pdf form? I read the first few chapters of that and have been wanting
to get around to the rest. However, it didn't much resemble what I
think of as functional programming. And Oz's use of logic variables
for communicating between concurrent processes seemed to invite
deadlock, etc.

Jon Harrop

Oct 11, 2005, 9:36:35 PM
wooks wrote:
> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Yes. Paradigms typically complement one another and the more you know of all
of them, the better your code is likely to be in any one of them. However,
I have never found a use for OO.

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com

wooks

Oct 12, 2005, 1:20:23 AM

I will definitely take all 3, but I am not sure you have addressed the
nub of my question, which is whether it is advisable to do them
simultaneously, bearing in mind that I will have to pass exams.

wooks

Oct 12, 2005, 1:25:19 AM

Eric Hanchrow wrote:
> >>>>> "wooks" == wooks <woo...@hotmail.com> writes:
>
> wooks> I am wondering though whether trying to study all 3
> wooks> paradigms at the same time is advisable.
>
> If you can't handle three languages in school, you might not be able
> to handle three languages in the Real World. Believe me, many jobs
> require you to use multiple languages at once; I'm working on
> something now that uses
>
> * C++
> * perl
> * Delphi
> * Visual Basic
> * SQL
>
> --

I'm not worried about the real world as I'm already an experienced
programmer. I am focusing on the academic aspect.

Is it a good idea to/better to learn them at the same time when you
don't have to?

Brian Harvey

Oct 12, 2005, 1:51:02 AM
"wooks" <woo...@hotmail.com> writes:
>Is it a good idea to/better to learn them at the same time when you
>don't have to.

I don't think this can be answered in principle. If you were my advisee,
I'd be asking questions about "what else might you take instead" and "if
not now, when, and along with what else?"

Some of our students do report being confused when they are doing two very
different things at once, e.g., SICP along with our machine organization/
machine language course. In one case we are asking them to pay careful
attention to low-level details, and in the other case, the entire project is
to abstract those details away (under the rug). But it sounds as if you will
be studying all more or less high level approaches, so I don't think you'll
be too confused. On the other hand, you may be more inclined to strangle the
instructor of the OOP course for making things too complicated. :-)

Cruise Director

Oct 12, 2005, 2:39:37 AM
The difficulties of your workload may have nothing to do with computer
paradigms or computer science per se. Beware of biting off more than
you can chew. Maybe you are more productive and disciplined than a
"lazybones" such as myself, but my formula at Cornell always was 1 hard
course, 1 medium difficult course, and 2 easy courses. I did too much
of my own personal, uncredited cogitation to be saddled with relentless
pressure from every possible corner. Frankly I don't believe in it.
Some people think there's moral virtue in it, but I think the vast
majority of people can't handle that much stuff dumped on them all at
once. Better to master 1 thing well.

I'm surprised Jon said he never found a use for OO, as clearly, one has
to talk to all the people in industry who believe in it. :-)


Cheers,
Brandon J. Van Every
(cruise (director (of SeaFunc)
'(Seattle Functional Programmers)))
http://groups.yahoo.com/group/SeaFunc

Ulrich Hobelmann

Oct 12, 2005, 3:58:44 AM
wooks wrote:
> Functional Programming is a 3rd year elective. I reckon I know enough
> Scheme to convince the department that I will be able to cope with it
> in my 1st year (although I think they use Miranda) and I think I'll be
> allowed to choose it.

I don't think they will, because Miranda and Haskell are lazy
functional, while ML and Scheme are strict (i.e. evaluate all their
function arguments first). This leads to quite different programming.
Anyway, learning Miranda in school and Scheme or Lisp at home doesn't
hurt. ;)

> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Sure, why not? When I started school we had a Scheme class (2nd
semester Java), and I learnt C and asm. Since then I have learnt quite a few
other languages (and looked at some others).

Keep your mind open and learn to think in program structure, not
language-specific constructs.

--
State, the new religion from the friendly guys who brought you fascism.

Jerzy Karczmarczuk

Oct 12, 2005, 4:09:57 AM
wooks wrote:
> ...In the 1st

> year we do Java and Prolog (compulsory).
>
> Functional Programming is a 3rd year elective. ...

> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Several people say "Of course YES...". That the paradigms complement
each other, that you can compare different views of the same algorithmic
problem, etc. Nice, optimistic, not too far from my own feelings.

But Brian Harvey says:

> I don't think this can be answered in principle. If you were my advisee,
> I'd be asking questions about "what else might you take instead" and "if
> not now, when, and along with what else?"
>
> Some of our students do report being confused when they are doing two very
> different things at once, e.g., SICP along with our machine organization/
> machine language course.


Now, PLEASE, when Brian Harvey says something about the pedagogy of computing,
listen to him - he knows what he is talking about!

It is known from a completely different fairy-tale that children embedded
in a bilingual milieu - who typically, after some time, master both and
become perfectly bilingual - acquire both languages at a reduced rate:
they assimilate the same amount of information per unit of time, spread
into two layers.

So an additional question is: can you afford that, if the analogy works in
your case?

I have to say, for example, that I learnt the non-deterministic, monadic
style of the lazy functional implementation of some algorithms much
easier than some of (cleverer than myself) people I know, since I could
translate the stuff to the logic programming paradigms.

The relation between objects and closures - important, although partial and
specific - is probably easier to grasp when the two are dealt with sequentially.

Anyway, a full-fledged software specialist will need all three paradigms.
But your time-schedule is also a function of many other constraints...


Jerzy Karczmarczuk

Mike Austin

Oct 12, 2005, 4:25:29 AM
Jon Harrop wrote:
> wooks wrote:
>
>>I am wondering though whether trying to study all 3 paradigms at the
>>same time is advisable.
>
>
> Yes. Paradigms typically complement one another and the more you know of all
> of them, the better your code is likely to be in any one of them. However,
> I have never found a use for OO.

One of the simple uses of objects is reducing namespace collisions. For
example, Dylan supports multi-methods, but like CLOS, the number of parameters
for each generic function must be equal. One way around this that I found is to
first dispatch on the type, then return a function with as many parameters as
I like:

define method move-to (self :: <window>)
  curry (method (self :: <window>, x, y)
           format-out ("window.move-to\n");
         end, self)
end;

move-to (window)(10, 20);

Now I don't have to worry about collisions when defining methods.


Mike

Torben Ægidius Mogensen

Oct 12, 2005, 5:19:58 AM
Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:

> It is known from a completely different fairy-tale that children embedded
> in a bi-linguistic milieu, and who typically, after some time, master both,
> and become perfectly bilingual, acquire both languages at a reduced rate,
> they assimilate the same amount of information per time, spread into two
> layers.

This is, indeed, a fairy tale. Children who learn two languages when
growing up become equally proficient in their primary language as
children who learn only one, and depending on to what degree the
secondary language is used, they may learn to be fluent in this too.

From the studies I've seen, it is only when they learn three or more
languages that they become confused, and then only mildly. It may be
because a bilingual family typically has one parent speaking one
language and another parent speaking the other, so the child can keep
the languages apart by associating each language with a parent.



> So an additional question is: can you afford that, if the analogy works in
> your case?
>
> I have to say, for example, that I learnt the non-deterministic, monadic
> style of the lazy functional implementation of some algorithms much
> easier than some of (cleverer than myself) people I know, since I could
> translate the stuff to the logic programming paradigms.
>
> The relation, important, although partial and specific, between objects and
> closures, is probably easier to grasp when dealing with sequentially.
>
> Anyway, a full-fledged software specialist would need those three paradigms
> anyway. But your time-schedule is also a function of many other
> constraints...

With programming languages as well as natural languages, the best way
to learn a language is to use it. If you don't have time to make
reasonably sized projects with all the languages you learn, you will
not learn them properly.

However, if you learn languages sequentially, you may have the "ways"
of the first language stuck in your head when learning the second, so
at first you don't really think about the new language on its own
premises but rather think about how programs in the language you know
can be converted to the new language. Learning several languages at
the same time and solving the same programming problems in all of them
concurrently would be the ideal way of learning their relative
strengths and weaknesses. But it requires time enough to do this.

Torben

Joachim Durchholz

Oct 12, 2005, 5:23:23 AM
wooks schrieb:

> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

In general, we don't know what your brain can manage :-)

Learning too many paradigms at the same time can indeed be confusing.

On the other hand, the more paradigms you know, the more ways to attack
a problem are at your disposition.

On the third hand, it can be immensely frustrating to be forced to
program in, say, Java, and know that some problem that requires hundreds
of lines of boilerplate code for every class could be done in a single
five-liner in Haskell.

On the fourth hand, any predisposition for this kind of frustration is
something that you'll have to overcome sooner or later, anyway... 90% of
most professional careers consist of legacy code maintenance, which
means that you'll likely be stuck with suboptimal tools 90% of your
professional time.
Actually that's nothing to moan about, it's just the way it is - when
you're writing new software, it will be legacy tomorrow, and the tools
that are top-of-the-cream today will be considered suboptimal when it
comes to maintaining your code.


Personally, I'd go for learning as many paradigms as possible anyway.
I'd start with as many courses as I feel comfortable with, and if things
start to get confusing, I'd drop courses until I can handle the
workload, and come back to them later.

Regards,
Jo

Jerzy Karczmarczuk

Oct 12, 2005, 6:02:10 AM
Torben Ægidius Mogensen wrote:
> Jerzy Karczmarczuk <kar...@info.unicaen.fr> writes:
>
>
>>It is known from a completely different fairy-tale that children embedded
>>in a bi-linguistic milieu, and who typically, after some time, master both,
>>and become perfectly bilingual, acquire both languages at a reduced rate,
>>they assimilate the same amount of information per time, spread into two
>>layers.
>
>
> This is, indeed, a fairy tale. Children who learn two languages when
> growing up become equally proficient in their primary language as
> children who learn only one, and depending on to what degree the
> secondary language is used, they may learn to be fluent in this too.
>
> From the studies I've seen, it is only when they learn three or more
> languages that they become confused, and then only mildly.

Would you please reread what I wrote?

I said myself that children acquire both languages perfectly well; only
the *speed* may be affected. And I never mentioned any confusion.
And this "fairy tale" concerning the speed came to me from child
education specialists whom I happen to know well.

J. Karczmarczuk

wooks

Oct 12, 2005, 7:46:21 AM

Brian Harvey wrote:
> "wooks" <woo...@hotmail.com> writes:
> >Is it a good idea to/better to learn them at the same time when you
> >don't have to.
>
> I don't think this can be answered in principle. If you were my advisee,
> I'd be asking questions about "what else might you take instead" and "if
> not now, when, and along with what else?"
>

Well I could do a course on e-business entrepreneurship, but we don't get
many electives and I'd rather spend them on something more academic and
see if I can wangle my way onto that course on a not-for-credit basis.

I do have an imperative/OO background (VB), so I should not find Java
unduly difficult, whereas in year 2 there will be more Java - Concurrent
Programming, which I am not familiar with, as well as Compilers.

My background for believing I could cope with a functional programming
course, albeit one in Miranda, is that I read half of your book over the
summer - Simply Scheme (I got a bit stuck on the pattern matching program
but was fine with all the exercises up until then) - as well as the first
8 chapters of The Little Schemer (i.e. before they introduce
continuations, which was too much for my brain at the time).

Strategically I am trying to free up more choice of elective for myself
in Year 3 which is where the university have placed their FP course.
The year 1 and 2 electives are either not practicable or not of
interest.


> Some of our students do report being confused when they are doing two very
> different things at once, e.g., SICP along with our machine organization/
> machine language course. In one case we are asking them to pay careful
> attention to low-level details, and in the other case, the entire project is
> to abstract those details away (under the rug). But it sounds as if you will
> be studying all more or less high level approaches, so I don't think you'll
> be too confused.

Well we are not doing SICP (although I hope to read it at some point)
but we are also doing MIPS programming but to me that is such a
different mindset that I am not concerned.

> On the other hand, you may be more inclined to strangle the
> instructor of the OOP course for making things too complicated. :-)

If you mean like having to write about 12 lines of code just to do
Hello World, and being encouraged to come to terms with gizmo-packed
IDEs, editors (we've been encouraged to experiment with 3 different
environments) and APIs before we even start programming - yes.

wooks

Oct 12, 2005, 7:51:24 AM

Cruise Director wrote:
> The difficulties of your workload may have nothing to do with computer
> paradigms or computer science per se. Beware of biting off more than
> you can chew.

This is not extra work... I am proposing doing it in place of one of the
prescribed 1st year electives.

> Maybe you are more productive and disciplined than a
> "lazybones" such as myself, but my formula at Cornell always was 1 hard
> course, 1 medium difficult course, and 2 easy courses.

Are there any easy courses in a CS degree?

Brian Harvey

Oct 12, 2005, 10:14:52 AM
"wooks" <woo...@hotmail.com> writes:
>Well I could do a course on e-business entrepeneurship but we don't get
>many electives and I'd rather spend them on something more academic and
>see if I can wangle my way on to that course on a not for credit basis.

Ugh.

I will tell you right in this message everything in the business curriculum:

1. It's good to be greedy.

2. Assorted techniques for manipulating people who think it
isn't good to be greedy.

Since #1 is false, there's really no need for you to study #2.


But you should definitely think beyond computer science. I don't know where
you're going to school, but I'm willing to bet there are courses available in
literature, philosophy, psychology, art, mathematics, physics, etc.
I advise fitting as many of those in as you can, even if it means a little
less computer science.

Bruce Lewis

Oct 12, 2005, 10:35:02 AM
"wooks" <woo...@hotmail.com> writes:

> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

I am firmly ambivalent about your question.

Many have mentioned the value of knowing multiple paradigms, and
learning 3 at once might turn out great for you.

On the other hand, it's good to learn a paradigm deeply by immersing
oneself completely in it. You might find this hard to do when
you're learning all three. One paradigm might creep into another, when
it would be better not to combine them until each is fully learned.

In high school I took French 1 and Spanish 1 at the same time. To keep
the languages apart I focused on the subtle differences in
pronunciation. I did get tripped up on an oral exam with one word that
was difficult. Every phoneme was exactly the same in French and
Spanish, only the stress was on a different syllable. Five brownie
points for anyone who knows what word I'm referring to.

Joe Marshall

Oct 12, 2005, 11:17:44 AM
"wooks" <woo...@hotmail.com> writes:

> Are there any easy courses in a CS degree.

*Far* too many if the poor quality of many CS graduates is any
indication.

wooks

Oct 12, 2005, 11:52:56 AM

I am in one of the top rated schools in the country. I assure you there
aren't any easy courses; however, the pass mark for all courses is 40%,
so I guess it is possible to achieve a degree with low marks.

Joe Marshall

Oct 12, 2005, 11:59:14 AM
"wooks" <woo...@hotmail.com> writes:

> Joe Marshall wrote:
>> "wooks" <woo...@hotmail.com> writes:
>>
>> > Are there any easy courses in a CS degree.
>>
>> *Far* too many if the poor quality of many CS graduates is any
>> indication.
>
> I am in one of the top rated schools in the country.

Great!

I was reminded of Alan Kay's quote:

``Most undergraduate degrees in computer science these days are
basically Java vocational training.''

Bradd W. Szonye

Oct 12, 2005, 1:49:32 PM
wooks wrote:
>>> Are there any easy courses in a CS degree.

Joe Marshall wrote:
>> *Far* too many if the poor quality of many CS graduates is any
>> indication.

> I am in one of the top rated schools in the country.

Even the best schools are hit-or-miss at teaching CS and CE undergrads.
If you're lucky, you'll get instructors who actually enjoy teaching, are
good at teaching, and know the subject well. Even then, you'll only get
a very shallow understanding of the material if you're just looking to
finish your homework and pass your exams.

Undergraduate curricula are modeled after apprenticeships; their goal is
to teach you the basic tools of the thinking man's trade, just as
vocational schools teach you the basics of carpentry, plumbing, etc.
They don't aim for actual mastery of a subject -- that's what the
master's degree program is for (and even masters have very limited
experience).

The "basic tools" for an academic or engineer are critical thinking,
problem-solving skills, and general background material for your
speciality. Unless you go well beyond the course material, you won't
master functional programming (for example); you'll just acquire some
rough familiarity with the paradigm. You certainly won't become an
expert functional programmer just by taking a class, even if the teacher
/is/ excellent and you ace the class.

Unfortunately, it's easy to earn a bachelor's degree with /just/ the
general background material but no real skills, especially if you over-
emphasize the CS/CE portions of your curriculum. Whenever you have a
choice between taking a CS/CE class and taking philosophy, psychology,
literature, art, etc., I'd recommend the humanities class. While there's
no guarantee, in my experience the humanities teachers are a little more
likely than the CS/CE geeks to hammer the critical thinking stuff into
you. The geeks tend to get hung up on the cool details of the subject
instead of the (more important long-term) thinking skills. Philosophy
classes are especially good, so long as you have an engaging professor
or TA.

The other important thing you learn at typical American universities
(not sure if the rest of the world is the same) is work-life balance.
For most undergrads, it's your first experience living on your own.
Learning how to have fun and still get the job done is a big part of
going to school, in my experience. (I was always better at the "having
fun" than the "still getting the job done" part.)
--
Bradd W. Szonye
http://www.szonye.com/bradd

Jon Harrop

Oct 12, 2005, 1:50:20 PM
Joe Marshall wrote:
> ``Most undergraduate degrees in computer science these days are
> basically Java vocational training.''

I assume Java is taught because it is commercially viable now. However, I
seem to spend most of my time teaching Java programmers in industry how to
use FPLs...

Jon Harrop

Oct 12, 2005, 1:50:45 PM
Mike Austin wrote:
> One of the simple uses of objects is reducing namespace collisions.
> ...

Yes. You can avoid name collisions with other approaches, such as modules in
SML:

structure Window = struct
  fun move_to window x y =
    print "window.move_to"
end

or OCaml:

module Window = struct
  let move_to window x y =
    print_endline "window.move_to"
end

So that is not a justification for OOP, AFAICT.
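To make the collision-avoidance point concrete, here is a minimal OCaml sketch (the Cursor module and the printed strings are invented for illustration, not taken from any real library): two modules can each define a move_to, and module-qualified calls keep them apart.

(* Hypothetical names for illustration: two modules each define move_to;
   qualified calls keep them from colliding. *)
module Window = struct
  let move_to _window x y = Printf.printf "window.move_to %d %d\n" x y
end

module Cursor = struct
  let move_to _cursor x y = Printf.printf "cursor.move_to %d %d\n" x y
end

let () =
  Window.move_to () 10 20;
  Cursor.move_to () 5 5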

Joachim Durchholz

Oct 12, 2005, 4:14:30 PM
Jon Harrop schrieb:

> Joe Marshall wrote:
>
>> ``Most undergraduate degrees in computer science these days are
>> basically Java vocational training.''
>
>
> I assume Java is taught because it is commercially viable now. However, I
> seem to spend most of my time teaching Java programmers in industry how to
> use FPLs...

Where do they get jobs after they were trained?
(I'd like one *g*)

Regards,
Jo

wooks

Oct 12, 2005, 4:56:01 PM

Bradd W. Szonye wrote:
> wooks wrote:
> >>> Are there any easy courses in a CS degree.
>
> Joe Marshall wrote:
> >> *Far* too many if the poor quality of many CS graduates is any
> >> indication.
>
> > I am in one of the top rated schools in the country.
>
> Even the best schools are hit-or-miss at teaching CS and CE undergrads.
> If you're lucky, you'll get instructors who actually enjoy teaching, are
> good at teaching, and know the subject well. Even then, you'll only get
> a very shallow understanding of the material if you're just looking to
> finish your homework and pass your exams.
>

I don't believe I said anything here that would have conveyed that
impression.

> Undergraduate curricula are modeled after apprenticeships; their goal is
> to teach you the basic tools of the thinking man's trade, just as
> vocational schools teach you the basics of carpentry, plumbing, etc.
> They don't aim for actual mastery of a subject -- that's what the
> master's degree program is for (and even masters have very limited
> experience).
>

I am doing an undergraduate masters.

> The "basic tools" for an academic or engineer are critical thinking,
> problem-solving skills, and general background material for your
> speciality. Unless you go well beyond the course material, you won't
> master functional programming (for example); you'll just acquire some
> rough familiarity with the paradigm. You certainly won't become an
> expert functional programmer just by taking a class, even if the teacher
> /is/ excellent and you ace the class.
>

It should be apparent from reading my contributions to
this thread that that is not my circumstance.

> Unfortunately, it's easy to earn a bachelor's degree with /just/ the
> general background material but no real skills, especially if you over-
> emphasize the CS/CE portions of your curriculum. Whenever you have a
> choice between taking a CS/CE class and taking philosophy, psychology,
> literature, art, etc., I'd recommend the humanities class.

Our courses are organised into half units.
My first choice of elective was Cognitive Science. The Philosophy
course on offer was a whole unit which would have meant I couldn't do
Cog Sci if I took it. Other options that I considered had timetabling
difficulties. I am not really seeking advice on getting an all round
university education. The FP class is one that I want to take to build
on what I have already learnt, and it has the benefit of freeing up
my options in year 3.

> While there's
> no guarantee, in my experience the humanities teachers are a little more
> likely than the CS/CE geeks to hammer the critical thinking stuff into
> you.

I studied critical thinking by myself before I even decided to apply to
go to university.

> The geeks tend to get hung up on the cool details of the subject
> instead of the (more important long-term) thinking skills. Philosophy
> classes are especially good, so long as you have an engaging professor
> or TA.
>

I am not a geek. I want to do FP. Given a choice I would dump the Java
and OO courses but they are compulsory.

> The other important thing you learn at typical American universities
> (not sure if the rest of the world is the same) is work-life balance.
> For most undergrads, it's your first experience living on your own.

I am a mature student. I have worked in New Zealand, the USA, Africa
and the UK. For a variety of reasons I am doing a degree late in life.

> Learning how to have fun and still get the job done is a big part of
> going to school, in my experience. (I was always better at the "having
> fun" than the "still getting the job done" part.)

Had plenty of fun in my time and think I have a good work/life balance
perspective. I am taking up a completely new sport and have signed up
for some community projects in my first year.

wooks

Oct 12, 2005, 5:05:50 PM

One of the schools with which we have an international student exchange
program is MIT. I had no idea standards had dropped so far in your old
school.

wooks

Oct 12, 2005, 5:25:48 PM

I am taking Cognitive Science and would have taken a maths class as my
last elective but for a timetabling clash. I have mentioned in my post
to Bradd why I am not taking Philosophy. I am also doing a course in
academic writing on a not-for-credit basis.

The main reason I am thinking of bringing forward the FP class is
because there is a wider range of options to choose in year 3 and 4 so
it will free up an extra slot for then.

Cruise Director

Oct 12, 2005, 6:44:38 PM

wooks wrote:

> Cruise Director wrote:
>
> > Maybe you are more productive and disciplined than a
> > "lazybones" such as myself, but my formula at Cornell always was 1 hard
> > course, 1 medium difficult course, and 2 easy courses.
>
> Are there any easy courses in a CS degree.

I'm not entirely sure. I majored in Sociocultural Anthropology and
minored in CS so that my life would be sane. Also, because Computer
Graphics was cancelled on me sophomore year when I was deciding, and
because all CS professors I had had to date were decidedly dull.
Didn't get the good prof and the good course until I was a junior. I
found a lot of the CS hard, but part of that was because I was drifting
in and out of it with a minor, rather than sticking with it the whole
time. I generally did better on the practical lab courses, i.e.
programming, because I valued them more than the theory courses and put
far more energy into them. I mean, what's more important, book
knowledge or producing something that works and runs fast?

Anyways, nobody said that every course you take has to be in CS. Easy
courses were typically things like Art Appreciation or some crap like
that. I say "crap" only because I had a shitty prof for that. In a
parallel universe I am a painter, and that universe might be
overlapping this one sooner than I'd think.

Cruise Director

Oct 12, 2005, 6:52:50 PM

Caveat Emptor. When I went through Cornell you had to sustain a grade
much higher than that to be a major. IIRC, better than the C~'s I was
getting in a number of courses. Otherwise I could have been a double
major, as I was only a few theory / math courses shy of it. I think
that standard persists today. A friend of mine had to go EE because the CS
dept. wouldn't have him. They berated him for what a bad CS student he was,
etc. Well, when he graduated, he got a job at Microsoft and the vast
majority of his peers didn't. Now, I don't think that's yet another
indication of how shoddy Microsoft is. :-) Rather, he had practical
skill and really knew his stuff. It just wasn't appreciated by the
Cornell CS dept.

Cruise Director

Oct 12, 2005, 6:56:09 PM

When my friend graduated from Cornell in 2003 (?) they didn't. Only he
did. If you exit college in a bad economy and don't actually know how
to do anything, you're dead. Well, "dead" in the sense of "alternate
life experiences will be forced upon you."

Cruise Director

Oct 12, 2005, 7:01:26 PM

Brian Harvey wrote:
>
> I will tell you right in this message everything in the business curriculum:
>
> 1. It's good to be greedy.
>
> 2. Assorted techniques for manipulating people who think it
> isn't good to be greedy.
>
> Since #1 is false, there's really no need for you to study #2.

I dunno, some marketing know-how could be damn useful if you want to
promote your own career, do what you want instead of what others would
force you to do, cause your products to get bought, etc. Maybe not
revel in it, maybe outsource a lot of it to specialists, but certainly
be familiar with basic marketing principles. So sayeth a Cruise
Director.

beza1e1

Oct 13, 2005, 6:12:34 AM
Speaking as a cs student at Karlsruhe, Germany: yes.

I have now got through the first year, where I had the following courses:
Linear Algebra, Higher maths (analysis): hard
Computer Science 2: medium
Computer Science 1, Numeric: easy
Statistics: laughable

I'm not sure how this compares to US curricula. Older students
tell me most people don't take LA and HM both in the first year. I
will probably now have a lazy second half of my undergraduate studies.

(not sure if I translated all terms correctly)

Ulrich Hobelmann

Oct 13, 2005, 7:24:59 AM
beza1e1 wrote:
> Speaking as a cs student at Karlsruhe, Germany: yes.
>
> I now got around the first year, where i had the following courses:
> Linear Algebra, Higher maths (analysis): hard
> Computer Science 2: medium
> Computer Science 1, Numeric: easy
> Statistics: laughable

I did my Vordiplom in Braunschweig. Lots of hard, hard maths (I got
straight As in high school without any problems, but there I was
fighting through my homework all afternoon and night and barely got
through...), but everything CS-related very, very, very, very easy,
totally laughable. From two weeks of reading stuff (one algorithms
book, and SICP) I learned much more than from the first two semesters in
my CS classes. Mostly I didn't bother to even attend them, just worked
on my maths during that time ;) Oh yes, we also did some reasonably
hard theoretical CS and technical CS, not that that did me any good;
I'll never design a CPU, nor do I profit from knowing about weird
constructions to prove the halting problem.

> I'm not sure how this is comparable to US curriculums. Older students
> tell me most people don't take LA and HM both in the first year. I
> probably will now have a lazy second half for my undergraduation.

Hm, at a reportedly quite good Midwest university (undergrad) I found
the 4xx CS classes very easy. OTOH I sometimes read about US undergrads
having compiler classes where they do cool stuff (such as building whole
compilers in Scheme or SML), which in Germany I did in my 7th semester,
but without much practical or useful emphasis (basically just extracts
from the Dragon book; but a second class involved a very very basic
compiler for the JVM written in C(!) as a group project).

But I think the US have large differences in education, even for the
same degrees.

But since I sat through my first two weeks at the University, I gave up
on learning anyway (I do that in my free time, when I'm not forced to do
stuff for college).

You don't learn stuff there (that you couldn't just learn yourself in
20% of the time); you just earn your piece of paper ("diploma") by doing
lots of bull**** for whoever teaches (i.e. doing repetitive exercises
that don't teach you anything you don't already know, but you have to do
them anyway; not that the professors would care...).

--
A government which robs Peter to pay Paul can always
depend on the support of Paul.
George Bernard Shaw

Torben Ægidius Mogensen

Oct 13, 2005, 8:12:03 AM
"wooks" <woo...@hotmail.com> writes:

The percentage required to pass isn't really a good indication of
quality -- it depends on how difficult the exercises are. 40% of a
number of exercises that require deep insight into the subject may be
a lot harder than 90% of trivial surface knowledge questions.

Torben

Torben Ægidius Mogensen

Oct 13, 2005, 8:17:03 AM
Joe Marshall <jmar...@alum.mit.edu> writes:


> I was reminded of Alan Kay's quote:
>
> ``Most undergraduate degrees in computer science these days are
> basically Java vocational training.''

And when they aren't, the students complain (or simply fail to pass
their exams). :-)

You wouldn't believe (well, maybe you would) how often I hear students
complain that our CS curriculum is too "ivory tower" and out of touch
with "real life" (i.e., the internet economy).

Torben

Jon Harrop

Oct 13, 2005, 8:20:02 AM
Torben Ægidius Mogensen wrote:
> You wouldn't believe (well, maybe you would) how often I hear students
> complain that our CS curriculum is too "ivory tower" and out of touch
> with "real life" (i.e., the internet economy).

Sure but, by definition, students don't know what they're talking about.

When I did physics, some students complained that we should not have had
lectures or examinations on using computers. They took a vote and 80% of
the students said that computers have nothing to do with physics so they
should be removed from the course. Fortunately, the "powers that be"
basically ignored the vote...

Garry Hodgson

Oct 13, 2005, 8:58:45 AM
Jon Harrop <use...@jdh30.plus.com> wrote:

> I have never found a use for OO.

but even if it's not a paradigm you'd choose to use,
it's useful to understand it because so much of the
software, libraries, toolkits, articles, etc. use it.

a friend of mine who was a staunch atheist nonetheless
took his children to church. he felt that if they were
going to live in this country, they'd better understand
the culture, whether he/they believed or not.

----
Garry Hodgson, Technical Consultant, AT&T Labs

Your love, your anger, your kindness, your hate.
All of it creates the future for you and your children.
What kind of future are you creating today?

Ulrich Hobelmann

Oct 13, 2005, 9:44:28 AM

Yes, that's funny. In Germany we distinguish between universities and
practical schools (good idea, IMHO). The former are supposed to focus
more on the scientific side, but for some reason the vast majority of
students (despite no interest in science at all) goes to the university.
The result is that there have been numerous accusations regarding
ivory-towerness, and the universities have embraced practical, pointless
programming drills (using Java) and abandoned most teaching of
fundamentals and principles behind those languages.

As a result, someone like me, who used to be interested in the ivory
tower side of CS, couldn't find a place to study, and everybody else
still complains about the lack of practical stuff. IMHO, even the practical
stuff wouldn't be hurt by some formal fundamentals, but that's just me.

Ulrich Hobelmann

Oct 13, 2005, 9:51:54 AM
Garry Hodgson wrote:
> Jon Harrop <use...@jdh30.plus.com> wrote:
>
>> I have never found a use for OO.
>
> but even if it's not a paradigm you'd choose to use,
> it's useful to understand it because so much of the
> software, libraries, toolkits, articles, etc. use it.

Honestly, since even "experts" have discovered that inheritance can be
dangerous and interfaces are much better, most stuff that uses OO really
only uses structs and functions on them, but with the more convenient
field addressing syntax of something like C++ (as compared to C).

That much is the same in every language, i.e. I haven't seen a language
that wouldn't make use of record types in practical code.

While functional languages allow the same kind of inheritance (with
higher-order functions) as OO-langs do, I don't really have the impression
that people use it or need it.

It's good to understand how OO is built on top of standard C-like
languages, just like it's useful to understand all other kinds of
language fundamentals.
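As a rough illustration of the "structs and functions" and higher-order-function reuse mentioned above (a minimal sketch with invented names, not a full object encoding), in OCaml one can treat an "object" as a record of functions and derive a variant of it by overriding a field:

(* Minimal sketch: an "interface" as a record of functions; reuse is done
   by building a new record that keeps some fields and replaces others. *)
type widget = {
  draw   : unit -> unit;
  resize : int -> int -> unit;
}

let base_widget = {
  draw   = (fun () -> print_endline "plain widget");
  resize = (fun w h -> Printf.printf "resized to %dx%d\n" w h);
}

(* "Subclass": keep resize, override draw. *)
let fancy_widget =
  { base_widget with draw = (fun () -> print_endline "fancy widget") }

let () =
  base_widget.draw ();
  fancy_widget.draw ();
  fancy_widget.resize 10 20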

> a friend of mine who was a staunch atheist nonetheless
> took his children to church. he felt that if they were
> going to live in this country, they'd better understand
> the culture, whether he/they believed or not.

Heh. My dad let me undergo the same, though he's more of a Buddhist (I
guess). The downside of Christian teaching is that (due to group
pressure or whatever) most people actually believe it (like people in
other countries take Islam or a Buddhist world view for Truth). I used to.

If you tell your kids that Christianity is just part of their culture,
and people in other countries believe entirely different things, and
therefore nobody can ever know which religion is true, that'll make them
a hell of a lot more tolerant than most fundamentalists
*cough*bushists*cough* running around.

Jerzy Karczmarczuk

Oct 13, 2005, 11:55:29 AM
Jon Harrop wrote:
...

> When I did physics, some students complained that we should not have had
> lectures or examinations on using computers. They took a vote and 80% of
> the students said that computers have nothing to do with physics ...

Do you mind an off-topic anecdote?

When I was a physicist, we put a question on one examination: "How would
the following instruments behave on the surface of the Moon:
a barometer (mercury)
an aneroid barometer
a pendulum clock
...
a pycnometer
... etc. About 10.

Students gave very rich answers, taking into account the gravitation, the
high/low temperatures, the vacuum, etc. I still remember one answer:

"... Unfortunately, I haven't the slightest idea what a pycnometer is for.
But I presume that, if you throw it away, it will fly much farther than
on Earth".

Now, this experiment works with computers as well!!

Those folks who claimed that computers have nothing to do with physics had
simply shown no imagination. They would never become decent physicists!


Jerzy Karczmarczuk

Jon Harrop

Oct 13, 2005, 3:44:05 PM
Ulrich Hobelmann wrote:
> While functional languages allow the same kind of inheritance (with
> higher-order functions) as OO-langs do, I haven't really the impression
> that people use it or need it.

Have there been any studies done on the proportion of inheritance
hierarchies used in "conventional" OO languages that have a simple
equivalent in FPLs? For example, how often are inheritance hierarchies used
as a convoluted way to implement a variant type?
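For concreteness, here is a minimal sketch of the kind of equivalence being asked about (names invented for illustration, not drawn from any study): what would often be a small class hierarchy with a virtual area method in an OO language becomes a closed variant type plus one pattern match in OCaml.

(* Illustrative only: a "Shape" hierarchy collapsed to a variant type. *)
let pi = 4.0 *. atan 1.0

type shape =
  | Circle of float               (* radius *)
  | Rectangle of float * float    (* width, height *)

let area = function
  | Circle r         -> pi *. r *. r
  | Rectangle (w, h) -> w *. h

let () =
  Printf.printf "%f %f\n" (area (Circle 1.0)) (area (Rectangle (2.0, 3.0)))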

Chris F Clark

Oct 13, 2005, 4:52:33 PM
John Harrop asked:

> Have there been any studies done on the proportion of inheritance
> hierarchies used in "conventional" OO languages that have a simple
> equivalent in FPLs? For example, how often are inheritance hierarchies used
> as a convoluted way to implement a variant type?

I can't speak to the existence or non-existence of studies. However,
as an OO practitioner, I would say many of my OO hierarchies are
variant types, and so are other techniques I use (e.g. unions, tagged
or untagged). However, having different ways of describing variant
types, that have slightly different properties, means I have lots of
different angles for my mind to view a problem from, meaning that my
mind is more likely to hit upon a solution, no matter which
perspective it starts addressing the problem from.

Similarly, I think most of my OO hierarchies can also be viewed as
poor-person's parametric polymorphism--i.e. I want parametric
polymorphism, but don't have an fp way of expressing it in the target
language. Again, I have several different tools for getting around
that problem. The more tools I have, the fewer problems are road-blocks.
This is particularly true when one needs a lightweight
version of a solution. Adding a class to a hierarchy or a message
type to an interface has much lower impact on a design than building a
first-class function type. Building a first-class function type in a
language that doesn't have the right concepts means writing unreadable
and unmaintainable code. Switching languages to something better is
not always an option, particularly as only one member of a team.

-Chris

Joachim Durchholz

Oct 13, 2005, 5:09:33 PM
Cruise Director schrieb:

> I mean, what's more important, book
> knowledge or producing something that works and runs fast?

That depends.

If you're designing large systems, a good measure of theory is actually
indispensable.

The problem is: there's far more theory than anybody really needs, but
you don't know in advance what you'll need. So universities have started
to teach "working with theory" (and, I fear, many professors are using
that as an excuse to spread their ivory-tower theoretic framework, so
this isn't perfect).

Regards,
Jo

Brian Harvey

Oct 13, 2005, 5:38:43 PM
Ulrich Hobelmann <u.hob...@web.de> writes:
> nor do I profit from knowing about weird
>constructions to prove the halting problem.

It's a shame that you feel that way. First, a shame that you think about
your education in terms of "profit from," if I'm correctly understanding that
to mean "this will have direct application to my future job." Do you feel
that way about all the math courses you took? And second, a shame that you
don't see the beauty and the phenomenal genius in Turing's development of a
way to prove things about the theoretical limits of computers at a time when
they were just beginning to build actual ones.

>You don't learn stuff there (that you couldn't just learn yourself in
>20% the time)

In a sense this is true about any learning of anything -- you could do it
on your own with some effort. Nevertheless, many people find it beneficial
to be part of a community of scholars, helping each other learn and push the
limits of what's known. Perhaps you are overgeneralizing from a few bad
teaching experiences?

wooks

Oct 13, 2005, 6:23:11 PM

I am amazed that you think there was more than one way to interpret
what I actually said.

zitterb...@gmail.com

Oct 13, 2005, 6:38:00 PM
Lisp is like kung FOO! Lisp is a meta paradigm. Without Lisp you
can't understand any other languages. Learning the language makes you a
better person. Computer Science is the assault of paradigms on
languages. Lisp is the only thing that has survived for 40 years and it
should be respected. Another meta paradigm is the construction of new
languages. That is what we call Backus Normal Form. Learn both and then
write a Lisp macro that does something useful.

Neelakantan Krishnaswami

Oct 13, 2005, 7:34:40 PM
In article <dij5os$823$1...@abbenay.CS.Berkeley.EDU>, Brian Harvey wrote:
> "wooks" <woo...@hotmail.com> writes:
>>Well I could do a course on e-business entrepeneurship but we don't get
>>many electives and I'd rather spend them on something more academic and
>>see if I can wangle my way on to that course on a not for credit basis.
>
> Ugh.
>
> I will tell you right in this message everything in the business curriculum:
>
> 1. It's good to be greedy.
> 2. Assorted techniques for manipulating people who think it
> isn't good to be greedy.
>
> Since #1 is false, there's really no need for you to study #2.

This is a rather blinkered outlook. I had a great deal of fun working
at a startup, and the main things that I learned while working at a
startup were humility and a sense of service.

The job of a businessperson is to organize a group of people to create
goods and services so useful that other people will voluntarily part
with their hard-earned money for them. Doing a good job at this is
quite a bit harder than it might seem, since it's fundamentally an act
of empathy and creativity -- you have to focus on what someone else
needs, and understand their needs well enough to create things that
are both valuable to them and whose value can be communicated. But
there's a lot of organizational cruft associated with running a
business, and learning how to handle it is a very useful skill.

In fact, my main regret is going to work at a startup, rather than
starting up a business myself.

> But you should definitely think beyond computer science. I don't
> know where you're going to school, but I'm willing to bet there are
> courses available in literature, philosophy, psychology, art,
> mathematics, physics, etc. I advise fitting as many of those in as
> you can, even if it means a little less computer science.

This is good advice, though. My only caveat is that I found that
reading great literature was more useful than literature classes. I'd
suggest reading on your own and taking writing classes instead.

--
Neel Krishnaswami
ne...@cs.cmu.edu

Torben Ægidius Mogensen

Oct 14, 2005, 4:49:22 AM
"zitterb...@gmail.com" <zitterb...@gmail.com> writes:

> Lisp is the only thing that has survived for 40 years and it should
> be respected.

What we call LISP today has only a superficial resemblance to the LISP
of 40 years ago. Same with Fortran and COBOL (that also existed 40+
years ago). Who was it that said "I don't know which language people
will use in the year 2000, but I know they will call it Fortran!"?

And while LISP, Fortran and COBOL retained their names while changing,
you can argue that Algol is still with us in the shape of C, Java and
similar languages. See the "family tree" of programming languages at
http://www.levenez.com/lang/history.html for more details.

Torben

Lauri Alanko

Oct 14, 2005, 5:09:52 AM
In article <3r6ugdF...@individual.net>,

Ulrich Hobelmann <u.hob...@web.de> wrote:
> nor do I profit from knowing about weird constructions to prove the
> halting problem.

I doubt that. I believe everyone stumbles into Rice's Theorem sooner
or later. It's good to be ready for that:

"All right, we need something that analyzes a program and tells
whether it--"
"Sorry, can't be done."

This can save anything from minutes of fruitless pondering to
man-years of work in vain. :)


Lauri

Ulrich Hobelmann

Oct 14, 2005, 5:20:22 AM
Brian Harvey wrote:
> Ulrich Hobelmann <u.hob...@web.de> writes:
>> nor do I profit from knowing about weird
>> constructions to prove the halting problem.
>
> It's a shame that you feel that way. First, a shame that you think about
> your education in terms of "profit from," if I'm correctly understanding that
> to mean "this will have direct application to my future job." Do you feel

No, actually not. I went to University (while we have practical schools
too in Germany) because I was interested in more thorough background
knowledge, not just applied stuff. But when I have to learn a theory, I
expect that it is for handling a practical problem in a formal way.

It seems that an awful lot of theoretical CS is just theory without
applications, while the practical people (who seem to hate theory) don't
even bother to use formal methods or study principles behind programming
for instance, but just create ad-hoc solutions / languages instead
(often under the discipline name software engineering). The gap in
between is what I'd be interested in, but there aren't too many people
teaching that I guess.

The contrived construction of a funny machine that can't be proven to
halt isn't interesting to me. Many practical algorithms don't just
infinite-loop, and the people writing code *know* that their code (most
often) won't loop. The same with Gödel's stuff. I don't consider weird
constructions practical or useful at all, just because there exists one
totally made up case that refutes something.

> that way about all the math courses you took? And second, a shame that you
> don't see the beauty and the phenomenal genius in Turing's development of a
> way to prove things about the theoretical limits of computers at a time when
> they were just beginning to build actual ones.

Ok, it's nice to know that there are limits, but I'd rather be concerned
about practical limits. Turing machines are a weird design to begin
with (one-dimensional tape, infinite...).

>> You don't learn stuff there (that you couldn't just learn yourself in
>> 20% the time)
>
> In a sense this is true about any learning of anything -- you could do it
> on your own with some effort. Nevertheless, many people find it beneficial
> to be part of a community of scholars, helping each other learn and push the
> limits of what's known. Perhaps you are overgeneralizing from a few bad
> teaching experiences?

Mostly I just think my degree's first two years were a TOTAL waste of
time. Ok, in my free time I read lots of (to me) interesting stuff, so
the years afterwards weren't too exciting either, but had I had those
years right at the beginning, they would have been. There are really
interesting things in theoretical (let's call it "formal") CS, such as
semantics, process calculi, type systems, automata, but incomputability
is more a legend that CS people should have heard of, than something
they should have to study in depth, IMHO.

Ulrich Hobelmann

Oct 14, 2005, 5:25:03 AM
Jon Harrop wrote:
> Ulrich Hobelmann wrote:
>> While functional languages allow the same kind of inheritance (with
>> higher-order functions) as OO-langs do, I haven't really the impression
>> that people use it or need it.
>
> Have there been any studies done on the proportion of inheritance
> hierarchies used in "conventional" OO languages that have a simple
> equivalent in FPLs? For example, how often are inheritance hierarchies used
> as a convoluted way to implement a variant type?

Hehe, good question.
But there's a difference to be noted between classes and variant types:
variant types are usually closed, but very efficient (I guess), while
classes are open. You can define subclasses later, along with some
methods. Of course adding a new *function* is harder for classes. The
different implementation also means that a method call might be slower...

Of course if I want this kind of dynamic behavior (let's call it
interface), I'll just pass a function (or the whole interface/virtual
function table) as a parameter :)
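A small sketch of that "pass the interface as a parameter" idea (the printer record and its instances are invented for illustration):

(* The "vtable" is just a record of functions passed explicitly; new
   "implementations" can be added later without touching announce. *)
type 'a printer = { to_string : 'a -> string }

let announce printer x =
  print_endline ("value: " ^ printer.to_string x)

let int_printer   = { to_string = string_of_int }
let float_printer = { to_string = string_of_float }

let () =
  announce int_printer 42;
  announce float_printer 3.14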

Matthias Kretschmer

Oct 14, 2005, 6:01:16 AM

The trick here is that most programming languages are equivalent to
Turing machines; the machines are just simpler than most languages. On the
other hand they are not really impractical. If you consider infinite
tapes impractical, then programming languages that don't bound your
memory usage are impractical too (and I know many of them). Also look
at the book by Schoenhage et al., "Fast Algorithms - A Multitape Turing
Machine Implementation"
(http://www.informatik.uni-bonn.de/~schoe/tp/TPpage.html).

>
>> that way about all the math courses you took? And second, a shame that you
>> don't see the beauty and the phenomenal genius in Turing's development of a
>> way to prove things about the theoretical limits of computers at a time when
>> they were just beginning to build actual ones.
>
> Ok, it's nice to know that there are limits, but I'd rather be concerned
> about practical limits. Turing machines are a weird design to begin
> with (one-dimensional tape, infinite...).
>
>>> You don't learn stuff there (that you couldn't just learn yourself in
>>> 20% the time)
>>
>> In a sense this is true about any learning of anything -- you could do it
>> on your own with some effort. Nevertheless, many people find it beneficial
>> to be part of a community of scholars, helping each other learn and push the
>> limits of what's known. Perhaps you are overgeneralizing from a few bad
>> teaching experiences?
>
> Mostly I just think my degree's first two years were a TOTAL waste of
> time. Ok, in my free time I read lots of (to me) interesting stuff, so
> the years afterwards weren't too exciting either, but had I had those
> years right at the beginning, they would have been. There are really
> interesting things in theoretical (let's call it "formal") CS, such as
> semantics, process calculi, type systems, automata, but incomputability
> is more a legend that CS people should have heard of, than something
> they should have to study in depth, IMHO.
>

In good classes you should learn a lot, even for practical purposes.
There is of course some theory that you can't directly apply to
practical problems; for example, the PCP theorem is only indirectly
useful in practice, because we don't have PCP machines, but you can
prove some stuff concerning the APX, PTAS, EPTAS, etc. classes, which
are very interesting for practical purposes. E.g. you have some problem
and you need an algorithm that quickly calculates the solution
of an instance. If you have this theoretical background you can save a
lot of time; if you don't, well ... (I assume P unequal to NP for now).

The halting problem is tightly connected to practical problems and to
theories like type theory. If you want to ensure that your
type system is decidable, you know you can't have the power of a Turing
machine; if you want that power, your compiler may not halt on every
module/program instance. Looking at a compiler in more detail, you have
different passes; e.g. look at register allocation. Register allocation
is as hard as graph colouring if you have an architecture where the
registers are not interchangeable and, depending on the operation, you
have to choose a particular one, as on x86. If you know how well one can
approximate graph colouring, you know how bad a compiler is at this, as
long as (assuming P != NP) no worst-case super-polynomial time
register allocation algorithm is used. These are just some examples, so I
hope you see that having this theoretical background is a good thing if
one wants to do things right.
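To make the register-allocation connection concrete, here is a rough OCaml sketch (assumptions: the interference graph is given as adjacency lists, and registers are just integers numbered from 0) of the greedy-colouring heuristic that compilers commonly fall back on when optimal colouring is out of reach:

(* Greedy colouring of an interference graph: each node gets the smallest
   "register" not already used by a previously coloured neighbour. *)
let greedy_colour (graph : (int * int list) list) : (int * int) list =
  List.fold_left
    (fun colouring (node, neighbours) ->
       (* colours already assigned to this node's neighbours *)
       let used =
         List.filter_map (fun n -> List.assoc_opt n colouring) neighbours
       in
       (* smallest colour not used by a neighbour *)
       let rec first_free c =
         if List.mem c used then first_free (c + 1) else c
       in
       (node, first_free 0) :: colouring)
    [] graph

let () =
  let g = [ (1, [2; 3]); (2, [1; 3]); (3, [1; 2]); (4, [1]) ] in
  List.iter
    (fun (n, c) -> Printf.printf "node %d -> register %d\n" n c)
    (List.rev (greedy_colour g))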

Knowing just of these complexity classes or other kinds of theoretical
stuff is often not enough, because in reality you don't get your 3SAT
problem or Graph Colouring. You get problem XYZ and you have to figure it
out from there. Sometimes, as in the case of register allocation, it is
trivial to reduce it to graph colouring or, more importantly, to find an
L-reduction from XYZ to some known problem.

--
Matthias Kretschmer

Joachim Durchholz

unread,
Oct 14, 2005, 6:05:05 AM10/14/05
to
Ulrich Hobelmann schrieb:

> Brian Harvey wrote:
>
>> Ulrich Hobelmann <u.hob...@web.de> writes:
>>
>>> nor do I profit from knowing about weird constructions to prove the
>>> halting problem.
>
> No, actually not. I went to University (while we have practical schools
> too in Germany) because I was interested in more thorough background
> knowledge, not just applied stuff. But when I have to learn a theory, I
> expect that it is for handling a practical problem in a formal way.

The halting problem *is* a practical one. It would be nice if there were
a way to check that you haven't inadvertently written an endless loop.
Compilers could warn about nonterminating recursion.
Also, there's a whole class of problems that can be mapped to the
halting problem (the undecidable ones). Knowing which kind of problem is
in that class is of practical value, too.

Of course, the proof that the halting problem cannot be handled
algorithmically isn't in itself practical. No proof that tells us "this
can't be done" is practical. So from a practical perspective, you can be
content with the proof.

On the other hand, if you have a problem and are unsure whether it's
decidable, knowing such proof techniques can help you decide. So there is
some remote practical use even for this kind of knowledge.

> Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code (most
> often) won't loop.

Ah, but I had one such instance. I was writing a table control - very
basic down-to-earth GUI stuff.

The "interesting" part here was that row height was determined by
contents. I had to divide the processing into two phases: height
determination and actual laying-out. It's *very* easy to confuse the
steps, and that usually ends up with the laying-out part recursing back
into the height determination code - sometimes that will terminate,
sometimes it will not.

This kind of work isn't too common, with that I agree. But termination
problems can happen if you're doing things that aren't straightforward.

> The same with Gödel's stuff. I don't consider weird
> constructions practical or useful at all, just because there exists one
> totally made up case that refutes something.

It's not the weird construction that's interesting, it's the refutation.

> Ok, it's nice to know that there are limits, but I'd rather be concerned
> about practical limits. Turing machines are a weird design to begin
> with (one-dimensional tape, infinite...).

They are commonplace simply because they are the easiest-to-explain
equivalent of an infinite computer. In the '50s, when it was unclear
what an algorithm could or could not do, and whether different ways of
writing down algorithms would affect their power,
people invented dozens of algorithmic notations. Turing machines and the
lambda calculus are what is still in (relatively) common knowledge, but
there were others, and some of them were *really* weird.

The Turing machine survived because it was easiest to prove some things
with it, and because every other formalism could be proved to be
Turing-equivalent.

The lambda calculus survived because it's so abstract that it has served
as a model for many a programming language.

That's all. You don't *need* to know about Turing machines; it's just
that you won't understand the term "Turing equivalence" unless you know
what a Turing machine is.

> Mostly I just think my degree's first two years were a TOTAL waste of
> time.

Me too, in a sense. It was dedicated to learning programming, something
that I had done in my free time before. It was rather sparse on theory,
which was the area where I *could* learn.

The theoretical backgrounds have helped me a lot. It's simply additional
perspectives, and I can make use of them when I'm doing advanced stuff.

Learning this stuff was also a rewarding experience for me, but of
course that's no justification for forcing this stuff on students that
may not feel rewarded. The additional-perspective argument is.

> Ok, in my free time I read lots of (to me) interesting stuff, so
> the years afterwards weren't too exciting either, but had I had those
> years right at the beginning, they would have been. There are really
> interesting things in theoretical (let's call it "formal") CS, such as
> semantics, process calculi, type systems, automata, but incomputability
> is more a legend that CS people should have heard of, than something
> they should have to study in depth, IMHO.

Sorry, that's an indefensible position. You need to know about
decidability if you design type systems, or any kind of inference engine.

You also need to know about decidability to assess the limitations of
inference engines, to check whether the limitations are arbitrary or
really due to undecidability issues.

This knowledge also helps when *using* such engines. If you know that
what you're trying to achieve is undecidable, you automatically try to
transform the problem into something decidable. You know quite exactly
what information needs to be added so that the engine can work.
Without that background knowledge, exploring the problem space is
largely guesswork.

Regards,
Jo

Lauri Alanko

unread,
Oct 14, 2005, 6:12:26 AM10/14/05
to
In article <3r9binF...@individual.net>,

Ulrich Hobelmann <u.hob...@web.de> wrote:
> knowledge, not just applied stuff. But when I have to learn a theory, I
> expect that it is for handling a practical problem in a formal way.

Science involves both basic research and applied research. Basic
research strives only to deepen our understanding about things without
aspiring towards practical applications. A university is a scientific
institution, so it's no wonder that some things being taught there are
not eminently practical. If you don't like this approach, then the
university is probably not the right place for you.

That being said, one of the wonderful things about science is that
anything at all may turn out to have unexpected practical
applications. It's just impossible to know beforehand which. If
people were taught only things whose practical uses were well-known,
no one would ever come up with applications for other things.

> The contrived construction of a funny machine that can't be proven to
> halt isn't interesting to me. Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code (most
> often) won't loop.

If they do, they know it because they have proven it.

> Ok, it's nice to know that there are limits, but I'd rather be concerned
> about practical limits.

Practical limits change all the time. Theoretical ones don't. Guess
which ones are more useful to know about in the long run?

> Mostly I just think my degree's first two years were a TOTAL waste of
> time.

If you don't find use for what you have learned, why do you think the
problem is in what you learned? You might just as well think that
there's something wrong with what you are doing, or how you are doing
it, if you don't get to apply your knowledge enough.

> There are really interesting things in theoretical (let's call it
> "formal") CS, such as semantics, process calculi, type systems,
> automata, but incomputability is more a legend that CS people should
> have heard of, than something they should have to study in depth,
> IMHO.

That's funny, as all of the interesting examples you mention involve
incomputability e.g. in the form of undecidability. Don't you think it
is at all interesting to know e.g. whether a compiler will actually
finish when given a program to compile? Or do you think it's ok for
any old program to get stuck occasionally? You can always push
control-C?


Lauri

Vesa Karvonen

unread,
Oct 14, 2005, 6:45:34 AM10/14/05
to
In comp.lang.scheme Lauri Alanko <l...@iki.fi> wrote:
> [...] If people were taught only things whose practical uses were

> well-known, no one would ever come up with applications for other
> things.

Other than the above, I think that you are right on the money. However, I
don't think that teaching practical things, or teaching theory through
applications, or teaching abstractions through concrete examples,
necessarily stifles creativity. If anything, I believe that cool
applications of theoretical insights can motivate most people to learn
more theory.

-Vesa Karvonen

Joachim Durchholz

unread,
Oct 14, 2005, 6:53:45 AM10/14/05
to
zitterb...@gmail.com schrieb:

> Lisp is like kung FOO! Lisp is a meta paradigm.

Nope. Lisp is a family of languages, not a paradigm. It is quite
flexible, and can be used for many paradigms - but you still have to
learn the paradigms; Lisp is just the "substrate".

> WIthout lisp you can't understand any other languages.

Nonsense. I have learned Lisp, and yes it was an eye-opener since it
introduced me to higher-order functions - but I would have learned
that from Haskell, or any other FPL.

> Learning the language makes you a better person.

Now that's *utter* nonsense. Unless you specify what you mean by
"better person".
It certainly doesn't make me more compassionate, for one instance :-)

> Computer Science is the assault of paradigm to languages.

Doesn't compute.

> Lisp is the only thing that has survived for 40 years and it
> should be respected.

If that were of any relevance, I'd also have to respect Fortran and RPG.

Fortran... well, current-day Fortran bears little to no resemblance to
40-year-old Fortran dialects, so it doesn't really count.

But RPG? It's an unspeakable evil from ancient times, and is better
left untouched. (For the curious and foolhardy: three-address code,
GOSUB but no local variables, no pointers, no variable-length strings,
variable names limited to six characters. And those are just the biggest
deficits; there are numerous useless limitations in the details.)

I do respect Lisp. I wouldn't respect it for its age alone, though that
certainly adds to the respect.

> Another meta paradigm is the construction of new languages. That is
> what we call Backus Normal Form.

BNF is unrelated to Lisp.

> Learn both and then write a lisp macro that does something useful.

I have always been sceptical about self-definable syntax. It tends to
encourage code that nobody but the original macro author understands.

Regards,
Jo

Ulrich Hobelmann

unread,
Oct 14, 2005, 7:50:46 AM10/14/05
to
Matthias Kretschmer wrote:
>> The contrived construction of a funny machine that can't be proven to
>> halt isn't interesting to me. Many practical algorithms don't just
>> infinite-loop, and the people writing code *know* that their code (most
>> often) won't loop. The same with Gödel's stuff. I don't consider weird
>> constructions practical or useful at all, just because there exists one
>> totally made up case that refutes something.
>
> the trick is here, that most programming languages are equivalent to
> Turing Machines, just that they are simpler than most languages. On the
> other hand they are not really impractical. If you consider infinite
> tapes as impractical than programming languages that don't bound your
> memory usage are impractical, too (well I know many of them). And look
> at the book of Schoenhage et.al. "Fast Algorithms - A Multitape Turing
> Machine Implementation"
> (http://www.informatik.uni-bonn.de/~schoe/tp/TPpage.html).

But isn't the Lambda Calculus, or Recursive Functions equivalent in
power to the funny tape machine? Both are much easier to understand,
IMHO, and the first one even provides the basis for lots of programming
languages. Why does every CS student have to suffer through the Turing
stuff, but most haven't even *heard* of Lambda Calculus? This just
doesn't make sense to me.

> If you have this theoretical background you can save a
> lot of time, if you don't, well ... (I assume P unequal to NP for now).

Yes, that's one thing one should know.

> The halting problem is tightly connected to problems found in practical
> problems and theories like type theory. If you want to ensure that your
> type system is decideable you know you can't have the power of a turing
> machine. If you want this power, your compiler may not halt on every
> module/program instance. Looking at a compiler in more detail. You have
> different passes, e.g. look at register allocation. register allocation
> is as hard as graph colouring if you have an architecture where the
> registers are different and depending on the operation you have to
> choose a different one like x86. If you know how good one is able to
> approximate graph colouring you know how bad a compiler is at this as
> long as (as long as P != NP) no worst-case super-polynomial time
> register allocation algorithm is used. This are just some examples, so I
> hope you see, having this theoretical background is a good thing in case
> one wants to do things right.

Sure, one should know some facts, but I don't see studying all the
theoretical background behind it as very necessary. Students
can always read up on it, and one of the most important things to learn
in CS is *where* to find things, not exactly how everything works in
detail. The world is too big for that.

> Knowing just of these complexity classes or other kind of theoretical
> stuff is often not enough, because in reality you don't get your 3SAT
> problem or Graph Colouring. You get problem XYZ and you have to figure
> out from there. Sometimes, as in the case of register allocation, it is
> trivial to reduce it to graph colouring or more important find a
> L-reducation for XYZ to some known problem.

Maybe sometimes... ;)

Ulrich Hobelmann

unread,
Oct 14, 2005, 8:01:17 AM10/14/05
to
Lauri Alanko wrote:
> In article <3r9binF...@individual.net>,
> Ulrich Hobelmann <u.hob...@web.de> wrote:
>> knowledge, not just applied stuff. But when I have to learn a theory, I
>> expect that it is for handling a practical problem in a formal way.
>
> Science involves both basic research and applied research. Basic
> research strives only to deepen our understanding about things without
> aspiring towards practical applications. A university is a scientific
> institution, so it's no wonder that some things being taught there are
> not eminently practical. If you don't like this approach, then the
> university is probably not the right place for you.

I just wish the emphasis had been different... Many things could have
been done in shorter time, and they could have taught students more
interesting stuff. As it is, everybody I know *hates* theoretical CS,
and I only don't because I know there are cool advanced things out there.

You could say we only learn '60s and '70s stuff. The same goes for
operating systems, for instance. Semaphores are cool, and you should
understand them. But *writing* programs with them for 3-4 weeks, and
the prof never telling you about more modern approaches (process
calculi, message passing) that programmers can actually write
non-deadlocking programs in (try to write a large multithreaded program
given only semaphores ;) ), is nonsense. Our programming teaching
focuses on Algol-derivatives, nothing else. For a *university*, the
highest possible ivory tower in my country, that's clearly unacceptable.
I only have to think of all the things most graduates here will never
have heard of in their whole life and I wonder why we spend so much time
doing nothing.

>> There are really interesting things in theoretical (let's call it
>> "formal") CS, such as semantics, process calculi, type systems,
>> automata, but incomputability is more a legend that CS people should
>> have heard of, than something they should have to study in depth,
>> IMHO.
>
> That's funny, as all of the interesting examples you mention involve
> incomputability e.g. in the form of undecidability. Don't you think it
> is at all interesting to know e.g. whether a compiler will actually
> finish when given a program to compile? Or do you think it's ok for
> any old program to get stuck occasionally? You can always push
> control-C?

Well, for instance most compilers process an abstract syntax tree
underneath. These ASTs are defined recursively/inductively, so at some
point the recursion *has* to end. This simple knowledge is far more
relevant to me than the fact that there are loops or recursion patterns
that nobody *wants* to program, that never terminate. By the way, most
computer software probably isn't even supposed to terminate!
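
As a toy illustration (the expression type and ast-size are made up for
this post, not taken from any real compiler), a structurally recursive
pass terminates simply because every recursive call gets a strictly
smaller subtree:

(define (ast-size e)
  ;; Toy AST: an expression is a number, a symbol, or (op expr expr).
  (cond ((number? e) 1)
        ((symbol? e) 1)
        ((pair? e) (+ 1 (ast-size (cadr e)) (ast-size (caddr e))))
        (else (error "not an expression:" e))))

;; (ast-size '(+ 1 (* x 2)))  =>  5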

Lauri Alanko

unread,
Oct 14, 2005, 8:09:44 AM10/14/05
to
In article <3r9kcnF...@individual.net>,

Ulrich Hobelmann <u.hob...@web.de> wrote:
> Why does every CS student have to suffer through the Turing
> stuff, but most haven't even *heard* of Lambda Calculus? This just
> doesn't make sense to me.

I have often wondered the same. I think it is mostly just for
historical reasons. Many concepts about computability are _much_
simpler to grasp in LC. For example, LC programs are much easier to
compose together than Turing machines.

Then again, there is some justification for TMs: arguably, of the
various equivalent models of computation, TMs are most
"down-to-earth", i.e. closest to physical reality. This gives it
credibility, since the _point_ of the computational models is that
they should express things that are really computable in the real
world.

> Sure, one should know some facts, but I don't see the use of studying
> all the theoretical background behind it as very necessary. Students
> can always read up on it, and one of the most important things to learn
> in CS is *where* to find things, not exactly how everything works in
> detail. The world is too big for that.

You are exactly right. That's why basic undergrad education just
glances quickly at about a gazillion things: you don't really learn
anything "deeply" but later, you'll remember "hey, I remember reading
about something like this at that class..."

And reading through the proof of the undecidability of the halting
problem is _not_ very deep. The average student walks away with a
vague feeling that there was some problem about halting that couldn't
be solved, and he'll know where to look for more info if he ever needs
it. It's valuable, but not very much. If the theorem was only stated
in the class _without_ going through the proof, the average student
would forget completely about it after the exam. :)


Lauri

Gary Baumgartner

unread,
Oct 14, 2005, 9:36:13 AM10/14/05
to
In article <dio2nq$uks$1...@online.de>,
Joachim Durchholz <j...@durchholz.org> wrote:
[...]

>I have always been sceptical about self-definable syntax. It tends to
>encourage code that nobody but the original macro author understands.

Would you claim this about functions, datatypes or classes?
What's so different about (my-function a ...) versus (my-macro a ...)?
Don't you just see "my-function" or "my-macro" and look up its documentation?

Gary Baumgartner

Matthias Blume

unread,
Oct 14, 2005, 10:21:55 AM10/14/05
to
Ulrich Hobelmann <u.hob...@web.de> writes:

> The contrived construction of a funny machine that can't be proven to
> halt isn't interesting to me. Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code
> (most often) won't loop. The same with Gödel's stuff. I don't
> consider weird constructions practical or useful at all, just because
> there exists one totally made up case that refutes something.

It is interesting (and should be interesting to anyone with an
interest in computing) in the sense that it gives an upper bound on
what can be done. There are quite practical problems whose
solvability would be very desirable, but which are not solvable in
general. Many examples can be derived directly from Rice's theorem,
which in turn is an almost /immediate/ consequence of the
unsolvability of the Halting Problem. In particular, we cannot
construct a machine that can compare two arbitrary programs for
semantic equality. Similarly, there can be no "perfect" optimizer
that yields the smallest or fastest equivalent to a given program.

Of course, there are stronger constraints, such as complexity
constraints, that can make particular problems infeasible to be solved
precisely on a computer. NP-hardness can be a serious bummer (unless
there exists a good enough approximation algorithm), and even a lower
bound of n^3 or n^2 can be a serious problem depending on the
application.

The bottom line is that in order to understand these things one has to
have a firm grasp of what one's computational model is, what it can
do, and what it can't.

By the way, the whole point of discussing the Halting Problem is to
show that there are /practically relevant/ problems which cannot be
solved on a computer. If it weren't for that, a simple cardinality
argument is all that is needed to show that there exist incomputable
functions.

>> that way about all the math courses you took? And second, a shame that you
>> don't see the beauty and the phenomenal genius in Turing's development of a
>> way to prove things about the theoretical limits of computers at a time when
>> they were just beginning to build actual ones.
>
> Ok, it's nice to know that there are limits, but I'd rather be
> concerned about practical limits. Turing machines are a weird design
> to begin with (one-dimensional tape, infinite...).

Yes, discussing practical limits is important. The notion of
NP-hardness and NP-completeness has spawned a huge amount of research.
Few theoreticians today worry about the Halting Problem -- that's just
a given. Everybody worries about upper and lower bounds (and hopes
them to be at most polynomial).

The Turing Machine (which is not my favorite model of computation
either, btw.) is constructed the way it is to be /extremely simple/.
Apart from the idealization of having an infinite tape, it is obvious
that it can be practically realized. With the lambda calculus or
mu-recursive functions one needs a few steps to see that (usually by
showing how to implement them on something akin to the TM).

> Mostly I just think my degree's first two years were a TOTAL waste of
> time.

I am beginning to agree with you. At least you don't sound like you
"got" it.

> Ok, in my free time I read lots of (to me) interesting stuff,
> so the years afterwards weren't too exciting either, but had I had
> those years right at the beginning, they would have been. There are
> really interesting things in theoretical (let's call it "formal") CS,
> such as semantics, process calculi, type systems, automata, but
> incomputability is more a legend that CS people should have heard of,
> than something they should have to study in depth, IMHO.

If you are capable of understanding type systems in depth, you should
have no problem with computability. The proof of the unsolvability of
the Halting Problem is completely trivial and fits on a few lines once
you have the notion of a universal function in place. A universal
function, in turn, is a fundamental concept that you can't weasel
around: it is the essence of most existing programming language
implementations. If you study type systems, you have to understand
things like strong normalization, system F, etc. One important
property that a type system either possesses or doesn't is
decidability. And that's directly related to computability.
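
Spelled out in Scheme, the diagonal step really is just a few lines (a
sketch only: halts? is a hypothetical decider passed in as an argument,
since of course no real one exists):

(define (make-diagonal halts?)
  ;; halts? is assumed to return #t iff (p x) terminates.
  (lambda (p)
    (if (halts? p p)
        (let loop () (loop))   ; decider says "halts"  -> loop forever
        'halted)))             ; decider says "loops"  -> halt at once

;; For any claimed decider halts?, let d be (make-diagonal halts?).
;; Then (d d) halts exactly when (halts? d d) returns #f, so halts?
;; answers wrongly on (d d); hence no total, correct halts? exists.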

If you really think that studying (in)computability was a waste of
time, your entire CS education has been a waste of time.

Regards,
Matthias

Matthias Blume

unread,
Oct 14, 2005, 10:33:42 AM10/14/05
to
Lauri Alanko <l...@iki.fi> writes:

> Then again, there is some justification for TMs: arguably, of the
> various equivalent models of computation, TMs are most
> "down-to-earth", i.e. closest to physical reality. This gives it
> credibility, since the _point_ of the computational models is that
> they should express things that are really computable in the real
> world.

Yes, that's the main point behind TMs. One other issue that comes up
with the LC is that there is a less obvious complexity model. What is
the time and space consumption of a LC reduction sequence? Counting
beta steps is not good enough as on a real machine one needs to
implement deep substitution. One could go to explicit substitutions
(i.e., the lambda-sigma calculus with DeBruijn indices etc), but that
is *much* more complicated, especially to the uninitiated. Similar
things can be said about space consumption. Again, simply looking at
the size of an expression might not be good enough because of sharing.
Reasoning about sharing is not easy at all. (There is a reason why
optimal reductions have been studied for such a long time.)

Of course, this being said, the TM is not a very good (read:
realistic) model for thinking about complexity either. That's why
there are also plenty of other models out there. But at least it is
easy to analyze on its own terms should you choose to do so. That's
something that cannot be said quite so easily about the LC.

> And reading through the proof of the undecidability of the halting
> problem is _not_ very deep.

Yes. The proof is dead simple. All the real work is in the
construction of the universal function.

Matthias

Marcin 'Qrczak' Kowalczyk

unread,
Oct 14, 2005, 10:41:33 AM10/14/05
to
Matthias Kretschmer <mccr...@gmx.net> writes:

> the trick is here, that most programming languages are equivalent to
> Turing Machines,

Not quite.
http://lambda-the-ultimate.org/node/view/203
http://lambda-the-ultimate.org/node/view/1038

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Matthias Blume

unread,
Oct 14, 2005, 10:44:16 AM10/14/05
to
Ulrich Hobelmann <u.hob...@web.de> writes:

> Well, for instance most compilers process an abstract syntax tree
> underneath. These ASTs are defined recursively/inductively, so at
> some point the recursion *has* to end. This simple knowledge is far
> more relevant to me than the fact that there are loops or recursion
> patterns that nobody *wants* to program, that never terminate.

That's a naive (or shall I say: uninformed) view of what a compiler
does. Yes, the frontend will terminate, fine. What about the
optimizer? If the optimizer performs abstract interpretation of some
form, it might not terminate unless one is careful. If it performs
partial evaluation, it might not terminate. If it "evaluates under
the lambda" it might not terminate. A few years ago an otherwise
excellent entry in the ICFP programming contest (a raytracer) stumbled
over this problem by being over-aggressive in its implementation of the
GML language where an infinite loop in an otherwise unused texture
would send it into an infinite loop...

These things /do/ matter in practice!

> By the way, most computer software probably isn't even supposed to
> terminate!

Nonsense. Almost all computer software I know is supposed to terminate, at
least given the appropriate input:

An application should quit after I press Apple-Q, an OS should
shut down after I issue the "shutdown" command, an interactive program
should eventually come back and wait for new input from the user, and
so on and so on.

And even if you consider embedded software that controls some
long-running device, there is usually a pretty obvious decomposition
of the program into a terminating program which is run repeatedly.

Matthias

Antoun Kanawati

unread,
Oct 14, 2005, 11:11:31 AM10/14/05
to
Ulrich Hobelmann wrote:
> It seems that an awful lot of theoretical CS is just theory without
> applications, while the practical people (who seem to hate theory) don't
> even bother to use formal methods or study principles behind programming
> for instance, but just create ad-hoc solutions / languages instead
> (often under the discipline name software engineering). The gap in
> between is what I'd be interested in, but there aren't too many people
> teaching that I guess.

The dichotomy of "practical" vs "theoretical" is false.

Good practical choices are often backed by good theory.

Besides, the amount of theory that you need in practice varies with the
kind of practice you do. If you're writing some simple application,
you may find all that theory totally wasted. But, if you're writing
a compiler, you may find that theory comes in very handy.

> The contrived construction of a funny machine that can't be proven to
> halt isn't interesting to me. Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code (most
> often) won't loop. The same with Gödel's stuff. I don't consider weird
> constructions practical or useful at all, just because there exists one
> totally made up case that refutes something.

The weird construction is NOT the point; it merely proves a very
important statement with significant implications on the limitations
of real practical programs. As in: you cannot write a program to
decide whether another program will (or will not) halt on a particular
input.

And, we know, from practice, that infinite loops occur. Put the two
together, and you understand why no compiler writer has bothered
implementing a warning for infinite loops, or various other
runtime pathologies, in spite of our ability to detect many of these
cases by visually inspecting the code.

Throw in complexity theory, analysis of algorithms, etc... and now
you have a good foundation for estimating the computational cost of
your practical solutions, BEFORE you start designing and implementing
them.

> Ok, it's nice to know that there are limits, but I'd rather be concerned
> about practical limits. Turing machines are a weird design to begin
> with (one-dimensional tape, infinite...).

This particular limit (halting is undecidable) is PRACTICAL and
UNIVERSAL. The universality derives primarily from the essential
simplicity of the Turing Machine.

And, you should note that the approach is VERY PRACTICAL, since
it makes generalization very easy.

> Mostly I just think my degree's first two years were a TOTAL waste of
> time. Ok, in my free time I read lots of (to me) interesting stuff, so
> the years afterwards weren't too exciting either, but had I had those
> years right at the beginning, they would have been. There are really
> interesting things in theoretical (let's call it "formal") CS, such as
> semantics, process calculi, type systems, automata, but incomputability
> is more a legend that CS people should have heard of, than something
> they should have to study in depth, IMHO.

Computability (and lack thereof) is only a small part of a formal
education, and it happens to be a fundamental part of it, since it
defines a universal limit on our ability to solve problems by
algorithmic means. I don't understand why you're singling it out.
--
A. Kanawati

Ulrich Hobelmann

unread,
Oct 14, 2005, 11:34:12 AM10/14/05
to
Matthias Blume wrote:
>> Mostly I just think my degree's first two years were a TOTAL waste of
>> time.
>
> I am beginning to agree with you. At least you don't sound like to
> "got" it.

Back then I got it alright. I just stopped caring about these things.
Since my time is limited, I'd rather spend it on things that help me in
real-life, such as getting a job. The world has enough problems, so I
don't worry about computational limits anymore. Ask 80+% of CS people
in the world if they *really* apply this theory in their daily
programming, and I think most don't. Except in this newsgroup I've
never heard of one.

>> Ok, in my free time I read lots of (to me) interesting stuff,
>> so the years afterwards weren't too exciting either, but had I had
>> those years right at the beginning, they would have been. There are
>> really interesting things in theoretical (let's call it "formal") CS,
>> such as semantics, process calculi, type systems, automata, but
>> incomputability is more a legend that CS people should have heard of,
>> than something they should have to study in depth, IMHO.
>
> If you are capable of understanding type systems in depth, you should
> have no problem with computability. The proof of the unsolvability of
> the Halting Problem is completely trivial and fits on a few lines once
> you have the notion of a universal function in place. A universal

It's not about understanding. It was about being assigned lots of stupid
homework involving these things. Most (all?) of theoretical CS was MUCH
easier than what I had to do for maths.

> function, in turn, is a fundamental concept that you can't weasel
> around: it is the essence most existing programming language
> implementations. If you study type systems, you have to understand
> things like strong normalization, system F, etc. One important
> property that a type system either possesses or doesn't is
> decidability. And that's directly related to computability.
>
> If you really think that studying (in)computability was a waste of
> time, your entire CS education has been a waste of time.

Maybe. The part involving the university definitely, but I already knew
that after spending two weeks there. The only reason I studied all this
time is to get a diploma, a piece of paper that will magically allow me
to earn several times what a student's programming job pays (for the
same work). ;)

Joachim Durchholz

unread,
Oct 14, 2005, 12:31:42 PM10/14/05
to
Marcin 'Qrczak' Kowalczyk schrieb:

> Matthias Kretschmer <mccr...@gmx.net> writes:
>
>
>>the trick is here, that most programming languages are equivalent to
>>Turing Machines,
>
> Not quite.
> http://lambda-the-ultimate.org/node/view/203
> http://lambda-the-ultimate.org/node/view/1038

The paper quoted there is simply faulty.

It assumes that you cannot build a Turing Machine that computes outputs
that depend on input which in turn depends on previous output.

But a TM can do what a computer program can:

1) If the cardinality of all potential input signals is finite, it can
return a list of things to do for each case.
2) If the cardinality is countably infinite, the TM can compute another
TM that will accept the next input. (Some kind of "telescoping"
operation, I'd say - and I suspect that's how IO in purely functional
languages works, too.)
3) Uncountably infinite input can be handled by neither TMs nor
computer programs (all are limited to strings over finite alphabets,
which makes the inputs countable), so this case doesn't arise.

I think that makes interactive TMs exactly equivalent to standard TMs.
Or, rather, equivalent in those respects that matter: termination,
decidability, etc.
(I'd have to provide a proof if this were a reviewed periodical. I'll
leave that to readers *gg*)

Regards,
Jo

Joachim Durchholz

unread,
Oct 14, 2005, 12:41:24 PM10/14/05
to
Gary Baumgartner schrieb:

Because macros can do things that functions can't.

In (say) Pascal, when I look at a function declaration and see

function foo (baz: integer): integer

I know that it won't modify baz. That may already be all that I need to
know about foo.

For macros, I need to inspect the full macro body to find out whether
it's adding a "var" to that parameter name. I.e. I have to read the full
sources, or believe in what the docs tell me (and the docs are almost
always incomplete, so believing them usually isn't good workmanship).

IOW it comes down to the guarantees that the language gives me. If the
macro language cannot weaken the guarantees that the base language gives
me, then OK. If it can, it's dangerous - or, put more neutrally, the
safeness of a language for use is the lower of the safeness of the macro
language and that of the base language. (A non-existent macro language
has infinite safety - you can't do anything dangerous with it *ggg*)
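
To make that concrete with a small Scheme sketch of my own: a hygienic
syntax-rules macro can still do something no call-by-value function can,
namely assign to its operands, but at least its own temporaries can
never capture the caller's names:

(define-syntax swap!
  (syntax-rules ()
    ((_ a b)
     (let ((tmp a))      ; hygiene renames this tmp, so it can never
       (set! a b)        ; shadow a variable called tmp at the call site
       (set! b tmp)))))

(define tmp 1)
(define x   2)
(swap! tmp x)            ; works as expected: tmp is now 2, x is now 1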

Regards,
Jo

Gary Baumgartner

unread,
Oct 14, 2005, 2:38:09 PM10/14/05
to
In article <dion3l$tnu$1...@online.de>,

Joachim Durchholz <j...@durchholz.org> wrote:
>Gary Baumgartner schrieb:
>> In article <dio2nq$uks$1...@online.de>,
>> Joachim Durchholz <j...@durchholz.org> wrote:
>> [...]
>>
>>>I have always been sceptical about self-definable syntax. It tends to
>>>encourage code that nobody but the original macro author understands.
>>
>> Would you claim this about functions, datatypes or classes?
>> What's so different about (my-function a ...) versus (my-macro a ...)?
>> Don't you just see "my-function" or "my-macro" and look up its
>> documentation?
>
>Because macros can do things that functions can't.
>
>In (say) Pascal, when I look at a function declaration and see
>
> function foo (baz: integer): integer
>
>I know that it won't modify baz. That may already be all that I need to
>know about foo.

But in almost all cases that's not all you need to know. Otherwise, you
could just not call foo.

>For macros, I need to inspect the full macro body to find out whether
>it's adding a "var" to that parameter name. I.e. I have to read the full
>sources, or believe in what the docs tell me (and the docs are almost
>always incomplete, to believing them usually isn't good workmanship).

For functions ... I have to read the full sources, or believe what the
docs tell me about what will be returned, side-effects, etc.

>IOW it comes down to the guarantees that the language gives me. If the
>macro language cannot weaken the guarantees that the base language gives
>me, then OK. If it can, it's dangerous - or, put more neutrally, the
>safeness of a language for use is the lower of the safeness of the macro
>language and that of the base language.

I can essentially agree with your neutral version, and the flip side is
that expressivity is the upper of the macro and base language. You are
comfortable with a certain degree of safety (functions without call
by reference) and expressiveness, but not less safety than that with more
expressiveness (functions with call by reference, macros, etc).

Since it's a matter of degree, I think it needs more justification to say
that a certain balance (except near the endpoints) is the right one.


By the way, macros can improve safety by removing more repetition.
For example, I teach a course whose slides contain the following Java code:

public static Node delete(Node front, Object o) {
    Node previous = null;
    Node current = front;
    while (current != null && !current.value.equals(o)) {
        previous = current;
        current = current.link;
    }
    if (current != null) {
        if (current == front) {
            front = current.link;
        } else {
            previous.link = current.link;
        }
    }
    return front;
}

This is a case of a common pattern that Zahn and Knuth noted over 30 years ago:

while (!a && !b) {
    ...
}
if (a) {
    post-a
} else {
    post-b
}

Notice however that the original code tests !a to select post-b, presumably
for efficiency.

Wouldn't it be nice to be able to:

specify a and b once, avoiding copying mistakes

not have to negate and reason about negation, avoiding more mistakes

say "until", just like we can in English, and have the post-condition

not have to read "a" twice, not have the computer execute it twice

Scheme doesn't have until built in, but I can define it:

;; Until loop
;
; (until condition body ...)
;
; Like a while loop, except ends when condition is *true*
;
; If the condition is a disjunction
;
;   (or sub-condition
;       sub-condition
;       ...)
;
; it may be written in the following form to allow post-processing
; based on the (first) sub-condition causing termination:
;
;   (one-of (sub-condition [optional post-processing])
;           (sub-condition [optional post-processing])
;           ...)
;
(define-syntax until
  (syntax-rules (one-of)
    ((_ (one-of clauses ...) do0! ...)
     (letrec ((loop (lambda () (cond clauses ... (#t do0! ... (loop))))))
       (loop)))
    ((_ condition do0! ...) (until (one-of (condition)) do0! ...))))


To compare with the Java delete, I'll first:

(define value car)
(define link cdr)
(define set-link! set-cdr!)
(define == eq?)

Now:

(define (delete front o)
  (let ((previous null)
        (current front))
    (until (one-of ((null? current))
                   ((equal? (value current) o)
                    (if (== current front)
                        (set! front (link current))
                        (set-link! previous (link current)))))
      (set! previous current)
      (set! current (link current))))
  front)

I could go further and define macros to capture the common forms:

(set! v (op v a ...)) ; update a variable based on its current value
(if c (op1 a ...) (op2 a ...) b ...) ; select operation to apply to a value

both of which appear in the above code.
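
One possible shape for the first of these, as a sketch (the name update!
is mine, just for illustration):

(define-syntax update!
  (syntax-rules ()
    ;; (update! v op a ...)  expands to  (set! v (op v a ...))
    ((_ v op a ...) (set! v (op v a ...)))))

;; e.g. (update! current link)   ; same as (set! current (link current))
;;      (update! n + 1)          ; same as (set! n (+ n 1))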

>[...]

Gary Baumgartner

David Hopwood

unread,
Oct 14, 2005, 4:01:41 PM10/14/05
to
Marcin 'Qrczak' Kowalczyk wrote:
> Matthias Kretschmer <mccr...@gmx.net> writes:
>
>>the trick is here, that most programming languages are equivalent to
>>Turing Machines,
>
> Not quite.
> http://lambda-the-ultimate.org/node/view/203
> http://lambda-the-ultimate.org/node/view/1038

And <http://c2.com/cgi/wiki?InteractiveComputationIsMorePowerfulThanNonInteractive>.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

David Hopwood

unread,
Oct 14, 2005, 4:23:18 PM10/14/05
to
Joachim Durchholz wrote:
> Marcin 'Qrczak' Kowalczyk schrieb:
>> Matthias Kretschmer <mccr...@gmx.net> writes:
>>
>>> the trick is here, that most programming languages are equivalent to
>>> Turing Machines,
>>
>> Not quite.
>> http://lambda-the-ultimate.org/node/view/203
>> http://lambda-the-ultimate.org/node/view/1038
>
> The paper quoted there is simply faulty.
>
> It assumes that you cannot build a Turing Machine that computes outputs
> that depend on input which in turn depends on previous output.
>
> But a TM can do what a computer program can:
>
> 1) If the cardinality of all potential input signals is finite, it can
> return a list of things to do for each case.

This cardinality is not finite in most interactive models, and cannot be
made finite without introducing severe restrictions.

> 2) If the cardinality is countably infinite, the TM can compute another
> TM that will accept the next input. (Some kind of "telescoping"
> operation, I'd say - and I suspect that's how IO in purely functional
> languages works, too.)

You've just described a Sequential Interaction Machine, which is not itself
a TM; it is constructed from a TM. Also, a SIM is deterministic, while most
interactive models are not.

<http://c2.com/cgi/wiki?InteractiveComputationIsMorePowerfulThanNonInteractive>
refutes the argument you're trying to make.

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Joachim Durchholz

unread,
Oct 14, 2005, 4:56:39 PM10/14/05
to
David Hopwood schrieb:

> Joachim Durchholz wrote:
>
>>1) If the cardinality of all potential input signals is finite, it can
>>return a list of things to do for each case.
>
> This cardinality is not finite in most interactive models, and cannot be
> made finite without introducing severe restrictions.

I listed that case just for completeness.

>>2) If the cardinality is countably infinite, the TM can compute another
>>TM that will accept the next input. (Some kind of "telescoping"
>>operation, I'd say - and I suspect that's how IO in purely functional
>>languages works, too.)
>
> You've just described a Sequential Interaction Machine, which is not itself
> a TM; it is constructed from a TM. Also, a SIM is deterministic, while most
> interactive models are not.

Well, computer programs are deterministic (nondeterminism, while in
theory interesting, is generally avoided in practice because it makes
debugging far too difficult), so that doesn't seem to be a serious
restriction. (Nondeterministic calculi could be interesting to model the
entire human-machine interaction.)

Still, I'm reluctant to accept that "a machine that can do interaction
is more powerful than a TM".

Let me try a proof outline:
1) Define the non-deterministic Turing machine (NDTM) with a tape that
can be modified everywhere the TM didn't read or write yet.
2) TMs with a countably infinite number of tapes are equal in power to
standard TMs. Model the nondeterminism in the NDTM by providing a
(countably infinite) number of tapes that model all the possible
modifications of the "outside world".
3) Postulate that for a concrete run, the "outside world" will choose
what tape cell modifications will be in effect.

This construction leaves the NDTM with the task of providing answers for
all possible inputs (something that a real program must also do), while
keeping it at the same power as a standard TM.

I don't think that the conclusions are wrong, though I'm pretty sure
that the reasoning has a lot of gaping holes in need of fixing :-)

I'm not even sure that there's any relevance in the question whether the
tape is fixed beforehand or not - it may well be that all the proofs
associated with TMs are independent of whether the tape's contents is
fixed before the run or not.

Regards,
Jo

Joachim Durchholz

unread,
Oct 14, 2005, 5:07:03 PM10/14/05
to
Gary Baumgartner schrieb:
> In article <dion3l$tnu$1...@online.de>,
> Joachim Durchholz <j...@durchholz.org> wrote:
>
>>Gary Baumgartner schrieb:
>>
>>>In article <dio2nq$uks$1...@online.de>,
>>>Joachim Durchholz <j...@durchholz.org> wrote:
>>>[...]
>>>
>>>>I have always been sceptical about self-definable syntax. It tends to
>>>>encourage code that nobody but the original macro author understands.
>>>
>>>Would you claim this about functions, datatypes or classes?
>>>What's so different about (my-function a ...) versus (my-macro a ...)?
>>>Don't you just see "my-function" or "my-macro" and look up its
>>> documentation?
>>
>>Because macros can do things that functions can't.

I have to modify this one:

I have yet to see things macros can do that functions (including
higher-order ones) cannot do in a safer manner.

(Just take a look at how Haskell people create "sublanguages". Look, ma,
no macros! *ggg*)

>>In (say) Pascal, when I look at a function declaration and see
>>
>> function foo (baz: integer): integer
>>
>>I know that it won't modify baz. That may already be all that I need to
>>know about foo.
>
> But in almost all cases that's not all you need to know. Otherwise, you
> could just not call foo.

Such limited knowledge is useful in programmers' everyday practice:
hunting bugs, and adapting code to new requirements.

Knowing the trails that need *not* be chased in advance is an invaluable
asset in such a situation. It allows one to concentrate on one detail
and leave all the other details out.

> By the way, macros can improve safety by removing more repetition.

Agreed. (Again, HOFs can be used to achieve the same effect.)

Regards,
Jo

Philippa Cowderoy

unread,
Oct 14, 2005, 7:45:24 PM10/14/05
to
On Fri, 14 Oct 2005, Joachim Durchholz wrote:

> I have to modify this one:
>
> I have yet to see things macros can do that functions (including higher-order
> ones) cannot do in a safer manner.
>

Build new types.

> (Just take a look at how Haskell people create "sublanguages". Look, ma, no
> macros! *ggg*)
>

Yet there's still a use for Template Haskell.

--
fli...@flippac.org

A problem that's all in your head is still a problem.
Brain damage is but one form of mind damage.

Philippa Cowderoy

unread,
Oct 14, 2005, 7:47:08 PM10/14/05
to
On Fri, 14 Oct 2005, Joachim Durchholz wrote:

> Still, I'm reluctant to accept that "a machine that can do interaction is
> more powerful than a TM".
>

No TM can launch an ICBM, for one. More generally, if you can interact
with things you've a chance of finding something more powerful than a TM
to interact with.

--
fli...@flippac.org

Sometimes you gotta fight fire with fire. Most
of the time you just get burnt worse though.

Joe Marshall

unread,
Oct 14, 2005, 9:37:35 PM10/14/05
to
Ulrich Hobelmann <u.hob...@web.de> writes:

> The contrived construction of a funny machine that can't be proven to
> halt isn't interesting to me. Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code
> (most often) won't loop. The same with Gödel's stuff. I don't
> consider weird constructions practical or useful at all, just because
> there exists one totally made up case that refutes something.

Occasionally someone on comp.lang.lisp states ``Yeah, in *theory* you
cannot tell if a program halts, but in *practice* it should be easy.''
They see Gödel's proof as an `artificial' construct of a `pathological
case' that would never occur in a `real program'.

However, it turns out that the halting problem and issues of
undecidability are *trivial* to uncover in very simple problems (Gödel
took the further step to prove that there is no way to paper over the
simple problems). Here are two simple examples:

(define k0 '(#t () (x)))

(define (kernel i s)
  (list (not (car s))
        (if (car s)
            (cadr s)
            (cons i (cadr s)))
        (cons 'y (cons i (cons 'z (caddr s))))))

(define (mystery list)
  (let ((result (foldl kernel k0 list)))
    (if (null? (cadr result))
        #f
        (mystery
         (if (car result)
             (cadr result)
             (caddr result))))))

In this first example, we fold a kernel function over a list and
iterate on the result. The kernel function trivially halts, and the
fold function will halt on any finite list if the kernel does.
Nonetheless, no one has been able to prove whether the mystery
function halts or not. So what is it about the mystery function that
defies analysis? How is this different from any other iteration that
someone *knows* won't loop?

Here's a different example:

(define (base-bump base n)
  (if (< n base)
      n
      (do ((exponent 0 (+ exponent 1))
           (probe base (* probe base))
           (divisor 1 probe))
          ((> probe n)
           (+ (* (expt (+ base 1) (base-bump base exponent))
                 (quotient n divisor))
              (base-bump base (remainder n divisor)))))))

(define (goodstein seed)
  (do ((i 2 (+ i 1))
       (n seed (- (base-bump i n) 1)))
      ((zero? n))
    (newline)
    (display n)))

The `hereditary base-n representation' of a number is when you write
the number as a sum of powers-of-n and recursively write the exponents
in hereditary base-n representation. For example, the number 35 in
base 2 is

(+ (expt 2 5) (expt 2 1) 1)

but we can rewrite 5 as
(+ (expt 2 2) 1)

so the fully expanded hereditary base-2 representation of 35 is

(+ (expt 2 (+ (expt 2 2) 1))
   (expt 2 1)
   1)

The `base-bump' operation works by taking a number in hereditary
base-n representation and replacing every occurrence of n with n+1.
`Base-bump'ing 35 would be

(+ (expt 3 (+ (expt 3 3) 1))
   (expt 3 1)
   1)

or, in decimal,

22876792454965

As you can see, bumping the base can be quite impressive. Suppose we
subtract one from that result, giving 22876792454964, and bump the
base from 3 to 4:

(+ (expt 4 (+ (expt 4 4) 1))
   (expt 4 1))

or, in decimal,
536312317197703883982960999928233845099174632823695735108942
457748870561202941879072074971926676137107601274327459442034
15015531247786279785734596024336388

The next iteration gives us a number with 2185 digits and the one
after that a number with 36307 digits.

There are two interesting things about this process. First, despite
the huge rate of growth, the sequence converges to zero for all
positive integers. Second, the assertion that the sequence converges
is undecidable in number theory (Peano axioms). In other words, you
need not resort to Gödelian techniques to find undecidable programs.

> Mostly I just think my degree's first two years were a TOTAL waste of
> time. Ok, in my free time I read lots of (to me) interesting stuff,
> so the years afterwards weren't too exciting either, but had I had
> those years right at the beginning, they would have been. There are
> really interesting things in theoretical (let's call it "formal") CS,
> such as semantics, process calculi, type systems, automata, but
> incomputability is more a legend that CS people should have heard of,
> than something they should have to study in depth, IMHO.

Incomputability is more practical than you think.

Jon Harrop

unread,
Oct 14, 2005, 11:53:32 PM10/14/05
to

Seeing as I've been trying to learn Lisp and Scheme, I'll just try to
translate Joe's code into OCaml and SML.

Joe Marshall wrote:
> (define k0 '(#t () (x)))

The quoted symbols can be substituted with polymorphic variants, so I'll
OCaml first:

let k0 = (true, [], [`x]);;

> (define (kernel i s)
> (list (not (car s))
> (if (car s)
> (cadr s)
> (cons i (cadr s)))
> (cons 'y (cons i (cons 'z (caddr s))))))

let kernel (a, b, c) i =
  (not a, (if a then b else i :: b), `y :: i :: `z :: c);;

> (define (mystery list)
> (let ((result (foldl kernel k0 list)))
> (if (null? (cadr result))
> #f
> (mystery
> (if (car result)
> (cadr result)
> (caddr result))))))

let rec mystery list = match List.fold_left kernel k0 list with
| (_, [], _) -> `f
| (a, b, c) -> mystery (if a then b else c);;

> (define (base-bump base n)
> (if (< n base)
> n
> (do ((exponent 0 (+ exponent 1))
> (probe base (* probe base))
> (divisor 1 probe))
> ((> probe n)
> (+ (* (expt (+ base 1) (base-bump base exponent))
> (quotient n divisor))
> (base-bump base (remainder n divisor)))))))

SML makes arbitrary-precision integer arithmetic easier so I'll use MLton
here:

fun expt n m = if m=0 then 1 else n*expt n (m-1)

fun base_bump base n =
  if n<base then n else
  let fun aux exponent probe divisor =
        if probe <= n then aux (exponent+1) (probe*base) probe else
        expt (base+1) (base_bump base exponent) * (n div divisor) +
        base_bump base (n mod divisor)
  in
    aux 0 base 1
  end

> (define (goodstein seed)
> (do ((i 2 (+ i 1))
> (n seed (- (base-bump i n) 1)))
> ((zero? n))
> (newline)
> (display n)))

fun goodstein seed =
  let fun aux i n =
        if n<>0 then (
          print (IntInf.toString n^"\n");
          aux (i+1) (base_bump i n - 1))
        else ()
  in
    aux 2 seed
  end

--
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com

Matthias Buelow

unread,
Oct 15, 2005, 12:21:30 AM10/15/05
to
In comp.lang.scheme Ulrich Hobelmann <u.hob...@web.de> wrote:

>The contrived construction of a funny machine that can't be proven to
>halt isn't interesting to me. Many practical algorithms don't just

Well. The point is that it's not just a "funny machine", but _every_
machine conceivable (via reciprocal emulation of various existing
machine types, and the Church-Turing thesis as a strong indication
for those that haven't been conceived yet.) It's mostly a philosophical
result of, imho, quite some importance. By extension, if the
human mind could be shown to be just an electrochemical
contraption (which imho is quite possible), this would then
indicate that the human mind is no more powerful than a TM. Wouldn't
you agree that, if such a thing could ever be proven, it would be
a rather interesting result? This is to show that it's not just
some idle pastime of some weird mathematicians but a topic of broad
interest, from social, philosophical and theological effects to
technical ones (can we build programs that can "think" like humans?
etc.)

mkb.

Matthias Buelow

unread,
Oct 15, 2005, 12:30:27 AM10/15/05
to
In comp.lang.scheme Ulrich Hobelmann <u.hob...@web.de> wrote:

>But isn't the Lambda Calculus, or Recursive Functions equivalent in
>power to the funny tape machine? Both are much easier to understand,
>IMHO, and the first one even provides the basis for lots of programming
>languages. Why does every CS student have to suffer through the Turing
>stuff, but most haven't even *heard* of Lambda Calculus? This just
>doesn't make sense to me.

What's "space", as defined with recursive functions, or with the
lambda calculus? The TM has certain advantages, like being a lot
nearer to actual computers (i.e., physical devices) than most other
mathematical notions, while at the same time being simple enough
not to have to bother with unnecessary complications (like space
being calculated as a sum over the bits in use at each clock tick
on a RAM (random access machine with dyadic coding); instead, you
just count the tape cells that have been used.) IMHO it's a rather
ingenious and elegant model.

mkb.

Matthias Buelow

unread,
Oct 15, 2005, 2:46:51 AM10/15/05
to
In comp.lang.scheme Joachim Durchholz <j...@durchholz.org> wrote:

>Because macros can do things that functions can't.
>In (say) Pascal, when I look at a function declaration and see

>I know that it won't modify baz. That may already be all that I need to
>know about foo.

A good approach to guard against such problems is:

* to carefully document all side effects (whether it's a Lisp macro,
or a C function that stores something through a pointer argument),
* only use macros sparingly, where functions won't work
* apply common sense, think about the person having to read and
understand the program, follow common idiom and the KISS principle

Unfortunately, none of the above will help with bad programmers (or
if you're weeks behind the deadline).

The worst thing is when someone uses macros liberally, "for sport" (or
template metaprogramming in C++, or an extremely terse syntax in Perl,
etc.), because he wants to show off his self-perceived ingenuity[1] or
whatever, and the whole source is also completely undocumented.
Unfortunately, one comes across such programs more often than one
would wish... ;-/

mkb.

[1] "If it was difficult to write, it should be difficult to read."

Joachim Durchholz

unread,
Oct 15, 2005, 5:26:20 AM10/15/05
to
Philippa Cowderoy schrieb:

> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
>
>> Still, I'm reluctant to accept that "a machine that can do interaction
>> is more powerful than a TM".
>
> No TM can launch an ICBM, for one.

Simply attach it to a detector that fires when the TM writes a One to
a given tape cell, and let that launch the ICBM.

That's near enough to real-world computers for me that I don't need a
more powerful model. (By whatever definition of "more powerful".)

> More generally, if you can interact with things you've a chance of
> finding something more powerful than a TM to interact with.

Not sure who's "you" in that scenario - the computer + software? the
"non-computer" part of the world?

Regards,
Jo

Joachim Durchholz

unread,
Oct 15, 2005, 5:34:40 AM10/15/05
to
Philippa Cowderoy schrieb:

> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
>
>> I have to modify this one:
>>
>> I have yet to see things macros can do that functions (including
>> higher-order ones) cannot do in a safer manner.

"as safe or safer" should be here ^^^^^

> Build new types.

Not a problem if types are first-class values.

On systems where they are purely compile-time attributes, I think
that if a macro can do things the language lacks, that's more an
argument that that specific language has a deficient type system than
an argument that macros serve a useful purpose.

OTOH, if a language is *designed* to do types via some macro
mechanism, it all depends on how the macros work. My reservations
don't come from macros per se (if you will, functions are a kind of
macro, too); they come from making macros so flexible that a macro can
do anything, in particular escape the language rules.

>> (Just take a look at how Haskell people create "sublanguages". Look,
>> ma, no macros! *ggg*)
>
> Yet there's still a use for Template Haskell.

Hmm... I was always wondering what the motives behind TH were, which
problems it would solve.
And what the guarantees of the language are.

Regards,
Jo

Ulrich Hobelmann

unread,
Oct 15, 2005, 5:38:40 AM10/15/05
to

I guess you could do what compilers for LC-derived languages do:
count the variables in use at any given time. Since scoping in LC is
strictly lexical, with no funny additions, that shouldn't be too hard.
You could then relate space used to function invocations.
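
A rough Scheme sketch of that counting (the term representation is my
own assumption, and duplicates are left in for brevity): the variables
"in use" are just the free variables of a term written as a symbol, a
(lambda (x) body), or an application (rator rand).

;; drop every occurrence of x from a list of variable names
(define (remove-var x vars)
  (cond ((null? vars) '())
        ((eq? x (car vars)) (remove-var x (cdr vars)))
        (else (cons (car vars) (remove-var x (cdr vars))))))

;; free variables of a term in the toy representation above
(define (free-vars term)
  (cond ((symbol? term) (list term))
        ((eq? (car term) 'lambda)
         (remove-var (car (cadr term))
                     (free-vars (caddr term))))
        (else (append (free-vars (car term))
                      (free-vars (cadr term))))))

;; (free-vars '(lambda (x) (f (x y))))  =>  (f y)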

Ulrich Hobelmann

unread,
Oct 15, 2005, 5:43:11 AM10/15/05
to
Joe Marshall wrote:
> Occasionally someone on comp.lang.lisp states ``Yeah, in *theory* you
> cannot tell if a program halts, but in *practice* it should be easy.''
> They see Gödel's proof as an `artificial' construct of a `pathological
> case' that would never occur in a `real program'.
>
> However, it turns out that the halting problem and issues of
> undecidability are *trivial* to uncover in very simple problems (Gödel
> took the further step to prove that there is no way to paper over the
> simple problems). Here are two simple examples:

No, what I meant is that the problem doesn't interest me, because
*programmers* write and test programs, and usually those hand-written
programs don't infinite-loop. Of course a general halt-detector for
arbitrary binary code doesn't exist, but that's fine by me. The
important thing is that people can check if their programs halt, and
that's usually humanly decidable, if you have some value that
decreases on every loop iteration and the loop exits when it reaches
zero.

I don't think there are programmers who write really complicated
(maybe-not-halting) algorithms and just sit down and code them without
analysis beforehand.
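
A trivial Scheme illustration of that "decreasing value" argument (my
own example): n shrinks by one on every iteration and the loop stops
at zero, so termination is obvious by inspection.

(define (count-down n)
  (do ((i n (- i 1)))     ; i decreases on every iteration
      ((zero? i) 'done)   ; ... and the loop exits at zero
    (display i)
    (newline)))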

Ulrich Hobelmann

unread,
Oct 15, 2005, 5:45:07 AM10/15/05
to

Of course it's interesting to know. I just had to spend too much time
and work on it, IMHO. Maybe I'm just pissed because university wasn't
what I expected before I went there ;)
Well, it won't be long now.

Joachim Durchholz

unread,
Oct 15, 2005, 8:07:27 AM10/15/05
to
Ulrich Hobelmann schrieb:

> No, what I meant is that the problem doesn't interest me, because
> *programmers* write and test programs, and usually those hand-written
> programs don't infinite-loop.

The key term is "usually". This means that *occasionally* this is a
problem anyway. And knowing about stuff that's only useful occasionally
is still a net gain.

> I don't think there are programmers that do really complicated
> (maybe-not-halting) algorithms, that just sit down and code them without
> analysis beforehand.

Sure. But for that kind of analysis, knowledge about the halting problem
and all the associated stuff is a valuable tool.

Regards,
Jo

Matthias Buelow

unread,
Oct 15, 2005, 8:47:56 AM10/15/05
to

Of course one can. The issue is this: you have to define something,
and it isn't obvious. Not even with the RAM model is it obvious,
because a RAM program doesn't make just local modifications (as a TM
does): if a RAM program touches memory cells 1 and 1000, does that
mean the algorithm is consuming 2 cells, or 1000? With a Turing
machine, both space and time are intuitively defined.
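
A toy Scheme illustration of the two readings (my own example; nothing
here comes from the thread): given the list of addresses a RAM program
has touched, is its "space" the number of distinct cells, or the
highest address reached?

;; hypothetical helpers, just to contrast the two measures
(define (distinct-cells addrs)
  (let loop ((rest addrs) (seen '()))
    (cond ((null? rest) (length seen))
          ((member (car rest) seen) (loop (cdr rest) seen))
          (else (loop (cdr rest) (cons (car rest) seen))))))

(define (highest-address addrs)
  (apply max addrs))

;; (distinct-cells  '(1 1000))  =>  2
;; (highest-address '(1 1000))  =>  1000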

mkb.

Philippa Cowderoy

unread,
Oct 15, 2005, 10:54:43 AM10/15/05
to
On Sat, 15 Oct 2005, Joachim Durchholz wrote:

> Philippa Cowderoy schrieb:
>> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
>>
>>> Still, I'm reluctant to accept that "a machine that can do interaction is
>>> more powerful than a TM".
>>
>> No TM can launch an ICBM, for one.
>
> Simply attach it to a detector that detects it if the TM writes a One to a
> given tape cell, and let that launch the ICBM.
>
> That's near enough to real-world computers for me that I don't need a more
> powerful model. (By whatever definition of "more powerful".)
>

Have one end of the tape take input from the outside world, or add a
second tape for IO, and I'm perfectly happy - it doesn't take a
massively strange design, just an explicit IO channel somewhere.

--
fli...@flippac.org

"My religion says so" explains your beliefs. But it doesn't explain
why I should hold them as well, let alone be restricted by them.

Antoun Kanawati

unread,
Oct 15, 2005, 11:35:03 AM10/15/05
to
Ulrich Hobelmann wrote:
> No, what I meant is that the problem doesn't interest me, because
> *programmers* write and test programs, and usually those hand-written
> programs don't infinite-loop. Of course a general halt-detector for
> arbitrary binary code doesn't exist, but that's fine by me. The
> important thing is that people can check if their programs halt, and
> that's usually humanly decidable, if you have some value that decreases
> over every loop iteration and exits when zero.

In spite of the halting problem, it is still possible to perform
significant formal analysis on programs and to decide a large number
of questions about them. This requires specialized notations, properly
stated invariants, and resources that are not commonly available.

The unassisted human is unreliable: we can't write much code without
introducing errors, and we can't read much code without overlooking
them. This is why we test software.

As someone famous said: Beware of the following program, I have
proven it correct, but I have not tested it.

So, ultimately, theory complements practice.

> I don't think there are programmers that do really complicated
> (maybe-not-halting) algorithms, that just sit down and code them without
> analysis beforehand.

So, if your education does not include this foundational theory, how do
you analyze a complex problem?

--
A. Kanawati
NO.anto...@comcast.net

David Hopwood

unread,
Oct 15, 2005, 12:27:38 PM10/15/05
to
Joachim Durchholz wrote:
> Philippa Cowderoy schrieb:
>> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
>>
>>> Still, I'm reluctant to accept that "a machine that can do
>>> interaction is more powerful than a TM".
>>
>> No TM can launch an ICBM, for one.
>
> Simply attach it to a detector that detects it if the TM writes a One to
> a given tape cell, and let that launch the ICBM.

Doesn't work. An ICBM launcher has to be able to respond to its input
in real-time, i.e. it is a single process that launches the ICBM *when*
it receives an interactive input.

A TM is not sufficient to model this properly. If you run the TM multiple
times in a loop, you no longer have just a TM; you need some way to model
the overall process. Interactive models of computation allow you to do that,
even for much more complicated examples.

(For this particular example, you need a real-time interactive model.
It wouldn't be a good idea to have an arbitrary delay before launching
the ICBM.)

--
David Hopwood <david.nosp...@blueyonder.co.uk>

Aaron Denney

unread,
Oct 15, 2005, 1:07:17 PM10/15/05
to
["Followup-To:" header set to comp.lang.functional.]

On 2005-10-15, Antoun Kanawati <ant...@comcast.net> wrote:
> As someone famous said: Beware of the following program, I have
> proven it correct, but I have not tested it.

Knuth: "Beware of bugs in the above code; I have only proved it correct,
not tried it."

--
Aaron Denney
-><-

Ulrich Hobelmann

unread,
Oct 15, 2005, 1:57:35 PM10/15/05
to
Antoun Kanawati wrote:
>> I don't think there are programmers that do really complicated
>> (maybe-not-halting) algorithms, that just sit down and code them
>> without analysis beforehand.
>
> So, if your education does not include this foundational theory, how do
> you analyze a complex problem?

Well, I had my share of theory, but I think most programmers out there
don't encounter this kind of stuff. Maybe I'm wrong, and they do. In
that case I'd probably agree with you that theory is good to know.

As it is, I'd rather have skipped learning all the stuff I might at
some point in ten years need to know, and learned other things
instead, or finished earlier.

To relate this opinion to my initial comments:
It's all cool that we learn so much in Germany, but in the US a
graduate (BSc) is maybe 22-23, not 25-26, and they usually do get a
job. By the time they're 26 they might make more than a German at his
entry-level job -- if the German finds one, that is. On both sides of
the ocean you also have grad school, but I suppose it depends on your
employer whether you can profit from that. If you need more theory,
you can always study it on your own, no need for grad school. The
German 4.5-year degree could easily be compressed into 3 years and
still teach more CS than an average US college (which, OTOH, offers a
more diverse education than the German system, with its focus on
major/minor right from the beginning).

--
Blessed are the young for they shall inherit the national debt.
Herbert Hoover

Joe Marshall

unread,
Oct 15, 2005, 4:23:53 PM10/15/05
to
Ulrich Hobelmann <u.hob...@web.de> writes:

> No, what I meant is that the problem doesn't interest me, because
> *programmers* write and test programs, and usually those hand-written
> programs don't infinite-loop. Of course a general halt-detector for
> arbitrary binary code doesn't exist, but that's fine by me. The
> important thing is that people can check if their programs halt, and
> that's usually humanly decidable, if you have some value that
> decreases over every loop iteration and exits when zero.

That was the point of my examples. It's certainly obvious that the
kernel function and the foldl function halt because their input is
always finite. The mystery function simply isn't humanly decidable
(although no one has ever found an input for which it doesn't halt).

On the other hand, the goodstein sequence program has a value that
increases dramatically (at first).

> I don't think there are programmers that do really complicated
> (maybe-not-halting) algorithms, that just sit down and code them
> without analysis beforehand.

I agree. But there are times when I write a program and start it
going and wait... and wait... and wait... and then I start to wonder:
is it a big problem, a slow computer, or a bug? For instance, this
program:


(define (m i j k)
  (cond ((= i 0) (+ k 1))
        ((and (= i 1) (= k 0)) j)
        ((and (= i 2) (= k 0)) 0)
        ((= k 0) 1)
        (else (m (- i 1) j (m i j (- k 1))))))

(m 4 4 4) => ??

Does it halt?

It helps to know *how* to figure out what the problem is.


Antoun Kanawati

unread,
Oct 15, 2005, 5:36:30 PM10/15/05
to

Thanks!

--
A. Kanawati

Joachim Durchholz

unread,
Oct 16, 2005, 5:18:01 AM10/16/05
to
Ulrich Hobelmann schrieb:

> Well, I had my share of theory, but I think most programmers out there
> don't encounter this kind of stuff. Maybe I'm wrong, and they do. In
> that case I'd probably agree with you that theory is good to know.

It depends entirely on whether they're "coding", or "engineering
software" (for some suitable definition of these two terms, no value
judgement implied).

For the former, you don't need much theory.

For the latter, theory is one of the many tools in the toolbox. It may
lie unused for weeks, months, or (sometimes) even years, but when you
need it, it's indispensable.
IOW it's clearly a specialty tool.

Regards,
Jo

Chris F Clark

unread,
Oct 16, 2005, 12:25:45 PM10/16/05
to
Ulrich Hobelmann <u.hob...@web.de> writes:

> No, what I meant is that the problem doesn't interest me, because
> *programmers* write and test programs, and usually those hand-written
> programs don't infinite-loop. Of course a general halt-detector for
> arbitrary binary code doesn't exist, but that's fine by me. The
> important thing is that people can check if their programs halt, and
> that's usually humanly decidable, if you have some value that decreases
> over every loop iteration and exits when zero.
>
> I don't think there are programmers that do really complicated
> (maybe-not-halting) algorithms, that just sit down and code them without
> analysis beforehand.

Well, your fundamental assertion here is wrong. Average programmers
write arbitrarily complex and potentially non-halting programs all the
time. Moreover, they do it without intending to. Then they run
their programs and are mystified when they don't work, because they
don't have the theoretical background to understand the errors they
have made. Instead they step through the programs one statement at a
time, writing down values of variables and hoping to see the flaw.
I work with such people on a regular basis.

However, your real gripe seems to be not about theory in general, but
about reductions to the halting problem. You may be right. You may
never have a practical use for that particular theoretical technique.
However, I don't think you can tell that a priori.

When one spends a lot of time doing something, the point is that
practicing it makes it second nature. I had the same problem with
trigonometry. I had real trouble with the identities despite them
being simply symbolic formulas, because I had real trouble
remembering which was sine and which was cosine--my particular
dyslexia kicks in there. Now, for most of my life, not remembering my
trig has not been an issue. However, it was crucial when I had to
solve a particular problem in calculus a few years later. The problem
did not fall to the standard solution for that type of problem
(l'Hôpital's rule, if I recall correctly) unless one transformed it
into a trigonometric space. Not being facile with trigonometry, the
idea did not occur to me, though I would have been able to do the
transformation if it had. So I simply could not solve that problem,
because I lacked the theory (in this case trigonometry)--and in
particular the practice of the theory that makes its application
second nature.

Now, I don't know how many times I have actually transformed problems
into the halting problem in my professional career, maybe none.
However, I know that from time to time I read such transformations in
articles on areas that do interest me. The fact that I can make the
transformations without much thought allows me to read the article
and the proof and understand what is being written. In fact, it often
lets me skip the detailed reading of the proof, because the concept is
second nature and I can grasp the essential point being made.

Theoretical knowledge works that way. The more theory one understands,
the easier it is to learn more things. That's useful, even for
someone like me who is essentially a garden-variety practitioner. It's
nice to be able to read things like the paper on Boyer-Moore string
matching and understand why it works. That's one real reason to learn
theory: to be able to read papers on interesting new results, results
that can make your day-to-day programming life easier and better.
Sure, I don't write proofs for a living, but it is nice to be able to
read and understand papers that include proofs.

If I have any regrets, it's not about learning how to reduce problems
to the halting problem. It's rather about not being able to solve
recurrence equations as second nature. There are a lot of times I
cannot follow a performance argument, simply because I cannot solve
the recurrence equation trivially. I guess I should get myself a book
on them and do some practice....

-Chris

fredas...@yahoo.com

unread,
Oct 17, 2005, 9:05:11 PM10/17/05
to
THINGS YOU SHOULD KNOW ABOUT BRANDON J. VAN EVERY BEFORE REPLYING TO
ONE OF HIS POSTS

1. He has never designed any game, nor contributed to the design of
any game, which has ever seen the light of day, despite referring
to himself as a "game designer." (In rebuttal, he pointed out his
"one complete game" from "1983" on the "Atari 800" which he showed
to his "8th grade math teacher.")

2. He has never been employed in the game industry, in any way,
shape, manner or form. Despite this, for some reason he managed
to get named as an Independent Games Festival judge; a curious
turn of events, since their stated intent is to appoint
"professionals in the game industry" (their quote, not his).

3. In fact, the only programming job he had listed on his resume was
for only "2 years" ending in "1998," working in C and assembly on
a graphics driver, as a "Sr. Software Engineer" -- a curious
title, since this was his first (and only) job in the software
industry. There is no evidence he has used C++, nor any other
language, professionally. (And the company in question is
defunct, anyway, so there is no way to verify his claim.)

4. The other jobs he has mentioned having after this one and only
items on his resume are: "yard maintenance work," "painting
apartments," "scrubbing floors," "sub minimum wage signature
gathering," and working for "$5/hour at a Vietnamese restaurant."

5. The only personal project he actually wrote code for and made
available in some manner was Free3d, a software 3D rendering
engine. Stating that its goals were to be "100% efficient, 100%
portable" and to release it in a "one year time frame," which he
started in "1993" and abandoned in "1996," admitting that it
"barely drew even a single polygon" and "did hardly anything in
the 3D department."

6. Almost every Internet community (Usenet newsgroup, mailing list,
etc.) he has ever introduced himself to has resulted in him
repeating the same pattern: asking leading questions, demanding
people do things his way, becoming hostile, annoying the other
participants, alienating them, and finally leaving in disgust.

7. Of the projects (open source and otherwise) whose communities he
has (briefly) joined, he has never contributed anything tangible
in terms of code or documentation.

8. The project he has intermittently claimed to be working on, Ocean
Mars, is vaporware -- and is one of his admitted "failures." He
allegedly sunk "nine months of full time 60 hours/week" and about
"$80K" into it (at least; he "stopped counting") with only a
"spherical hexified icosahedron" display to show for it (only
allegedly, since it has never been shown or demonstrated
publicly).

9. Since his embarrassing frustration with his Ocean Mars project, he
has decided that C and C++ aren't "worth anything as a resume
skill anymore," and embarked on a quest in 2003 to find a
high-level language that will suit his needs. After more than a
year, at least ten languages, and not having even "written a line
of code" in any of them, he still has yet to find a language that
will suit him.

10. Finally, despite vehemently insisting that he is not a troll, many
people quite understandably have great difficulty distinguishing
his public behavior from that of a troll.

Bradd W. Szonye

unread,
Oct 19, 2005, 1:24:16 PM10/19/05
to
Ulrich Hobelmann wrote:
> But isn't the Lambda Calculus, or Recursive Functions equivalent in
> power to the funny tape machine? Both are much easier to understand,
> IMHO, and the first one even provides the basis for lots of programming
> languages. Why does every CS student have to suffer through the Turing
> stuff, but most haven't even *heard* of Lambda Calculus? This just
> doesn't make sense to me.

In my 4th-year computation theory class, we learned of Turing machines,
mu-recursive functions (closely related to the lambda calculus), and
Church's Thesis. The course first presented a series of increasingly
powerful automata. In that context, Turing machines make a lot of sense:
The automata clearly model computing devices (rather than mathematical
functions), and the Turing machine is the simplest computing device with
the full power of real hardware.

The course then developed grammars and functions the same way, leading
up to unrestricted grammars and mu-recursive functions, to demonstrate
Church's Thesis from the device, language, and mathematical angles. I
think it's a good approach to teaching computation theory. Alas, the
textbook was quite difficult, and I was quite bored with school by that
point, so I didn't really understand the material at the time. I had a
similar problem with Lisp in my AI class; I didn't really "get" Lisp
until I decided to teach myself Scheme a couple years ago.

Nowadays, I strongly dislike schooling, specifically classroom teaching.
I learn much better from self-study. Universities have an excellent
/environment/ for learning -- lots of free time, lots of resources, lots
of smart people around -- but even at the best universities, classrooms
still basically suck.
--
Bradd W. Szonye
http://www.szonye.com/bradd

BR

unread,
Oct 19, 2005, 1:31:58 PM10/19/05
to
On Wed, 19 Oct 2005 17:24:16 +0000, Bradd W. Szonye wrote:


> Nowadays, I strongly dislike schooling, specifically classroom teaching.
> I learn much better from self-study. Universities have an excellent
> /environment/ for learning -- lots of free time, lots of resources, lots
> of smart people around -- but even at the best universities, classrooms
> still basically suck.

One size fits all. Still, there is one thing they provide that
self-study doesn't: the push to do something. Some people need to be
"under the gun", so to speak.
