* Rainer Joswig
| How big is a working Franz Allegro lisp image?
* Erik Naggum
| on my system, the free system memory decreases by 740K when I start up
| the second Allegro CL 5.0.1. the similar process with CLISP requires
| 1108K of fresh memory. it is very hard on my system to measure the exact
| memory consumption of a process except for the fresh memory it grabs.
* Rainer Joswig
| You have not answered my question.
I think what I wrote suggests that I'm aware of that. your question is like a child asking "daddy, why is the water blue?" and to avoid getting into a lot of detail because the child doesn't understand his question, it is sometimes useful to answer "because they put blue stuff in it". [this is an actual conversation between child and father.]
what I have written above is what the cost in RAM would be in a ROM-based implementation. I regret that you asked an unanswerable question.
| Unfortunately it's easy for me to put my software in RAM, not in ROM.
so now "easy for me" has moved up to the top of the list of requirements?
you're not really trying to solve any problems, are you, Rainer? this _is_ just another stupid exercise in showing how Common Lisp is Big and Bloated and how bad that is, isn't it? let's create more problems so we're sure we can't solve any of them! that's how we keep academics out of the unemployment statistics when they have ceased to be useful, but it is still not a smart way to use their brainpower.
| A user would use the usual CL on top of that [the Core Lisp].
this means that you actually confuse a proto-Lisp with a Core Lisp. I'm frankly amazed that this is possible. people build proto-Lisps in order to make booting the whole system easy -- it is not what people should use to program anything, and it's so implementation-dependent that there is no point at all in standardizing it, especially not by people who don't actually know how to boot a Common Lisp system.
a Core Lisp that is just like a proto-Lisp upon which everything else in Common Lisp is built is a waste of time and effort -- it would be like defining some primitives in C and, instead of ignoring that as a necessity, making a whole lot of stink about how others need to define the same primitives in C in _their_ Common Lisps so some other Common Lisp which has exactly the same external definition can be retargeted on another Core Lisp. why would anyone ever think of wasting time on this? sheesh.
| Tweaking something small should be easier than tweaking something large.
this has never been the case. what makes you think it suddenly became the case? why _should_ it be easier, when it clearly isn't?
| This is wrong. Sure startup time is affected by total system size. You
| need to be careful about that at initialization time and load time. Is
| the code still in cache, etc. Many systems now have very fast cache
| systems (for example the backside cache of the G3), taking advantage of
| that is not unreasonable. You might have to deal with non-locality of
| code and data, ...
you're just bickering now, Rainer. you do understand that this stuff is completely tangential to the issue of total system size.
the usually _relevant_ costs of startup are related to how much cannot be done prior to startup. this includes dynamic linking (with shared libraries), initialization of memory pools, and any preparations necessary for graceful error handling and exit. this is stuff that does take time to do, and the likelihood that it is in cache is directly related to how often you do it, not how big the total system is. if it ever would be important to reduce the cache misses at startup time, compile the startup code specially, and earn exactly nothing except that you might win a stupid startup-time contest arranged by people who have no clue about what makes a whole Common Lisp system useful.
| The use of that is that a large part of your program might use routines
| from your kernel. Additionally runtime services like GC would surely
| benefit if they could stay in cache.
how would all of this wonderful stuff of yours fit in the _same_ cache that can't hold the full system today, when it has to be at least as big after it has been slopped onto the Core Lisp? whatever made you believe that the cache could hold more just because the core is smaller? geez.
but _are_ we really defining a Core Lisp with the strongest requirement that it fit in today's processor caches? is that what this exercise is _really_ about? no, I don't think so. the cache argument is bogus, the "easy for me" argument is bogus. this is all about Common Lisp being too big and bloated and someone wanting so desperately to prove it just to annoy other people.
| > I wonder which agenda John Mallery actually has -- what he says doesn't
| > seem to be terribly consistent. neither do your arguments, Rainer. in
| > brief, it looks like you guys want to destabilize the agreement that has
| > produced what we have today, for no good reason except that insufficient
| > catering to individual egos has taken place up to this point.
|
| Sure, go on Erik. Make a fool out of yourself by blaming other people.
| It's a well known tactic by you to mix in your personal attacks.
yet, what is really amusing is that you answer in _much_ worse kind. the obvious conclusion is that I must have hit on some real truth and bruised some of the already very fragile egos.
| > haven't various people tried to produce a Core English already?
|
| What has this to do with the topic I was discussing?
it shows that you don't learn from history and available experience.
| > Core Lisp is a mistake, but it will be a serious drain on the resources
| > available in the Common Lisp community. language designer wannabes and
| > language redesigners should go elsewhere.
|
| Erik, you finally made it in my kill file.
again, it is very obvious that truth hurts: this is a waste and you guys know it, but at least the effort will stand a chance of being remembered.
| Not that I expect you to care, but I don't feel that urgent a need to
| read your pointless rantings anymore. Better try to impress beginners
| with your excurses in Lisp and how the world is according to you.
I'm sorry, Rainer, I'm not used to being exposed to such reeking envy, but I feel profoundly sorry for you, yet I'm happy you won't respond, now that I am in your kill file: the vileness of your response suggests that there is no limit to what disgusting level you could sink to in order to prove I'm a fool you should not have to listen to, even though you do know I speak the truth about your useless waste of time.
a Core Lisp would be good if we wanted to encourage more implementations in the free software world, or wanted to encourage the retargetability of existing implementations. already, however, whoever wants to reimplement Common Lisp is better off buying or reusing existing code for the exterior of the language (unless the argument from McCarthy and Joswig is that the existing implementations aren't as good as they would have made them, had they been allowed to do them) -- any smarter approach to implementation would also be different from the past, and so any Core Lisp would therefore make better implementations _less_ likely.
but what better way to respond to "it's a waste of time, stupid!" than to acknowledge it by responding "I'm not listening to you!". this would have been _so_ amusing had it not been for the enormous waste of effort and derailing of community efforts that will now take place in spite of the obvious.
I also thought this kind of idiocy was what we had the Scheme community to learn from and not have to repeat ourselves. I guess I was wrong: some people just have to make their very own mistakes before they learn.
now, even with the obvious futility of this project, there is a need for Core Lisps in the plural: how you define them is so dependent on the specific application needs that each project will want its own definition -- and not just because they have unique needs, but because it takes more time to evaluate the existing alternatives than to roll one's own. so it's better to let each project learn from others via the literature and define its own Core Lisp than to standardize it for all to use.
the history of programming has shown us that subsetting languages does not work, neither in the definition phase where you try to define which components some component depends on (people then agree on which subset to use and the full definition is lost), nor in the shaking of the component tree so unneeded components fall out (programmers will want to use features without having to remember their component, and will hate to reimplement things only because they don't want a whole component). yet, every Lisp implementor has to grapple with the "cold load order problem" -- from which substrate the first few definitions can be executed. every Lisp system that is loading needs to have a few pieces in place, but in a natively compiled Lisp, what's necessary for loading the system is not what is necessary for running the system once fully loaded. and which primitives to port and base others on depends on how the compiler is built and used, and cannot be ported to another compiler unless that compiler has demands put on it that it is meaningless to demand from a competitor in a free market.
I can only assume that the people who want to define a subset are not familiar with the boot problem in the environments they use all day, or ignore it for some higher agenda. e.g., it is "educational" to try to dump a random package with Emacs. effectively, the packages that are dumped with Emacs are written in a much smaller Emacs Lisp than the full language that is available once Emacs is up and running.
> > > Human-derived attempts to make unambiguous languages have gone
> > > nowhere. English, which is full of contextual nuance and blunt
> > > construction, has won over the regularities of Latin and Greek and the
> > > pseudo-regularities of languages like Spanish and German partly, at
> > > least, because people don't have patience with aesthetics when they
> > > stand in the way of practicality. They want to ELECT practicality
> > > when it suits their need, but they don't want to be REQUIRED
> > > practicality.
It's also more fun.
> > At a certain point in history Latin had won over Greek and a plethora
> > of other languages in the Mediterranean basin.
Is this not something to do with the fact that this was the language of choice of an imperial bureaucracy?
> That is, people's natural inclination is to make lots of variations for
> pragmatic reasons, especially ones that favor pronunciation, not to try to
> distill the language to a microscopic and easy-to-teach core.
My understanding of the development of English is that it was developed in England as a kind of trading language to enable a disparate series of communities to communicate with each other. This was because, being on the edge of the European continent, these dark northern islands were regularly invaded by people such as the Romans (speaking Latin), the Saxons and Angles (speaking German dialects), the Vikings (speaking Scandinavian dialects), the Normans (speaking early French) and so on.
A large set of complex tenses and grammatical constructs were then trashed to a minimal form to enable people to talk to each other about such burning topics as how much they hated their neighbours in the next village, how much for the fish, and who was having an adulterous affair with whom.
Then there is the strong influence of the church in England, with all the early writing carried out in Latin, and the choices made by the ruling elite at the time. It wasn't until the late Middle Ages that the official language (French) was rejected and English became acceptable.
It should also be noted that until recently (that is, within the lifetime of my grandparents) normal working people in the UK took great delight in talking in all sorts of really strange accents and dialects so as to make themselves totally incomprehensible to anybody other than people born within a radius of about 40 miles, and that English was still a trading and official ('imperial') language of the elite.
In fact there are parts of the UK in which people take delight in speaking languages that have nothing to do with English at all. Look at Cymraeg or Gaelic.
> Often the pronunciation accommodations make the language harder to learn, not
> easier, but presumably the reason people do it is that they'd rather have a
> language they can use easily than one they can learn easily, because they
> are more equipped to learn it in spite of irregularity than they are to
> speak it in spite of regularity.
English is not at all regular, particularly when it comes to spelling, but it is very flexible, particularly when spoken: it has a very simple tense and verb structure, and the ability to steal anything neat that comes along (look at excellent words like verandah, gazebo or conservatory; any language that needs three words for a shed in your back garden for sitting in must have something going for it). This results in a very, very large vocabulary that you can just string together easily.
But I do not want to denigrate any other languages. I feel deeply ashamed that my education has let me down to such an extent that I only know a smattering of words in other languages. I wish I could read and speak German (the native tongue of my beloved Wittgenstein, Günter Grass ...) or Italian (Primo Levi, Dante ...) or French (Zola, Flaubert ...), or Russian (Landau, Dostoevsky ...) or all the languages in the world. If only I could live for a thousand years so that I could learn all the languages that there are :-(
But what has this got to do with computer languages? I'm not entirely sure as I think I lost the plot a while ago.
OK, languages should be large, irregular, easily extendible, fun, full of dialects and nuances and all those other good things that, as an English speaker, I like about English. Just could you do something about the spelling?
ps: The next question is of course, do Americans speak English? Or just American ;-)
* Chuck Fry
| What's the difference between "proto-Lisp" and "Core Lisp"? No one has
| so much as posted a hypothetical description of a Core Lisp, so where is
| your basis for comparison?
a proto-Lisp is very close to the compiler and the machine and has access to a lot of low-level stuff, like pointers, representational details of complex types, etc, and allows you to isolate them from the exported Lisp. as a Core Lisp has been described (by intended use), it would be a programming language in its own right, defining primitives upon which the rest of Common Lisp needs to be defined, but which ones should be primitives and which need access to the machine is very hard to tell. CLOS could be defined entirely in a Common Lisp sans CLOS, but to make it perform well, you need a lot of access to lower-level stuff. efficient stream I/O is the same way, and the two combined really need special support to do well.
| > this is wrong -- startup time is unaffected by total system size.
|
| Again, since no spec has been made available for comparison, I don't see
| how one can draw a conclusion either way.
by watching other systems, large and small, of course. size of the system has nothing to do with it. that is, other factors are so much more important that system size becomes completely irrelevant.
| > haven't various people tried to produce a Core English already?
|
| What does that have to do with programming languages?
if you ask hard enough questions, no answer about programming languages has to do with programming languages. take Kent's many philosophic, linguistic, and psychological comments. they aren't about programming languages per se, but about people defining and using programming languages, and as long as we are human, that actually has strong merit.
if we aren't considering humans, however, a lot more options become available in programming language design, and none of the lessons learned from other experiments involving languages and humans have any bearing on what we do.
| Users have been screaming for a core + libraries architecture for years.
| It's time we gave it to them.
users have asked for features and extensions to Perl and C++, and that's what they got. I think the tobacco industry uses the same core argument. the good way is to give people what they need to be happy, not what they want. they want core + libraries, but that's not what they need, as every person who has set out to do this in the past has discovered. what they _do_ need is very hard to figure out as long as they keep thinking they need core + libraries. the problem is: when people _get_ core + libraries, they want languages with everything in them, because it's a terrible mess to deal with core + incompatible and overlapping libraries.
let's take a good look at what core + libraries would solve, and let's at least pretend that core + libraries is not the solution. which _other_ ways to obtain the desiderata will we find in our search?
#:Erik -- suppose we blasted all politicians into space. would the SETI project find even one of them?
I'm trying now to put together a little library of functions for set theory and combinatorics in Lisp and I find it amazing how easy this is. Being able to write abstract concepts, such as the Cartesian product, in four lines, is very impressive.
There are so many mathematical functions predefined, but a very fundamental one is missing: a function which generates a list of numbers, specifically 0 1 ... n.
A mathematician creates everything around this function. I wrote one in Lisp and using macro characters I can now write [4 10]. I use this all day.
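The posting doesn't show the code behind the [4 10] syntax; here is a minimal sketch of one plausible implementation (the name `range` and the exact reader-macro design are my guesses, not the poster's actual code):

```lisp
;; hypothetical sketch: a range function plus a reader macro so that
;; [4 10] reads as the list (4 5 6 7 8 9 10).
(defun range (from to)
  "Return the list of integers FROM, FROM+1, ..., TO."
  (loop for i from from upto to collect i))

;; make [ read everything up to the matching ] and build the list
;; at read time; ] borrows the standard right-paren reader.
(set-macro-character
 #\[ (lambda (stream char)
       (declare (ignore char))
       (let ((bounds (read-delimited-list #\] stream t)))
         (range (first bounds) (second bounds)))))
(set-macro-character #\] (get-macro-character #\)))
```

Note that with this sketch the list is constructed at read time, so [0 5000] in source code becomes a literal list; an alternative design expands into a call to `range`, so the bounds can be runtime expressions.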
Probably the speed of this function is not so decisive, but it could unify style. Often it is a good substitute for loop. Comparing

(defun old-factorial (n)
  (do ((f 1 (* f k))
       (k 1 (+ k 1)))
      ((> k n) f)))

(time (old-factorial 5000)) gives

Real time: 4.952857 sec.
Run time: 4.96 sec.
Space: 15876312 Bytes
GC: 30, GC time: 2.17 sec.

(time (new-factorial 5000)) gives

Real time: 3.186444 sec.
Run time: 3.14 sec.
Space: 15916312 Bytes
GC: 31, GC time: 0.69 sec.

and (time [0 5000]) gives

Real time: 0.307474 sec.
Run time: 0.31 sec.
Space: 40264 Bytes

After compiling with -C on CLISP, all become around 10 times faster, with the old factorial now being a little quicker than the new one.
Another newbie question: Why do these functions, for example the first one, which is a simple iteration, consume so much space? I can only imagine that all those 5000 values of f remain in memory, but I find this very strange.
* chu...@best.com (Chuck Fry)
| So why don't you go design such an intelligent telephone? No one is
| stopping you.
really? have you ever tried to do something like that? I have. I have been running my own business since 1984, and I can tell you a few things about who is stopping new ideas. as can any businessman. we have a bunch of people who originally had the money, but who have actively stopped a few companies in ways that have really hurt the Lisp community. we have seen how politicians (Margaret Thatcher) have all but obliterated the market for Artificial Intelligence in a whole country. such people must be overcome, but they are actually stopping a lot of good ideas out there which can't overcome their resistance.
I really hope the above line is an attempt at something like "if you are so good, how come you aren't rich?" and other reversals of causality that completely ignore reality in order to ridicule people, and not something you actually believe in. otherwise, you'll get _seriously_ hurt if you get an idea of your own.
#:Erik -- suppose we blasted all politicians into space. would the SETI project find even one of them?
* Rainer Joswig
| a bozo like you.
| You are a real idiot
| you are lying.
| I especially hate fools like you
let's just keep this in mind for a second.
| If you feel you can go on and behave like Erik Naggum (who is largely
| responsible for the ugly tone in this newsgroup)
Rainer, I'm not responsible for your behavior. you're responsible for your own behavior. all the time, with no exceptions. if you don't like what you see others do, behave better yourself, do not _ever_ blame your inability to be a mature adult on anyone else. grow up or shut up.
if you want to be angry, be angry, but take responsibility for that, too, don't pretend that it's somebody else's fault that you are angry. if you look real close at what I do, I never, ever blame anybody else for my very own irritation at people, and I have never in my entire life made anyone else appear to be responsible for my actions. I realize now that you are really envious of that, but take it out in some other way, OK?
regardless of whether your Baby Lisp is a good idea or not, I think we have heard enough for a while.
#:Erik -- suppose we blasted all politicians into space. would the SETI project find even one of them?
William Deakin <wi...@pindar.com> writes:

> A large set of complex tenses and grammatical constructs were then trashed to a
> minimal form to enable people to talk to each other about such burning topics
> as how much they hated their neighbours in the next village, how much for the
> fish, and who was having an adulterous affair with whom.
I think this is the same thing I read somewhere -- English is basically a frozen-out creole (between, I guess, some Germanic language, some Celtic language, Norse (is that Germanic?) and perhaps little bits of Latin), and that explains a lot of the way it is.
Perhaps CL is a kind of frozen-out creole in some sense too.
jos...@lavielle.com (Rainer Joswig) writes: > In article <37A08867.4FFEE...@inka.de>, Friedrich.Domini...@inka.de wrote:
> > > - if you run it with assertions on, the code will be dead slow,
> > >   so you turn off assertions
> > No not necessarily.
> But "probably"?
I don't know that that follows. CMUCL claims that by using a slightly smart compiler, it can have code which is both safe and fast. It does this (it says) by pulling declarations (which it treats as assertions in general) up out of inner loops and then doing the checks only once, relying on things it's worked out about the loop to not have to check all the time. I presume that these dbc systems could do somewhat similar things?
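A sketch of the kind of code this helps (hypothetical, not from CMUCL's sources; whether the check really gets hoisted depends on the compiler):

```lisp
;; with safety on, the type declaration below acts as an assertion; a
;; sufficiently smart compiler can verify the argument type once at
;; function entry instead of re-checking element types on every
;; iteration of the inner loop.
(defun sum-vector (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 1)))
  (let ((s 0d0))
    (dotimes (i (length v) s)
      (incf s (aref v i)))))
```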
Tim Bradshaw wrote:

> William Deakin <wi...@pindar.com> writes:
>
> > A large set of complex tenses and grammatical constructs were then trashed to a
> > minimal form to enable people to talk to each other about such burning topics
> > as how much they hated their neighbours in the next village, how much for the
> > fish, and who was having an adulterous affair with whom.
>
> I think this is the same thing I read somewhere -- English is basically a
> frozen-out creole (between, I guess, some Germanic language, some Celtic
> language, Norse (is that Germanic?) and perhaps little bits of Latin), and
> that explains a lot of the way it is.
There was an interesting series on the development and history of English on BBC 2 about 10 years ago, or so. I watched it and my folks bought the book of the series. Fascinating stuff. I also always liked the fact that JRR Tolkien was a translator of Beowulf, of which I have a battered copy at home. I also believe there is a yearly prize at Brum Uni named in his honour for the best Old English translation (won by a mate of mine when I was there). But enough already! This is c.l.l.
> It does this (it says) by pulling declarations (which it treats as
> assertions in general) up out of inner loops and then doing the checks
> only once and relying on things it's worked out about the loop to not
> have to check all the time. I presume that these dbc systems could do
> somewhat similar things?
It very much depends. In Eiffel, if you use loop variants and/or loop invariants, they have to be checked every time to ensure the conditions hold. But probably I got you wrong? I don't know how any language could change that without checking.
BTW, the original question was about learning Scheme and/or CL (either one, both, or just one -- and which?).
DBC was put into the discussion as a question from my side: wouldn't that make sense in CL too?
In article <lwu2qm1noo....@copernico.parades.rm.cnr.it>, marc...@copernico.parades.rm.cnr.it says...
> ... a subset of Scheme? :)
It was little more than a thin layer over C. Imagine adding minimal list crunching, GC and a much better syntax to C.
--
Please note: my email address is munged; You can never browse enough
"There are no limits." -- tagline for Hellraiser
> > > (defun iota (n)
> > >   (loop for i from 0 upto n collect i))
> > VADE RETRO!
> > **ALARM** LIST-FOR-SERIES ANTIPATTERN!
> Is there any compiler or partial evaluator that could optimize this
> consing away? Would "Stalin" (Scheme) find such a thing?
I don't know. One thing's for sure: I wouldn't write code that requires unusual compiler smarts to avoid sloppy performance. I should take a look at that #Z. There's no reason why one could not write (a la Haskell):
#Z(1 step .. n)
so that then

(defun fact (n)
  (series:collect-fn 'integer (lambda () 1) #'* #Z(1 .. n)))
or even use

(defun series-reduce (typ f z &key initial-value)
  (series:collect-fn typ (lambda () initial-value) f z))
> (defun iota (n)
>   (loop for i from 0 upto n collect i))

Fernando Mato Mira wrote:

> **ALARM** LIST-FOR-SERIES ANTIPATTERN!

Gareth Rees wrote:

> Surely it's no big deal in this case? The consing involved in computing
> n! is similar to the consing in building the list, so there's no
> complexity improvement by rewriting the reduce as a loop.

Fernando Mato Mira wrote:

> Consing in looping? Why?
No, consing in multiplying. Representing n! takes about n log n storage (since n! is about n^n, which takes about (n log n / log b) digits to represent in base b).
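That also answers the earlier question about why the simple iterative factorial conses so much: each multiplication allocates a fresh, ever-larger bignum. A quick check in any Common Lisp (a sketch; exact allocation figures vary by implementation):

```lisp
;; each (* f k) below allocates a new bignum roughly the size of the
;; running product, so total allocation over the whole loop is on the
;; order of n^2 log n bits, even though only the final result is kept.
(defun factorial (n)
  (loop for k from 1 to n
        for f = 1 then (* f k)
        finally (return f)))

;; the final result alone needs on the order of n log n bits:
;; (integer-length (factorial 5000)) is a little over 50000.
```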