a...@littlepinkcloud.invalid writes:
>Anton Ertl <an...@mips.complang.tuwien.ac.at> wrote:
>> a...@littlepinkcloud.invalid writes:
>>>Anton Ertl <an...@mips.complang.tuwien.ac.at> wrote:
>>>> On interpreting an FP number, the number is put on the FP stack by
>>>> REC-FLOAT/FNUMBER?, and nothing else (NOOP) needs to be done. The
>>>> NUMBER-CONVERSION mechanism does not allow specifying something extra
>>>> to do on interpretation, so you cannot use it to replace TO X (with
>>>> e.g., ->X), you cannot use it to recognize ordinary words (so
>>>> SwiftForth hard-codes the word recognizer into the text interpreter),
>>>> and you cannot use it to implement a dot-parser.
>>>
>>>Exactly so: as I said, NUMBER-CONVERSION is a simple mechanism, and it
>>>cannot do these things. That's what makes it better than recognizers:
>>>rather than a general-purpose language extension mechanism it handles
>>>user-defined literals.
>>
>> It's better because it can do less? Is this some kind of doublethink?
>
>I don't think it is. There's a great deal to be said for simple
>mechanisms which do a single thing in a simple way.
And there is a great deal to be said for a simple mechanism that can
do more things in a simple way.
>I know you don't
>much like talk of this kind, but it's the essence of Forth.
I don't think that limiting mechanisms to a single thing is the
essence of Forth, on the contrary: being able to use a single
mechanism for more is common in Forth. E.g., the return stack is not
just used for return addresses, but also for counted-loop parameters
and temporary values.
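For instance, a small sketch (hypothetical definitions, standard
words only) of the same return stack serving both purposes:

```forth
\ Temporary values via >R R> (the classic way to reach below
\ the top of the data stack):
: third ( a b c -- a b c a )  >r over r> swap ;

\ Counted-loop parameters: in many implementations DO LOOP keeps
\ index and limit on the same return stack, exposed through I:
: count-up ( n -- )  0 ?do i . loop ;
```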
>> Given that every new feature can be declared as a "WIBNI", "WIBNI" is
>> useless as an argument for determining whether a feature is beneficial
>> or not (except if you deem every feature detrimental).
>
>No. I did explain quite carefully what a WIBNI is: it's what happens
>when you set out to do one thing, you notice that some other thing
>(not actually required by what you set out to do) can be done by
>extending your scope a little bit, you think "Wouldn't it be nice if
>we did that too", and iterate. It is the proverbial road to hell,
>paved with good intentions.
So extending the existing return stack (used by docol and ;s) with >R
R> R@, and moreover, with DO LOOP, paved the road to hell? Sure,
there are people who prefer separate stacks for these three purposes,
and it would follow your dogma of "mechanisms which do a single thing
in a simple way", but Moore (and the majority of the Forth community)
actually chose to use a combined mechanism for these three purposes,
which requires additional words (i.e., added complexity to the return
stack mechanism), but reduces the overall implementation complexity.
>> My impression is that the opposition to recognizers knows that
>> technically they have no leg to stand on, that's why they avoid any
>> technical discussion of recognizers like the plague, and instead
>> throw around emotional appeals like "WIBNI", "main battle tank", or
>> "language of the month", and (not so recently) "make Forth more like
>> C".
>
>I can turn that around easily enough: there is no strictly technical
>argument in their favour either. It's a matter of some people who
>like to use this kind of language feature and some who don't.
And apparently the opposition to recognizers fears that too many people
like the additional features, that's why they avoid discussing them
and prefer throwing around phrases like "WIBNI", "main battle tank" or
"language of the month", and (not so recently) "make Forth more like
C".
>>>You don't make a good language by adding features that might be
>>>useful. It's impossible to prove that Feature X is detrimental by,
>>>say, measuring its length, however much people might demand
>>>"technical reasons", thereby trying to exclude anything that can't
>>>be measured.
>>
>> "Not measurable" is acceptable, then you have a qualitative argument.
>> However, emotional appeals don't advance the discussion and therefore
>> don't have a place there.
>
>Emotional appeals? Not at all, unless you are to classify all
>discussion of language design, elegance, and Forth's extreme
>simplicity as "mere emotion", in the vein of Mr Spock.
If you discuss them without referring to concrete properties of the
proposal, as you do, it is certainly not a technical discussion, and
obviously you only appeal to the emotions of the audience.
>> [WIBNI]
>> Actually, I was wrong; it means "The proposal is not already
>> implemented in or planned for VFX Forth" (so, e.g.,
>> <http://www.forth200x.org/twos-complement.html> was not WIBNI despite
>> not coming from Stephen Pelc). And it's what it means to Stephen
>> Pelc. And given that he introduced the term,
>
>He didn't. He might even have got it from me; I got it from Chris
>Stephens. But it might just have been in the air.
Whoever came up with the term, and whatever it means to that person,
or to you, it has been "escalator"ed by Stephen Pelc. He used it
frequently and (in my presence) exclusively in recent years, and with
the meaning above.
>>>However, the WIBNI is a useful mental tool: whenever you have a
>>>program that you're creating and you think "With a little more work
>>>it could be made to do these other useful things too", that's a
>>>WIBNI. It happens when you have a job to do, you see something that
>>>*is not part of the requirements* but you see that by making your
>>>program just a little more complicated you can implement it. Forth,
>>>the language and system, is the result of resisting that pressure,
>>>as explained by Moore in his early writings.
>
>> Forth is the result of *not* following this principle. If Moore had
>> followed this principle, he would not have introduced an extensible
>> dictionary and a text interpreter, because that can always do more
>> than is required at any particular moment.
>
>Um, pardon? That makes no sense at all.
Why not?
>> There are different metrics for programs: implementation complexity,
>> generality, interface complexity, correctness, performance, etc.
>> Chuck Moore favours minimal implementation complexity over the others
>> more than most programmers, but as Forth shows, he does not do it
>> exclusively. More relevant to the present discussion: he does not
>> avoid generality just for the sake of being less general.
>
>No, of course not.
But your only complaint that I could identify has been that the
proposal is too general. It has not been about the implementation
complexity or anything else.
>> Concerning the requirements, one of the requirements for a text
>> interpreter that is not a WIBNI (in the original sense) is the ability
>> to support a dot-parser.
>
>Is it? Says who?
Ask Stephen Pelc whether he thinks that a dot-parser is a WIBNI, and
if so, why VFX implements a dot-parser.
>> And this is not just useful for large systems, but also for small
>> ones. At the last Forth-Tagung Joerg Voelker talked about programming
>> a relatively large application on a small microcontroller. One of the
>> issues he had was the amount of space for stuff like structure field
>> names. E.g., if he has fields
>>
>> somename-x
>> somename-y
>> somename-z
>> anothername-w
>> anothername-x
>> anothername-y
>>
>> This takes quite some space on his chip. Instead, one could have two
>> wordlists:
>>
>> SOMENAME containing X Y Z
>> ANOTHERNAME containing W X Y
>>
>> which saves some space, and then use a dot-parser to access SOMENAME.X
>> and ANOTHERNAME.X.
>>
>> So it's not as if the additional features of the proposed recognizers
>> were for things for which there are no requirements.
>
>So, I think the question is about "one shot" wordlists. Are you
>really saying that a dot parser is needed for that?
Having one-shot vocabularies would be another way to solve this
problem, but is it really better in any way?
Is the implementation complexity lower? Doubtful. You need an extra
hook in the interpreter, either for searching the vocabulary or for
restoring the search order afterwards, and you need some new mechanism
at parse time that notices that the word sets up a one-shot vocabulary
lookup and deals with it.
Is the interface complexity lower? No. SOMENAME X looks like a
sequence of two Forth words, into which you can insert a comment, or
"1 DROP" etc., but in reality these two lexemes belong together, just
like SOMENAME.X. Only in the latter case is it obvious, and that's
what makes it better.
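The contrast can be made concrete; a hypothetical sketch, where
SOMENAME is assumed to be a one-shot vocabulary word:

```forth
\ With a one-shot vocabulary, the two lexemes are separate words,
\ so the text interpreter accepts arbitrary material between them:
somename x                     \ intended: the X field in SOMENAME
somename ( huh? ) 1 drop x     \ also accepted; the pairing is invisible
somename.x                     \ a dot-parser makes the pairing one lexeme
```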
We put some parts of the implementation for one-shot vocabularies into
Gforth several years ago (resulting in 35 SLOC that contain "prelude",
plus additional code around it), but never finished this work; the
limitations of what could be straightforwardly implemented were too
restrictive to make going further appear worthwhile, and I never
found an approach that was.
Gforth has a scope recognizer (not quite as powerful as VFX's
dot-parser, but pretty much in line with what one-shot vocabularies
could do) that takes 13 SLOC.
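For concreteness, a minimal sketch of the idea (not Gforth's actual
13-line implementation; the wordlist setup and SPLIT-AT-DOT are
assumptions for illustration, and error handling for names without a
dot is omitted):

```forth
\ Each prefix (here SOMENAME) is a constant holding a wordlist id:
wordlist constant somename
get-current somename set-current
: x 1 ;  : y 2 ;  : z 3 ;
set-current

: split-at-dot ( c-addr u -- before-addr before-u after-addr after-u )
  2dup [char] . scan        \ find the dot; leaves dot-addr rest-u
  2swap 2 pick - 2swap      \ compute the length before the dot
  1 /string ;               \ drop the dot itself from the after-part

: dot-find ( c-addr u -- xt flag | false )
  split-at-dot 2swap find-name dup if
    name>interpret execute  \ the prefix constant pushes its wid
    search-wordlist         \ ( after-addr after-u wid -- xt flag | 0 )
  else drop 2drop false then ;
```

One would then hook such a routine into the system's recognizer chain;
S" somename.x" DOT-FIND would yield the xt of the X in SOMENAME.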
So if we have a need for something like one-shot vocabularies, is a
dot-parser implemented as a recognizer a good solution? Yes.