This year it is lex(1); same situation, only I found this one quicker:
lex reserves a certain amount of space for its input buffer -- I haven't
taken the time to check how much -- that is left as an exercise for the
reader. If you are trying to match some hellaciously long strings or,
like myself, you screw up and accidentally match a hellaciously long
string, you will get either a memory fault or a bus error in your
program. This is because lex does not check whether it has
filled its input array and merrily keeps on writing into random
areas of memory.
I am going to gather all the specifics of this problem and submit them to the
Unix Hotline (although it has been over a year since I told them about
the bug in the yacc(1) documentation, and I haven't seen that corrected).
If anyone is interested in more particulars, drop me a line and I will
send them -- the bug fix should be trivial once you know the problem.
By the way, could some lucky person who has the System V Release 2
Programmer's (not User's) Manuals look at the yacc(1) documentation and
see if anything is mentioned about token numbers over 1000 not being
allowed? Maybe they did process the documentation MR and I just haven't
seen the results.
Thanks in advance,
--
The MAD Programmer -- 919-228-3313 (Cornet 291)
alias: Curtis Jackson ...![ ihnp4 ulysses cbosgd clyde ]!burl!rcj
Well, certainly token numbers over 1000 will prevent yacc from working right,
and yacc should probably warn you about them, but REALLY, what do you want?
Yacc has a number of wired-in restrictions on the size of the grammar and
such. I hardly think requiring you to use fewer than 1000 tokens or else
revise the yacc sources is a big restriction.
I'm not surprised yacc has not been modified to remove this 'bug'.
--
Kurt Guntheroth
John Fluke Mfg. Co., Inc.
{uw-beaver,decvax!microsof,ucbvax!lbl-csam,allegra,ssc-vax}!fluke!kurt
RTFM -- "Read The F*ing Manual!!"