Can't get the HTML documentation's calculator example to work.


Peter Olsen

Aug 31, 2011, 8:02:16 PM
I can't run the calculator in Section 4.1, "Lex Example", of the HTML
documentation. I've found and corrected some simple discrepancies between
version 3.4 and the example listing, for example replacing lex.lexer with
lex.Lexer, but this one stumps me. My eventual goal is to write a translator
from Algol to Python (don't ask), but so far I can't even get the simple
examples to work.

Here are my code and results. Any help will be greatly appreciated.
--------------------------------------------------------------

# ------------------------------------------------------------
# calclex.py
#
# tokenizer for a simple expression evaluator for
# numbers and +,-,*,/
# ------------------------------------------------------------
import lex

# List of token names. This is always required
tokens = (
    'NUMBER',
    'PLUS',
    'MINUS',
    'TIMES',
    'DIVIDE',
    'LPAREN',
    'RPAREN',
)

# Regular expression rules for simple tokens
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_LPAREN = r'\('
t_RPAREN = r'\)'

# A regular expression rule with some action code
def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

# Define a rule so we can track line numbers
def t_newline(t):
    r'\n+'
    t.lexer.lineno += len(t.value)

# A string containing ignored characters (spaces and tabs)
t_ignore = ' \t'

# Error handling rule
def t_error(t):
    print "Illegal character '%s'" % t.value[0]
    t.lexer.skip(1)

# Build the lexer
lexer = lex.Lexer()

# To use the lexer, you first need to feed it some input text using
# its input() method. After that, repeated calls to token() produce
# tokens. The following code shows how this works:

# Test it out
data = '''
3 + 4 * 10
+ -20 *2
'''

# Give the lexer some input
lexer.input(data)

# Print the lexer directory so that I know that the lexer has been
# created.
print "Lexer directory"
print dir(lexer)

# Tokenize
print "\nStarting the 'print token' loop."
while True:
    tok = lexer.token()  # This is line 74 from the error trace.
    if not tok: break    # No more input
    print tok

# When executed, the example will produce the following output:

# $ python example.py
# LexToken(NUMBER,3,2,1)
# LexToken(PLUS,'+',2,3)
# LexToken(NUMBER,4,2,5)
# LexToken(TIMES,'*',2,7)
# LexToken(NUMBER,10,2,10)
# LexToken(PLUS,'+',3,14)
# LexToken(MINUS,'-',3,16)
# LexToken(NUMBER,20,3,18)
# LexToken(TIMES,'*',3,20)
# LexToken(NUMBER,2,3,21)

# Lexers also support the iteration protocol. So, you can write the above loop
# as follows:

# for tok in lexer:
#     print tok

# The tokens returned by lexer.token() are instances of LexToken. This object
# has attributes tok.type, tok.value, tok.lineno, and tok.lexpos. The following
# code shows an example of accessing these attributes:

# # Tokenize
# while True:
#     tok = lexer.token()
#     if not tok: break  # No more input
#     print tok.type, tok.value, tok.lineno, tok.lexpos

---------------------------------------------------------------
Output
---------------------------------------------------------------

Lexer directory
['__doc__', '__init__', '__iter__', '__module__', '__next__', 'begin', 'clone',
'current_state', 'input', 'lexdata', 'lexerrorf', 'lexignore', 'lexlen',
'lexliterals', 'lexmodule', 'lexoptimize', 'lexpos', 'lexre', 'lexreflags',
'lexretext', 'lexstate', 'lexstateerrorf', 'lexstateignore', 'lexstateinfo',
'lexstatere', 'lexstaterenames', 'lexstateretext', 'lexstatestack', 'lextokens',
'lineno', 'next', 'pop_state', 'push_state', 'readtab', 'skip', 'token',
'writetab']

Starting the 'print token' loop.
Traceback (most recent call last):
File "<stdin>", line 75, in <module>
File "lex.py", line 318, in token
for lexre,lexindexfunc in self.lexre:
TypeError: 'NoneType' object is not iterable

Peter Olsen
pco...@gmail.com

A.T.Hofkamp

Sep 1, 2011, 8:41:56 AM
On 01/09/11 02:02, Peter Olsen wrote:
> # Build the lexer
> lexer = lex.Lexer()

Here my working code does

lexer = lex.lex()

It works for me then, but I may have changed more things while testing.
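For anyone hitting the same traceback: lex.lex() is the factory function that scans your module for the t_* rules and compiles them into the master regular expressions the lexer matches against. Instantiating lex.Lexer() directly skips that build step, so self.lexre is still None when token() runs — hence the TypeError. A rough stdlib-only sketch of the idea (illustrative only, not PLY's actual code; build_lexer and tokenize are made-up names):

```python
import re

# Illustrative sketch -- NOT PLY's real implementation. lex.lex() does
# something conceptually similar: it collects the token rules and
# combines them into a compiled master regex. A bare Lexer() instance
# never runs this step, so its pattern table is left as None.
def build_lexer(rules):
    """rules: list of (token_name, regex) pairs, in priority order."""
    master = re.compile('|'.join('(?P<%s>%s)' % (name, pat)
                                 for name, pat in rules))
    def tokenize(text):
        for m in master.finditer(text):
            if m.lastgroup != 'ignore':   # skip whitespace matches
                yield (m.lastgroup, m.group())
    return tokenize

tokenize = build_lexer([
    ('NUMBER', r'\d+'),
    ('PLUS',   r'\+'),
    ('TIMES',  r'\*'),
    ('MINUS',  r'-'),
    ('ignore', r'\s+'),
])

print(list(tokenize('3 + 4 * 10')))
# [('NUMBER', '3'), ('PLUS', '+'), ('NUMBER', '4'), ('TIMES', '*'), ('NUMBER', '10')]
```

The point is that the "build" step has to run before any tokens can be produced — in PLY that step is the lex.lex() call, which is why swapping it in for lex.Lexer() fixes the example.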

Albert