Not silly. There are a number of prior posts that discuss alternative solutions. Short answer: there is no simple way (e.g. a parameter setting) to modify the behavior of the default NLTK tokenizer (i.e. the nltk.word_tokenize function). There are a number of other tokenizer modules, each with its own quirks. You may be able to use regular expressions instead. For instance, the following might work for your purposes:
import re

text = "O'Leary, that's my hat!"
# Alternations, in order: word with optional internal apostrophes,
# lone apostrophe, runs of - . ( , then any other non-space character.
tokens = re.findall(r"\w+(?:[']\w+)*|'|[-.(]+|\S\w*", text)
# yields ["O'Leary", ',', "that's", 'my', 'hat', '!']
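If you end up using this in more than one place, you could wrap the same pattern in a small helper. A minimal sketch (simple_tokenize is just an illustrative name, not anything from NLTK):

```python
import re

# Same pattern as above, precompiled for reuse.
TOKEN_PATTERN = re.compile(r"\w+(?:[']\w+)*|'|[-.(]+|\S\w*")

def simple_tokenize(text):
    """Split text into words (keeping internal apostrophes) and punctuation."""
    return TOKEN_PATTERN.findall(text)

print(simple_tokenize("O'Leary, that's my hat!"))
# ["O'Leary", ',', "that's", 'my', 'hat', '!']
```

Note that NLTK also ships nltk.tokenize.RegexpTokenizer, which takes a pattern like this in its constructor and exposes a tokenize(text) method, if you want to stay within the NLTK API.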