I have an embarrassing problem using the logging module. I would like to
encapsulate the creation and setup of the logger in a class, but
it does not seem to work.
Here are the relevant parts of my code:
    import logging

    class LogClass:
        def __init__(self, fileName, loggerName='classLog'):
            self.Logger = logging.getLogger(loggerName)
            self.traceName = fileName
            handler = logging.FileHandler(self.traceName, 'a')
            formatter = logging.Formatter("%(name)s %(asctime)s "
                                          "%(filename)s %(lineno)d "
                                          "%(levelname)s %(message)s")
            handler.setFormatter(formatter)
            self.Handler = handler
            self.Logger.addHandler(handler)

        def fetchLogger(self):
            return self.Logger
    if __name__ == "__main__":
        name = 'testlog.trc'
        classLog = LogClass(name)
        logger = classLog.fetchLogger()
        logger.info("Created .. ")
The trace file is created properly but contains no lines at all. If I
put the code directly in __main__, it works fine.
What did I miss? Any ideas are welcome.
Linux: Choice of a GNU Generation
The problem is that the logger's effective level defaults to WARNING,
which is higher than INFO, so your logger.info() calls are filtered out
before any handler sees them. If you set the level explicitly, e.g.
logger.setLevel(logging.INFO), your message will show up.
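A minimal sketch of that diagnosis and fix (logger name taken from the question; no handler is needed to observe the filtering, since the level check happens first):

```python
import logging

logger = logging.getLogger('classLog')   # same name as in the question

# With no level set anywhere, the effective level falls back to the
# root logger's default, WARNING, so INFO records are filtered out
# before any handler is even consulted:
assert logger.getEffectiveLevel() == logging.WARNING
assert not logger.isEnabledFor(logging.INFO)

logger.setLevel(logging.INFO)            # the one-line fix
assert logger.isEnabledFor(logging.INFO)
```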
However, I think there are a few problems here besides that. For one,
reading PEP 8 might be worth considering.
Also, the logging module is written so that setting it up and using it
are decoupled, which your class foils somewhat. What is the above
supposed to accomplish?
I'm not sure why you need to do this. Diez's reply tells you why you
don't see any output, but your code may also lead to other problems.
For example, if you create two LogClass instances with loggerName
values of "A" and "A.B", then any call to logger "A.B" will lead to
two messages in the log. That's because when a call to "A.B" is
handled, then it is passed to all handlers associated not only with
logger "A.B" but also "A" (its parent logger) and the root logger (its
grandparent). Since you have two FileHandlers configured (one for
"A.B" and one for "A"), the message will end up appearing in two files
(or the same file, if you used the same filename for both LogClass
instances).
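The duplication is easy to demonstrate. Here is a sketch using the logger names "A" and "A.B" from above, with a small in-memory handler (hypothetical, standing in for the two FileHandlers) attached to each:

```python
import logging

class CountingHandler(logging.Handler):
    """Hypothetical in-memory handler, standing in for the FileHandlers."""
    def __init__(self):
        logging.Handler.__init__(self)
        self.records = []
    def emit(self, record):
        self.records.append(record)

parent = logging.getLogger('A')
child = logging.getLogger('A.B')
parent.setLevel(logging.INFO)          # so info() is not filtered out

h_parent, h_child = CountingHandler(), CountingHandler()
parent.addHandler(h_parent)
child.addHandler(h_child)

child.info('one event')                # a single call on logger 'A.B'

# The record was handled by 'A.B' and then propagated up to 'A',
# so one call produced two handled records:
assert len(h_child.records) == 1
assert len(h_parent.records) == 1
```

Setting `child.propagate = False` would stop the record from travelling up to "A", but the cleaner cure is simply not to attach a handler per logger.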
It's generally suspicious when you see someone trying to instantiate a
logger and adding a handler at the same time, as you're doing. The
reason this is a potential anti-pattern is that, other than for
trivial scripts, there isn't a natural one-to-one mapping between
loggers and handlers. Loggers (defined by their names, as in "A.B")
define areas of an application organized hierarchically (and answer
the question about a logging event, "Where did it happen?") whereas
handlers are about who's interested in those events, i.e. potential
log readers - they are generally organized according to the answer to
the question about a logging event, "Who wants to know?". In trivial
or command-line scripts, there's often just a one-to-one mapping (root
logger -> console) or a one-to-two mapping (root logger -> console and
file), but once your application gets more complex, you usually end up
with a good few loggers (based on application areas) and just a few
handlers (e.g. one log file for everything, one log file for errors,
the console, and one or two email handlers).
    import logging

    _LOGGER_NAME = 'foo'
    FORMAT = ('%(name)s %(asctime)s %(filename)s '
              '%(lineno)d %(levelname)s %(message)s')

    # subclass name is illustrative; the point is that the handler
    # carries its own formatting, while the logger stays untouched
    class TraceFileHandler(logging.FileHandler):
        def __init__(self, fileName):
            logging.FileHandler.__init__(self, fileName, 'a')
            self.setFormatter(logging.Formatter(FORMAT))

    if __name__ == '__main__':
        # split creation from configuration
        logger = logging.getLogger(_LOGGER_NAME)
        logger.setLevel(logging.INFO)
        logger.addHandler(TraceFileHandler('testlog.trc'))
        logger.info("Created .. ")
I personally use the following pattern:
In any submodule moduleA.py of an application:
    # attach my logger to the MyApp logger
    _logger = logging.getLogger(MyApp.logger.name + '.moduleA')

    # Configuration: nothing to be done, it relies on the MyApp
    # logger configuration.

    # You can add code in case you are executing your module in
    # standalone mode (for unit testing, for instance):
    if __name__ == '__main__':
        _logger = logging.getLogger('moduleA')
        # here are some unit tests
It's also common to use the pattern

    logger = logging.getLogger(__name__)

which will use the name of the module as the logger name, correctly
picking up the names of subpackages and submodules when used therein.
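A sketch of how the dots do the work (the package and module names below are hypothetical): because logger names form a hierarchy on ".", a logger named after a module automatically becomes a child of the logger named after its package.

```python
import logging

# Inside a file mypkg/util.py, __name__ is 'mypkg.util', so
# getLogger(__name__) there would return this logger:
child = logging.getLogger('mypkg.util')

# and getLogger(__name__) in mypkg/__init__.py would return this one:
parent = logging.getLogger('mypkg')

# The hierarchy follows the dots, with no manual bookkeeping:
assert child.parent is parent
```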