
A new way to configure Python logging


Vinay Sajip

Oct 22, 2009, 5:25:15 AM
to pytho...@python.org
If you use the logging package but don't like using the ConfigParser-based
configuration files which it currently supports, keep reading. I'm proposing to
provide a new way to configure logging, using a Python dictionary to hold
configuration information. It means that you can convert a text file such as

# logconf.yml: example logging configuration
formatters:
  brief:
    format: '%(levelname)-8s: %(name)-15s: %(message)s'
  precise:
    format: '%(asctime)s %(name)-15s %(levelname)-8s %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    formatter: brief
    level: INFO
    stream: ext://sys.stdout
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: logconfig.log
    maxBytes: 1000000
    backupCount: 3
  email:
    class: logging.handlers.SMTPHandler
    mailhost: localhost
    fromaddr: my_...@domain.tld
    toaddrs:
      - suppor...@domain.tld
      - dev_...@domain.tld
    subject: Houston, we have a problem.
loggers:
  foo:
    level: ERROR
    handlers: [email]
  bar.baz:
    level: WARNING
root:
  level: DEBUG
  handlers: [console, file]
# -- EOF --

into a working configuration for logging. The above text is in YAML format, and
can easily be read into a Python dict using PyYAML and the code

import yaml; config = yaml.load(open('logconf.yml', 'r'))

but if you're not using YAML, don't worry. You can use JSON, Python source code
or any other method to construct a Python dict with the configuration
information, then call the proposed new configuration API using code like

import logging.config

logging.config.dictConfig(config)

to put the configuration into effect.
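
For illustration only, here is a rough sketch of the JSON route (logconf.json
is a hypothetical file containing the same structure as the YAML above):

import json
import logging.config

f = open('logconf.json')
try:
    config = json.load(f)
finally:
    f.close()

logging.config.dictConfig(config)
logging.getLogger('bar.baz').warning('configured via dictConfig')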

For full details of the proposed change to logging, see PEP 391 at

http://www.python.org/dev/peps/pep-0391/

I need your feedback to make this feature as useful and as easy to use as
possible. I'm particularly interested in your comments about the dictionary
layout and how incremental logging configuration should work, but all feedback
will be gratefully received. Once implemented, the configuration format will
become subject to backward compatibility constraints and therefore hard to
change, so get your comments and ideas in now!

Thanks in advance,


Vinay Sajip


Jean-Michel Pichavant

Oct 23, 2009, 1:27:11 PM
to Vinay Sajip, pytho...@python.org
For my part, I configure the loggers in the application entry-point file, in
Python code, so I'm not sure I'm that concerned. However, being a great fan
of this module, I gladly support any improvements you may add to it and
appreciate all the work you've already done so far.

Cheers,

Jean-Michel

Vinay Sajip

Oct 23, 2009, 2:16:22 PM
to Jean-Michel Pichavant, pytho...@python.org
> For my part, I'm configuring the loggers in the application entry point

> file, in python code. I'm not sure I am that concerned. However being a
> great fan of this module, I kindly support you for any improvements you
> may add to this module and appreciate all the work you've already done
> so far.

Thanks. I also appreciate your comments on python-list helping out users who are new to logging or having trouble with it.

If you're happy configuring in code, that's fine. The new functionality is for users who want to do declarative configuration using YAML, JSON or Python source (Django is possibly going to use a dict declared in Python source in the Django settings module to configure logging for Django sites).

Best regards,

Vinay Sajip



Wolodja Wentland

Oct 23, 2009, 5:14:52 PM
to pytho...@python.org
On Thu, Oct 22, 2009 at 09:25 +0000, Vinay Sajip wrote:

> I need your feedback to make this feature as useful and as easy to use as
> possible. I'm particularly interested in your comments about the dictionary
> layout and how incremental logging configuration should work, but all feedback
> will be gratefully received. Once implemented, the configuration format will
> become subject to backward compatibility constraints and therefore hard to
> change, so get your comments and ideas in now!

First and foremost: A big *THANK YOU* for creating and maintaining the
logging module. I use it in every single piece of software I create and
am very pleased with it.

You asked for feedback on incremental logging and I will just describe
how I use the logging module in an application.

Almost all applications I write consist of one or more scripts (foo-bar,
foo-baz, ...) and an associated package (foo).

Logger/Level Hierarchy
----------------------

I usually register a logger 'foo' within the application and one logger
for each module in the package, so the resulting logger hierarchy will
look like this:

foo
|__bar
|__baz
|__newt
|___witch

I set every logger's log level to DEBUG and use the respective logger in
each module to log messages at different levels. A look at the code reveals
that I use log levels in the following way:

* DEBUG - Low level chatter:
* Called foo.bar.Shrubbery.find()
* Set foo.newt.Witch.age to 23
* ...

* INFO - Messages of interest to the user:
* Read configuration from ~/.foorc
* Processing Swallow: Unladen

* WARNING - yeah, just that (rarely used)
* Use of deprecated...

* ERROR:
* No such file or directory: ...
* Bravery fail: Sir Robin

There are also other levels specific to the application, like PERFORMANCE
for performance-related unit tests, ...
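
A rough sketch of how such a custom level gets registered (the value 25 is an
arbitrary choice between INFO and WARNING, and the logger name is made up):

import logging

PERFORMANCE = 25
logging.addLevelName(PERFORMANCE, 'PERFORMANCE')

log = logging.getLogger('foo.tests')
log.log(PERFORMANCE, 'find() took %.3f s', 0.042)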

And *yes*: I use the logging module to output messages that I think the user
*might* be interested in seeing or saving.

Application User Interface
--------------------------

I like to give my users great freedom in configuring the application and
its output behaviour. I therefore usually have the following command
line options:

-q, --quiet No output at all
-v, --verbose More output (Messages with Level >= INFO)
--debug All messages

I also like to let the user configure logging to the console and to a log
file independently, so I additionally provide:

--log-file=FILE Path of a file logged messages will get saved to
--log-file-level=LEVEL Messages with level >= LEVEL will be saved
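
Mapping those options onto the logging setup is straightforward; a simplified
sketch of what I do (using optparse, with the option names from above):

import logging
import optparse

parser = optparse.OptionParser()
parser.add_option('-q', '--quiet', action='store_true', default=False)
parser.add_option('-v', '--verbose', action='store_true', default=False)
parser.add_option('--debug', action='store_true', default=False)
parser.add_option('--log-file', dest='log_file')
parser.add_option('--log-file-level', dest='log_file_level', default='DEBUG')
options, args = parser.parse_args()

root = logging.getLogger('foo')
root.setLevel(logging.DEBUG)

if not options.quiet:
    console = logging.StreamHandler()
    if options.debug:
        console.setLevel(logging.DEBUG)
    elif options.verbose:
        console.setLevel(logging.INFO)
    else:
        console.setLevel(logging.WARNING)
    root.addHandler(console)

if options.log_file:
    logfile = logging.FileHandler(options.log_file)
    logfile.setLevel(getattr(logging, options.log_file_level))
    root.addHandler(logfile)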

Sometimes I need special LogRecord handling, for example if I want to let the
user save logs to an HTML file, in which case I write an HTML handler and
expose the templates (mako, jinja, you name it) used for generating the HTML
to the user.

The last facet of the logging system I expose to the user is the format of
the log messages. I usually do this within the application's configuration
file (~/.foorc) in a special section [Logging].

Implementation
--------------

You have rightly noted in the PEP that the ConfigParser method is not really
suitable for incremental configuration, and I therefore configure the logging
system programmatically.

I create all loggers except the root (foo) with:

LOG = logging.getLogger(__name__)
LOG.setLevel(logging.DEBUG)

within each module and then register suitable handlers *with the root
logger* to process incoming LogRecords. That means that I usually have a
StreamHandler and a FileHandler, among other more specific ones.

The Handlers I register have suitable Filters associated with them, so that
it is easy to just add multiple handlers for various levels to the root
logger without causing LogRecords to get logged multiple times.

I have *never* had to register any handlers with loggers further down in
the hierarchy. I much prefer to combine suitable Filters and Handlers at the
root logger. But that might just be me and my very restricted needs. What is
a use case for that?

The unsuitability of the ConfigParser method, however, is *not* due to the
*format* of the textual logging configuration (i.e. ini vs dict) but rather
to the fact that the logging library does not expose all aspects of the
configuration to the programmer *after* it has been configured with
.fileConfig().

Please contradict me if I am wrong here, but there seems to be *no* method
to change/delete handlers/loggers once they are configured. Surely I could
tamper with logging's internals, but I don't want to do this.

PEP 391
-------

I like PEP 391 a lot. Really! Thanks for it. The configuration format is
very concise and easily readable. I like the idea of decoupling the
object ultimately used for configuring (the dict) from the storage of
that object (pickled, YAML, JSON, ...).

What I dislike is the fact that I will still not be able to use it to its
full potential. If PEP 391 had already been implemented, I would expose the
logging configuration to the user in:

~/.foo/logging

load the dictionary and *programmatically* change the configuration to meet
the user's demands (quiet, verbose, file, ...) stated with command-line
options, by adding/deleting/changing handlers in the dict before passing it
to dictConfig.

That seems suboptimal. ;-)

What I would *love* to see in the future would be:

* Default logging configuration in a YAML/JSON/... file somewhere in
{/etc/foo/logging.conf, WindowsFooMagic/logging.conf} which describes
all loggers/handlers/filters/... that *might* get used by the
application eventually

* Additionally: The possibility to *override* some parts of the
configuration in another file (files?).

* The possibility to enable/disable certain parts of the configuration.

* Access to all parts of the logging infrastructure, so that I can adapt
already configured parts to my actual needs.

* Easy configuration of a lower *and* upper bound for Handlers, so that
I can easily add additional (more verbose) Handlers without fear of
messages getting logged multiple times.

The point of all this is that the final configuration of the logging system
is unknown until the configuration files *and* the command line have been
parsed, and does not change (often) afterwards.

My main idea is to see the configuration files not as the final
configuration of the logging system but rather as a definition of the
building blocks that can be plugged together easily programmatically if the
developer sees the need to do so.

with kind regards

Wolodja Wentland

Post Scriptum

I just wrote what came to my mind. It might be that I am not aware of
better ways to deal with incremental configuration. And I read PEP 391
for the first time today, so I might have overlooked a lot of points.

But this is how I do it right now. Please point out anything that might
make my life easier.


Vinay Sajip

Oct 24, 2009, 3:54:04 AM
to pytho...@python.org
Wolodja Wentland <wentland <at> cl.uni-heidelberg.de> writes:

> First and foremost: A big *THANK YOU* for creating and maintaining the
> logging module. I use it in every single piece of software I create and
> am very pleased with it.

I'm glad you like it. Thanks for taking the time to write this detailed
post about your usage of logging.

> You asked for feedback on incremental logging and I will just describe
> how I use the logging module in an application.
>
> Almost all applications I write consist of a/many script(s) (foo-bar,
> foo-baz, ...) and a associated package (foo).
>
> Logger/Level Hierarchy
> ----------------------
>
> I usually register a logger 'foo' within the application and one logger
> for each module in the package, so the resulting logger hierarchy will
> look like this:
>
> foo
> |__bar
> |__baz
> |__newt
> |___witch
>
> I set every loggers log level to DEBUG and use the respective logger in

You only need to set foo's level to DEBUG and all of foo.bar, foo.baz etc.
will inherit that level. Setting the level explicitly on each logger is not
necessary, though doing it may improve performance slightly as the system
does not need to search ancestors for an effective level. Also, setting the
level at just one logger ('foo') makes it easier to turn down logging
verbosity for foo.* by just changing the level in one place.
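
For instance (a trivial sketch):

import logging

logging.getLogger('foo').setLevel(logging.DEBUG)
child = logging.getLogger('foo.bar.baz')

print(child.level)                  # 0  - NOTSET, no level of its own
print(child.getEffectiveLevel())    # 10 - DEBUG, inherited from 'foo'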

> Among other levels specific to the application, like PERFORMANCE for
> performance related unit tests, ...

I'm not sure what you mean here - is it that you've defined a custom level
called PERFORMANCE?

> Application User Interface
> --------------------------
>
[snip]

All of this sounds quite reasonable.



> Implementation
> --------------
>
> You have rightfully noted in the PEP, that the ConfigParser method
> is not really suitable for incremental configuration and I therefore
> configure the logging system programmatically.

Since you give users the ability to control logging from the command line,
you need to do programmatic configuration anyway.

> I create all loggers except the root (foo) with:
>
> LOG = logging.getLogger(__name__)
> LOG.setLevel(logging.DEBUG)
>
> within each module and then register suitable handlers *with the root
> logger* to process incoming LogRecords. That means that I usually have a
> StreamHandler, a FileHandler among other more specific ones.

See my earlier comment about setting levels for each logger explicitly. How
do you avoid low-level chatter from all modules being displayed to users? Is
it through the use of Filters?

> The Handlers I register have suitable Filters associated with them,
> so that it is easy to just add multiple handlers for various levels to
> the root handler without causing LogRecords to get logged multiple
> times.
>
> I have *never* had to register any handlers with loggers further down in
> the hierarchy. I much rather like to combine suitable Filters and
> Handlers at the root logger. But that might just be me and due to my
> very restricted needs. What is a use case for that?

There are times when specific handlers are attached lower down in the
logger hierarchy (e.g. for a specific subsystem) to send information to a
relevant audience, e.g. the development or support team for that subsystem.
Technically you can achieve this by attaching everything to the root and then
attaching suitable Filters to those handlers, but it may be easier in some
cases to attach the handlers to a lower-level logger directly, without the
need for Filters.

> The unsuitability of the ConfigParser method however is *not* due to the
> *format* of the textual logging configuration (ie. ini vs dict) but
> rather due to the fact that the logging library does not expose all
> aspects of the configuration to the programmer *after* it was configured
> with .fileConfig().
>
> Please contradict me if I am wrong here, but there seems to be *no* method
> to change/delete handlers/loggers once they are configured. Surely I
> could temper with logging's internals, but I don't want to do this.

You are right, e.g. the fileConfig() API does not support Filters. There is
also no API to get the current configuration in any form.

There isn't a strong use case for allowing arbitrary changes to the logging
setup using a configuration API. Deletion of loggers is problematic in a
multi-threaded environment (you can't be sure which threads have a reference
to those loggers), though you can disable individual loggers (as fileConfig
does when called with two successive, disjoint configurations). Also, deleting
handlers is not really necessary since you can change their levels to achieve
much the same effect.

> PEP 391
> -------
>
> I like PEP 391 a lot. Really! Thanks for it. The configuration format is
> very concise and easily readable. I like the idea of decoupling the
> object ultimately used for configuring (the dict) from the storage of
> that object (pickled, YAML, JSON, ...).

That's right - the dict is the lingua franca that all of these formats can be
serialized to and deserialized from, whether stored in files, sent over
sockets, etc.

> What I dislike is the fact that I will still not be able to use it with
> all its potential. If PEP 391 would have already been implemented right
> now I would expose the logging configuration to the user in:
>
> ~/.foo/logging
>
> load the dictionary and *programmatically* change the configuration to
> meet the user demands (quiet, verbose, file, ...) stated with command
> line options by adding/deleting/changing handlers in the dict before
> passing it to dictConfig.
>
> That seems suboptimal.

I'm not sure exactly what you mean here. If the basic approach is that you
specify your default logging configuration in the dict (via YAML, JSON or
other means) and allow overriding levels via the command line, then that
would be OK. Of course, if you specify a file destination in the
configuration, it's not obvious how in general you'd override the filename
with a value from a command-line option. But the answer there is to define
all the other handlers in the configuration, load the configuration, then add
handlers programmatically based on command-line options specified for a
particular run of a script.
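
A rough sketch of that approach (the dict below is a minimal illustration,
and the log_file value stands for whatever your option parser produced):

import logging
import logging.config

config = {
    'version': 1,
    'formatters': {'brief': {'format': '%(levelname)-8s %(name)s %(message)s'}},
    'handlers': {'console': {'class': 'logging.StreamHandler',
                             'formatter': 'brief',
                             'level': 'INFO'}},
    'root': {'level': 'DEBUG', 'handlers': ['console']},
}
logging.config.dictConfig(config)

log_file = 'run.log'    # imagine this came from a command-line option
if log_file:
    handler = logging.FileHandler(log_file)
    handler.setLevel(logging.DEBUG)
    logging.getLogger().addHandler(handler)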

> What I would *love* to see in the future would be:
>
> * Default logging configuration in a YAML/JSON/... file somewhere in
> {/etc/foo/logging.conf, WindowsFooMagic/logging.conf} which describes
> all loggers/handlers/filters/... that *might* get used by the
> application eventually

The future is now, in the sense that PEP 391 is not yet implemented and people
can have an input by saying how they would like it to work. You can do that
with PEP 391 implemented as it stands, though your configuration would have to
leave out any handlers which are optionally specified via command-line
arguments.

> * Additionally: The possibility to *override* some parts of the
> configuration in another file (files?).

That requirement is too broad to be able to give a one-size-fits-all
implementation.

> * The possibility to enable/disable certain parts of the configuration.

You can do that by changing levels in an incremental call. Can you give more
details about what else you might want to enable/disable?

> * Access to all parts of the logging infrastructure, so that I can adapt
> already configured parts to my actual needs.

An example to illustrate your point would be helpful.

> * Easy configuration of a lower *and* upper bound for Handlers, so that
> I can easily add additional (more verbose) Handlers without fear of
> messages getting logged multiple times.

It's easiest to do this using a Filter with an upper bound on the level. I
appreciate that it would be easier if an upper bound could be added to the
Handler itself, but the use case is not common enough to warrant such a basic
change in the API.
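
For example, something along these lines (an illustrative filter, not part of
the stdlib):

import logging
import sys

class MaxLevelFilter(logging.Filter):
    """Reject records above a given level, giving a handler an upper bound."""
    def __init__(self, max_level):
        logging.Filter.__init__(self)
        self.max_level = max_level

    def filter(self, record):
        return record.levelno <= self.max_level

stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.DEBUG)
stdout_handler.addFilter(MaxLevelFilter(logging.INFO))   # DEBUG and INFO only

stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)                 # WARNING and above

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(stdout_handler)
root.addHandler(stderr_handler)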

> The point of all this is, that the final configuration of the logging
> system is unknown until the configuration files *and* the command
> line have been parsed and does not change (often) afterwards.

That's true most of the time, though for some long-running server-type
applications the configuration might need to change multiple times over a
long period to change verbosity levels. Of course, some listening mechanism
would need to be in place for the long-running application to register a
request to set new levels.

> My main idea is to see the configuration files not as the final
> configuration of the logging system but rather as a definition of the
> building blocks that can be plucked together easily programmatically if
> the developer sees the need to do so.

I can see how configuration in general can benefit from a building-block
style approach. (I developed an alternative hierarchical configuration
system, see http://www.red-dove.com/python_config.html - but though it offers
some nice features, it uses its own JSON-like format and is not standard
enough to consider using for the logging package.)

The use of dicts means that users can combine portions of the final dict
from different sources; PEP 391 doesn't prescribe exactly how this is done.
The dict presented to dictConfig() must be complete and consistent, but where
all the different bits come from is up to the application developer/system
administrator.

Thanks very much for the feedback,


Vinay Sajip

Wolodja Wentland

Oct 24, 2009, 6:53:45 AM
to pytho...@python.org
On Sat, Oct 24, 2009 at 07:54 +0000, Vinay Sajip wrote:
> Wolodja Wentland <wentland <at> cl.uni-heidelberg.de> writes:
[snip]

> > foo
> > |__bar
> > |__baz
> > |__newt
> > |___witch
> >
> > I set every loggers log level to DEBUG and use the respective logger in

> You only need set foo's level to DEBUG and all of foo.bar, foo.baz etc.
> will inherit that level.

OK, thanks for pointing that out!

[snip]


> > Among other levels specific to the application, like PERFORMANCE for
> > performance related unit tests, ...
>
> I'm not sure what you mean here - is it that you've defined a custom level
> called PERFORMANCE?

Exactly. I used that particular level for logging within a unit test
framework, for messages about performance-related tests. Combined with a
handler that generated HTML files from the LogRecord queue using various
templates (mako, jinja, ...), it became a convenient way to create
nice-looking test reports.

Could a HTMLHandler be added to the standard set? Preferably one that
leaves the choice of the template engine to the user.
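
Something along these lines is what I have in mind - a bare-bones sketch
without any template engine, just to show the shape of it:

import logging

class HTMLHandler(logging.Handler):
    """Sketch only: buffer records and dump them as a simple HTML table."""
    def __init__(self, filename):
        logging.Handler.__init__(self)
        self.filename = filename
        self.rows = []

    def emit(self, record):
        self.rows.append('<tr><td>%s</td><td>%s</td></tr>'
                         % (record.levelname, self.format(record)))

    def close(self):
        f = open(self.filename, 'w')
        try:
            f.write('<table>\n%s\n</table>\n' % '\n'.join(self.rows))
        finally:
            f.close()
        logging.Handler.close(self)

A real implementation would hand the buffered records to a mako/jinja
template instead of hard-coding the markup.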

> > Application User Interface


> [snip]
> All of this sounds quite reasonable.

Great :-)


>
> > Implementation
> > --------------
> >
> > You have rightfully noted in the PEP, that the ConfigParser method
> > is not really suitable for incremental configuration and I therefore
> > configure the logging system programmatically.

> Since you allow users the ability to control logging from the command-line,
> you need to do programmatic configuration anyway.

Yes, but that could be made easier. (see below)

> > I create all loggers except the root (foo) with:
> >
> > LOG = logging.getLogger(__name__)
> > LOG.setLevel(logging.DEBUG)
> >
> > within each module and then register suitable handlers *with the root
> > logger* to process incoming LogRecords. That means that I usually have a
> > StreamHandler, a FileHandler among other more specific ones.
>
> See my earlier comment about setting levels for each logger explicitly. How
> do you avoid low-level chatter from all modules being displayed to users? Is
> it through the use of Filters?

Exactly. The Handlers will usually employ elaborate filtering, so they
can be "plugged together" easily:

- User wants html? Ah, just add the HTMLHandler to the root logger
- User wants verbose output? Ah, just add the VerboseHandler to ...
- ...

> There are times where specific handlers are attached lower down in the
> logger hierarchy (e.g. a specific subsystem) to send information to a relevant
> audience, e.g. the development or support team for that subsystem.

Guess I never had the need for that.

> Technically you can achieve this by attaching everything to the root
> and then attaching suitable Filters to those handlers, but it may be
> easier in some cases to attach the handlers to a lower-level logger
> directly, without the need for Filters.

Which is exactly what I do, and I think it fits my particular mindset. I see
the root logger basically as a multiplexer that feeds LogRecords to various
co-routines (i.e. handlers) that decide what to do with them. I like working
on the complete set of LogRecords accumulated from different parts of the
application. The handler/filter/... naming convention is just a more
verbose/spelled-out way of defining different parts of the pipeline that the
developer might want to use. I guess I would welcome a general-purpose hook
for each edge in the logger tree, and in particular one hook feeding
different co-routines at the root logger.

> though your configuration would have to leave out any handlers which
> are optionally specified via command-line arguments.
> > * Additionally: The possibility to *override* some parts of the
> > configuration in another file (files?).
>
> That requirement is too broad to be able to give a one-size-fits-all
> implementation.

I was thinking along the lines of ConfigParser.read([file1, file2, ...]), so
that you could have:

--- /etc/foo/logging.conf ---
...
formatters:
  default:
    format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
    datefmt: '%Y-%m-%d %H:%M:%S'
...
--- snip ---

and:

--- ~/.foo/logging.conf ---
formatters:
  # You can adapt the message and date format to your needs here.
  # The following placeholders can be used:
  #   asctime - description
  #   ...

  default:
    format: '%(levelname)-8s %(name)-15s %(message)s'
    datefmt: '%Y-%m-%d %H:%M:%S'
--- snip ---

So that if I call:

logging.config.fromFiles(['/etc/foo/logging.conf',
                          os.path.expanduser('~/.foo/logging.conf')])

The user adaptations will overrule the defaults in the shipped
configuration. I know that I could implement that myself using
{}.update() and the like, but the use case might be common enough to
justify inclusion in the logging module.
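
Roughly what I mean - a sketch of how I would do the merging myself today
(fromFiles above is made up, and deep_update below is purely illustrative):

import os
import yaml
import logging.config

def deep_update(base, overrides):
    """Recursively overlay the 'overrides' dict on the 'base' dict."""
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_update(base[key], value)
        else:
            base[key] = value
    return base

config = {}
for path in ['/etc/foo/logging.conf', os.path.expanduser('~/.foo/logging.conf')]:
    if os.path.exists(path):
        f = open(path)
        try:
            deep_update(config, yaml.safe_load(f) or {})
        finally:
            f.close()

logging.config.dictConfig(config)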

> > * The possibility to enable/disable certain parts of the configuration.
>
> You can do that by changing levels in an incremental call. Can you give more
> details about what else you might want to enable/disable?

I will give an example. The basic problem I have with *all* config-file-based
configuration right now is that I have to *register* every single
handler/filter with a logger *within* the configuration, or their
configuration will be lost.

Assume the following configuration:

--- snip ---
handlers:
  h1:   # This is an id
    # configuration of handler with id h1 goes here
  h2:   # This is another id
    # configuration of handler with id h2 goes here
loggers:
  foo.bar.baz:
    # other configuration for logger 'foo.bar.baz'
    handlers: []
--- snip ---

In this configuration the handlers will be lost. There is no way to retrieve
the configured handlers later on (or is there?).

What I would like to do is:

--- snip ---
...
if options.user_wants_h1:
    try:
        someLogger.addHandler(logging.getConfiguredHandler('h1'))
    except HandlerNotFound as handler_err:
        # handle exception

if options.user_wants_h2:
    try:
        someLogger.addHandler(logging.getConfiguredHandler('h2'))
    except HandlerNotFound as handler_err:
        # handle exception
--- snip ---

... same for loggers, filters, etc.

That would enable me to:

* Create a comprehensive logging building block configuration in its
entirety in a nice configuration format. (ie. config file)

* Easily combine these blocks programmatically

In a way I see three members to the party in the development/usage of
logging:

* Logging Expert

Will design the logging system for an application/library. Knows the
requirements and will be able to design different parts of the system.
She will then tell another developer (see below) which blocks are
available.

* Developer

A person that knows about the blocks and combines them
programmatically, designs the user interface and complains about
bugs/new requirements in/for the logging system to the "Logging
Expert".

* User

A user gets exposed to different ways in which to change the logging
system:

- command line options (switches to turn whole blocks off/on)
- configuration files

These *may* a subset of the configuration options that the developer
wants to expose to the user (format, dateformat, ...)
(see above)

> Although I can see how configuration in general can benefit from a building-
> block style approach (I developed an alternative hierarchical configuration
> system, see http://www.red-dove.com/python_config.html - though that uses its
> own JSON-like format and offers some nice features, it's not standard enough
> to consider using for the logging package.)

Thanks for that link! I will certainly investigate that library.

> The use of dicts means that users can combine portions of the final dict from
> different sources, PEP391 doesn't prescribe exactly how this is done. The dict
> presented to dictConfig() must be complete and consistent, but where all the
> different bits come from is up to the application developer/system
> administrator.

Which is one point I like about PEP 391. Just wanted to give some feedback
:-). You can basically write everything yourself; it is just that I think a
usage pattern that is frequently implemented on top of a stdlib module should
eventually be incorporated into said library.

have a nice day

Wolodja


Vinay Sajip

Oct 25, 2009, 6:48:07 AM
to pytho...@python.org
Wolodja Wentland <wentland <at> cl.uni-heidelberg.de> writes:

> Could a HTMLHandler be added to the standard set? Preferably one that
> leaves the choice of the template engine to the user.

I haven't done this precisely because users' requirements will be very
different for such a handler. For the same reason, there's no XMLHandler
in the stdlib either. Users can easily define their own handlers to do
whatever they want, which typically will be very application-specific.

>
> logging.config.fromFiles(['/etc/foo/logging.conf',
>                           os.path.expanduser('~/.foo/logging.conf')])
>

I think this sort of requirement varies sufficiently across developers and
applications that it's best not to bake it into the stdlib. With PEP 391,
developers are free to put together a configuration from whatever sources and
conventions make sense to them, and then expect logging to just follow
instructions which are very specific to logging configuration and make no
assumptions about company and application environments and conventions.

> The user adaptations will overrule the defaults in the shipped
> configuration. I know that I could implement that myself using
> {}.update() and the like, but the use case might be common enough to
> justify inclusion in the logging module.
>

Unfortunately, not in a standard enough way. If in the future it becomes clear
that a standard approach has emerged/evolved, then this can always be added
later.

> I will give an example.. The basic problem I have with *all* config file
> based configuration right now is that I have to *register* every single
> handler/filter with a logger *within* the configuration or their
> configuration will be lost.

[snip]


> In this configuration the handlers will be lost. There is no way to
> retrieve he configured handlers later on. (or is there?).

You are right, unless handlers (and filters, formatters etc.) are given
names which can be used to refer to them across multiple configuration calls.
This is something I am thinking about and will probably update PEP 391
with my thoughts.

> What I would like to do is:
>
> --- snip ---
> ...
> if options.user_wants_h1:
>     try:
>         someLogger.addHandler(logging.getConfiguredHandler('h1'))
>     except HandlerNotFound as handler_err:
>         # handle exception
>
> if options.user_wants_h2:
>     try:
>         someLogger.addHandler(logging.getConfiguredHandler('h2'))
>     except HandlerNotFound as handler_err:
>         # handle exception
> --- snip ---
>
> ... same for loggers, filters, etc.
>
> That would enable me to:
>
> * Create a comprehensive logging building block configuration in its
> entirety in a nice configuration format. (ie. config file)
>
> * Easily combine these blocks programmatically

I think your way of working is entirely reasonable, but IMO it is not likely
to be so widespread as to make it worthwhile baking into the stdlib. You can
easily build your own configuration from which you build the dict to pass to
dictConfig().

> In a way I see three members to the party in the development/usage of
> logging:
>
> * Logging Expert
>
> Will design the logging system for an application/library. Knows the
> requirements and will be able to design different parts of the system.
> She will then tell another developer (see below) which blocks are
> available.
>
> * Developer
>
> A person that knows about the blocks and combines them
> programmatically, designs the user interface and complains about
> bugs/new requirements in/for the logging system to the "Logging
> Expert".
>
> * User
>
> A user gets exposed to different ways in which to change the logging
> system:
>
> - command line options (switches to turn whole blocks off/on)
> - configuration files
>
> These *may* a subset of the configuration options that the developer
> wants to expose to the user (format, dateformat, ...)
> (see above)
>

Those three roles appear reasonable, but I would say that the expert-designed
blocks would be specialised handlers, filters and formatters. That's not a
full-time job, though ;-)

In addition there are system admin users, who can tweak logging configurations
in response to user community feedback about problems, and to help developers
diagnose faults. In some companies and environments, there are strict walls
between developers and production support teams. In order not to assume too
much about such environmental, non-technical constraints, logging configuration
should not try to be too clever.


Thanks for your thoughts,

Vinay Sajip

Wolodja Wentland

Oct 25, 2009, 8:25:14 AM
to pytho...@python.org
On Sun, Oct 25, 2009 at 10:48 +0000, Vinay Sajip wrote:
> Wolodja Wentland <wentland <at> cl.uni-heidelberg.de> writes:
[ HTMLHandler, multiple configuration files ]

OK! I agree that these parts are hard to standardise and do not really
belong in the *logging* module.

Maybe a kind soul implements a "configuration" module in the future that
accepts configuration files in a plethora of formats and uses
dictionaries as the lingua franca for final configuration.

> > I will give an example.. The basic problem I have with *all* config file
> > based configuration right now is that I have to *register* every single
> > handler/filter with a logger *within* the configuration or their
> > configuration will be lost.

> You are right, unless handlers (and filters, formatters etc.) are given


> names which can be used to refer to them across multiple configuration calls.
> This is something I am thinking about and will probably update PEP 391
> with my thoughts.

[ usage example ]

> I think your way of working is entirely reasonable, but IMO is not likely to
> be so widespread as to make it worthwhile baking into the stdlib. You can
> easily build your own configuration from which you build the dict to pass
> to dictConfig().

Are these two statements not a bit contradictory? If it were possible to
refer to all major components in logging by *unique* names, would that not
mean that the usage example I gave is possible?

I think we managed to single out the sole requirement I would have
towards 'logging' that is missing today.

Id est: the possibility to refer to/retrieve/... all major components used
by logging (loggers, handlers, filters, formatters, adapters) by a *unique*
name. That would enable the developer to deal with them in a consistent way
regardless of how they were initially defined (configuration file,
programmatically).
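
Today I approximate this with my own little registry, something like this
(purely a workaround sketch; all the names are made up):

import logging

# Build the blocks once, keyed by name ...
HANDLERS = {
    'console': logging.StreamHandler(),
    'logfile': logging.FileHandler('foo.log'),
}

# ... and plug them together later, e.g. driven by command-line options.
wanted = ['console']    # imagine this list came from the option parser
log = logging.getLogger('foo')
for name in wanted:
    log.addHandler(HANDLERS[name])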

Is this way of dealing with logging really that uncommon? I guess I have to
read a lot of code to see how other people do it, as this is the way that
feels most natural to me.

BTW, the LoggerAdapter class looks really useful. I just discovered it and I
have the feeling that I might use it frequently.

> > * Logging Expert
> > * Developer
> > * User

> Those three roles appear reasonable, but I would say that the expert-designed
> blocks would be specialised handlers, filters and formatters. That's not a
> full-time job, though ;-)

I completely agree. I know that the logging expert and the developer will
most likely be the same person. I just wanted to point out that the design of
the logging system and its components is a different step in program
development from the use of that system by a developer and different users.

Thanks again for taking this discussion to the users list. I could have
commented in the -dev thread, but did not. (I ask myself: why?) I therefore
appreciate it a lot that you try to figure out your users' requirements
before implementing them! I just love open source software!

Have a great day and let me know whatever you come up with.

Wolodja


Vinay Sajip

Oct 25, 2009, 9:49:56 AM
to pytho...@python.org
Wolodja Wentland <wentland <at> cl.uni-heidelberg.de> writes:

> > You are right, unless handlers (and filters, formatters etc.) are given
> > names which can be used to refer to them across multiple configuration calls.
> > This is something I am thinking about and will probably update PEP 391
> > with my thoughts.
>

> Are these two statements not a bit contradictory? If it would be
> possible to refer to all major components in logging by *unique* names
> would that not mean that the usage example I gave is possible?

Perhaps, depending on the scheme I come up with.

> I think we managed to single out the sole requirement I would have
> towards 'logging' that is missing today.

Named items will certainly make more things possible, and so I am thinking more
seriously about it. I will post here once I've updated PEP 391 with my thoughts.

> Is this way to deal with logging really that uncommon? I guess I have
> to read a lot code to see how other people do it as this would be the
> way that feels most natural to me.

It could be not uncommon, without being common ;-) There are lots of open
source projects around, from some of which (hopefully) you can see what
approaches others have taken.

> Thanks again for taking this discussion to the users list. I could have
> commented in the -dev thread, but did not. (I ask myself: Why?) I
> therefore appreciate it a lot that you try to figure out your users
> requirements before implementing them! I just love open source software!

Well, thanks for responding in such detail, and I hope more people give their
input because it would be a real shame (if they care) not to take the
opportunity to give some feedback. The PEP is perhaps "too much information"
for casual users to bother with, but (as Nick Coghlan said on the dev list)
it's worth putting in the thought a PEP requires, just so that if the
configuration approach is standardised, at least it has had the opportunity
for committer and community review first.

Best regards,


Vinay Sajip


Jean-Michel Pichavant

Oct 26, 2009, 4:37:17 AM
to Vinay Sajip, pytho...@python.org
Vinay Sajip wrote:
> Wolodja Wentland <wentland <at> cl.uni-heidelberg.de> writes:
>
>
>> ----------------------
>>
>> I usually register a logger 'foo' within the application and one logger
>> for each module in the package, so the resulting logger hierarchy will
>> look like this:
>>
>> foo
>> |__bar
>> |__baz
>> |__newt
>> |___witch
>>
>> I set every loggers log level to DEBUG and use the respective logger in
>>
>
> You only need set foo's level to DEBUG and all of foo.bar, foo.baz etc.
> will inherit that level. Setting the level explicitly on each logger is
> not necessary,

A little bit off topic: don't you just need to set the **root** logger's
debug level?
I figured this out quite recently, having problems configuring all my
loggers with just one click:
I have an application importing modules, and the application is not always
aware of the logging support in those modules. The only way to configure
those modules' loggers is by configuring the root logger.
If I'm not wrong, this mechanism should definitely have been described in
the documentation within one of the examples. The root logger being of
little use (from the user's POV), it is easy to forget its existence.
Actually it proved very useful to me.
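
For instance, something as simple as this (just a sketch) is enough to see
output from every module's logger, because records propagate up to the root
logger's handlers by default:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)s %(levelname)s %(message)s')

# Any module's logger now reaches the root handler, even if the application
# never explicitly configured it:
logging.getLogger('some.imported.module').debug('visible via the root logger')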

JM
