Suppose I have the following:
import foo
import foobar
print foo()
print foobar()
########### foo.py
def foo():
    return 'foo'

########### foobar.py
def foobar():
    if foo.has_been_loaded():  # This is not right!
        return foo() + 'bar'   # This might need to be foo.foo() ?
    else:
        return 'bar'
If someone is using the foo module, I want to take advantage of its
features and use it in foobar; otherwise, I want to do something else.
In other words, I don't want to create a dependency of foobar on foo.
My failed search for a solution makes me wonder if I'm approaching
this all wrong.
Thanks in advance,
Pete
Aha, progress. Comments appreciated. Perhaps there's a different and
more conventional way of doing it than this?
def foobar():
    import sys
    if 'foomodule' in sys.modules.keys():
        import foomodule
        return foomodule.foo() + 'bar'
    else:
        return 'bar'
One way would be
if "foo" in sys.modules:
# foo was imported
However that won't get you all the way, since sys.modules["foo"] will be
set even if the importing statement was
from foo import this, that, the_other
So you might want to add
foo = sys.modules["foo"]
inside the function.
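Putting those two pieces together, a minimal sketch (using the foo/foobar
names from the original post) might be:

import sys

def foobar():
    # Use foo only if the main program has already imported it.
    if "foo" in sys.modules:
        foo = sys.modules["foo"]  # works even after "from foo import ..."
        return foo.foo() + 'bar'
    else:
        return 'bar'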
regards
Steve
--
Steve Holden +1 571 484 6266 +1 800 494 3119
PyCon is coming! Atlanta, Feb 2010 http://us.pycon.org/
Holden Web LLC http://www.holdenweb.com/
UPCOMING EVENTS: http://holdenweb.eventbrite.com/
> In a module, how do I create a conditional that will do something based
> on whether or not another module has been loaded?
try:
    import foo
except ImportError:
    foo = None

def function():
    if foo:
        return foo.func()
    else:
        do_something_else()
Or, alternatively:
try:
    import foo
except ImportError:
    import alternative_foo as foo  # This better succeed!

def function():
    return foo.func()
--
Steven
print foo.foo()
print foobar.foobar()
'foo' in sys.modules
Hmm, what about the case where the module is available but just not
imported yet? I would assume you would still want to use the module then.
Perhaps playing around with the imp module might get you what you mean
instead of what you say?
--
mph
Just try importing foo, and then catch the exception if it's not installed.
# foobar.py
try:
    import foo
except ImportError:
    FOO_PRESENT = False
else:
    FOO_PRESENT = True

if FOO_PRESENT:
    def foobar():
        return foo.foo() + 'bar'
else:
    def foobar():
        return 'bar'
You could alternately do the `if FOO_PRESENT` check inside the
function body rather than defining separate versions of the function.
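A minimal sketch of that variant, reusing the same hypothetical names:

try:
    import foo
    FOO_PRESENT = True
except ImportError:
    FOO_PRESENT = False

def foobar():
    # Check the flag at call time instead of defining two versions.
    if FOO_PRESENT:
        return foo.foo() + 'bar'
    return 'bar'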
Cheers,
Chris
--
http://blog.rebertia.com
Except I want to use the module only if the main program is using it
too, not just if it's available for use. I think I found a way in
my follow-up to my own message, but I'm not sure it's the best or most
conventional approach.
Pete
I can certainly see why one might want to use it if it's available but
not yet imported. In that case I could use a try/except block. But
in this case, I actually don't want to use the module unless the main
program is doing it too. But you've got me thinking; I need to make
sure that's really the desired behavior.
Pete
Excellent, this is what I finally discovered, although I was looking
for 'foo' in sys.modules.keys(), which apparently isn't necessary.
What is your use case for this behavior exactly? You've piqued my curiosity.
I have written my first module called "logger" that logs to syslog via
the syslog module but also allows for logging to STDOUT in debug mode
at multiple levels (to increase verbosity depending on one's need), or
both. I've looked at the logging module and while it might suit my
needs, it's overkill for me right now (I'm still *very* much a python
newbie).
I want to write other modules, and my thinking is that it makes sense
for those modules to use the "logger" module to do the logging, if and
only if the parent using the other modules is also using the logger
module.
In other words, I don't want to force someone to use the "logger"
module just so they can use my other modules, even if the "logger"
module is installed ... but I also want to take advantage of it if I'm
using it.
Now that I've written that, I'm not sure that makes a whole lot of
sense. It seems like I could say, "hey, this person has the 'logger'
module available, let's use it!".
Thoughts?
Except in unusual cases, where merely importing a module uses
substantial resources, I would say that if it is available, use it.
Overkill in what sense? You just need to write a few lines of code to
be able to use the logging package which comes with Python:
import logging, logging.handlers, sys
logging.basicConfig(level=logging.DEBUG, stream=sys.stdout)
logging.getLogger().addHandler(logging.handlers.SysLogHandler())
# default logs to syslog at (localhost, 514) with facility LOG_USER
# you can change the default to use e.g. Unix domain sockets and a
# different facility
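For example, a sketch of that non-default setup (the /dev/log path and
LOG_LOCAL0 facility here are just illustrative assumptions):

import logging, logging.handlers

# Use the local syslog daemon's Unix domain socket instead of UDP to
# localhost:514, and a non-default facility.
handler = logging.handlers.SysLogHandler(
    address='/dev/log',
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
logging.getLogger().addHandler(handler)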
So you're experienced enough and have time enough to write your own
logger module, but too much of a newbie to use a module which is part
of Python's included batteries? If you're writing something like
logging to learn about it and what the issues are, that's fair enough.
But I can't see what you mean by overkill, exactly. The three lines
above (or thereabouts) will, I believe, let you log to syslog and to
stdout...which is what you say you want to do.
> I want to write other modules, and my thinking is that it makes sense
> for those modules to use the "logger" module to do the logging, if and
> only if the parent using the other modules is also using the logger
> module.
>
> In other words, I don't want to force someone to use the "logger"
> module just so they can use my other modules, even if the "logger"
> module is installed ... but I also want to take advantage of it if I'm
> using it.
>
> Now that I've written that, I'm not sure that makes a whole lot of
> sense. It seems like I could say, "hey, this person has the 'logger'
> module available, let's use it!".
>
> Thoughts?
Well, the logging package is available in Python and ready for use and
pretty much battle tested, so why not use that? Are you planning to
use third-party libraries in your Python work, or write everything
yourself? If you are planning to use third party libraries, how would
their logging be hooked into your logger module? And if not, is it
good to have two logging systems in parallel?
Of course as the maintainer of Python's logging package, you'd expect
me to be biased in favour of it. You maybe shouldn't let that sway
you ;-)
Regards,
Vinay Sajip
Thanks for your insights, Vinay, and thank you also for writing
packages such as logging. The word 'overkill' was a poor choice on my
part! I should have said, "I don't quite understand the logging module
yet, but I am comfortable with the syslog module's two functions,
openlog and syslog".
I wrote my own logger module *partly* to gain the experience, and
partly to do the following:
1) In debug mode, send what would have gone to syslog to STDOUT or
STDERR
2) In non-debug mode, use /dev/log or localhost:514 depending on what
is set
3) Allow for multiple levels of logging beyond INFO, WARNING, CRIT ...
essentially allow multiple levels of INFO depending on how much detail
is desired. A high level of messaging when programs are running
poorly is desired, but when programs are running smoothly, I don't
need to send as much to syslog.
I started in with your logging package, but I think I simply got ahead
of myself. I definitely agree that writing my own wrappers around
syslog to do what I want might be a duplication of effort. At this
point I think I'm ready to go back to your logging package and see
what I can do; if you have words of advice regarding 1-3 above, I'd
certainly appreciate it.
Now I'll go to your example above and see what it does. Thank you!
Pete
My own impression of the logging module, formed from trying to use its
documentation in the past, is that it's somewhat unapproachable, and
difficult to use for simple purposes.
I am happy to say that, now that I've seen the current (3.1) documentation,
it has improved to the point where I would be happy to try using it again.
Thanks for your long-term maintenance of this package.
All the "what if the application is not using my logger module" is dealt
with by the logging module.
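The usual pattern for library code (a sketch, assuming Python 2.7/3.1+
where logging.NullHandler is available) is to log to a named logger and
let the application decide whether anything actually gets emitted:

# mymodule.py -- library code just logs; the application configures handlers
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())  # avoids "no handlers" warnings

def do_work():
    logger.debug("doing work")  # only emitted if the app set up logging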
And I'm not its designer in any way, so my advice is completely
objective :-)
It's definitely worth spending some time ramping up with it.
JM
Hi Steve,
Thanks for the positive feedback. The initial documentation for the
logging package, because it lives in the library section of the
overall documentation, was focused more on completeness of coverage
for reference usage, rather than a more tutorial-based approach.
Thanks to work by Doug Hellmann and others, the documentation has
grown, over time, more accessible to Python novices. It's still not
perfect, and I hope to be able to improve its clarity in the future,
by getting help where possible from people who are better at technical
writing than I am.
I'm reviewing the documentation at the moment, as it happens, and it
still seems hard to put together a structure which is good for
everyone. A full treatment, it seems to me, would talk a little
about the detail of why things work as they do; but a lot of the time,
people are just interested in getting going with the package, and less
interested in the whys and wherefores. But for people trying to do
more than the basics, that deeper understanding is sometimes
necessary. The hard part is satisfying all audiences in one document!
Regards,
Vinay Sajip
By "debug mode", do you mean the value of the __debug__ variable, or
something else (e.g. a flag in your application)?
You could certainly do something like (in your logging initialization
code):
if __debug__:
    handler = logging.StreamHandler()
else:
    # use domain socket, UDP, etc.
    handler = logging.handlers.SocketHandler(...)
logger.addHandler(handler)
where logger is the root logger or some other high-level logger in
your application.
By the way, are you aware that accessing syslog via openlog etc. may
not be thread-safe, at least in some environments? Search the Web for
"syslog openlog thread" for more info.
You can certainly add additional levels to logging (see addLevelName),
but I'm not sure that's what you really need. Generally, I find that
when there are problems to be debugged, I get more benefits from using
the logger hierarchy: I keep the level at logging.DEBUG but just log
different things to different loggers. Just as a fr'instance, if I
were logging the parsing of HTTP requests, I might use loggers named
'request', 'request.headers', 'request.headers.cookies',
'request.body', 'request.body.multipart' etc. When everything is
working well, I have the verbosity of these loggers turned low by e.g.
setting the level for the 'request' logger to WARNING or higher; when
I want to debug header processing in more detail I might set the level
of the 'request.headers' logger to DEBUG, which would output events
from request header processing (but not the body), or just turn up the
'request.headers.cookies' level to look in more detail at what's
happening while processing "Cookie:" headers.
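A minimal sketch of that approach (the 'request.*' logger names are just
the hypothetical example above):

import logging

logging.basicConfig(level=logging.DEBUG)  # root handler passes everything

# Keep the whole 'request' subtree quiet when things run smoothly...
logging.getLogger('request').setLevel(logging.WARNING)
# ...then open up only the part being debugged.
logging.getLogger('request.headers').setLevel(logging.DEBUG)

logging.getLogger('request.body').debug('not shown (inherits WARNING)')
logging.getLogger('request.headers').debug('shown in detail')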
Regards,
Vinay Sajip
Actually, `'foo' in sys.modules.keys()` is doubly slow, because first the
dict must be scanned to create a list, and then the list must be scanned
linearly to test for 'foo'.
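A quick way to see the difference (a sketch, assuming Python 2, where
dict.keys() builds a list):

import timeit

setup = "import sys"
# Hashed membership test directly against the dict.
print timeit.timeit("'foo' in sys.modules", setup=setup)
# Builds a list of all keys first, then scans it linearly.
print timeit.timeit("'foo' in sys.modules.keys()", setup=setup)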
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/
"Many customs in this life persist because they ease friction and promote
productivity as a result of universal agreement, and whether they are
precisely the optimal choices is much less important." --Henry Spencer
Simple answer: don't
The main logging docs should be reference material, but the top of the
docs should link to a tutorial (or the other way around, though I think
the Python docs have generally preferred to make the primary document the
reference). Trying to make one page serve all documentation purposes
rarely works.