
Is 'isinstance()' the right thing?


Ralf Juengling

Apr 30, 2002, 3:04:13 AM
Hi,

when I want to make sure within a function that it can deal with
the arguments passed, the only choice is to check their type via
'isinstance()'. However, in general the type of an argument is
not the crux; what matters is whether the argument supports a certain
'interface' or 'protocol'. Thus, a function 'hasinterface()'
would seem more natural (more 'Pythonic') to me.

So, what is the proper, 'pythonic' way of argument checking?

Regards,
Ralf
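The 'hasinterface()' Ralf asks about could be sketched on top of hasattr(). Everything here is hypothetical (Python has no such function or protocol object); a "protocol" is reduced to a tuple of required method names:

```python
# Hypothetical sketch of a hasinterface() check built on hasattr().
# A 'protocol' here is just a tuple of required method names -- an
# assumption, since Python defines no interface object of its own.

def hasinterface(obj, method_names):
    """Return True if obj provides every named method as a callable."""
    return all(callable(getattr(obj, name, None)) for name in method_names)

LIST_LIKE = ("append", "__len__")

print(hasinterface([], LIST_LIKE))     # True
print(hasinterface("abc", LIST_LIKE))  # False -- strings have no append
```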

Alex Martelli

Apr 30, 2002, 4:58:07 AM
Ralf Juengling wrote:

PEP 246, but I haven't yet been able to communicate the how's
and why's of it to the BDFL.

Meanwhile, http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/52291
may help.


Alex

Erik Max Francis

Apr 30, 2002, 5:08:58 AM
Ralf Juengling wrote:

The "Pythonic" way of argument checking is to simply use it and catch
the exceptions that will be thrown if it doesn't behave as planned.

Depending on what you want, some of the is... functions in the operator
module might be to your liking.

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, US / 37 20 N 121 53 W / ICQ16063900 / &tSftDotIotE
/ \ Life is one long process of getting tired.
\__/ Samuel Butler
Interstelen / http://www.interstelen.com/
A multiplayer, strategic, turn-based Web game on an interstellar scale.
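The "just use it and catch the exceptions" style Erik describes might look like this (the function and its error message are illustrative, not from the thread):

```python
# Sketch of the EAFP ("just use it") style: no type check up front;
# a failure surfaces as an exception, which we re-raise with a
# clearer message.  Names here are illustrative assumptions.

def total(seq):
    try:
        return sum(seq)
    except TypeError:
        raise TypeError("total() needs an iterable of numbers, got %r" % (seq,))

print(total([1, 2, 3]))  # 6
```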

Ralf Juengling

Apr 30, 2002, 8:34:49 AM
Erik Max Francis <m...@alcyone.com> writes:

> Ralf Juengling wrote:
>
> > So, what is the proper, 'pythonic' way of argument checking?
>
> The "Pythonic" way of argument checking is to simply use it and catch
> the exceptions that will be thrown if it doesn't behave as planned.

Hm, you risk destroying something this way (i.e. when an exception
occurs too late).

>
> Depending on what you want, some of the is... functions in the operator
> module might be to your liking.

Thanks, I wasn't aware of these (I looked for them in the types module).
*BUT*, these (e.g. isSequenceType) work only for C extension types, right?

Ralf

Bill Dozier

Apr 30, 2002, 9:34:13 AM
Erik Max Francis <m...@alcyone.com> wrote in message news:<3CCE5F2A...@alcyone.com>...

> Ralf Juengling wrote:
>
> > when I want to make sure within a function that it can deal with
> > the arguments passed, the only choice is to check its type via
> > 'isinstance()'.
>
> > So, what is the proper, 'pythonic' way of argument checking?
>
> The "Pythonic" way of argument checking is to simply use it and catch
> the exceptions that will be thrown if it doesn't behave as planned.

IMHO, "isinstance()" is usually not the right thing. Whether it's
"pythonic" or not, I can't say.

When you have code that checks the type of an object and then treats
it accordingly, it's usually a sign that you need to use some
polymorphism. Of course, you don't always have the luxury of fixing
the design.

Bill

Erik Max Francis

Apr 30, 2002, 1:54:00 PM
Ralf Juengling wrote:

> Thanks, I wasn't aware of these (I looked for them in the types module).

I agree, operator seems a strange place for them.

> *BUT*, these (eg. isSequenceType) work only for C extension types,
> right?

No, they work with any object:

>>> import operator
>>> operator.isSequenceType([1, 2, 3])
1
>>> class MyList(list): pass
...
>>> operator.isSequenceType(MyList())
1

Alex Martelli

Apr 30, 2002, 5:02:24 PM
Erik Max Francis wrote:
...

> No, they work with any object:
>
> >>> import operator
> >>> operator.isSequenceType([1, 2, 3])
> 1
> >>> class MyList(list): pass
> ...
> >>> operator.isSequenceType(MyList())
> 1

>>> import operator
>>> class X: pass
...
>>> operator.isSequenceType(X())
1
>>>


Alex

Fredrik Lundh

Apr 30, 2002, 5:06:52 PM
Erik Max Francis wrote:

> > *BUT*, these (e.g. isSequenceType) work only for C extension types,
> > right?
>
> No, they work with any object:
>
> >>> import operator
> >>> operator.isSequenceType([1, 2, 3])
> 1
> >>> class MyList(list): pass
> ...
> >>> operator.isSequenceType(MyList())
> 1

define "work" :

>>> class NotAList: pass
...
>>> operator.isSequenceType(NotAList())
1
>>> for item in NotAList(): print item
Traceback (most recent call last):
AttributeError: NotAList instance has no attribute '__getitem__'

(the isSequenceType predicate checks if the object's type
implements the C-level __getitem__ slot. that's not always
what you want...)

</F>
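A rough Python-level analogue of the check Fredrik describes is to test the object's type for `__getitem__` (this is not the same as the C-level slot test, just an illustration of why such checks are crude):

```python
# Rough analogue of the slot check Fredrik describes: test whether
# the object's type defines __getitem__.  As he notes, the mere
# presence of the method says nothing about whether iteration or
# indexing will actually succeed.

def looks_like_sequence(obj):
    return hasattr(type(obj), "__getitem__")

print(looks_like_sequence([1, 2, 3]))  # True
print(looks_like_sequence("abc"))      # True -- strings qualify too
print(looks_like_sequence(42))         # False
print(looks_like_sequence({"a": 1}))   # True -- so do mappings
```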


Delaney, Timothy

Apr 30, 2002, 8:24:00 PM
> From: Ralf Juengling [mailto:juen...@informatik.uni-freiburg.de]

>
> Erik Max Francis <m...@alcyone.com> writes:
>
> > Ralf Juengling wrote:
> >
> > > So, what is the proper, 'pythonic' way of argument checking?
> >
> > The "Pythonic" way of argument checking is to simply use it and catch
> > the exceptions that will be thrown if it doesn't behave as planned.
>
> Hm, you risk destroying something this way (i.e. when an
> exception occurs too late).

Perhaps - but you also have the same problem if something implements an
interface, but doesn't adhere to the contract that is stated for that
interface.

Your unit tests should catch these problems.

Tim Delaney


Alex Martelli

May 9, 2002, 7:56:10 AM
Ralf Juengling wrote:
...

>> The "Pythonic" way of argument checking is to simply use it and catch
>> the exceptions that will be thrown if it doesn't behave as planned.
>
> Hm, you risk destroying something this way (i.e. when an exception
> occurs too late).

I have observed this in about 5% to 10% of cases -- when I need a function
to be somewhat "atomic", performing on an object argument either 'all' of a
set of state alterations, or none -- whatever the type of the actual
argument. 90%+ of the time, of course, such "failsafe" behavior is not
needed -- calling the function with a weird argument is a programming error
and it doesn't matter if some state is altered before throwing an
exception. However, it's exactly for that small but important remaining
slice of cases that I wrote the "accurate LBYL" Cookbook recipe I pointed
you to earlier (on ActiveState's Python Cookbook site).
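A minimal sketch in that spirit (the actual Cookbook recipe differs in detail; the function and its checks are assumptions for illustration):

```python
# Sketch of "accurate LBYL": bind every method the function will need
# and materialize its inputs *before* mutating anything, so a type
# mismatch cannot leave the object half-modified.

def safe_extend(container, items):
    try:
        append = container.append      # will we be able to append?
        items = list(items)            # can we consume items fully?
    except (AttributeError, TypeError):
        raise TypeError("safe_extend: incompatible arguments")
    # All checks passed; only now is any state altered.
    for item in items:
        append(item)

data = [1, 2]
safe_extend(data, (3, 4))
print(data)  # [1, 2, 3, 4]
```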


>> Depending on what you want, some of the is... functions in the operator
>> module might be to your liking.
>
> Thanks, I wasn't aware of these (I looked for them in the types module).
> *BUT*, these (e.g. isSequenceType) work only for C extension types, right?

Not quite: what it checks is whether the object's type implements the
C-API-level "sequence" interface -- which the BDFL is musing about
deprecating anyway, since, with slice accesses now best obtained through
item methods, and iterators as a separate concept, the "sequence" stuff
just duplicates the mapping and numerical stuff. So, I wouldn't rely on it.

PEP 246 would allow MORE than just checking if an object, exactly as it
is, currently implements an interface: it would allow the object itself
or a third-party adapter to interpose an Adapter that lets you use the
object "just as if" it implemented the interface. All the benefits of
COM's QueryInterface approach (vastly superior to C++'s dynamic_cast,
since it lets the object build needed parts of itself JIT, on-the-fly)
PLUS those of COM's IServiceProvider (since no identity constraint on
the returned object). Of course if for some weird reason you needed to
check if an Adapter is in use, that would be easy too:

conformant = adapt(argument, requiredProtocol)
if conformant is argument:
    " no adaptation, work on the original object "
else:
    " adaptation occurred, whatever you want to do about it "

<sigh>. So far I think I managed to explain it well enough to get the
BDFL to agree this would be good IF Python had 'interfaces' as a concept
separate from types and classes, so that 'requiredProtocol' would have
to be an 'interface'. I just can't manage to explain why types or
classes would be just as good in the 'requiredProtocol' role as would
be 'interfaces', so there's no need to wait for the latter and 246
could just be allowed to go ahead now. Oh well... one day I'll get some
inspiration for how to go about it...


Alex

Ralf Juengling

May 28, 2002, 9:00:19 AM
Alex Martelli <al...@aleax.it> wrote in message news:<uttC8.40554$zW3.4...@news1.tin.it>...
> Ralf Juengling wrote:
> ...

> <sigh>. So far I think I managed to explain it well enough to get the
> BDFL to agree this would be good IF Python had 'interfaces' as a concept
> separate from types and classes, so that 'requiredProtocol' would have
> to be an 'interface'. I just can't manage to explain why types or
> classes would be just as good in the 'requiredProtocol' role as would
> be 'interfaces', so there's no need to wait for the latter and 246
> could just be allowed to go ahead now. Oh well... one day I'll get some
> inspiration for how to go about it...

I must admit, I don't know the precise definition of an 'interface'
nor that of a 'protocol' (are there widely accepted ones?).

My idea of a protocol is that of a set of functions or methods
which are to be used in a certain way (the functions' signatures) for
a certain purpose (what the functions do). The latter is specified
by the inventor of the protocol.

Until recently I used 'interface' and 'protocol' synonymously, but just
realized that 'interfaces' seem to be a more general concept, comparable
to classes. (One can inherit from an existing interface and modify it,
and so on.)

In the rest of this post, I focus on 'protocols' as roughly defined
above. What I still do not understand:
Why aren't types the right machinery for specifying the support
of a protocol? Whenever someone introduces a new protocol, he would
set up a new abstract type (or class), say (to become more Python-related
at the end)

class _iterator_:
    """This is an abstract class defining the iterator protocol."""
    def __iter__(self): raise NotImplementedError
    def next(self): raise NotImplementedError

for the iterator protocol. Any class implementing the iterator
protocol would be a subclass of '_iterator_' (probably among others)
to signal that it implements this protocol.
To make sure that an argument of my function is indeed an iterator,

assert isinstance(arg, _iterator_)

would be sufficient then.
Since we have multiple inheritance, it would not be a big deal to
support this kind of type-based protocol checking.
So, why isn't it there?

Regards,
Ralf

Ralf Juengling

May 28, 2002, 9:13:41 AM
"Fredrik Lundh" <fre...@pythonware.com> wrote in message news:<MHDz8.34856$n4.74...@newsc.telia.net>...

But even the docs can't tell me what I want.
Snippet from the operator module documentation:

"
isSequenceType(o)
Returns true if the object o supports the sequence protocol. This
returns true for all objects which define sequence methods in C, and
for all instance objects. Warning: There is no reliable way to test if
an instance supports the complete sequence interface since the
interface itself is ill-defined. This makes this test less useful than
it otherwise might be.
"

Why is the sequence interface 'ill-defined'? This is just a matter of
agreement on a definition, I guess?

BTW: Do the Python docs use 'interface' and 'protocol' synonymously?

Ralf

Kragen Sitaker

May 28, 2002, 7:18:48 PM
juen...@informatik.uni-freiburg.de (Ralf Juengling) writes:
> Why aren't types the right machinery for specifying the support
> of a protocol? Whenever someone introduces a new protocol, he would
> set up a new abstract type (or class), say (to become more Python-related
> at the end)
>
> class _iterator_:
>     """This is an abstract class defining the iterator protocol."""
>     def __iter__(self): raise NotImplementedError
>     def next(self): raise NotImplementedError
>
> for the iterator protocol. Any class implementing the iterator
> protocol would be a subclass of '_iterator_' (probably among others)
> to signal that it implements this protocol.

The problem is that once you provide a mechanism for requiring
arguments to be derived from a particular type, people will start
requiring them to be derived from lots of different types, and some of
those types won't be interface types.

The result, in languages that support this, like C++ and Java, is
tight coupling between classes: class A requires that arguments to its
bletch() method be derived from class B, which typically means they
have to inherit class B's bugs and stupid design mistakes, and also
means that they have to be written by somebody who knew about class B.

In one Python application I currently work on, we've been able to
switch between using lists and Numeric arrays in several places with
very little difficulty.

James T. Dennis

Jun 12, 2002, 11:52:25 AM
Kragen Sitaker <kra...@pobox.com> wrote:

> juen...@informatik.uni-freiburg.de (Ralf Juengling) writes:
>> Why aren't types the right machinery for specifying the support
>> of a protocol? ...

> The problem is that once you provide a mechanism for requiring
> arguments to be derived from a particular type, people will start
> requiring them to be derived from lots of different types, and some of
> those types won't be interface types.

> The result, in languages that support this, like C++ and Java, is
> tight coupling between classes: class A requires that arguments to its
> bletch() method be derived from class B, which typically means they
> have to inherit class B's bugs and stupid design mistakes, and also
> means that they have to be written by somebody who knew about class B.

> In one Python application I currently work on, we've been able to
> switch between using lists and Numeric arrays in several places with
> very little difficulty.

However, we still want *some* way to use the introspection features
in appropriate ways (sometimes for sanity checking, more often for
application specific reasons).

It seems like we can usually replace type tests with try: except:
blocks, or we can use hasattr() to test for the method or attribute
that implements the desired object feature. For example, the only
sequence types for which hasattr(x, 'isalpha') is true are the string
and user-string types. Thus, if I want to write a generalized function
that can recursively iterate over nested containers (lists, tuples, etc.
possibly CONTAINING lists, tuples, etc. ad nauseam) but DON'T wish to
iterate over strings (which are iterable but are NOT really containers
in most applications) then I can use hasattr(x, 'isalpha') as a way to
stop my recursion.
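The recursion-stopping trick James describes might be sketched like this (the function name is hypothetical):

```python
# James's trick: treat anything with an 'isalpha' attribute (i.e. a
# string type) as a leaf rather than a container, so recursion over
# nested containers never descends into the characters of a string.

def flatten(obj):
    if hasattr(obj, "isalpha"):     # a string: don't recurse into chars
        yield obj
        return
    try:
        it = iter(obj)
    except TypeError:               # not iterable at all: a leaf
        yield obj
        return
    for item in it:
        for leaf in flatten(item):
            yield leaf

print(list(flatten([1, ("ab", [2, "cd"]), 3])))  # [1, 'ab', 2, 'cd', 3]
```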

The problem with such generality is that there are always corner
or unaccounted-for cases. For my "recursive container iteration"
example, I don't know what I'd use to avoid recursing into mmap objects
if one of those were passed to me. Maybe I'd want to iterate on
file or database containers but NOT on the chars in an mmap? Obviously
this depends greatly on the task at hand. If at all possible I'll
restructure a specific case into an exception-handling block.

From what I can tell hasattr() should work through any form of GoF
Decorator, Adapter, Proxy or similar delegated composition objects
*iff* you can find the appropriate attribute to test for.

Any thoughts on that? Counterexamples?

