flush overflow counter debug


Jason

Dec 4, 2012, 3:04:24 PM
to sqlal...@googlegroups.com
After upgrading to SQLAlchemy 0.7.9 I now receive the error "FlushError: Over 100 subsequent flushes have occurred within session.commit() - is an after_flush() hook creating new objects?", which was introduced by http://docs.sqlalchemy.org/en/latest/changelog/changelog_07.html#change-75a53327aac5791fe98ec087706a2821 in the changelog.

I don't have any after_flush event handlers. I do have a before_flush event handler that changes the state of a related object, but that doesn't sound like what the error is describing.

How can I debug this further? I am doing this within a Pyramid application, so I am somewhat removed from the commit logic.

Thanks,

Jason

Michael Bayer

Dec 4, 2012, 3:16:24 PM
to sqlal...@googlegroups.com
this error traps the condition where dirty state remains in the Session after a flush has completed. This is possible if an after_flush hook has added new state, or perhaps also if a mapper-level after_update/after_insert hook, or even a before_update/before_insert hook, has modified the flush plan, which is not appropriate in any case.

the best way is to actually create an after_flush() hook with a "pdb.set_trace()" in it; in there, you'd just look at session.new, session.dirty, and session.deleted to ensure that they are empty.
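A minimal, self-contained sketch of the kind of hook Michael describes, runnable against an in-memory SQLite database. The `Widget` model, the engine setup, and the `flushes` list are my own scaffolding, not from the thread; in a real session you would uncomment the `pdb.set_trace()` line and inspect the collections by hand:

```python
import pdb  # noqa: F401 -- used interactively when the line below is uncommented
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Widget(Base):
    # hypothetical model, just to make the sketch runnable
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

flushes = []

@event.listens_for(session, 'after_flush')
def debug_after_flush(session, flush_context):
    # Snapshot the session's pending collections at each flush. If commit()
    # keeps flushing, the repeated entries here show what keeps
    # re-dirtying the session.
    flushes.append((len(session.new), len(session.dirty), len(session.deleted)))
    # To inspect interactively instead, drop into the debugger:
    # pdb.set_trace()

session.add(Widget())
session.commit()
print(len(flushes))  # a healthy commit triggers exactly one flush
```

Attaching the listener to a `sessionmaker` instead of a single session instance works the same way and covers every session it creates.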


Jason

Dec 4, 2012, 3:35:40 PM
to sqlal...@googlegroups.com
Does this mean there is a limit to the number of queries I can run in a transaction?

For example, I am looping about 20 times; in each iteration I insert one or two rows and run at least one query. There may be some additional implicit queries when accessing relationship properties. If I set Session.autoflush to False before the loop and back to True afterwards, it works.

Jason

Dec 4, 2012, 3:45:52 PM
to sqlal...@googlegroups.com
Disregard that, I spoke too soon. There is something going on after it starts the commit process.

Jason

Dec 4, 2012, 4:15:03 PM
to sqlal...@googlegroups.com
Ok, I figured out the cause, but not the solution. I am using a mutable type for hstore columns. I have a UserDefinedType for HStore that just passes everything through to psycopg2's hstore type:

import copy

from sqlalchemy.types import UserDefinedType
from sqlalchemy.ext.mutable import Mutable


class HStore(UserDefinedType):
    """SQLAlchemy type that passes through values to be handled by a
    psycopg2 extension type.
    """
    type_name = 'HSTORE'

    def get_col_spec(self):
        return self.type_name

    def bind_processor(self, dialect):
        return None

    def result_processor(self, dialect, coltype):
        return None

    def is_mutable(self):
        return True

    def copy_value(self, value):
        return copy.copy(value)


class MutationDict(Mutable, dict):

    @classmethod
    def coerce(cls, key, value):
        "Convert plain dictionaries to MutationDict."
        if not isinstance(value, MutationDict):
            if isinstance(value, dict):
                return MutationDict(value)
            # this call will raise ValueError
            return Mutable.coerce(key, value)
        return value

    def __setitem__(self, key, value):
        "Detect dictionary set events and emit change events."
        dict.__setitem__(self, key, value)
        self.changed()

    def __delitem__(self, key):
        "Detect dictionary del events and emit change events."
        dict.__delitem__(self, key)
        self.changed()

The column definition I use is:

some_attrs = Column(MutationDict.as_mutable(HStore))

Then, in the actual transaction, I copy the value from one object with that column to another object with the same definition:

newobject.some_attrs = other_object.some_attrs

If I comment out that line there is only a single flush at commit time.

It looks correct according to the examples I have seen, but if you know why it keeps marking the objects dirty, please let me know.


Jason

Dec 4, 2012, 4:38:54 PM
to sqlal...@googlegroups.com
It looks like copy.copy reconstructs the dict by calling __setitem__ for each item, resulting in it being marked dirty. I had to change the copy code to:

    def copy_value(self, value):
        try:
            # Use dict.copy(), because copy.copy will call __setitem__ on the
            # value causing it to be marked dirty, which could result in
            # an infinite loop of flushing dirty copies if an hstore is
            # copied to another hstore column. 
            return value.copy()
        except Exception:
            return None 
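The mechanism here can be demonstrated with the stdlib alone: copy.copy on a dict subclass falls back to the pickle-protocol reduce path, which rebuilds the copy item by item through the subclass's overridden __setitem__, while dict.copy() returns a plain dict without invoking it. TrackingDict below is a hypothetical, simplified stand-in for MutationDict that just counts __setitem__ calls instead of calling changed():

```python
import copy

class TrackingDict(dict):
    """Simplified stand-in for MutationDict: counts __setitem__ calls."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.set_calls = 0

    def __setitem__(self, key, value):
        # MutationDict would call self.changed() here, dirtying its parents.
        self.set_calls += 1
        dict.__setitem__(self, key, value)

d = TrackingDict({'a': 1, 'b': 2})

# copy.copy rebuilds the new TrackingDict one item at a time via __setitem__
c1 = copy.copy(d)
print(c1.set_calls)      # 2 -- one call per item

# ...whereas dict.copy() returns a plain dict and never hits __setitem__
c2 = d.copy()
print(type(c2) is dict)  # True
print(d.set_calls)       # 0 -- the original was never touched either
```

This matches the fix above: routing copy_value through value.copy() sidesteps the __setitem__-driven reconstruction entirely (at the cost of the copy no longer being a MutationDict).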

Michael Bayer

Dec 4, 2012, 5:22:45 PM
to sqlal...@googlegroups.com
have you tried 0.8, which now provides HSTORE built in?

it's not apparent from this code fragment why your flush process is producing residual state. I'd need a fully runnable and succinct test case to analyze exactly what's going on.


