DelayedJob and before_destroy callback issue


Chris McCann

Dec 17, 2014, 7:55:26 PM12/17/14
to sdr...@googlegroups.com
SD Ruby,

I'm using delayed_job_active_record 4.0.2 in a Rails 4.1.5 app.  I came across some behavior in DJ that I think is a bug but would like other opinions.

My app pushes data to two third-party services, so when the related ActiveRecord object is deleted, the data tied to it needs to be cleaned up on those services.  This is a perfect task for a background job.

I have a before_destroy callback on the SourceImage object:

before_destroy :destroy_recognition_targets

And the callback looks like this:

Delayed::Job.enqueue RecognitionTargetDestroyerJob.new(src_image) 

The problem I found is that DJ fails when it tries to deserialize the data that's serialized as YAML in the job's :handler field (you can see it in the database).  Since it can't deserialize the object the job just hangs (I would expect an actual error to be thrown, but that's another issue).

There was a Github issue posted about this that was closed.  I've added a new one.

Can anyone attest to whether this used to work in DJ?  It seems to me that it did, and the linked issue above seems to say the same.  

FYI, the workaround I put in place was to just pass the relevant attributes to the DJ job via an OpenStruct, and that works fine.

Cheers,

Chris

Ylan Segal

Dec 17, 2014, 8:08:08 PM12/17/14
to sdr...@googlegroups.com
Chris,

I have been burned by the serialization / de-serialization issues in delayed job before. As a matter of fact, I have a shared spec for all my job classes that makes sure serialization and de-serialization work fine, to try to catch this before shipping.

I usually have a shared example like:

shared_examples 'a job implementation' do
  it 'can be serialized with Marshal' do
    expect {
      Marshal.load(Marshal.dump(subject))
    }.to_not raise_error
  end

  it 'can be serialized with YAML' do
    expect {
      YAML.load(YAML.dump(subject))
    }.to_not raise_error
  end
end

And then call that from in my job class specs:

describe SomeJob do
  subject { SomeJob.new(whatever) } # Ensure you are setting up correctly

  it_behaves_like 'a job implementation'

  # Other tests for class go here.
end

Hope this helps,


Ylan Segal
yl...@segal-family.com
> --
> --
> SD Ruby mailing list
> sdr...@googlegroups.com
> http://groups.google.com/group/sdruby
> ---
> You received this message because you are subscribed to the Google Groups "SD Ruby" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to sdruby+un...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Ian Young

Dec 17, 2014, 10:07:37 PM12/17/14
to sdr...@googlegroups.com
Yeah, your workaround of passing simpler data is IMO actually a best
practice. I have grown to hate DelayedJob's "feature" of serializing
whole Ruby objects. Throwing it away in favor of simpler and more
reliable JSON is one of the best decisions Resque made.

One has not known true despair until one has stared into a Ruby 1.8->1.9
migration in which all the DelayedJob records will break because of
differences in the YAML parsers.

Richard Bishop

Dec 18, 2014, 1:59:01 PM12/18/14
to sdr...@googlegroups.com
I'm just here to 2nd/3rd/4th/5th/whatever not relying on serialization/deserialization of Ruby objects into databases and message queues. Stick to integers and then do a find.
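Sketched in plain Ruby, that advice looks something like this (the class and attribute names below are made up for illustration; in a real app `perform` would start with a `find`):

```ruby
# Hypothetical job that stores only the record's integer id rather than
# the ActiveRecord object itself.
class PushToServiceJob
  attr_reader :src_image_id

  def initialize(src_image_id)
    @src_image_id = src_image_id
  end

  def perform
    # In a real app: src_image = SrcImage.find(src_image_id), then push
    # its data to the third-party service. Nothing to look up here.
  end
end

job = PushToServiceJob.new(42)

# A job holding only a bare integer round-trips through Marshal (and
# YAML) without surprises.
copy = Marshal.load(Marshal.dump(job))
copy.src_image_id # => 42
```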

Chris McCann

Dec 18, 2014, 2:08:27 PM12/18/14
to sdr...@googlegroups.com
The issue in this particular case is that the :before_destroy callback fires off an asynchronous background task that needs data from the object being deleted in order to do its thing.  That ActiveRecord model object has been destroyed by the time the background task executes, so DB lookup isn't an option.

Here's what I ended up using, in case someone else finds themselves in the same situation:

  # a callback to delete 3rd party recognition
  # service target information.  Having to pass in
  # an OpenStruct because DJ doesn't seem to be able
  # to properly deserialize a deleted object
  def destroy_recognition_targets
    src_image = OpenStruct.new(id: self.id, target_id: self.target_id)
    Delayed::Job.enqueue RecognitionTargetDestroyerJob.new(src_image)
  end

Using OpenStruct here leaves the code in the Job unchanged in terms of reading the object attributes, such as:

src_image.target_id
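For what it's worth, an OpenStruct built from plain attribute values survives a serialization round-trip cleanly, which is why the job's attribute reads don't have to change (the values below are made up):

```ruby
require "ostruct"

src_image = OpenStruct.new(id: 7, target_id: "abc123")

# Round-trip the payload the way a queue would; an OpenStruct holding
# only simple values comes back intact where a destroyed AR object
# did not.
copy = Marshal.load(Marshal.dump(src_image))
copy.target_id # => "abc123"
```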

Also, one of the DJ maintainers on Github closed the issue I opened, as shown here.  I don't disagree with his logic.



Richard Bishop

Dec 20, 2014, 1:20:35 PM12/20/14
to sdr...@googlegroups.com
> The issue in this particular case is that the :before_destroy callback fires off an asynchronous background task

I don't think the combination of before_destroy and an asynchronous background job is the right solution. How many places in your application can this particular model be destroyed from? Maybe you don't need a callback at all. One option would be to schedule the background job wherever you would normally destroy the model, and then actually destroy the model inside the background job. Optionally, you could use a boolean of some sort to decide whether to render that model instance in the UI, to handle the gap between when the user triggered the destroy and when the background job runs.
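A minimal sketch of that shape, with stand-in queue and model objects (nothing here is a Rails or DJ API; all names are hypothetical):

```ruby
require "ostruct"

# Stand-in for a background queue: enqueue collects jobs, work_off runs them.
class FakeQueue
  def self.jobs
    @jobs ||= []
  end

  def self.enqueue(job)
    jobs << job
  end

  def self.work_off
    jobs.shift.perform until jobs.empty?
  end
end

# Hypothetical job that owns the whole teardown: clean up the third-party
# targets first, then destroy the record itself.
class SrcImageDestroyerJob
  def initialize(src_image)
    @src_image = src_image
  end

  def perform
    # 1. delete the remote recognition targets here
    # 2. then destroy the local record
    @src_image.destroyed = true
  end
end

src_image = OpenStruct.new(id: 42, marked_for_deletion: false, destroyed: false)

# Where the controller would otherwise have called src_image.destroy:
src_image.marked_for_deletion = true # flag so the UI can hide it meanwhile
FakeQueue.enqueue(SrcImageDestroyerJob.new(src_image))
FakeQueue.work_off
src_image.destroyed # => true
```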