OneToMany cascade depth


Rien

Jun 16, 2010, 10:51:17 AM
to Ebean ORM
Hi,

We have an object A with a OneToMany relation to object B.

Object B has a OneToMany relation to object C.

All cascade types are ALL.

When we remove object A, all children B are removed, but the
grandchildren C are not.

Is there a maximum to the cascade depth? Or are we missing something
else here?
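
Roughly, the mapping looks like this (a minimal sketch - the class and field names here are just illustrative, not our actual entities; each class would normally be public and live in its own file):

import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Parent entity: deleting an A should cascade to its B's (and, we expected, to the C's).
@Entity
class A {
    @Id
    Long id;

    @OneToMany(cascade = CascadeType.ALL)
    List<B> bs;
}

@Entity
class B {
    @Id
    Long id;

    @ManyToOne
    A a;

    @OneToMany(cascade = CascadeType.ALL)
    List<C> cs;
}

@Entity
class C {
    @Id
    Long id;

    @ManyToOne
    B b;
}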

Thx,

Rien

Rob Bygrave

Jun 17, 2010, 5:37:15 AM
to Ebean ORM

If the beans are loaded there is no effective maximum depth - but for
relationships that are not loaded Ebean will only delete one level of
non-loaded OneToMany.

So for A, B and C above: when deleting A, if the B's are not loaded
then Ebean will delete A and its B's (using a bulk delete statement,
without loading the B's), but it will not delete the C's.

If the B's were loaded, then A, the B's and the C's would all be
deleted.


Thinking about this (cascading down OneToMany's), we could delete
the C's without loading the B's - but Ebean does not currently do that.
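
To make the two cases concrete, something like this (a rough sketch using the illustrative A/B/C mapping sketched earlier in the thread; the exact query/fetch method names may differ between Ebean versions):

import com.avaje.ebean.Ebean;

public class CascadeDepthExample {

    // Case 1: only A is loaded. Deleting cascades one level:
    // A and its B's are removed (bulk delete), the C's are left behind.
    public static void deleteWithoutLoadingChildren(Long id) {
        A a = Ebean.find(A.class, id);
        Ebean.delete(a);
    }

    // Case 2: the B's are loaded as well. Deleting now walks the loaded B's,
    // so A, the B's and the C's are all removed.
    public static void deleteWithChildrenLoaded(Long id) {
        A a = Ebean.find(A.class)
                .fetch("bs")               // load the B's before deleting
                .where().idEq(id)
                .findUnique();
        Ebean.delete(a);
    }
}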

Rien

Jun 17, 2010, 5:44:33 AM
to eb...@googlegroups.com
So you're saying that the same delete command will in one case delete A
& B and in another case A, B & C, depending only on the path the code
has taken (so it's unpredictable)?

If so, I don't think we ever want ambiguity like that in our code; I
would call this a bug.

Is there any reason why Ebean can't delete the C objects every time (no
need to load them)?

Cheers,

Rien

Rob Bygrave

Jun 17, 2010, 6:28:10 AM
to Ebean ORM
> So you're saying that the same delete command
Yes.

> I don't think we want ambiguity like that
I hear that.


> Is there any reason why Ebean can't delete the C objects every time ...

Deleting 'down the tree' (OneToMany relationships) is relatively
straightforward. Deleting 'up the tree' (ManyToOne relationships) is
more difficult, meaning we have to fetch any unloaded foreign keys. For
example, if C has cascade delete on a ManyToOne relationship then it
gets harder.

However, that all said, we recently added a deleteMany() that
takes a collection of Id's... so we could look to replace the current
approach with recursive calls to that method instead. That could be a
better approach with no effective maximum depth, whether the beans are
loaded or not.
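
The shape of that recursive approach would be roughly this (just a sketch to show the idea - the query and delete-by-id calls are illustrative, and deleteMany() itself is the new method mentioned above, whose exact signature may differ):

import java.util.List;
import com.avaje.ebean.Ebean;

public class RecursiveDeleteSketch {

    // Delete an A by id and cascade down the OneToMany tree using ids only,
    // without loading the B or C beans themselves.
    public static void deleteACascading(Object aId) {
        // ids of the B's belonging to this A
        List<Object> bIds = Ebean.find(B.class)
                .where().eq("a.id", aId)
                .findIds();

        if (!bIds.isEmpty()) {
            // ids of the C's belonging to those B's
            List<Object> cIds = Ebean.find(C.class)
                    .where().in("b.id", bIds)
                    .findIds();

            // delete bottom-up so foreign keys are satisfied;
            // this per-id loop is what a deleteMany(ids) call would replace
            for (Object cId : cIds) {
                Ebean.delete(C.class, cId);
            }
            for (Object bId : bIds) {
                Ebean.delete(B.class, bId);
            }
        }
        Ebean.delete(A.class, aId);
    }
}

In the real implementation the recursion would be driven by the deployment metadata rather than hard-coded per entity like this.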

Daryl Stultz

Jun 17, 2010, 2:05:16 PM
to Ebean ORM

On Jun 16, 10:51 am, Rien <rnent...@gmail.com> wrote:
>
> When we remove object A, all children B are removed, but the
> grandchildren C are not.

The "automatic" aspects of ORMs are rather terrifying, because I
didn't write it and haven't written enough code exercising it, it
often seems quite magical. I'm completely comfortable simply setting
cascade behavior on the database, and I *know* it will work right.
It's hard to let some of that security go.

Since grandchildren C are not deleted, you must not have cascade
delete set on the database, right? I currently have 95% of my database
deletes done in older JDBC code, with the rest in OpenJPA, hopefully to
be replaced with Ebean. It seems improper to have the cascade delete on
the database side as this will mess up the cache, both L1 and L2, yes?
It seems there is a lot to do with an ORM to demonstrate that it works
the way you want it to / think it should. Ebean has done much better
than OpenJPA in this regard, but it's not perfect.

Oh well, thanks for listening.

/Daryl

Rien

Jun 22, 2010, 5:49:26 AM
to Ebean ORM
So the magic in databases is not terrifying, but the magic in ORMs is?

Isn't it a bit naive to think databases don't have bugs?

Problem with Ebean is only that it hasn't been used enough yet to iron
out all the bugs.....

Rien

Rob Bygrave

Jun 22, 2010, 7:13:31 AM
to eb...@googlegroups.com
> So the magic in databases is not terrifying, but the magic in ORMs is?

I think Daryl is suggesting there is more magic with an ORM ... which I'd agree with - DB's are pretty explicit with their cascade delete behaviour etc.

This doesn't mean we shouldn't have a look at this issue with the delete cascading. I'm thinking this is in effect a bug and its approach should change to cascade delete with minimal loading (we will need to load foreign keys etc. as we do now).


> Problem with Ebean is only that it hasn't been used enough
> yet to iron out all the bugs

Fair enough. Personally I have not cascade deleted 2+ levels with Ebean and generally don't like 'hard deletes' per se over 'logical deletes' - which is possibly why I never hit this issue. On the other hand there haven't been many bugs logged lately, which is a good sign.

Still, Ebean is fairly sophisticated and it would be nice to have a bigger set of active users. In the meantime it's just a matter of plugging away and doing our best.

Daryl Stultz

Jun 22, 2010, 8:12:36 AM
to Ebean ORM

On Jun 22, 5:49 am, Rien <rnent...@gmail.com> wrote:
> So the magic in databases is not terrifying, but the magic in ORMs is?
>
> Isn't it a bit naive to think databases don't have bugs?

I never said there were no bugs in the rest of my software stack.

> Problem with Ebean is only that it hasn't been used enough yet to iron
> out all the bugs.....

Exactly, and this is a significant statement. My database of choice is
PostgreSQL. I personally do cascade deletes every time I run my unit
tests and there are probably a bajillion cascade deletes done in PG
the world over every day. So there are two points to make: first, PG has
lots of users and lots of code exercising it in lots of ways. Second, I,
personally, have designed lots of databases and exercised PG in lots
of ways. So there's a lot of comfort in that. I don't have that
comfort yet with Ebean or OpenJPA.

Some folks do work in MySQL without using foreign key constraints. I
had a coworker say she didn't use them because it made it too hard to
modify the database schema down the road. This approach terrifies me.
I've had bugs in my code that left orphaned records with this
approach. So I want to use the features of the database to protect the
integrity of the database as much as possible. To some extent that I
haven't discovered yet, I may have to give up some of the security the
database gives me for "untrusted" alternatives in Ebean (I'm the one
who doesn't trust it given my limited experience and the overall
complexity of it).

/Daryl

Daryl Stultz

Jun 22, 2010, 8:19:54 AM
to Ebean ORM
On Jun 22, 7:13 am, Rob Bygrave <robin.bygr...@gmail.com> wrote:
> Fair enough. Personally I have not cascade deleted 2+ levels with Ebean and
> generally don't like 'hard deletes' per se over 'logical deletes' - which
> is possibly why I never hit this issue.

So what's the right setup for cascade delete? Do you have cascade
delete set on the database or only in entities?

I remember studying queries OpenJPA generated for a cascade delete. I
believe it deleted the children one at a time, perhaps for optimistic
locking purposes. Having used cascade delete in the database as my
only prior solution, this seems rather inefficient. What are the
things Ebean needs to "be sure to do" with cascade delete?

Rien, you said the grandchildren C were not deleted. Were there
foreign key constraints from C to B? Were the C's orphaned or did the
database throw a referential integrity exception? I'm just trying to
understand your database design and compare it to mine.

/Daryl

Rob Bygrave

Jun 23, 2010, 5:24:07 AM
to eb...@googlegroups.com
> Do you have cascade delete set on the database or only in entities?

If you don't use an L2 cache (or, shortly, the Lucene integration) then using DB cascade deletes will be fine and be more efficient than Ebean doing the cascade delete.

 
> I believe it deleted the children one at a time, perhaps for optimistic
> locking purposes.

Ebean will only do the optimistic concurrency checking for the top level delete - the cascade deletes will not have any optimistic locking checks.



> What are the things Ebean needs to "be sure to do" with cascade delete?

Ebean needs to maintain related L2 cache and/or Lucene indexes.
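
To make the contrast concrete (a sketch - the annotation is the standard JPA mapping Ebean uses, while the entity/column names and the DDL in the comment are generic illustrations):

import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Option 1: cascade delete on the entity only. Ebean issues the child deletes
// itself, so it knows which beans are gone and can maintain the L2 cache
// and/or Lucene indexes.
@Entity
class Parent {
    @Id
    Long id;

    @OneToMany(cascade = CascadeType.ALL)
    List<Child> children;
}

@Entity
class Child {
    @Id
    Long id;

    @ManyToOne
    Parent parent;
}

// Option 2: cascade delete on the database only (no cascade on the entity).
// More efficient, but the child rows disappear inside the database and Ebean
// never sees their ids, e.g. (generic DDL):
//
//   alter table child add constraint fk_child_parent
//     foreign key (parent_id) references parent (id) on delete cascade;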

Rien

Jun 23, 2010, 7:01:23 AM
to eb...@googlegroups.com
Daryl,

Let's not get into the database design we are using; let me just say
it's not going to fix this problem for me :-(.

Rien

Daryl Stultz

Jun 23, 2010, 9:02:48 AM
to Ebean ORM


On Jun 23, 5:24 am, Rob Bygrave <robin.bygr...@gmail.com> wrote:
> If you don't use an L2 cache (or, shortly, the Lucene integration) then using DB
> cascade deletes will be fine and be more efficient than Ebean doing the
> cascade delete.

I think you mean if I use cascade delete on the database and NOT on
the entity, right? Otherwise, Ebean would still issue the child
deletes.

> Ebean will only do the optimistic concurrency checking for the top level
> delete - the cascade deletes will not have any optimistic locking checks.

So if I have cascade delete on the database AND the entity, even with
L2 cache on, everything should be ok, right? (Except that Ebean will
issue child/association deletes when the database has already deleted
them...)

I plan to use L2 cache eventually. Perhaps I'm splitting hairs in
wanting to keep the cascade delete on the database since the end
result will be the same. I suppose by the time I get to it I'll be
comfortable with cascade delete on the entities.

/Daryl

Daryl Stultz

Jun 23, 2010, 9:04:28 AM
to Ebean ORM

On Jun 23, 7:01 am, Rien <rnent...@gmail.com> wrote:
> Daryl,
>
> Let's not get into the database design we are using,

As you wish. I was not looking to criticize or make suggestions, but
was looking to draw on your expertise - to see if I'm doing things
right.

/Daryl

Rob Bygrave

Jun 24, 2010, 5:25:04 AM
to eb...@googlegroups.com
> I think you mean if I use cascade delete on the database
> and NOT on the entity, right?

Correct.


> (Except that Ebean will issue child/association deletes
> when the database has already deleted them...)

Yes, so that won't really work IF Ebean needs to know the Id values of those children to maintain the L2 cache.

Daryl Stultz

Jun 24, 2010, 8:17:56 AM
to Ebean ORM

On Jun 24, 5:25 am, Rob Bygrave <robin.bygr...@gmail.com> wrote:
> > (Except that Ebean will issue child/association deletes
> > when the database has already deleted them...)
>
> Yes, so that won't really work IF Ebean needs to know the Id values of those
> children to maintain the L2 cache.

So if I had a cascade delete on the database, and I deleted the
parent, the children might still be hanging out in the L2 cache. Other
than wasting memory, I don't really see this as a problem since no
query will return references to these deleted children. It's possible
a "find" by id would pull a deleted object from the L2, but in my
application the id would have to come from a stale URL. I believe the
cache can be set to purge after a certain amount of time? Is this a
crazy feature idea: every so often have the L2 cache verify its
members still exist in the database.

Ultimately I'm looking for a "transition path" from my largely JDBC
application to Ebean.

/Daryl

Rob Bygrave

Jun 25, 2010, 6:25:13 PM
to eb...@googlegroups.com
> So if I had a cascade delete on the database, and I deleted the
> parent, the children might still be hanging out in the L2 cache.

Yes.

> Other than wasting memory, I don't really see this as a problem since

Yes - unless the deleted children are in their own Lucene Index.


> Is this a crazy feature idea: every so often have the L2 cache verify
> its members still exist in the database.

Well, I don't think it will be required in practice (at least generally not required so far). My reasoning behind that is that Delete Cascade typically implies a strong 'ownership' type of relationship. The children only exist when the parent exists - they are effectively 'owned' by the parent. In practice this means that generally the parent will be cached and the children accessed via the parent, and not cached by themselves.

If you categorise tables into 3 groups...
- Lookup/Reference   (e.g. Currencies, Countries, State Codes like Order Status etc)
- Things/Legal Entities (e.g. Vehicle, Customer, Organization)
- Transactional/Events/Documents (e.g. Order, Goods Shipped, Credit Note)

Then you are likely to only have 'hard deletes' (as opposed to logical deletes) and potentially delete cascade on the tables that represent 'Transactions, Events or Documents'. Those tables generally will not be as good to cache (high cardinality, lower hit ratio), but if you did cache them you would probably cache the top/root level object and access the children from the root level object (and not bother to cache the children independently).

... well, that is how I see it anyway.

Having Lucene Indexes will likely change the thinking a bit.



Cheers, Rob.

Daryl Stultz

Jun 28, 2010, 12:33:59 PM
to Ebean ORM

On Jun 25, 6:25 pm, Rob Bygrave <robin.bygr...@gmail.com> wrote:
> > Is this a crazy feature idea: every so often have the L2 cache verify
> > its members still exist in the database.
>
> Well, I don't think it will be required in practice (at least generally not
> required so far).

True when starting from scratch with Ebean; it would be useful in my
case as a "transition" from JDBC to Ebean. I would not ask you to spend
time on such a thing.


> Then you are likely to only have 'hard deletes' (as opposed to logical
> deletes)

So a "hard delete" is
Ebean.delete(parent)
and a "logical delete" is a cascade to the children of said parent?

> Having Lucene Indexes will likely change the thinking a bit.

I don't know the first thing about Lucene. I probably don't have much
of a use for it since my app doesn't have anything like "documents".
Still, I search for things with strings, so maybe.

/Daryl

Rob Bygrave

Jun 28, 2010, 4:35:43 PM
to eb...@googlegroups.com
> "hard delete" is ...

A 'hard' delete actually deletes the row. A 'logical' delete updates the status of the row to indicate it is no longer active.

For example, often you don't want to actually delete a 'Customer' because you want to keep all the related transactions (orders etc), so instead you logically delete them, making it so that they never appear to the application.

At this stage Ebean has no built-in support for 'logical' deletes; all deletes in Ebean are 'hard' deletes and actually delete the row(s), so you have to model and code the update for a 'logical delete' yourself.
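
A hand-rolled logical delete typically looks something like this (just a sketch; the entity, the 'active' flag and the query are illustrative application code, not an Ebean feature):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import com.avaje.ebean.Ebean;

@Entity
class Customer {
    @Id
    Long id;

    String name;

    // status flag modelled by the application - Ebean knows nothing special about it
    boolean active = true;
}

class CustomerService {

    // 'logical' delete: the row stays in the database, it is just flagged inactive
    static void logicalDelete(Customer customer) {
        customer.active = false;
        Ebean.save(customer);
    }

    // ...and every query then has to remember to filter the inactive rows out
    static List<Customer> findActiveCustomers() {
        return Ebean.find(Customer.class)
                .where().eq("active", true)
                .findList();
    }
}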

Rob Bygrave

Jun 28, 2010, 4:44:34 PM
to eb...@googlegroups.com
> I probably don't have much of a use for it since my
> app doesn't have anything like "documents".

Lucene is technically a bunch of inverted indexes - this is what makes it really interesting in terms of performance of certain queries. We will use it to index beans (rather than 'documents' per se) and it can do very relational-like predicates on numbers, dates and datetime types ... as well as being very fast with wildcard string type predicates.

... so yes, I expect Lucene to be very useful, assuming people need the speed or the horizontal scaling. The cost is that the Lucene indexes are relatively expensive to update and that they naturally don't fit with OLTP (lots of small transactions), so we have to be prepared to give up a few things imo, like guaranteed read-consistency. Anyway... that's another topic.