In my opinion, there are several reasons why you're not seeing examples of storing complex object graphs:
- It is very hard to write good examples of complex logic, because examples try to be simple. At the same time, while successful complex object graph persistence applications must exist within corporations (otherwise, RedHat / IBM wouldn't invest so much in maintaining Hibernate), those corporations tend not to share them with the public in a form that is suitable for examples.
- Much like mailing lists have faded away, awesome, in-depth blogs have faded as well, at least in google searches, as content marketers and "influencers" have started sharing thousands of meaningless (but simple!) hello world tutorials. Every new "startup" needs 1-2 devrel people who don't really deeply understand any subject matter, yet they'll start blogging away to generate traffic for their product. Look at the jOOQ articles out there. One of the top google results (that isn't from jooq.org) is this: https://www.baeldung.com/jooq-count-query. I mean, really? A dumbass COUNT(*) query? 😅 That isn't why you need jOOQ. You could just use JDBC for that, too. But it makes sense for baeldung to write about this, because baeldung aims to help beginners get started, and a beginner will probably look up a COUNT(*) query with jOOQ, not virtual client-side computed columns (https://blog.jooq.org/create-dynamic-views-with-jooq-3-17s-new-virtual-client-side-computed-columns/) or client-side row-level security (https://blog.jooq.org/implementing-client-side-row-level-security-with-jooq/).
- I, for one, have never seen an application that is better suited for an object graph persistence approach than for plain SQL-based ETL. While that's just anecdotal evidence, I do believe it correlates with the majority of software out there. Hardly anyone needs the most powerful features of JPA. (For the record, before jOOQ, I used to maintain applications that would generate 500-line Oracle execution plans that ran in < 1ms against billions of rows: https://twitter.com/lukaseder/status/1227260757726957568. A large amount of the logic was implemented in SQL and PL/SQL.)
But then, when the visible examples are always simple, the perceived value of using object graph persistence fades, and SQL-based alternatives become objectively more compelling. Yet, because of inertia, things tend not to change drastically, and as such, people who probably *should be* using more SQL to become much more effective continue to use Hibernate instead, because that's what they've always done. Vlad Mihalcea, of Hibernate fame, keeps invoking the Lindy effect:
https://en.wikipedia.org/wiki/Lindy_effect. If you're more cynical, you could also say: "Nobody ever got fired for choosing IBM". Ironically, the Lindy effect also applies to SQL and Java, so there's probably no objective answer to your questions.
Regarding effectiveness, I keep quoting this tweet:
Or my famous talk:
One of the things I used to say in talks is that it's ironic: back in the 90s, when SQL optimisers still sucked, *everyone* was writing their applications purely in SQL. Today, when optimisers are really, really awesome, people have moved to paradigms other than SQL. But luckily, SQL has been seeing some sort of "renaissance" in the industry over the past decade, probably because NoSQL was such a sobering experience.