Relaxed partition locking and the PostgreSQL v12 merge

Heikki Linnakangas

Feb 11, 2020, 5:20:32 AM2/11/20
to Greenplum Developers
Hi,

Whenever a partition is locked in GPDB, the lock is released early, and
we hold a lock on the parent table instead. I believe the point of that
has been to reduce the number of locks held in queries that access a lot
of partitions (thousands or more).

That was never quite OK, and it led to issues like
https://github.com/greenplum-db/gpdb/issues/5919.

Now that we're replacing our partitioning code with upstream's, if we
want to keep that behaviour, we need to patch all the upstream code that
opens/closes partitions to relax the locking again. I don't think we
should do it. I think we should drop the "relaxed" locking of
partitions, and adopt the upstream's locking behavior. The downside is
that a query that accesses a lot of partitions will need to hold a lot
of locks. If that becomes a problem, bump up max_locks_per_transaction.
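For illustration (this sketch is mine, not part of the original mail; the table name `root` is hypothetical), the upstream behavior and the workaround would look roughly like:

```sql
-- With upstream locking, a query takes one lock per touched partition;
-- you can watch the lock count grow in pg_locks:
BEGIN;
SELECT count(*) FROM root;  -- AccessShareLock on root and on each leaf
SELECT count(*) FROM pg_locks WHERE locktype = 'relation';
COMMIT;

-- If a transaction runs out of shared lock-table slots, raise the limit
-- in postgresql.conf (requires a restart):
--   max_locks_per_transaction = 1280
```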

Thoughts?

- Heikki

Zhenghua Lyu

Feb 11, 2020, 5:58:32 AM2/11/20
to Heikki Linnakangas, Greenplum Developers
Hi,

   I have been working on the related issue before. (https://github.com/greenplum-db/gpdb/issues/8362)
   And in my mind, we should do it the same way as upstream.
   Then we can avoid many issues of deadlock.

   A very troublesome thing needs mentioning:
      1. Greenplum supports a partitioned table's leaves being in different storage types.
      2. Greenplum supports GDD to improve TP performance, but AO tables cannot be updated concurrently, so we hold ExclusiveLock on any AO table that is updated.
      3. We should make sure that all partitions are locked in the same mode during parsing and planning, and that the lock order is always the same.

Now look at the following case: 

      table root
              -->  c1 (heap)
              -->  c2 (AO)
      With GDD enabled, what is the locking behavior of `delete from root`?
   
     Method 1:
        1. lock root in RowExclusive
        2. lock c1 in RowExclusive
        3. lock c2 in Exclusive
        there is a lock upgrade here, which may lead to a local deadlock

     Method 2:
        lock every partition of a partitioned table in ExclusiveLock mode, regardless of whether GDD is enabled.
       this is overkill.
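A minimal sketch of the table in question, using GPDB partition syntax (my illustration; the names and partition bounds are hypothetical):

```sql
-- Partitioned table whose leaves mix storage types: c1 is heap, c2 is AO.
CREATE TABLE root (a int, b int)
DISTRIBUTED BY (a)
PARTITION BY RANGE (b)
( PARTITION c1 START (0)  END (10),
  PARTITION c2 START (10) END (20) WITH (appendonly=true) );

-- With GDD enabled, this must reconcile RowExclusiveLock on the heap
-- leaf with ExclusiveLock on the AO leaf:
DELETE FROM root;
```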
   

Best Regards,
Zhenghua Lyu

Heikki Linnakangas

Feb 11, 2020, 6:31:01 AM2/11/20
to Zhenghua Lyu, Greenplum Developers
On 11/02/2020 12:58, Zhenghua Lyu wrote:
> Hi,
>
>    I have been working on the related issue before.
> (https://github.com/greenplum-db/gpdb/issues/8362)
>    And in my mind, we should do it the same way as upstream.
>    Then we can avoid many issues of deadlock.

Ah, I didn't remember that issue. Yep.

>    A very troublesome thing needs mentioning:
>       1. Greenplum supports a partitioned table's leaves being in
> different storage types.
>       2. Greenplum supports GDD to improve TP performance, but AO tables
> cannot be updated concurrently, so we hold ExclusiveLock on any AO table
> that is updated.
>       3. We should make sure that all partitions are locked in the same
> mode during parsing and planning, and that the lock order is always the
> same.

BTW, partitioning in PostgreSQL is even more flexible than in Greenplum.
A partition can be partitioned further (subpartitioning), but the
subpartition doesn't need to have the same partitioning key as its
siblings. And you can have foreign tables in the partition hierarchy,
different partitions can have different indexes etc. I'm not sure how
deep such assumptions run in Greenplum. I think most of the code that
assumed that will be replaced with upstream code anyway, but ORCA might
need some extra code to either deal with such heterogeneous partitions,
or at least detect them and fall back.
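For example, upstream declarative partitioning (PostgreSQL 11 and later) allows all of the following; this sketch is mine, and the table names and foreign server are hypothetical:

```sql
CREATE TABLE meas (city_id int, logdate date, peak int)
  PARTITION BY RANGE (logdate);

-- A subpartition whose partitioning key differs from its parent's:
CREATE TABLE meas_2020 PARTITION OF meas
  FOR VALUES FROM ('2020-01-01') TO ('2021-01-01')
  PARTITION BY LIST (city_id);

-- A foreign table as a partition:
CREATE FOREIGN TABLE meas_2019 PARTITION OF meas
  FOR VALUES FROM ('2019-01-01') TO ('2020-01-01')
  SERVER remote_srv;

-- Indexes can differ per branch of the hierarchy:
CREATE INDEX ON meas_2020 (peak);
```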

- Heikki

Zhenghua Lyu

Feb 11, 2020, 6:41:39 AM2/11/20
to Heikki Linnakangas, Greenplum Developers
Yes. 

I think we can use the upstream's logic first, and test and refine later (since Greenplum does not currently handle such things anyway).
Let's wait to hear others' ideas.

Best Regards,
Zhenghua Lyu

Jesse Zhang

Feb 11, 2020, 12:26:05 PM2/11/20
to Heikki Linnakangas, Greenplum Developers
Yes, please do that. I have never sympathized with the "relaxed locking"
behavior in Greenplum; it's playing fast and loose with something that
cannot be fast and loose. Nor do I buy the argument that it "saves
memory in the lock manager". Memory for locks is cheap: 10x the
default number of locks in Greenplum is roughly the same amount
of memory Instagram takes on my phone.

Go.

Jesse