I'd also observe that a webhook doesn't give you any consistency guarantees: e.g. there will be a window of time between the reserved_by field changing and the device status being updated. There are also potential race conditions if a webhook triggers an immediate write back to Netbox. A few alternative approaches:
1. Write a webhook receiver which keeps a copy of all previously seen data; whenever it receives a message, it compares old with new and then updates its local copy.
2. Write something using Postgres triggers and stored procedures, and bolt it directly into the database. The trigger/SP would write to an extra table, and an external process monitors that table. (TBH, I wish this were how Netbox webhooks worked in the first place.)
3. Deploy a third-party Change Data Capture (CDC) solution for Postgres, e.g. one that streams Postgres changes into Kafka, and consume that feed.
4. Move your lifecycle management into a separate system, instead of Netbox custom fields, and post updates to Netbox as appropriate.
Option 1 doesn't solve your problem of preventing the webhook from triggering in the first place. However, you could build an intermediate receiver with this logic, which in turn calls the remote webhook only when the conditions are right.
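A minimal sketch of that compare-and-forward idea (the names here, like diff_payload and handle_webhook, are illustrative, not part of NetBox or its webhook payload format):

```python
# Sketch of option 1: keep the last-seen copy of each object and diff it
# against the incoming webhook payload. All names here are made up for
# the example; they are not a NetBox API.

last_seen = {}  # object id -> last payload received for that object


def diff_payload(old, new):
    """Return {field: (old_value, new_value)} for every changed field."""
    changed = {}
    for key in set(old) | set(new):
        if old.get(key) != new.get(key):
            changed[key] = (old.get(key), new.get(key))
    return changed


def handle_webhook(obj_id, payload):
    """Record the new payload and report which fields actually changed."""
    old = last_seen.get(obj_id, {})
    last_seen[obj_id] = payload
    return diff_payload(old, payload)
```

An intermediate webhook built this way would forward to the real endpoint only when the diff contains a field you care about, e.g. status but not reserved_by.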
Option 2 is entirely your responsibility to build and maintain at a low level.
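As a rough sketch of what option 2 could look like (the table, function, and trigger names are invented for this example; only dcim_device is an actual NetBox table):

```sql
-- Illustrative only: a change table plus a row-level trigger on the
-- NetBox device table. An external process would poll device_changes.
CREATE TABLE device_changes (
    id         bigserial PRIMARY KEY,
    device_id  bigint      NOT NULL,
    old_row    jsonb,
    new_row    jsonb,
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION record_device_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO device_changes (device_id, old_row, new_row)
    VALUES (NEW.id, to_jsonb(OLD), to_jsonb(NEW));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER device_change_audit
AFTER UPDATE ON dcim_device
FOR EACH ROW EXECUTE FUNCTION record_device_change();
```

The downside is exactly what's described above: this lives inside Netbox's database schema, so you own it across Netbox upgrades and schema changes.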
Option 3 may reduce your low-level database work, at the cost of finding and deploying a CDC solution that meets your needs.
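For example, Debezium is one widely used CDC tool that reads Postgres's logical replication stream into Kafka. A connector registration might look roughly like this (all connection details and table names are placeholders you'd adapt to your deployment):

```json
{
  "name": "netbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "netbox-db.example.com",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "********",
    "database.dbname": "netbox",
    "topic.prefix": "netbox",
    "table.include.list": "public.dcim_device"
  }
}
```

Your consumer would then read change events (old row / new row pairs) from the resulting Kafka topic, which gives you the before/after comparison without touching the database schema yourself.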
Option 4 means building a new system for lifecycle management.