The original use case that led me to create `wait_for_downstream` was migrating a set of poorly designed pipelines that reprocessed all of history every day, dropping and recreating a table on each run. `wait_for_downstream` prevents the race condition where, while a downstream task is still querying the table, the next run (say, while catching up on a backlog) drops it out from under the reader.
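To make the rule concrete, here is a simplified pure-Python model of the `wait_for_downstream` semantics (this is not Airflow's actual scheduler code, and the task names are hypothetical): a task instance for a run may only start once the previous run's instance of that task *and* its immediate downstream tasks have succeeded.

```python
# Simplified model (NOT Airflow's implementation) of wait_for_downstream:
# run N of a task may start only when run N-1 of that task and its
# direct downstream tasks have all succeeded.

# Hypothetical two-task pipeline: rebuild_table -> read_table
DOWNSTREAM = {"rebuild_table": ["read_table"]}

def can_start(task, run, state):
    """state maps (task, run) -> 'success' | 'running' | None."""
    prev = run - 1
    if prev < 0:
        return True  # first run: nothing to wait on
    # the previous run's instance of this task must have succeeded...
    if state.get((task, prev)) != "success":
        return False
    # ...and so must its direct downstream tasks
    return all(state.get((d, prev)) == "success"
               for d in DOWNSTREAM.get(task, []))

# Run 0 is still reading the table, so run 1 may not drop/recreate it yet:
state = {("rebuild_table", 0): "success", ("read_table", 0): "running"}
print(can_start("rebuild_table", 1, state))  # False
```

Without the downstream check, run 1's `rebuild_table` would be eligible as soon as run 0's `rebuild_table` finished, reproducing exactly the drop-while-reading race described above.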
Another way to do what you are interested in is to add a converging point downstream of your pipeline and use an ExternalTaskSensor to depend on that very last step from the previous day (ExternalTaskSensor lets a task wait on a task instance from a previous run).
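A sketch of that setup, assuming a modern Airflow 2.x API (module paths and the `schedule` argument differ in older versions); the DAG and task ids here are hypothetical, and the sensor points back at the same DAG's converging task from the previous daily run:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.sensors.external_task import ExternalTaskSensor

with DAG(
    dag_id="my_pipeline",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Converging point: every task in the pipeline fans in to this no-op.
    done = EmptyOperator(task_id="done")

    # Block today's run until yesterday's "done" has succeeded.
    wait_for_previous = ExternalTaskSensor(
        task_id="wait_for_previous_run",
        external_dag_id="my_pipeline",      # the same DAG: wait on ourselves
        external_task_id="done",
        execution_delta=timedelta(days=1),  # the previous daily run
    )

    # The real work goes between the sensor and the converging point, e.g.:
    # wait_for_previous >> rebuild_table >> read_table >> done
```

`execution_delta` tells the sensor which earlier logical date to look at; for a `@daily` schedule, one day back is the previous run.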
Maybe the long-term vision is something around "trigger rules", allowing arbitrarily complex combinations of predefined rules.