I see two use cases for master-only tables.
1. Metadata tables. Managing data loading creates many needs for fast OLTP interaction: when a load started and stopped, how many rows were processed, error messages, and so on are all very useful to store in a database. One solution is to use a normal distributed table in Greenplum, but interaction with it is relatively slow compared to, say, PostgreSQL, which is designed for OLTP. Another solution is to create a separate database just to manage ETL functions.
A simpler solution would be to use the Greenplum master database to store this information, for two reasons. First, it reduces overall risk because your ETL processing isn't dependent on yet another database. Second, most production systems have a standby master, so your ETL metadata would get built-in HA.
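As a sketch, the ETL metadata table I have in mind might look like the following. The table and column names are hypothetical, and the MASTER ONLY clause is invented notation for the proposed feature, not existing Greenplum syntax (today a DISTRIBUTED BY or DISTRIBUTED RANDOMLY clause is required):

```sql
-- Hypothetical sketch: "MASTER ONLY" is invented notation for the
-- proposed feature; it does not exist in Greenplum today.
CREATE TABLE etl_load_log (
    load_id       bigserial PRIMARY KEY,
    target_table  text      NOT NULL,
    started_at    timestamp NOT NULL DEFAULT now(),
    finished_at   timestamp,
    rows_loaded   bigint,
    error_message text
) MASTER ONLY;

-- Small, frequent OLTP-style updates like this are exactly what
-- master-only storage would make fast:
UPDATE etl_load_log
   SET finished_at = now(),
       rows_loaded = 1000000
 WHERE load_id = 42;
```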
2. Replication. Longer term, I would like to see the ability to replicate in real time to the master database from external databases like Oracle, SQL Server, PostgreSQL, etc. Many tools, like Attunity Replicate, are designed for OLTP-to-OLTP replication, and this would allow such a tool to work optimally with Greenplum as the target. We would of course need to drain these tables into distributed tables, but that could be done in batches.
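Draining in batches could be a simple transactional move from the master-only landing table to the distributed target. A sketch, with hypothetical table names (stage_orders as the master-only landing table, orders as the distributed target) and PostgreSQL-style writable CTEs used for illustration:

```sql
-- Hypothetical: move one batch of rows from the master-only landing
-- table into the distributed table, atomically, then repeat.
BEGIN;

WITH batch AS (
    DELETE FROM stage_orders
    WHERE ctid IN (SELECT ctid FROM stage_orders LIMIT 10000)
    RETURNING *
)
INSERT INTO orders
SELECT * FROM batch;

COMMIT;
```

Run on a schedule, this keeps the landing table small while the distributed table absorbs the replicated rows in bulk.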
A bit off topic, but a third use case could be HAWQ. Just like #2 above, HAWQ could benefit from this replication: it would allow UPDATE and DELETE statements, with the results drained in batches to tables stored in HDFS. That would drive adoption of HAWQ and make it easier to target HAWQ for data replication.
If a master-only table is a "feature" rather than an accidental backdoor, then why would it matter if people create master-only tables in this manner in the future?