I agree that any finite-horizon dynamic programming solution has to use more storage than its infinite-horizon counterpart to keep track of the horizon, but this did not prevent such competitors from performing well at IPPC 2011.
Note that VI inherently derives a finite-horizon t-stage-to-go value and policy at iteration t. The only complication relative to infinite-horizon VI is that you need to keep the value and policy from every iteration, rather than just the final one.
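To make that concrete, here is a minimal finite-horizon VI sketch in Python. The interface is something I'm assuming for illustration (not competition code): states and actions are lists, P[s][a] is a list of (next_state, prob) pairs, and R[s][a] is the immediate reward.

```python
def finite_horizon_vi(states, actions, P, R, horizon):
    # V[t][s]  = optimal value of s with t stages to go
    # pi[t][s] = optimal action in s with t stages to go
    V = [dict.fromkeys(states, 0.0)]   # V[0]: zero stages to go, no more reward
    pi = [None]                        # no action to take with 0 stages to go
    for t in range(1, horizon + 1):
        Vt, pit = {}, {}
        for s in states:
            # Q-value of each action with t stages to go
            q = {a: R[s][a] + sum(p * V[t - 1][s2] for s2, p in P[s][a])
                 for a in actions}
            pit[s] = max(q, key=q.get)
            Vt[s] = q[pit[s]]
        V.append(Vt)    # keep every stage-to-go value...
        pi.append(pit)  # ...and every stage-to-go policy
    return V, pi
```

The only extra cost over infinite-horizon VI is the bookkeeping of one value table and one policy table per stage-to-go.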
Kolobov and Mausam, who placed 2nd at IPPC 2011 with Glutton (quite close to the 1st-place competitor), have a nice discussion of finite-horizon LRTDP along with a "reverse" approach for improving it:
Kolobov and Mausam also have some nice follow-on work to Glutton that I highly recommend reading.
===
The reason we do not use SSPs alone for IPPC 2014 is that they are goal-oriented and we want a more general notion of reward (consider traffic, elevators, and many other domains from IPPC 2011). I am aware of the FH and IFH translations to SSPs, but a second problem with SSPs is their infinite (or indefinite) horizon nature.
The problem with infinite horizon objectives is that it is not clear how to evaluate them through simulation in the competition setting.
Finite-time evaluation requires some cutoff on time or decision steps, which inherently translates to a finite horizon. So a planner that plans with awareness of the horizon cutoff can always do better than one that does not.
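As a toy illustration of that point (my own example, not a competition domain), reusing the finite_horizon_vi sketch above: a "safe" action is worth 1 per step, while a "risky" action is worth 3 but ends the episode. A stationary, horizon-oblivious policy plays safe forever; the horizon-aware policy switches to risky on the last step and collects more total reward.

```python
states = ["live", "done"]
actions = ["safe", "risky"]
P = {"live": {"safe": [("live", 1.0)], "risky": [("done", 1.0)]},
     "done": {"safe": [("done", 1.0)], "risky": [("done", 1.0)]}}
R = {"live": {"safe": 1.0, "risky": 3.0},
     "done": {"safe": 0.0, "risky": 0.0}}

H = 10
V, pi = finite_horizon_vi(states, actions, P, R, H)
print(pi[H]["live"], pi[1]["live"])  # 'safe' far from the cutoff, 'risky' on the last step
print(V[H]["live"])                  # 12.0, versus 10.0 for always playing safe
```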
This is why finite horizon was chosen for IPPC 2011... it aligned exactly with the only way we can evaluate in practice. If evaluation has to be finite horizon, then the objective should be as well.
Cheers,
Scott
P.S. Competition planning is getting underway at this time... if there are other suggestions for what people would like to see in IPPC 2014 (evaluation, domains, etc), please post!