Actions depending on the state


holger.te...@gmail.com

Dec 6, 2017, 11:19:18 PM
to julia-pomdp-users
We are trying to define a POMDP model explicitly and would like to make actions dependent on the state.
It seems that actions(problem::POMDP) does not depend on the state and returns the full action space. How can we constrain the set of available actions based on the current state?

Thanks already in advance for your help,
Holger

Zachary Sunberg

Dec 7, 2017, 1:26:47 PM
to julia-pomdp-users
If it is an MDP, you just need to implement actions(problem::YourMDP, s::YourState) instead of actions(problem::YourMDP) and I think MCTS and DiscreteValueIteration both support this.
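For example, a rough sketch might look like the following (the problem type, state numbering, and action names here are just placeholders, not anything from your model):

    using POMDPs

    # Hypothetical 10-wide grid MDP where the agent cannot move left
    # from the first column; all names here are illustrative only.
    struct GridMDP <: MDP{Int, Symbol} end

    # Full action space, used when no state is given.
    POMDPs.actions(m::GridMDP) = [:left, :right, :up, :down]

    # State-dependent action space: drop :left in the first column.
    function POMDPs.actions(m::GridMDP, s::Int)
        if s % 10 == 1   # first column of the hypothetical grid
            return [:right, :up, :down]
        else
            return actions(m)
        end
    end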

If it is a POMDP, it is a little more complicated because you're dealing with a belief over states, so it's harder to define what actions should be available. Should it be the union of all actions supported by the states with nonzero belief, or the intersection?

actions(problem::SomePOMDP, b::SomeBelief) is in the POMDPs.jl interface, but I am not sure if any of the solvers support it. If you want to, it should be pretty easy to fork one of the solvers and hack in support for it (just search for where the actions function is called). Or, if you let me know which solver you are trying to use, I can probably hack together support for it this afternoon.
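If you went the union route, a sketch could look like this (SomePOMDP and SomeBelief are the placeholders from above, and I'm assuming your belief type implements support(b) and pdf(b, s)):

    using POMDPs

    # Belief-dependent action space: the union of the actions allowed in
    # every state that has nonzero probability under the belief.
    function POMDPs.actions(problem::SomePOMDP, b::SomeBelief)
        allowed = Set{actiontype(problem)}()
        for s in support(b)
            if pdf(b, s) > 0.0
                union!(allowed, actions(problem, s))
            end
        end
        return collect(allowed)
    end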

holger.te...@gmail.com

Dec 7, 2017, 2:15:33 PM
to julia-pomdp-users
Thank you very much, that makes sense. For now we have tried to encode this in the transition probabilities instead, which should work for our purposes.