If it is an MDP, you just need to implement actions(problem::YourMDP, s::YourState) instead of actions(problem::YourMDP) and I think MCTS and DiscreteValueIteration both support this.
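For example, a state-dependent action space might look something like this (a minimal sketch with a made-up toy MDP — GridState, MyGridMDP, and the movement actions are all illustrative, not from your code):

```julia
using POMDPs

struct GridState
    x::Int
    y::Int
end

struct MyGridMDP <: MDP{GridState, Symbol} end

# Full action space, used as a fallback by some solvers.
POMDPs.actions(::MyGridMDP) = [:up, :down, :left, :right]

# State-dependent action space: e.g. disallow moving off the left edge.
function POMDPs.actions(m::MyGridMDP, s::GridState)
    acts = Symbol[:up, :down, :right]
    s.x > 1 && push!(acts, :left)
    return acts
end
```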
If it is a POMDP, it is a little more complicated because you're dealing with a belief over states, so it's harder to define what actions should be available. Should it be the union of all actions supported by the states with nonzero belief, or the intersection?
actions(problem::SomePOMDP, b::SomeBelief) is in the POMDPs.jl interface, but I am not sure whether any of the solvers support it. If you want to, it should be pretty easy to fork one of the solvers and hack in support yourself (just search for where the actions function is called). Or, if you let me know which solver you are trying to use, I can probably hack together support for it this afternoon.
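If you go the union-of-supported-actions route, the belief-dependent method could be sketched roughly like this (SomePOMDP and SomeBelief are placeholders, and this assumes your belief type supports the standard support/pdf distribution interface):

```julia
using POMDPs

# Belief-dependent actions: union of actions available in any state
# the belief assigns nonzero probability to.
function POMDPs.actions(p::SomePOMDP, b::SomeBelief)
    acts = Set{actiontype(p)}()
    for s in support(b)
        pdf(b, s) > 0 && union!(acts, actions(p, s))
    end
    return collect(acts)
end
```

The intersection version would be the same loop with an intersect instead of a union — which one is right really depends on what happens in your problem when an unavailable action is taken in a possible state.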