Neuro-Dynamic Programming in Julia


Pileas

Nov 22, 2014, 11:12:29 PM
to julia...@googlegroups.com
Some problems suffer from the so-called curse of dimensionality and curse of modeling. For this reason Bertsekas and Tsitsiklis (at MIT) introduced so-called Neuro-Dynamic Programming.

Does Julia offer support for the aforementioned and if not, how about the future?

wil...@gmail.com

Nov 25, 2014, 1:09:49 AM
to julia...@googlegroups.com
Reinforcement learning (RL) isn't covered much by Julia packages. There is a collection of RL algorithms over MDPs in this package: https://github.com/cpritcha/MDP. There is also a collection of IJulia notebooks from a Stanford course that covers more RL algorithms: https://github.com/sisl/aa228-notebook/tree/master

Unfortunately, more advanced function approximation techniques (beyond lookup tables), which make it possible to tackle large state-action spaces, are nowhere to be found.
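To make the lookup-table-vs-approximation point concrete, here is a minimal sketch (not from any existing Julia package; all names are illustrative) of Q-learning with a linear function approximator on a toy chain MDP. A lookup table is just the special case where the features are one-hot indicators, so the same update covers both:

```julia
using LinearAlgebra   # for dot
using Random

# Toy chain MDP: states 1..5, actions 1 (left) / 2 (right), reward 1.0
# on reaching state 5. Purely illustrative.
const NS, NA = 5, 2
move(s, a) = a == 2 ? min(s + 1, NS) : max(s - 1, 1)
reward(s)  = s == NS ? 1.0 : 0.0

# One-hot state-action features: with these, the linear approximator
# qhat is exactly a lookup table; richer features generalize it.
feat(s, a) = (x = zeros(NS * NA); x[(a - 1) * NS + s] = 1.0; x)
qhat(w, s, a) = dot(w, feat(s, a))

function qlearn(; episodes = 1000, α = 0.1, γ = 0.9, ϵ = 0.1)
    w = zeros(NS * NA)
    for _ in 1:episodes
        s = 1
        for _ in 1:50
            # ϵ-greedy action selection
            a = rand() < ϵ ? rand(1:NA) :
                argmax([qhat(w, s, b) for b in 1:NA])
            s2 = move(s, a)
            # semi-gradient TD(0) target; no bootstrapping past terminal
            target = reward(s2) +
                     (s2 == NS ? 0.0 : γ * maximum(qhat(w, s2, b) for b in 1:NA))
            δ = target - qhat(w, s, a)
            w .+= α .* δ .* feat(s, a)
            s = s2
            s == NS && break
        end
    end
    return w
end
```

The only change needed to scale beyond small state spaces is swapping `feat` for a compact feature map (tile coding, RBFs, a neural network), which is exactly the gap in the package ecosystem described above.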

A couple of months ago, Shane Conway, the guy behind RL-Glue, talked about developing a Julia RL-Glue client. If that happens, it would be quite simple to use various advanced RL algorithms, including value-function approximators, in Julia.

John Myles White

Nov 25, 2014, 10:34:23 AM
to julia...@googlegroups.com
Sounds like a cool project. Are the state space representations that RL-Glue uses easy to work with?

 — John

wil...@gmail.com

Nov 26, 2014, 5:24:43 PM
to julia...@googlegroups.com
Defining an RL-agent environment in the RL-Glue API is a straightforward task. Apart from the (de)initialization calls, a response to the agent's action must be defined. This response should carry an appropriate reward for the agent (there are two separate placeholders for integer and real rewards; that is how the API deals with different types). If the environment is dynamic, its internal state can be programmed as you see fit. Examples are here: http://library.rl-community.org/wiki/Category:Environments.
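As a rough illustration of the shape of those calls, here is a hypothetical Julia transplant of the three environment callbacks RL-Glue expects (env_init / env_start / env_step). The real protocol serializes these over a socket and uses separate integer/double observation arrays; all of that codec machinery is omitted, and the names and types here are assumptions, not an existing Julia binding:

```julia
# A trivial chain-walk environment, mirroring the RL-Glue callback
# structure only. Not the actual RL-Glue API.
mutable struct ChainEnv
    state::Int
    nstates::Int
end

# env_init: in RL-Glue this returns a task-spec string describing
# observation/action/reward ranges; a placeholder stands in here.
env_init(env::ChainEnv) = "placeholder task spec"

# env_start: reset the internal state and return the first observation.
function env_start(env::ChainEnv)
    env.state = 1
    return env.state
end

# env_step: the "response to the agent's action" from the post above:
# advance the internal state, then return (reward, observation, terminal).
function env_step(env::ChainEnv, action::Int)
    env.state = action == 2 ? min(env.state + 1, env.nstates) :
                              max(env.state - 1, 1)
    terminal = env.state == env.nstates
    return (terminal ? 1.0 : 0.0, env.state, terminal)
end
```

A Julia RL-Glue client would essentially wire callbacks like these to the RL-Glue socket protocol, which is why writing environments on top of such a client would be simple.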