Hi Alejandro,
I was wondering if you could give me some advice on how to get your fork working. I've pulled it fresh and made a separate build of ALE using your rlglue_controller.cpp file, but I'm hitting the error in the log below.
Any ideas?
P.S. I like all the additions you've made to the code :)))
/usr/bin/python2.7 /home/ajay/PythonProjects/deep_q_rl-master_alito/deep_q_rl/ale_run.py
RL-Glue Version 3.04, Build 909
A.L.E: Arcade Learning Environment (version 0.4)
[Powered by Stella]
Use -help for help screen.
Warning: couldn't load settings file: ./stellarc
Game console created:
ROM file: /home/ajay/bin/roms/breakout.bin
Cart Name: Breakout - Breakaway IV (1978) (Atari)
Cart MD5: f34f08e5eb96e500e851a80be3277a56
Display Format: AUTO-DETECT ==> NTSC
ROM Size: 2048
Bankswitch Type: AUTO-DETECT ==> 2K
Running ROM file...
Random Seed: Time
Game will be controlled through RL-Glue.
RL-Glue Python Experiment Codec Version: 2.02 (Build 738)
Connecting to 127.0.0.1 on port 4096...
Initializing ALE RL-Glue ...
Using gpu device 0: GeForce GTX 570
INFO:root:Experiment directory: breakout_2015-02-06-13-46_0p0002_0p95
INFO:root:Task spec: VERSION RL-Glue-3.0 PROBLEMTYPE episodic DISCOUNTFACTOR 1 OBSERVATIONS INTS (100800 0 255) ACTIONS INTS (0 5) REWARDS (UNSPEC UNSPEC) EXTRA Name: Arcade Learning Environment
RL-Glue Python Agent Codec Version: 2.02 (Build 738)
Connecting to 127.0.0.1 on port 4096...
Agent Codec Connected
INFO:root:Layer 1: (32, 4, 84, 84)
INFO:root:Layer 2: (4, 84, 84, 32)
INFO:root:Layer 3: (16, 20.0, 20.0, 32)
INFO:root:Layer 4: (32, 9.0, 9.0, 32)
INFO:root:Layer 5: (32, 32, 9.0, 9.0)
INFO:root:Layer 6: (32, 256)
INFO:root:Layer 7: (32, 6)
/home/ajay/bin/Theano-master/theano/gof/cmodule.py:289: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility
rval = __import__(module_name, {}, {}, [module_name])
INFO:root:OPENING breakout_2015-02-06-13-46_0p0002_0p95/results.csv
INFO:root:Cropping at 19
INFO:root:Received start_epoch 1
INFO:root:training epoch: 1 steps_left: 50000
Traceback (most recent call last):
File "./rl_glue_ale_agent.py", line 670, in <module>
sys.exit(main(sys.argv[1:]))
File "./rl_glue_ale_agent.py", line 666, in main
max_history=parameters.max_history))
File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/AgentLoader.py", line 58, in loadAgent
client.runAgentEventLoop()
File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/ClientAgent.py", line 144, in runAgentEventLoop
switch[agentState](self)
File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/ClientAgent.py", line 138, in <lambda>
Network.kAgentStart: lambda self: self.onAgentStart(),
File "/usr/local/lib/python2.7/dist-packages/rlglue/agent/ClientAgent.py", line 51, in onAgentStart
action = self.agent.agent_start(observation)
File "./rl_glue_ale_agent.py", line 328, in agent_start
self.last_image, raw_image = self.preprocess_observation(observation.intArray)
File "./rl_glue_ale_agent.py", line 341, in _preprocess_observation_cropped_by_cv
image = observation[128:].reshape(IMAGE_HEIGHT, IMAGE_WIDTH, 3)
ValueError: total size of new array must be unchanged
INFO:root:training epoch: 1 steps_left: 49995
INFO:root:training epoch: 1 steps_left: 49993
INFO:root:training epoch: 1 steps_left: 49991