Submission Questions


Ben Murdoch

Aug 16, 2016, 4:27:00 PM
to Text Adventure AI Competition
Hello! My name is Ben Murdoch; I work in a machine learning research lab at BYU Provo, and we have a list of questions we would like some clarification on. We're very happy to see that someone has created an NLP- and logic-based competition.

  • The about page says, "Further details on how to submit will be announced here closer to the deadline." When will we know where to submit our agents?
  • Scoring - will agents be scored on cumulative points over multiple runs, or only on the score of the final run? (In particular, how will scoring be handled if the agent has a "learning" phase and an "execution" phase?)
  • How much computational time will programs be given, and what exactly does "computational time" mean - a number of game steps, seconds of execution, or CPU cycles?
  • Is there any way for our agent to detect that it has won the game?
  • Will agents be scored based on multiple runs of each game or only a single run? (This is especially important to know for our learning agents.)
  • How many agent submissions is each institution allowed to enter into the competition?


I know this list is a bit long, but the answers would help us put the final touches on our competitive agent.
If you don't have all the answers at this time, please feel free to pass along what you do know.


Thanks again!
-Ben




Tim Atkinson

Aug 17, 2016, 11:22:38 AM
to Text Adventure AI Competition
Hi Ben,

Sorry to take a little while to get back to you. We're very excited to hear about your interest! I'll do my best to answer your questions but if I've not clarified something, please do let me know.

  • A submission email address will be provided on the website next week.
  •  An agent will be scored on the final state of the game at the end of its execution. If your agent is, for example, repeatedly restarting the game to learn, then the "learning" and "execution" phases should be contained within your agent so that it leaves the game in the final state it wishes to submit for judging. This week I will look into a global command (across games) that lets an agent check the hard limit on game actions in a particular run; you should be able to use this to divide up the "learning" and "execution" phases as you see fit.
  • The amount of computation time available will depend on the trial game in question. As a rule of thumb, an agent should finish a game in a time comparable to a human player. In practice this means we will give agents up to an anticipated 10 seconds of execution per action, and allot a number of game steps based on human play-throughs of the game (the actual value '10' may vary with the number of submissions, due to limitations on computation time). So if we would expect a human to take up to 5 hours to complete a game, that is 18000 seconds, and we would allow the agent x * 1800 game steps (where x is the factor we opt to give agents in additional executions); see the short sketch after this list. These are rough numbers; we aim to release definitive values next week.
  • Detecting victory is part of the problem domain: the games themselves inform the player of victory or defeat if such a state is reached. Some games may have no victory state and will instead use a scoring system. In that case our evaluation will use the agent's score once all available game steps are completed, but if your agent reaches a state where it is happy with its score, it can simply send "" repeatedly until all game steps are used up. We are aware that this is a very hard problem, but these human-like interpretations of the game's output are what make the problem domain interesting!
  •  To avoid giving an advantage to a technique such as, say, MCTS, agents will be scored on a single run of each game, but across multiple games.
  • An institution can enter as many submissions as it wants. Generally, submissions should be limited to one per individual or team. An individual may also be part of a team submission, but generally the same individual should not appear in multiple teams.
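
To make the step-budget arithmetic above concrete, here is a rough sketch in Python; the 10-second per-action allowance and the factor x are the anticipated, provisional values mentioned above, so treat the numbers as illustrative only:

    # Rough sketch of the game-step budget described above.
    # The 10-second per-action allowance and the factor x are
    # anticipated values and may change before the competition.

    SECONDS_PER_ACTION = 10  # anticipated execution allowance per action

    def game_step_budget(human_hours, x):
        """Game steps allotted to an agent, given the expected human
        completion time (in hours) and the extra-execution factor x."""
        human_seconds = human_hours * 3600
        return int(x * human_seconds / SECONDS_PER_ACTION)

    # Example from above: a 5-hour game (18000 seconds) gives x * 1800 steps.
    print(game_step_budget(5, x=1))  # 1800
    print(game_step_budget(5, x=2))  # 3600
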
If there's any other information I can provide, let me know.

Best wishes,

Tim

Ben Murdoch

Aug 22, 2016, 5:47:49 PM
to Text Adventure AI Competition
Thanks, the clarification helps a lot!
-Ben 