I see two major ways the arena physics could be modeled: a turn-based model, or a continuous, event-based one.
A simple turn-based model could be that each bot provides a value of something like 'newtype Step = Step { runStep :: Input -> (Command, Step) }', where Input is the state of the arena and/or the effects of our commands from the last turn, and Command is a list (or other monoid) representing the action(s) the bot takes during the next step, like accelerate, turn, fire, do a radar scan on a given arc, etc. After each bot has returned its answer, the arena applies the commands, advances time, and feeds the result of the turn back into the snd of each bot's return value (so the bots can keep internal state by passing it along between their Steps).
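To make the turn-based interface concrete, here is a minimal, compilable sketch. Only the Step newtype is taken from above; Input, Action, simpleBot and stepAll are placeholders I made up for illustration:

```haskell
-- Only the Step newtype is from the proposal above; Input, Action and
-- simpleBot are made-up placeholders for illustration.
newtype Step = Step { runStep :: Input -> (Command, Step) }

-- A real Input would carry positions, scan results, damage, etc.
newtype Input = Input { turnNumber :: Int }

-- Commands form a monoid: a turn's answer is just a list of actions.
data Action = Accelerate Double | Turn Double | Fire | Scan Double Double
  deriving (Show, Eq)

type Command = [Action]

-- A bot keeps internal state (here an Int) by closing over it and
-- passing the successor state on to its next Step.
simpleBot :: Int -> Step
simpleBot n = Step $ \_input ->
  ( if n `mod` 3 == 0 then [Fire] else [Accelerate 1.0]
  , simpleBot (n + 1) )

-- The engine's side of one turn: run every bot on the same Input,
-- collect the commands and keep the continuations for the next turn.
stepAll :: Input -> [Step] -> ([Command], [Step])
stepAll input bots = unzip (map (`runStep` input) bots)
```

The engine would then apply the collected commands, advance the simulation, and call stepAll again with the new Input.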
In the event-based model the bots could send in actions (like start/end turning, change acceleration, start scanning), and could register callbacks for certain events (time 'ticks', timeouts, results of scanning, having turned to a given heading, etc.). The callbacks may change the state of the bot, send in new actions, and register new callbacks (or unregister old ones). The engine would maintain the state of the arena and synchronize it with the timeline, either by jumping to the next 'point of interest' on the timeline or by trying to simulate the events in 'real time'.
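The types below are only my guess at what the event-based interface could look like, not a settled API; all the names are illustrative. A handler that returns its own replacement subsumes both registering and unregistering callbacks:

```haskell
-- All names here are illustrative placeholders, not a proposed API.
data Contact = Contact { bearing :: Double, range :: Double }
  deriving (Show, Eq)

data Event = Tick Double | ScanResult [Contact] | ReachedHeading Double
  deriving (Show, Eq)

data Action = SetAcceleration Double | StartTurning Double | StartScan Double Double
  deriving (Show, Eq)

-- A handler consumes an event and the bot's state, and returns the new
-- state, the actions to send in, and the handler for subsequent events
-- (returning a different one is how callbacks get (un)registered).
newtype Handler s = Handler { handle :: Event -> s -> (s, [Action], Handler s) }

-- On every tick, request a full-circle scan; on a scan result, turn
-- toward the first contact seen.
scanBot :: Handler ()
scanBot = Handler go
  where
    go (Tick _) s             = (s, [StartScan 0 360], Handler go)
    go (ScanResult (c : _)) s = (s, [StartTurning (bearing c)], Handler go)
    go _ s                    = (s, [], Handler go)
```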
IMHO the turn-based model would be simpler and easier to understand and to write bots against, but the event-based one feels more realistic.
Besides the basic physics model, IMO the other decision that has to be made upfront is the hosting/communication model between the arena (engine) and the bots. This includes interesting questions, like how we make sure that the contest is 'fair': that all bots have access to the same CPU resources and latency, that they are isolated from the environment and from each other, etc.
If I see it correctly, should we choose to implement the game only in terms of pure computations, so that the same game setup (initial bot placements, same arena, etc.) always leads to the very same result (including the 'log' of the game), it wouldn't be possible to guarantee fairness: there's no way (without introducing side effects) to limit the space or time needed to evaluate a pure function. A badly written or malicious bot could also bring the whole game down (consume all memory, run into an infinite computation). This approach would also limit the execution model to in-process (loading the bots as plugins).
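As a tiny illustration of the repeatability the pure approach buys (the types are stand-ins, not a proposed design): since nothing below touches IO, replaying the same setup necessarily reproduces the same log.

```haskell
-- Placeholder world state and physics; the point is only that the
-- whole game is a pure function of the initial setup.
type Arena = Int

advance :: Arena -> Arena
advance = (+ 1)

type LogEntry = (Int, Arena)

-- The full 'log' of a game: turn numbers paired with the arena state
-- after each turn. Same start, same log, every time.
gameLog :: Arena -> Int -> [LogEntry]
gameLog start turns = zip [1 .. turns] (tail (iterate advance start))
```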
Pros: simple engine model, repeatable game runs. Cons: efficient bot algorithms wouldn't be rewarded.
If we give up repeatable games, our options may include:
- running the bots in-proc, sequentially, with each bot function wrapped in a timeout: fair regarding the time allowed, but would depend too much on the performance of the execution environment
- running the bots in-proc, each in its own thread, communicating through channels: may only fit the event-based model; delayed/partial reactions are possible
- running the bots as external processes, communicating through sockets: process isolation takes care of the memory issues, and we rely on the OS to provide fair scheduling; this also makes non-Haskell bots possible
- running the bots each in their own virtual environment, e.g. on EC2 instances: virtualization may provide fairness in both memory and CPU; latency could be an issue; this is the most complex deployment/execution model
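For the first option, GHC's System.Timeout already gives the basic building block. A hedged sketch, not a full solution: 'evaluate' only forces to weak head normal form, so a real engine would additionally have to deep-force the bot's whole answer (e.g. with the deepseq package) to actually bound the work done.

```haskell
import Control.Exception (evaluate)
import System.Timeout (timeout)

-- Run a bot's (pure) answer with a budget of microseconds; Nothing
-- means the bot ran out of time. Caveat: evaluate forces only to WHNF,
-- so a real engine would deep-force the whole Command (see deepseq).
runWithBudget :: Int -> a -> IO (Maybe a)
runWithBudget micros x = timeout micros (evaluate x)
```

This of course makes game runs non-repeatable: whether a bot finishes in time depends on the machine the arena happens to run on, which is exactly the trade-off described above.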
Of course an alternative way to provide fairness would be to implement our own virtual machine for the bots, with a haskell-to-lambdawars-VM compiler, but that might be just slightly far-fetched :)
Attila