AMD has announced the release of Schola 1.0, an open source plugin (MIT License) for integrating Reinforcement Learning (RL) into Unreal Engine, enabling RL-based control of in-engine objects.
The project lets users create environments, define agents, and connect to Python-based RL frameworks such as Gym, RLlib, and Stable Baselines 3, with the goal of enhancing NPC behavior through RL in games.
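As a rough illustration of what that connection looks like on the Python side, here is a minimal Stable Baselines 3 training sketch against a Gym-compatible environment. The `make_env` placeholder stands in for whatever environment handle Schola exposes from a running Unreal Engine instance; the exact import path is not shown in the announcement, so this is an assumption, not Schola's actual API.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Placeholder: swap this for the Gym-compatible environment that
# Schola exposes from a running Unreal Engine instance.
def make_env():
    return gym.make("CartPole-v1")

env = make_env()
model = PPO("MlpPolicy", env, verbose=1)   # standard SB3 PPO setup
model.learn(total_timesteps=10_000)        # train the agent
model.save("schola_agent")                 # checkpoint for later export
```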
Schola 1.0's features include in-engine inference for agents using ONNX models, headless and multi-agent training, vectorized training, and more.
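Since in-engine inference runs on ONNX models, a trained policy has to be exported first. The sketch below follows the wrapper pattern documented for Stable Baselines 3 (its policies are PyTorch modules, so `torch.onnx.export` applies); the file names, observation shape, and opset version are illustrative, and the actual export path Schola expects may differ.

```python
import torch
from stable_baselines3 import PPO

class OnnxablePolicy(torch.nn.Module):
    """Wraps an SB3 policy so it can be traced by torch.onnx.export."""
    def __init__(self, policy):
        super().__init__()
        self.policy = policy

    def forward(self, observation):
        # Deterministic actions for inference inside the engine.
        return self.policy(observation, deterministic=True)

model = PPO.load("schola_agent", device="cpu")
dummy_obs = torch.randn(1, *model.observation_space.shape)  # illustrative shape
torch.onnx.export(
    OnnxablePolicy(model.policy),
    dummy_obs,
    "schola_agent.onnx",
    opset_version=17,
    input_names=["observation"],
)
```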
More: https://gpuopen.com/amd-schola/
Best Regards,
Mehdi Saeedi