I could not tell from the current documentation, but if I understand the architecture from the source, am I correct in assuming that once the Serengeti server receives a REST request, for example to build a new Hadoop cluster, a new task is configured, a Chef JSON configuration is created with the appropriate references to recipes and configuration data, and that configuration is then scheduled for delivery across an AMQP channel for execution by the Chef server?
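If that reading is right, I'd expect the publishing side to look roughly like the sketch below. This is only my guess at the flow, not code from the Serengeti tree; the queue name, run_list entries, and attribute keys are made up for illustration.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class ClusterTaskPublisher {
        public static void main(String[] args) throws Exception {
            // Hypothetical Chef node attributes for a "create cluster" task;
            // the run_list and attribute names here are illustrative only.
            String chefJson =
                "{ \"run_list\": [\"role[hadoop_namenode]\", \"recipe[hadoop::default]\"],"
              + "  \"cluster_name\": \"demo_cluster\","
              + "  \"node_count\": 3 }";

            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker location

            Connection conn = factory.newConnection();
            Channel channel = conn.createChannel();
            // Hypothetical queue name for whatever consumer feeds the Chef server.
            channel.queueDeclare("serengeti.cluster.tasks", true, false, false, null);
            channel.basicPublish("", "serengeti.cluster.tasks", null,
                                 chefJson.getBytes(StandardCharsets.UTF_8));
            channel.close();
            conn.close();
        }
    }

Is that roughly the shape of it, or does the Chef server pull the configuration some other way?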
I also briefly saw some threading code in the source, but didn't see any code to manage the thread lifecycle via web context listeners (e.g. ServletContextListener) or hooks into Spring container lifecycle events. Given that the server portion runs in a web container, it seems like this could potentially lead to memory leaks if the application is reloaded (I wasn't sure whether that can currently happen outside of the entire Serengeti VM being restarted).
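To make the concern concrete, here is a minimal sketch of the kind of listener I was expecting to find. The class and worker pool are hypothetical stand-ins for the threads I saw in the source, not actual Serengeti code:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class WorkerPoolLifecycleListener implements ServletContextListener {

        // Hypothetical pool standing in for the threads spawned by the server.
        private ExecutorService workerPool;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            workerPool = Executors.newFixedThreadPool(4);
            sce.getServletContext().setAttribute("workerPool", workerPool);
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            // Without an explicit shutdown, a webapp reload can leave the old
            // classloader pinned by still-running threads (a classic leak).
            workerPool.shutdown();
            try {
                if (!workerPool.awaitTermination(30, TimeUnit.SECONDS)) {
                    workerPool.shutdownNow();
                }
            } catch (InterruptedException e) {
                workerPool.shutdownNow();
                Thread.currentThread().interrupt();
            }
        }
    }

If the only supported redeploy path is restarting the whole Serengeti VM this may be a non-issue, but I wanted to check.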
- Jonathan Fontanez