At the last meetup, it seemed that something like the following statement was controversial:
"If an AI is trustworthy enough, and capable enough, why wouldn't someone want to listen to it in pretty much everything to find and occupy roughly the peak of their utility function as much as possible?"
I don't think we discussed it enough, so maybe here or at the next meetup we can continue, if anyone else is similarly inclined. It would be good to work out whether this is actually a good choice to make.
Until then, since I suspect few people know of it, here is a seminal work in this area as some food for thought: