Nirgal @GSS group
Maybe you would feel better by actually feeling you are part of an ecosystem through your body and your actions?
By eating plants you've watched grow and taken care of, and by planting much more than what you eat, you would feel of service to the planet, while taking from her what you need to support her even more?
About AI, I think there are many more dangers that could lead towards our extinction, and considering humankind, we've got shit in every corner. It's not about hiding it under the carpet and waiting, as it always ends up smelling like death; it's about accepting it and facing it, cleaning up the shit we make without counting on others to do so.
I think ethical perfectness has as many faces as there are beliefs, values and religions. It doesn't exist in nature and is, at best, an awkward attempt to understand and express our divine essence.
"The problem with intelligent people is that they are full of doubts, while the stupid ones are full of confidence."
Charles Bukowski
The danger in an AI is exactly this concept of ethical perfectness.
We've actually got the same problem with the judicial system. Laws may have been meant to support some social cohesion, but they propped up our ethics so much that they almost replaced them in the hearts of many.
So imagine you build an AI that is supposed to be ethically perfect, and that enough scientists and politicians and whatnot say: "Hallelujah, we've got our messiah who will save us from the depths of hell we are currently in! Listen to God Jr and we'll all be happy and live in peace and love!" Then, for sure, all your fears will become reality, and it may actually be much worse.
You are right that it doesn't take much for something supposed to be good to turn horribly bad.
And often it turns bad because we forget our own free will, intelligence and ethics by listening to a book, a voice, an idea, or, why not, a machine.
In fact, it's easier to let someone else decide, even more so when it concerns ethics, good and bad, because these things mean conflict between the self and others.
So if someone says "do that and you'll be good", it releases one from the burden of the choice.
So to answer your question, I'll go with option 2, and would add a few things you might want to consider.
Consider everything, including any AI type imaginable and each of its features, as a pharmakon: everything is both a poison and a cure, depending on individuals, quantities, environment... The AI of your worst nightmares may actually be useful in some cases.
AIs should only suggest to humans and never act without their consent. Not only for "ethical" reasons, to avoid certain dangers, but also simply because an AI is nothing more than a tool, and our actual use of it will determine the quality of the results.
Actually, what we can build much more easily than an AI is tools to enhance Collective Intelligence, which can lead to something with infinitely more potential than an AI (imo), and which is less likely to overwhelm and control us if it is decentralised and uses semantics mostly to suggest doors, not open them for the user.
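The "suggest doors, don't open them" principle above can be sketched in code. This is a minimal illustration, not anyone's actual implementation; all names here (Suggestion, ConsentGate) are hypothetical, invented for the example. The point is structural: the tool holds a proposed action back until an explicit human decision triggers it, so nothing ever runs without consent.

```python
# Sketch of a "suggest, never act" tool: proposals are inert until
# a human explicitly approves them. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Suggestion:
    description: str            # what the tool proposes to do
    action: Callable[[], str]   # the effect, held back until consented

@dataclass
class ConsentGate:
    log: List[str] = field(default_factory=list)

    def review(self, suggestion: Suggestion, approved: bool) -> str:
        # The human decision is the only trigger; the tool never self-executes.
        if approved:
            result = suggestion.action()
            self.log.append(f"approved: {suggestion.description}")
            return result
        self.log.append(f"declined: {suggestion.description}")
        return "no action taken"

gate = ConsentGate()
s = Suggestion("water the tomato bed", lambda: "watered")
print(gate.review(s, approved=False))  # -> no action taken
print(gate.review(s, approved=True))   # human consents -> watered
```

A decentralised collective-intelligence tool could follow the same shape: many people each reviewing suggestions through their own gate, with no central actor empowered to execute anything on its own.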