
Germany Drafts World’s First Ethical Guidelines For Self-Driving Cars

Shawnna Breutzmann
Dec 28, 2023
German regulators have been working on drafting a set of ethical guidelines to dictate how the artificial intelligence in autonomous vehicles will deal with worst-case scenarios. According to Reuters, an expert government committee is writing what will be the basis for software guidelines that will decide the best possible course of action to protect human life above all else in an autonomous car emergency. While this doesn't sound very exciting or glamorous, the resulting guidelines and software could prove to be the most important (and possibly controversial) aspect of driverless cars.









"The interaction of humans and machines is throwing up new ethical questions in the age of digitalization and self-learning systems," German transport minister Alexander Dobrindt said. "The ministry's ethics commission has pioneered the cause and drawn up the world's first set of guidelines for automated driving."


The guiding principle of the new German guidelines is that human injury and death should be avoided at all costs. In situations where injury or death will be unavoidable, the car will have to decide which action will result in the smallest number of human casualties. The regulators have also stipulated that the car may not take the "age, sex or physical condition of any people involved" into account when choosing what to do. Instead, self-driving cars in Germany will prioritize hitting property or animals over humans in the event of a worst-case scenario.
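The priority ordering the guidelines describe can be sketched in a few lines of code. This is a hypothetical illustration only, not anything from the actual report or from any real autonomous-driving software: the `Outcome` structure and `choose_maneuver` function are invented names, and real systems would involve far more uncertainty than this deterministic ranking suggests. The point of the sketch is simply that humans outrank animals, animals outrank property, and no personal attribute of the people involved appears anywhere in the ranking.

```python
# Hypothetical sketch of the guidelines' priority ordering.
# All names and data here are invented for explanation.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible emergency maneuver and its predicted consequences."""
    name: str
    human_casualties: int   # predicted human injuries or deaths
    animal_harm: int        # predicted harm to animals
    property_damage: int    # predicted property damage (arbitrary units)

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    """Rank outcomes lexicographically: minimize human casualties first,
    then animal harm, then property damage. Note that no attribute of
    the people involved (age, sex, physical condition) is consulted."""
    return min(outcomes, key=lambda o: (o.human_casualties,
                                        o.animal_harm,
                                        o.property_damage))

options = [
    Outcome("brake hard", human_casualties=1, animal_harm=0, property_damage=0),
    Outcome("swerve into fence", human_casualties=0, animal_harm=0, property_damage=5),
    Outcome("swerve into deer", human_casualties=0, animal_harm=1, property_damage=1),
]
print(choose_maneuver(options).name)  # prints "swerve into fence"
```

Even this toy version exposes the hard part: the numbers in each `Outcome` are predictions, and the guidelines say nothing about how confident those predictions must be before the ranking applies.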


Germany has drafted the world's first set of ethical guidelines for self-driving car programming. The guidelines were developed by the Ethics Commission at the German Ministry of Transport and Digital Infrastructure. The report stipulates 20 rules for software designers, making "safety, human dignity, personal freedom of choice and data autonomy" a priority, according to Professor Udo Di Fabio, chairman of the authoring ethics commission.






This is a major techno-social threshold, as more car manufacturers than ever before are considering adding automation to their future fleets. Some vehicles made by Tesla, BMW, Infiniti, and Mercedes-Benz already have early iterations of the technology, but overall we're still not at a place where full autonomy is feasible. We first need to consider more than just regulatory guidelines for these vehicles. Traditional drivers often come up against ethical questions, such as whether to swerve to avoid an animal. This will not change when a computer takes over.


Artificial intelligence (AI) is a big source of controversy, with many experts, like Elon Musk and Stephen Hawking, voicing strong views about how big a threat, if any, it could pose. Others instead laud advances in human-machine integration, and argue that the future will look quite different from what we currently envision. Self-driving cars are just a small part of that equation. As we get closer to having 'deployable' advanced intelligence, we'll need to give serious pause before making important decisions regarding ethical frameworks by which to guide AI. Germany's bold foray into AI ethics may not answer all of these questions, but it's an important first step.


The report lists 20 guidelines for the motor industry to consider in the development of any automated driving systems. The minister says that the cabinet has adopted the guidelines, making it the first government in the world to do so.


In June 2011, the Nevada Legislature passed a law to authorize the use of automated cars. Nevada thus became the first jurisdiction in the world where automated vehicles might be legally operated on public roads. According to the law, the Nevada Department of Motor Vehicles is responsible for setting safety and performance standards and the agency is responsible for designating areas where automated cars may be tested.[71][72][73] This legislation was supported by Google in an effort to legally conduct further testing of its Google driverless car.[74] The Nevada law defines an automated vehicle to be "a motor vehicle that uses artificial intelligence, sensors and global positioning system coordinates to drive itself without the active intervention of a human operator". The law also acknowledges that the operator will not need to pay attention while the car is operating itself. Google had further lobbied for an exemption from a ban on distracted driving to permit occupants to send text messages while sitting behind the wheel, but this did not become law.[74][75][76] Furthermore, Nevada's regulations require a person behind the wheel and one in the passenger's seat during tests.[77]


The new ethics rules, along with German laws passed in the past few years, will influence other countries developing regulatory frameworks around the operation of autonomous vehicles. For instance, Germany, which is home to major automakers such as BMW, Daimler, and Volkswagen, was among the first nations to specify requirements for the testing of self-driving cars.


Here again, I cite that we are stuck in an uncomfortable position vis-à-vis the need for data and training versus the need to know what morality dictates for us as true: do we want to model moral dilemmas or do we want to solve them? If it is the former, we can do this indefinitely. We can model moral dilemmas and ask people to partake in experiments, but that only tells us the empirical reality of what those people think. And that may be a significantly different answer than what morality dictates one ought to do. If it is the latter, I am still extremely skeptical that this is the right framework for discussing the ethical quandaries that arise with self-driving cars. Perhaps the Trolley Problem is nothing more than an unsolvable distraction from the question of safety thresholds and other types of ethical questions regarding the second- or third-order effects of automotive automation in society.


One of the main benefits of the advent of self-driving cars is a reduction in accidents and, as a consequence, fewer fatalities and other injuries. However, it is likely that even the most advanced autonomous cars will not avoid accidents in the future, and it is these accidents that raise ethical questions about how the system should behave at critical moments (Ben-Shahar, 2016; Marchant and Lindor, 2012; Rejcek, 2017; Smith, 2018). In the case of accidents involving human-driven cars, situations in which the driver decides whether, for example, to steer the car off the road to avoid a collision, or, on the contrary, to brake hard, which may cause a collision with the following car, are of a sudden reactive and instinctive nature, in which the will to do harm cannot be observed. But how is the programmer supposed to set up an algorithm in advance to determine which harmful situation to prioritize? In the event of an unavoidable collision, should the car prioritize the lives of the occupants over those of persons crossing the street or over the lives of animals? According to the utilitarian concept, should it always prioritize the situation in which more lives are saved? When choosing between a collision with a helmeted motorcyclist and a non-helmeted motorcyclist, should it prefer to hit the compliant motorcyclist because he or she is more likely to survive, or instead punish the non-compliant motorcyclist even though he or she is likely to suffer a more serious injury? Is it fair to sacrifice the life of an old man to save a child?
