On Tuesday, August 23, 2016 at 2:56:48 PM UTC-7, Peter Trei wrote:
> On Tuesday, August 23, 2016 at 3:22:08 PM UTC-4, Kevrob wrote:
> > On Tuesday, August 23, 2016 at 2:37:36 PM UTC-4, Peter Trei wrote:
> > > On Tuesday, August 23, 2016 at 2:09:48 PM UTC-4,
peterw...@hotmail.com wrote:
> > > > The following paragraph is from an article by Eric Smillie, commenting on the study
> > > > _The Social Dilemma of Autonomous Vehicles_ published in _Science_:
> > > >
> > > > “It’s the near future. A self-driving car is zipping its passengers down a country road
> > > > when, out of nowhere, a handful of pedestrians stroll into its way. There’s no easy way
> > > > out: Either the car plows through them or it swerves into a tree, killing those riding inside.
> > > > What would you rather it do?”
> > >
> > > The standard term for this class of dilemmas is "Trolley Problems",
> > > which are far, far from new.
> > >
> > > > I would say that this situation is a failure of defensive driving. If the sight lines to the side
> > > > of the road are so restricted that people walking at 3-4 miles per hour can be in your path
> > > > before you can stop, you’re driving too fast for the circumstances. One of Asimov’s
> > > > positronic robots (robot chauffeurs were featured in _The Naked Sun_) would not
> > > > be caught out like that.
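The "driving too fast for the circumstances" test is really just stopping-distance arithmetic: your reaction distance plus braking distance has to fit inside whatever sight distance the roadside allows. A back-of-envelope sketch, with assumed reaction time and braking figures (not from the article):

```python
import math

def max_safe_speed_mph(sight_distance_ft, reaction_s=1.5, decel_ftps2=15.0):
    """Largest speed (mph) at which reaction distance plus braking
    distance still fits inside the available sight distance.

    Solves v*t_r + v^2/(2a) = d for v (positive root).
    The 1.5 s reaction time and 15 ft/s^2 deceleration (~0.47 g)
    are illustrative assumptions.
    """
    a, t, d = decel_ftps2, reaction_s, sight_distance_ft
    # v^2/(2a) + t*v - d = 0  ->  v = a*(-t + sqrt(t^2 + 2d/a))
    v_ftps = a * (-t + math.sqrt(t * t + 2.0 * d / a))
    return v_ftps * 3600.0 / 5280.0  # ft/s -> mph

# A pedestrian at 3-4 mph covers roughly 5 ft/s; if brush hides the
# shoulder until 100 ft out, the "defensive" speed is quite low:
print(round(max_safe_speed_mph(100)))  # -> 25
```

With only 100 ft of sight distance, a truly defensive driver (or robot) is down around 25 mph, which is the point being made above.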
> > >
> > > OK, explore the limits of this. Just how careful do you have to be? What if
> > > the victim was committing suicide-by-car and throws himself under your
> > > wheels from behind a parked van? How about a child running out between
> > > cars?
> > >
> > > A cross-over accident, or a wrong-way driver?
> > >
> > > Driving cars entails creating, and accepting, a certain degree of risk. We,
> > > socially, accept that, while trying to minimize it.
> > >
> > > But to require *perfection* from a robot car before it can be used is
> > > silly. I do, however, want it to be as good as, or better than, the
> > > best human drivers.
> > >
> > > So the trolley problem returns: yes, a car can get into a situation where
> > > *someone* is going to be injured or killed, and the car can 'decide' who.
> > > This isn't unique to cars, but may well be the first familiar place where
> > > we have machines deciding.
> >
> >
> > In early days, it may be wise to limit the operation of the carbots
> > to roads without a lot of blind corners or other impairments to
> > whatever sensors or cameras the car's brain is processing. Given some
> > of the overdeveloped deer tracks that pass for "roads" winding through
> > the hills near where I live, and the sometimes pin-brained pedestrians
> > I encounter, not to mention bicyclists with the salmon-spawning fixation,
> > keeping to numbered state highways, which tend to be straighter, might
> > be a good idea.
> >
> > Then again, considering some of the nuts behind the wheel on the back
> > roads, maybe the autobots* would drive safer than some of those loons
> > do!
>
> I have a strong suspicion that the first place we'll see complete automation
> won't be city streets, but on interstates. Human drivers will take 18-wheelers
> to the on-ramp, then leave them to drive unmanned to the exit ramp, perhaps
> on the other side of the country.
>
> The economics are significant. You don't pay the driver, and big trucks get
> their best mpg around 40 mph, far lower than human drivers (who are paid by
> the mile) will accept. Since you can run the trucks without rest breaks
> (only fuel stops), it's about a wash for the total transit time, but much
> cheaper.
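The "about a wash" claim checks out on rough numbers. A sketch comparing a human-driven truck under simplified US hours-of-service limits against a slower but nonstop unmanned truck; the 11-hour driving limit, 10-hour rest, speeds, and distance are all assumed for illustration:

```python
def transit_hours(miles, mph, drive_limit_h=11.0, rest_h=10.0):
    """Door-to-door hours for a human-driven truck: drive at most
    `drive_limit_h` hours, then take a `rest_h` hour break, repeat.
    Fuel stops are ignored in both cases."""
    hours = 0.0
    remaining = miles
    while True:
        leg = min(remaining, drive_limit_h * mph)
        hours += leg / mph
        remaining -= leg
        if remaining <= 0:
            return hours
        hours += rest_h

def robot_hours(miles, mph=40.0):
    """An unmanned truck just drives: no rest breaks."""
    return miles / mph

# A roughly coast-to-coast haul of 2800 miles:
human = transit_hours(2800, 60.0)  # 60 mph with rest breaks -> ~86.7 h
robot = robot_hours(2800)          # 40 mph nonstop          -> 70.0 h
```

On these assumptions the slower robot actually arrives first, because the rest breaks dominate; even with less generous numbers the totals land close together, which is the "wash" in transit time.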

I just realized that your argument applies even better to freight trains
than to semis, yet we don't have autonomous freight trains.

Mark L. Fergerson