July 15th 16, 10:38 PM, posted to sci.astro.amateur
Chris L Peterson
Mars with 3 different scopes - comparison

On Fri, 15 Jul 2016 12:49:25 -0700 (PDT), wrote:

You. You only focus on extreme and unlikely cases, while ignoring the
typical situation where the automated car totally avoids an accident
that a person couldn't. As usual, you refuse to look at the entire
picture.


Humans handle the routine stuff rather well, on the whole. It's the weird, freaky, uncommon things that a human will always handle better than a machine.


Actually, humans almost always fail in the latter case. People don't
reason during the seconds before an accident. They act reflexively.
What happens is largely a matter of chance. A machine will do vastly
better.

I have stated before that most auto fatalities are self-inflicted, and that most humans avoid them. Do you disagree with that?


Yes. Most auto fatalities are the result of some kind of human error
(as likely someone else's as the person killed's), and most humans fail
to avoid accidents (although most will not be killed).

Ok, so once we have these crazy robots to deal with, where should we stand in order to avoid their unpredictable behavior?


Some experts think this could be an issue; others do not.


We won't really know why the machines will do what they do, or what they will actually do; therefore any "expert" who disagrees with me is no expert.


You keep telling yourself that.

Again, stupid child runs out into street, automated car can't stop, swerves and hits which of two motorcyclists?


A swerve is a somewhat uncontrolled maneuver. The car will probably
avoid the child completely and hit nothing.


That is NOT the scenario. There is no time to brake, but there is time to swerve.

If it must hit something, it
calculates the equations of motion and figures out which hit is likely
to cause the least harm.


Again, you need to consider the question "The least harm to WHOM?"


Least harm. To all parties involved.
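
To make "least harm to all parties" concrete, here is a toy sketch in
Python. The maneuvers and harm numbers are invented for illustration;
this is not any real vehicle's planner, just the shape of the
calculation: score every feasible maneuver by its expected harm to each
party, then pick the minimum total.

    # Toy illustration: invented maneuvers and harm estimates, not a
    # real autonomous-vehicle planner.
    # Each candidate maneuver maps the parties involved to an estimated
    # expected harm (collision probability times severity, 0..1).
    maneuvers = {
        "brake_straight": {"child": 0.9, "rider_A": 0.0, "rider_B": 0.0, "occupants": 0.1},
        "swerve_left":    {"child": 0.0, "rider_A": 0.6, "rider_B": 0.0, "occupants": 0.2},
        "swerve_right":   {"child": 0.0, "rider_A": 0.0, "rider_B": 0.4, "occupants": 0.2},
    }

    def total_harm(harms):
        # Sum expected harm over all parties involved.
        return sum(harms.values())

    # Choose the maneuver with the least total harm to everyone.
    best = min(maneuvers, key=lambda m: total_harm(maneuvers[m]))
    print(best, total_harm(maneuvers[best]))

The whole "least harm to WHOM" argument lives in how total_harm weights
the parties; the sketch only shows that, once a weighting is agreed on,
the car's choice reduces to minimizing a number.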