A Space & astronomy forum. SpaceBanter.com


Dangers of Global Warming



 
 
  #121  
Old October 21st 15, 12:21 AM posted to sci.astro.amateur
[email protected]
external usenet poster
 
Posts: 9,472
Default Dangers of Global Warming

On Sunday, October 18, 2015 at 11:57:11 AM UTC-4, Quadibloc wrote:
On Sunday, October 18, 2015 at 6:47:58 AM UTC-6, wsne... wrote:

The "tunnel problem" is the scenario in which a driver has only a split second
to decide whether to hit someone (a child, usually) who has suddenly ventured
out into the road and into the car's path OR to swerve to avoid them but then
hit the wall surrounding the tunnel's entrance.


Ah.

Currently, we have self-driving trucks that require a driver at the wheel to
operate legally on the roads.

If we were dealing with Isaac Asimov's positronic brains, there would be
problems, but those, of course, are strictly science fiction.

We have elevators in our buildings. One way self-driving cars could be
accommodated, even if the technology were primitive, would be to turn the roads into something like elevator shafts, inaccessible to pedestrians.


Most of today's conventional vehicles do not even NEED a road, just a reasonably flat surface with enough traction. A purely driverless car would not be practical off road.

A Jetsons-style flying car could perhaps do what driverless ground cars are imagined to be able to do. Eventually.


A computer would indeed be hard put to distinguish a small human child from,
say, a deer - or an adult miscreant, for that matter.


The basic problem is that there are too many ambiguities on the ground for the computer to handle in the split second time frame available. The more you have to control the environment to enable the computer to cope, the less the car looks like personal transportation.

There are technologies that could be applied to cars with human drivers. Following distance alarms/overrides, speed alarms, sensors that detect vehicles on a collision course, red light/stop sign detectors, built in breathalyzers for habitual drunks, etc, that can guide a human driver or keep him or her from driving the car, are a few examples.
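Several of the driver-assist aids listed above amount to simple threshold checks on sensor data. As a rough sketch (the function name, units, and threshold below are invented for illustration, not a description of any real product), a following-distance alarm might compare time-to-collision against a limit:

```python
# Hypothetical sketch of one aid listed above: a following-distance
# alarm based on time-to-collision (TTC). Thresholds and names are
# illustrative assumptions only.

def following_distance_alarm(gap_m, closing_speed_ms, ttc_threshold_s=2.0):
    """Return True if time-to-collision with the vehicle ahead drops
    below the alarm threshold (gap in meters, closing speed in m/s)."""
    if closing_speed_ms <= 0:          # not closing on the lead vehicle
        return False
    ttc = gap_m / closing_speed_ms     # seconds until contact at current rates
    return ttc < ttc_threshold_s

# 30 m gap while closing at 20 m/s gives TTC = 1.5 s: alarm.
print(following_distance_alarm(30.0, 20.0))   # True
# 60 m gap closing at 10 m/s gives TTC = 6 s: no alarm.
print(following_distance_alarm(60.0, 10.0))   # False
```

The same pattern (measure, compare, warn or override) fits the speed alarms and red-light detectors mentioned above: the human stays in control, and the computer only flags or blocks clearly dangerous states.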
  #124  
Old October 21st 15, 01:10 AM posted to sci.astro.amateur
[email protected]

On Tuesday, October 20, 2015 at 7:02:11 PM UTC-4, Chris L Peterson wrote:
On Tue, 20 Oct 2015 15:46:49 -0700 (PDT), wsnell01 wrote:

On Sunday, October 18, 2015 at 10:07:08 AM UTC-4, Chris L Peterson wrote:
On Sun, 18 Oct 2015 05:47:55 -0700 (PDT), wsnell01 wrote:

The "tunnel problem" is the scenario in which a driver has only a split second to decide whether to hit someone (a child, usually) who has suddenly ventured out into the road and into the car's path OR to swerve to avoid them but then hit the wall surrounding the tunnel's entrance.

Why do you think a car can't deal with that?


The CAR will simply be doing what a programmer or a bureaucrat decided it should do. No matter what the car does in the scenario, it will be doing the wrong thing.


I think not. What the car will be doing is acting in a consistent way,
which is better than what a driver would do.


Incorrect. There is no reason for consistent action. In fact, the actions taken must fit the particular situation.



There are different
approaches, and all come down to a repeatable algorithm, which can be
under the control of the car designer or under the control of the
driver.


The car designer should not be making such decisions.


That's not necessarily the case.


Incorrect.

But I also offered the option that
the driver could provide input.


You offered nothing actually, but be that as it may, the "input" would have to be decided upon ahead of time and might not be appropriate for each situation.

There's no reason that an algorithm
can't accept a driver's advance directive with regards to such
scenarios (e.g. always maximize driver safety vs. reduce personal
safety).


The owner could tell the computer ahead of time to always run over anything that suddenly appears in the road, but a jury would then be inclined to find the car's owner responsible and negligent if it was a child that was run over.

If the owner set the car to veer off into the wall, then the families of innocent passengers would have a case.


If the driver can configure the car ahead of time to do one thing or the other then he becomes liable for the outcome.


The driver is liable for his decisions now. The liability is actually
reduced if the car makes the decision and follows a lawful procedure.


Incorrect. See my comments above.

The car can still respond better and more accurately than the
driver, so it can achieve a better outcome.


First, one would need to justify one outcome as being better than the other.


There's no reliable way to do that right now, without robot drivers.


And there's no way to do that WITH robot drivers either.


Even a 50/50 random choice by the car's programming is still unethical.


I wouldn't say so. Indeed, in the absence of input to shift the
decision making process to something else, I can't think of anything
more ethical than flipping a coin.


Actually, about 2/3 of people surveyed would prefer NOT to hit the wall, i.e., run over the child instead. So a coin flip would be unethical.
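For what it's worth, the coin flip being argued about above need not be the whole algorithm. A minimal sketch (the action names and harm scores are invented for illustration, not anyone's actual programming) would confine randomness to exact ties, after every strictly-less-harmful option has already won out:

```python
# Illustrative sketch only: a coin flip confined to exact ties, used
# after every option with strictly lower estimated harm has already
# won. The harm scores below are invented numbers.
import random

def choose_action(options, rng):
    """options maps action name -> estimated harm (lower is better).
    Pick the least harmful action; flip a coin only among exact ties."""
    least_harm = min(options.values())
    tied = [a for a, h in options.items() if h == least_harm]
    return rng.choice(tied)  # random only when genuinely indifferent

rng = random.Random(0)
# With a clear winner, no coin is ever flipped:
print(choose_action({"brake": 0.2, "swerve": 0.9}, rng))  # brake
```

Under this sketch the 50/50 choice only ever occurs when the harm estimates are literally indistinguishable, which is the narrow case where a coin flip is being defended.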


A variation on the problem would be the decision by a computer-controlled car to hit a helmeted motorcyclist instead of the helmet-less rider next to him, assuming that no third course of action is available in the avoidance of a more serious accident. (This assumes that the computer is sufficiently advanced to detect that one rider has a helmet and the other does not.)

Again, why is the decision of the car, which is more rigorously
determined, worse than that of the driver?


A driver will not be making such cold calculations. The driver will likely not choose to hit the helmeted rider on purpose, the way the car might be programmed to do.


The driver would not operate rationally at all. And overall, the
result would be even more casualties. The car would at least attempt
to optimize the outcome to minimize harm. Few drivers are capable of
doing that.


Intentionally hitting the innocent helmeted rider is NOT optimizing the outcome, period. If the un-helmeted rider gets killed due to not wearing a helmet, then so be it.



In the tunnel scenario, the human driver will likely just hit the brakes and hope the child isn't hit. That is the ethical thing to do.


I disagree. A car is much more capable than a driver of taking
effective evasive action.


"Effective" will have to be defined first. The computer CAN react faster, but can its programming actually be ethical?

If the scenario is such that braking will
have a possibility of success, it can still do so.


As part of its "decision" not to swerve, braking is likely to occur anyway.


But it can also
consider the impact on its own passengers and on surrounding vehicles,
in a way that most drivers cannot.


In ways that will perhaps turn out to be unethical.

Ethics aren't really involved in
split second decisions.


Ethics ARE involved in programming the behavior of a self-driving car.
  #125  
Old October 21st 15, 01:22 AM posted to sci.astro.amateur
[email protected]

On Tuesday, October 20, 2015 at 7:58:20 PM UTC-4, Chris L Peterson wrote:
On Tue, 20 Oct 2015 15:56:20 -0700 (PDT), wsnell01 wrote:

I didn't compare buses to personal transportation. I was only talking
about cars. Vehicles that directly carry people between two points,
based on where they want to go, without intervening stops for other
passengers.


I can go use my car any time I wish. I need stop only a few minutes to refuel it. It is available 24/7/365 except during short, scheduled maintenance periods.


You don't need to own a car to have similar convenience.


I would have to wait for a cab and the trip would be more expensive.
My bicycle is slower and can't carry much.
Walking is even worse.


There is no requirement that these be owned by the user.


If one does not already own or have contractual control over the vehicle then permission must be sought and granted to legally use it. A rented vehicle is arguably NOT personal transportation in many situations. Neither would a taxi be.


I disagree. Both represent transportation reasonably called
"personal".


The process of renting a car is time-consuming, expensive and inconvenient. If you are on a trip or are using the rental as a replacement for your car which needs repair, then it starts to look more personal. Otherwise, usually not.

Taxi cabs are not immediately available, and part of the fare you pay has to go to the driver, increasing the expense. They are slightly more convenient than a bus.
  #126  
Old October 21st 15, 01:27 AM posted to sci.astro.amateur
Chris L Peterson

On Tue, 20 Oct 2015 17:10:25 -0700 (PDT), wrote:

I think not. What the car will be doing is acting in a consistent way,
which is better than what a driver would do.


Incorrect. There is no reason for consistent action. In fact, the actions taken must fit the particular situation.


I disagree. Consistent action doesn't mean the car always does the
same thing, it means that given the same set of inputs, it always does
the same thing. And that's a better result than a human driver can
provide.

But I also offered the option that
the driver could provide input.


You offered nothing actually, but be that as it may, the "input" would have to be decided upon ahead of time and might not be appropriate for each situation.


The input the driver is providing is how to weigh possible responses
to specific scenarios. The driver is able to do this while thinking
clearly and not trying to handle an emergency. That's why I see this
as a more suitable solution.
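The "advance directive" described above could be sketched as a single weight the driver sets while calm, which the car then applies to each evasive option in the moment. All names and numbers below are invented for illustration; this is not how any actual vehicle is programmed:

```python
# Sketch of a driver "advance directive": one pre-set weight trades off
# occupant safety against harm to people outside the car. Option names
# and harm estimates are invented.

def score(harm_to_occupants, harm_to_others, self_preference):
    """Lower is better. self_preference in [0, 1]: 1.0 weighs only
    occupant safety, 0.0 weighs only harm to people outside the car."""
    return (self_preference * harm_to_occupants
            + (1.0 - self_preference) * harm_to_others)

def choose(options, self_preference):
    # options: {action: (harm_to_occupants, harm_to_others)}
    return min(options, key=lambda a: score(*options[a], self_preference))

tunnel = {"brake_straight": (0.1, 0.8),   # most risk falls on the child
          "swerve_to_wall": (0.7, 0.0)}   # most risk falls on occupants
print(choose(tunnel, self_preference=0.9))  # brake_straight
print(choose(tunnel, self_preference=0.1))  # swerve_to_wall
```

The point of the sketch is that the directive is decided in advance and applied consistently, while the in-the-moment harm estimates still depend on the particular situation the sensors report.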

There's no reason that an algorithm
can't accept a driver's advance directive with regards to such
scenarios (e.g. always maximize driver safety vs. reduce personal
safety).


The owner could tell the computer ahead of time to always run over anything that suddenly appears in the road, but a jury would then be inclined to find the car's owner responsible and negligent if it was a child that was run over.


Assuming that such a choice could be made, which seems unlikely. The
driver input would be more along the lines of how much risk of
personal injury they are willing to assume in exchange for reducing
the harm to somebody in the street where they do not have
right-of-way. People are not currently considered legally negligent if
they hit somebody in the street who should not be there, unless it is
quite certain they could safely avoid that person. And a robotic car
is better able to take safe evasive action than a human driver.

First, one would need to justify one outcome as being better than the other.


There's no reliable way to do that right now, without robot drivers.


And there's no way to do that WITH robot drivers either.


The difference is, the algorithm is public, which means we can have a
formal understanding of how vehicles should respond in different
situations. If court cases change that, the algorithms can change in
response. This is something that only works with robotic cars, not
with human drivers. So we come closer to an ideal of optimized
outcomes.

Even a 50/50 random choice by the car's programming is still unethical.


I wouldn't say so. Indeed, in the absence of input to shift the
decision making process to something else, I can't think of anything
more ethical than flipping a coin.


Actually, about 2/3 of people surveyed would prefer NOT to hit the wall, i.e., run over the child instead. So a coin flip would be unethical.


You don't understand what the word "unethical" means.

The driver would not operate rationally at all. And overall, the
result would be even more casualties. The car would at least attempt
to optimize the outcome to minimize harm. Few drivers are capable of
doing that.


Intentionally hitting the innocent helmeted rider is NOT optimizing the outcome, period.


That depends on the scenario. If I had to hit one of the riders, I'd
aim for a teenager over a middle-aged person, for instance, on the
grounds that the older person is likely to be "worth" more, in the
sense that society has made a greater investment in him. Which one was
wearing a helmet would not factor into my decision making. I would
consider my decision both ethical and optimal.

In the tunnel scenario, the human driver will likely just hit the brakes and hope the child isn't hit. That is the ethical thing to do.


I disagree. A car is much more capable than a driver of taking
effective evasive action.


"Effective" will have to be defined first. The computer CAN react faster, but can its programming actually be ethical?


It doesn't need to be. It just needs to not be unethical. Many
decisions- even those with life-or-death consequences- do not involve
ethical choices in practice.

Ethics aren't really involved in
split second decisions.


Ethics ARE involved in programming the behavior of a self-driving car.


Only minimally.
  #127  
Old October 21st 15, 01:35 AM posted to sci.astro.amateur
[email protected]

On Tuesday, October 20, 2015 at 8:02:01 PM UTC-4, Chris L Peterson wrote:
On Tue, 20 Oct 2015 16:05:33 -0700 (PDT), wsnell01 wrote:

I had suggested that in the original series the two Captain Kirks were independent entities, unaware even of each other's existence. Clearly, I am justified in concluding that the purely fictional transporter device kills its passengers every time it is used.


But you have not demonstrated that it matters. I'd argue that the
person who went into the transporter would have died anyway. That we
die continuously. The "you" of now is not the same "you" as a few
milliseconds ago. That "you" is dead.


That's nonsense.

If a person goes into a transporter (of the copy, transmit,
reconstruct type)


Keep in mind, peterson, that transporters are pure fantasy.

Is your transporter destroying the original? (Hint: it disappeared from the transporter room.)

the person that comes out the other end is still the
person who went in, since the only thing that defines a person is his
memories.


Incorrect. It is certainly possible to be aware without the ability to form memories.



  #128  
Old October 21st 15, 01:40 AM posted to sci.astro.amateur
Chris L Peterson

On Tue, 20 Oct 2015 17:22:22 -0700 (PDT), wrote:

I can go use my car any time I wish. I need stop only a few minutes to refuel it. It is available 24/7/365 except during short, scheduled maintenance periods.


You don't need to own a car to have similar convenience.


I would have to wait for a cab and the trip would be more expensive.


You have such a poor imagination. How about a fleet of circulating cars?
How about clusters of cars waiting to be called up, available within
just a few minutes? These models have been tested in some cities, and
work well. What about a car that belongs to a service bureau of some
sort, which you have access to in your garage, but which could be a
different car next time you use it?

BTW, for people who live in large cities, cabs are almost always a LOT
less expensive than owning a car, even if used frequently. That's why
so many people who live in cities choose not to own a car. They use
cabs for day-to-day getting around, and they rent a car when they need
to go someplace distant or go on a vacation. Saves a lot of money.

The process of renting a car is time-consuming, expensive and inconvenient.


Again, only because you lack the imagination to recognize other
models, such as clicking a button in a phone app and having a rental
car at your door within a few minutes.
  #130  
Old October 21st 15, 01:56 AM posted to sci.astro.amateur
[email protected]

On Tuesday, October 20, 2015 at 8:27:10 PM UTC-4, Chris L Peterson wrote:
On Tue, 20 Oct 2015 17:10:25 -0700 (PDT), wsnell01 wrote:

I think not. What the car will be doing is acting in a consistent way,
which is better than what a driver would do.


Incorrect. There is no reason for consistent action. In fact, the actions taken must fit the particular situation.


I disagree. Consistent action doesn't mean the car always does the
same thing, it means that given the same set of inputs, it always does
the same thing.


That's a tautology, peterson.

And that's a better result than a human driver can
provide.


No, the car will always be making the wrong "decision" no matter which decision it makes.


But I also offered the option that
the driver could provide input.


You offered nothing actually, but be that as it may, the "input" would have to be decided upon ahead of time and might not be appropriate for each situation.


The input the driver is providing is how to weigh possible responses
to specific scenarios. The driver is able to do this while thinking
clearly and not trying to handle an emergency. That's why I see this
as a more suitable solution.


In the actual emergency there will always be factors that were considered by neither the owner nor the designer.


There's no reason that an algorithm
can't accept a driver's advance directive with regards to such
scenarios (e.g. always maximize driver safety vs. reduce personal
safety).


The owner could tell the computer ahead of time to always run over anything that suddenly appears in the road, but a jury would then be inclined to find the car's owner responsible and negligent if it was a child that was run over.


Assuming that such a choice could be made, which seems unlikely. The
driver input would be more along the lines of how much risk of
personal injury they are willing to assume in exchange for reducing
the harm to somebody in the street where they do not have
right-of-way. People are not currently considered legally negligent if
they hit somebody in the street who should not be there, unless it is
quite certain they could safely avoid that person. And a robotic car
is better able to take safe evasive action than a human driver.


However, in this scenario the car's evasive action will kill the person riding in the car.

First, one would need to justify one outcome as being better than the other.

There's no reliable way to do that right now, without robot drivers.


And there's no way to do that WITH robot drivers either.


The difference is, the algorithm is public, which means we can have a
formal understanding of how vehicles should respond in different
situations. If court cases change that, the algorithms can change in
response. This is something that only works with robotic cars, not
with human drivers. So we come closer to an ideal of optimized
outcomes.


The only "optimized outcome" here would be if no one was killed or injured. That is not the scenario posed in the tunnel problem.

Even a 50/50 random choice by the car's programming is still unethical.

I wouldn't say so. Indeed, in the absence of input to shift the
decision making process to something else, I can't think of anything
more ethical than flipping a coin.


Actually, about 2/3 of people surveyed would prefer NOT to hit the wall, i.e., run over the child instead. So a coin flip would be unethical.


You don't understand what the word "unethical" means.


The coin-flip method would kill half of the hypothetical passengers, whereas only about a third of them would choose that outcome. That is clearly unethical.



The driver would not operate rationally at all. And overall, the
result would be even more casualties. The car would at least attempt
to optimize the outcome to minimize harm. Few drivers are capable of
doing that.


Intentionally hitting the innocent helmeted rider is NOT optimizing the outcome, period.


That depends on the scenario. If I had to hit one of the riders, I'd
aim for a teenager over a middle-aged person, for instance, on the
grounds that the older person is likely to be "worth" more, in the
sense that society has made a greater investment in him.


That is a bizarre notion, peterson. But then you are a liberal.

Which one was
wearing a helmet would not factor into my decision making. I would
consider my decision both ethical and optimal.


Whether you would use the helmet as a criterion is irrelevant to this discussion.


In the tunnel scenario, the human driver will likely just hit the brakes and hope the child isn't hit. That is the ethical thing to do.

I disagree. A car is much more capable than a driver of taking
effective evasive action.


"Effective" will have to be defined first. The computer CAN react faster, but can its programming actually be ethical?


It doesn't need to be. It just needs to not be unethical. Many
decisions- even those with life-or-death consequences- do not involve
ethical choices in practice.


You used the word "effective"; now you must define what you mean by it.

Ethics aren't really involved in
split second decisions.


Ethics ARE involved in programming the behavior of a self-driving car.


Only minimally.


No, totally.
 



