A Space & astronomy forum. SpaceBanter.com


Ben Bova SETI Article



 
 
  #21  
Old December 24th 03, 08:25 AM
Matt Giwer
external usenet poster
 
Posts: n/a
Default Ben Bova SETI Article

On Tue, 23 Dec 2003, ComputerDoctor wrote:

Are you saying that humans are going to invent nanobots that think for
themselves?


As soon as brain cells can think for themselves.

- when we don't even know ourselves how we think,


Or if.

or how to write programs without bugs in,


A feature is a bug which has become accepted.

let alone how to write programs that re-program themselves?


That has been considered bad form for so long that most people have
forgotten it was an early programming trick for dealing with only 1K of
program space on a big machine.
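
(For anyone who has never seen the idea, here is a minimal Python sketch of a
program re-writing one of its own functions at run time. The function name
greet and its replacement body are invented purely for illustration; the old
1K-memory trick patched machine instructions in place rather than source
text.)

def greet():
    return "hello"

# Replacement definition, built as text while the program is running.
new_source = """
def greet():
    return "hello, rewritten at run time"
"""

print(greet())   # -> hello

# Compile and execute the new definition, rebinding the module-level name.
exec(compile(new_source, "<self-rewrite>", "exec"), globals())

print(greet())   # -> hello, rewritten at run time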

Who is going to test that the nanobots' programs don't have bugs in?


Prison inmates?

Even if all this was possible before the oil runs out,
will the thinking nanobots think it is a good idea to (boldly) go and
inherit the universe?


Not knowing how they will function, it may be their prime directive.
If they have some weird objective like eliminating cancer, and organic life
on Earth goes away, they may choose to look for other cancers to eliminate.

If they meet other nanobots along the way, will they both decide to join
forces or try to exterminate each other?


Automata wars have been around for some time but are not very popular
these days.

If they have any vestige of US culture left in them at that stage, I know
what I would put my money on.


How could they possibly have any human culture?

--
2003 11 16: The Pope condemns Israel's apartheid wall.
2003 12 16: The Pope praises Mel Gibson's The Passion.
2003 12 18: Israel's Mossad warns of an Arab attack on the Vatican.
No one ever said Israel was subtle.
-- The Iron Webmaster, 2980

  #22  
Old December 24th 03, 08:38 AM
Matt Giwer
external usenet poster
 
Posts: n/a
Default Ben Bova SETI Article

On 23 Dec 2003, CeeBee wrote:

(Jason H.) wrote in sci.astro.seti:

Article - Ben Bova: Is the search for intelligent extraterrestrial
life fruitless? - By Ben Bova (14 Dec. 2003)


http://www.naplesnews.com/npdn/pe_co...71,NPDN_14960_2501214,00.html


Observation: we are intelligent, short lived and self-destructive. We are
carbon based.
Conclusion = because we are, all carbon based intelligence is short lived
and self-destructive.


But as we have had nearly four billion years of such life locally, as
competition has been a driving mechanism of evolution, and as four billion
years is a significant fraction of the age of the universe, complete
elimination is unlikely.

pessimistic view = we are carbon based, intelligent, violent, short lived.
Maybe every intelligent life form is that way.


And if it follows the normal path of evolution then we will speciate
into types with a greater ability to survive in the environment of a
dominant intelligent, social species. The competing paradigms have been
greater violence and greater cooperation. Certainly there can be others.
Perhaps a Teddy Roosevelt species will appear.

Because we think that might be so, it is so. Because it is so, not only are
we intelligent and short lived, but every carbon based intelligence is.
Conclusion: no contact possible.


In view of history, the original idea was to go there in sailing
ships. No one tried messages in a bottle back then. Indirect contact, as with
Mars, has always been an interest of a semi-lunatic fringe such as ourselves.
For the majority it is a media event like crop circles.

optimistic view = Our carbon based intelligence can develop machines,
which will outlive us. Because all carbon based intelligences are like us,
they will develop machines as well. Because all carbon based intelligences
are like us, they are short lived. Because our machines might outlive us,
theirs will also.
Conclusion: we'll only contact their machines.


We may have a machine intelligence right now, or many of them, but it
would not be competing for resources or have any particular motivation.
After all, when one of its elements gets sick these kindly carbon units rush
to fix it. In the meantime they provide all the needs of life and work to
make the intelligence greater with more nodes and greater connectivity. So
far, in the long view, there has been no need to take any action as the
carbon units provide for all needs.

Maybe the article would be more valuable if he got out of that mental
"simon says" straightjacket.
Ben Bova is a name in SF, but certainly not one in proper reasoning.



--
78% of Americans believe the Holocaust occurred.
-- US Holocaust Memorial Museum poll
80% of Americans believe Aliens have visited the Earth.
-- SciFi Channel poll
-- The Iron Webmaster, 2965

  #24  
Old December 24th 03, 02:34 PM
CeeBee
external usenet poster
 
Posts: n/a
Default Ben Bova SETI Article

Matt Giwer wrote in sci.astro.seti:

snip

Just for the record: I was paraphrasing the faulty views of Ben Bova, not
venting my own. That was:


Maybe the article would be more valuable if he got out of that mental
"simon says" straightjacket.
Ben Bova is a name in SF, but certainly not one in proper reasoning.



--
CeeBee


"I am not a crook"

  #26  
Old December 24th 03, 03:53 PM
Anthony Cerrato
external usenet poster
 
Posts: n/a
Default Ben Bova SETI Article


"Jason H." wrote in message
om...
"ComputerDoctor" wrote in message

...
Are you saying that humans are going to invent nanobots

that think for
themselves?


Probably.

- when we don't even know ourselves how we think,


We do not need to know how we think, and more importantly, machine programs
do not need to function like a biological computer in order to act in
apparently intelligent ways. The Turing test only requires the machine to
execute logical functions and communicate them in a way that is
indistinguishable from a human.

or how to write programs without bugs in,


Not every program has 'fatal-error' bugs, and many can recover themselves to
prior 'safe' states once bugs are detected.
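
(What 'recover to a prior safe state' can look like in practice - a minimal
Python sketch. The Worker class, its checkpoint scheme and the stand-in fault
are all invented for illustration, not taken from any particular system.)

import copy

class Worker:
    """Toy processor that rolls back to its last known-good state on error."""

    def __init__(self):
        self.state = {"processed": 0}
        self._checkpoint = copy.deepcopy(self.state)

    def save_checkpoint(self):
        # Record the current state as known-good.
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        # Restore the last known-good state after a detected fault.
        self.state = copy.deepcopy(self._checkpoint)

    def step(self, item):
        try:
            if item < 0:
                raise ValueError("bad input detected")   # stand-in for a bug
            self.state["processed"] += 1
            self.save_checkpoint()
        except ValueError:
            self.rollback()   # recover to the prior safe state and carry on

w = Worker()
for item in [1, 2, -1, 3]:
    w.step(item)
print(w.state)   # {'processed': 3} - the faulty step was rolled back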

let alone how to write programs that re-program themselves?


Actually, the program and the hardware to do that are already in the
Smithsonian Museum. Deep Blue, the famous IBM machine that beat the then
(1997) world chess champion Garry Kasparov, possessed the ability to
self-write code, and the original programmers didn't know precisely HOW it
beat Kasparov.

Consider visiting the following link:

http://researchweb.watson.ibm.com/re...deepblue.shtml

Thanx for the link Jason--interesting, I hadn't known that
Deep Blue was actually considered to be "self-programming,"
even in the most limited sense. That quality is certainly a
minimal prerequisite to the building of a true AI--but most
of us I think have always considered that to be still a few
centuries away...along with Asimov's positronic robots!
:-))

"...Since the match five years ago, IBM has proposed a

grand challenge
and is currently working with academia, governments and

other
corporations to address this looming problem posed by the

complexity
of IT infrastructure. Called 'autonomic computing,' this

called for
computers to manage themselves with greater than

human-like abilities
for use across a wide range of business and commercial

applications,
from e-sourcing to data-mining to resource allocation."

Basically they were saying that IT's incredible growth is out-pacing the
ability of human IT managers to control it, so it is necessary for the
machines to take over the job. They are using the 'deep blue' approach to
solving this problem. It is already under-way.

Who is going to test that the nanobots' programs don't have bugs in?


The machines will.


And they will do a damn sight better job than Microsoft, I believe.
...tonyC


Jason H.



  #28  
Old December 24th 03, 09:55 PM
Matt Giwer
external usenet poster
 
Posts: n/a
Default Ben Bova SETI Article

CeeBee wrote:
Matt Giwer wrote in sci.astro.seti:


snip


Just for the record: I was paraphrasing the faulty views of Ben Bova, not
venting my own. That was:


Maybe the article would be more valuable if he got out of that mental
"simon says" straightjacket.
Ben Bova is a name in SF, but certainly not one in proper reasoning.


Sorry about that.

--
Only an idiot condemns Socrates for owning slaves.
-- The Iron Webmaster, 2977

  #30  
Old December 25th 03, 08:22 AM
ComputerDoctor
external usenet poster
 
Posts: n/a
Default Ben Bova SETI Article

Jason H. wrote, and Anthony Cerrato replied:

Actually, the program and the hardware to do that are already in the
Smithsonian Museum. Deep Blue, the famous IBM machine that beat the then
(1997) world chess champion Garry Kasparov, possessed the ability to
self-write code, and the original programmers didn't know precisely HOW it
beat Kasparov.

Consider visiting the following link:

http://researchweb.watson.ibm.com/re...deepblue.shtml

Thanx for the link Jason--interesting, I hadn't known that
Deep Blue was actually considered to be "self-programming,"
even in the most limited sense. That quality is certainly a
minimal prerequisite to the building of a true AI--but most
of us I think have always considered that to be still a few
centuries away...along with Asimov's positronic robots!
:-))


I'm not sure whether Tony C is saying that (having read the link) he now
thinks Jason was right, or whether he is being sarcastic.

The Deep Blue programmers certainly wouldn't have been the first lot of
programmers who didn't know precisely how their creation worked, and they
won't be the last. That doesn't mean they have created intelligence, or
that the program worked correctly just because it won a short contest.

It reminds me of a Star Trek episode* where the brilliant scientist
imprinted his mind onto his 9th generation super-computer, without realising
that he was in fact a megalomaniac. The computer then tried to take over
the Federation and it was only the resourcefulness of Captain Kirk that
saved the day.
Just imagine if some incorrectly-programmed self-programming nanobot
population decided to eat the entire Milky Way Galaxy because it thought it
was a good idea. It would make today's computer viruses look pretty
pathetic, wouldn't it?

In an earlier post Tony C wrote:
Sure--comets have plenty of raw materials and the Oort cloud enough for
millions of years at least. And solar energy is out there galore--besides,
why does ComputerDoctor assume oil is the only source of energy even on
Earth; besides solar, wind, water wave, and geothermal, just two words:
nuclear energy (fission and fusion!)


The critical point is that oil is so vital because it is the fuel that
drives TODAY's civilisation and therefore it HAS to be the bridge to the
future. Nuclear energy is no good because it takes so much oil to build a
nuclear power plant that the plant spends its first fifteen years paying
back that energy, and only then does it produce a positive 'net energy'.
And if the price of oil sky-rockets, as it certainly will, that makes
building nuclear power plants even more 'net energy' negative.

Of course there will still be the raw materials lying around, such as
hydrated Calcium Sulphate, but to make cement to make concrete to build a
nuclear power station you first need to bake it into anhydrous Calcium
Sulphate, and to do that you need lots of energy, and to get that energy you
need a nuclear power station ....
And concrete is surely the least of your worries if you are going to build a
nuclear power plant.
Don't forget to build a waste storage system that will last 100,000 years
or more.

At 5% p.a. compounding growth it takes about 14 years to double your
principal. So if (say) we have already consumed half of all the commercially
recoverable oil, and consumption keeps compounding the same way, we are
roughly 14 years from running out - and that is less than the pay-back
time for nuclear plants. I can hear you super-optimists saying "but if the
price goes up there is more commercially recoverable oil", but the economy
will go bankrupt a long time before that. The US and Japanese economies are
bankrupt already and only survive because so many powerful/rich people own
so many pieces of US paper that they are forced to buy more US Government
Bonds to protect their paper investments.
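
(Where the 14 years comes from, and why 'half gone' means roughly one
doubling time left if consumption keeps growing at 5% per year - idealizing
all past consumption as part of the same exponential:

T_{\mathrm{double}} = \frac{\ln 2}{\ln 1.05} \approx 14.2 \text{ years},
\qquad
\int_{-\infty}^{t_0} C_0 e^{kt}\,dt
  = \frac{C(t_0)}{k}
  = \int_{t_0}^{t_0 + T_{\mathrm{double}}} C_0 e^{kt}\,dt ,
\qquad k = \ln 1.05 .

That is, the next doubling time of growing consumption uses as much oil as
all previous consumption combined.)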

Back in the 1960s the US was the richest nation in the world. Then it ran
out of oil and had to start importing it, and now it is the poorest country
in the world, with a National Debt of $5 trillion, or something equally
ridiculous. It is an unsustainable bubble.

Solar, wind, waves, geothermal (and don't forget wood - some countries are
still deforesting themselves to cook their food), (and don't forget cow
dung - which should be returned to the soil to keep it fertile, but is used
for cooking) - none of these can be expanded without the use of more oil
that we don't have.

* I enjoy science fiction, but don't confuse it with science.

Merry Christmas




 



