"ComputerDoctor" wrote in message ...
> Are you saying that humans are going to invent nanobots that think for
> themselves?
Probably.
> - when we don't even know ourselves how we think,
We do not need to know how we think, and more importantly, machine
programs do not need to function like a biological computer in order
to act in apparently intelligent ways. The Turing test only requires
the machine to execute logical functions and communicate the results
in a way that is indistinguishable from a human.
> or how to write programs without bugs in,
Not every program has 'fatal-error' bugs, and many can recover
themselves to a prior 'safe' state once a bug is detected.
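Here is a rough Python sketch of that "roll back to a safe state" idea: the program snapshots its state after every successful step, and when a step blows up it restores the last snapshot instead of dying. All names here are invented for illustration.

```python
import copy

class CheckpointedProgram:
    """Toy rollback recovery: keep a known-good snapshot of program
    state and restore it whenever a step raises an error."""

    def __init__(self, state):
        self.state = state
        self._safe = copy.deepcopy(state)  # last known 'safe' state

    def run_step(self, update):
        try:
            update(self.state)                      # may contain a bug
            self._safe = copy.deepcopy(self.state)  # step ok: advance checkpoint
        except Exception:
            self.state = copy.deepcopy(self._safe)  # bug detected: roll back

def buggy_update(state):
    state["count"] += 1
    if state["count"] == 3:
        raise RuntimeError("simulated bug")

prog = CheckpointedProgram({"count": 0})
for _ in range(5):
    prog.run_step(buggy_update)
print(prog.state["count"])  # the program survived its own bug
```

The program never crashes: every time the simulated bug fires, it recovers to the last state that completed cleanly.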
> let alone how to write programs that re-program themselves?
Actually, the program and the hardware to do that are already in the
Smithsonian Museum. Deep Blue, the famous IBM machine that beat the
then (1997) world chess champion Garry Kasparov, possessed the ability
to self-write code, and the original programmers didn't know precisely
HOW it beat Kasparov.
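To make "a program that re-programs itself" concrete, here is a deliberately tiny Python sketch: the program generates new source code for one of its own functions as a string, compiles it with exec(), and swaps the new version in when it scores better. This is a hedged illustration of the general idea, not a claim about how Deep Blue actually worked internally.

```python
def make_scorer(weight):
    # The program writes new source code for its own scoring function...
    src = f"def score(x):\n    return {weight} * x\n"
    namespace = {}
    exec(src, namespace)   # ...and compiles that generated code at run time
    return namespace["score"]

score = make_scorer(1)     # initial, human-written version
best = score(10)

for w in (2, 3):
    candidate = make_scorer(w)   # self-generated replacement
    if candidate(10) > best:     # keep it only if it performs better
        best = candidate(10)
        score = candidate

print(score(10))  # the program now runs code it wrote itself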
Consider visiting the following link
http://researchweb.watson.ibm.com/re...deepblue.shtml
"...Since the match five years ago, IBM has proposed a grand challenge
and is currently working with academia, governments and other
corporations to address this looming problem posed by the complexity
of IT infrastructure. Called 'autonomic computing,' this called for
computers to manage themselves with greater than human-like abilities
for use across a wide range of business and commercial applications,
from e-sourcing to data-mining to resource allocation."
Basically, they were saying that IT's incredible growth is out-pacing
the ability of human IT managers to control it, so it is necessary for
the machines to take over the job. They are using the Deep Blue
approach to solving this problem. It is already under way.
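The core of "autonomic computing" is a control loop: the system monitors itself, decides whether it is in trouble, and acts without a human in the loop. A minimal sketch, with invented server names and thresholds:

```python
def autonomic_step(servers, load):
    """Self-managing resource allocation: keep utilization between
    50% and 80% by provisioning or releasing servers automatically."""
    utilization = load / servers
    if utilization > 0.8:                    # overloaded: add capacity
        servers += 1
    elif utilization < 0.5 and servers > 1:  # wasteful: release capacity
        servers -= 1
    return servers

servers = 4
for load in (3.5, 3.5, 1.0, 1.0, 1.0):  # measured demand over time
    servers = autonomic_step(servers, load)
print(servers)  # capacity tracked demand with no human decision
```

Real autonomic systems are vastly more sophisticated, but the shape is the same: monitor, analyze, act, repeat.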
> Who is going to test that the nanobots' programs don't have bugs in?
The machines will.
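Machines already do some of this: a test harness can hammer a program with thousands of generated inputs and check an invariant automatically, a crude cousin of what is now called property-based testing. A sketch, with a hypothetical function under test:

```python
import random

def clamp(x, lo, hi):
    """Hypothetical function under test: force x into the range [lo, hi]."""
    return max(lo, min(x, hi))

def machine_test(fn, trials=1000):
    """A machine 'tester': generate random inputs, check the invariant
    that the result always lands inside the requested range."""
    rng = random.Random(0)  # seeded so the run is reproducible
    for _ in range(trials):
        x = rng.uniform(-100, 100)
        lo, hi = sorted((rng.uniform(-50, 50), rng.uniform(-50, 50)))
        result = fn(x, lo, hi)
        assert lo <= result <= hi, f"bug found at {(x, lo, hi)} -> {result}"
    return trials

print(machine_test(clamp))  # number of generated test cases that passed
```

No human wrote any of those thousand test cases; the machine generated and checked them all.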
Jason H.