On 20 Jan, 09:01, Jonathan Silverlight
wrote:
In message, Idgarad writes
I am no mathematician, but perhaps someone could help
an old fool with a question about mapping the galaxy
using self-replicating robots.
Assuming we send 1 probe initially:
It makes more sense to send out a few to all the nearby
stars creating an initial sphere around us.
Given that the probe travels at half the speed of light
in a fixed direction, has a 5% chance per light year of
running into a planet it can use to replicate (assume it
succeeds the first time, as we would aim it at
something), takes 'A' years to build the tools needed to
launch new probes, and launches 'B' probes in opposing
directions with a 'C' failure rate among the new probes:
how long does it take, and how many probes need to be
built and launched, to map the galaxy?
I assume that failure rates would be less than 50% on replicated probes.
First, your speed is over-optimistic; a more realistic
number would be 0.25% of the speed of light, or about
four centuries per light year. Stars are around 5 light
years apart in this region, so average trips would be
perhaps 2000 years. No individual part will function for
that long, so you need a design using redundant
sub-systems, and it needs to be able to self-repair.
That almost certainly means the ability to manufacture
new probes is essential, so the self-replicating
approach is the most logical one, though there is a
problem with obtaining fuel: it would need to be based
on a suitable radioisotope, which may be a scarce
resource.
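Just to show where those figures come from, a quick
back-of-the-envelope in Python (the 0.25% of c and the
5 light year spacing are simply the assumptions above):

# Travel-time arithmetic for the assumptions above.
speed_c = 0.0025                     # probe speed as a fraction of light speed
years_per_ly = 1.0 / speed_c         # ~400 years to cover one light year
trip_ly = 5.0                        # typical spacing between stars round here
trip_years = trip_ly * years_per_ly  # ~2000 years per hop

print(f"{years_per_ly:.0f} years per light year")
print(f"{trip_years:.0f} years for a {trip_ly:.0f} light year hop")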
Given that approach, the failure rate per stellar
journey can easily be less than 1%, and would probably
be dominated by small errors in navigation which mean
the probe doesn't hit the target system accurately
enough to be captured. Every replicated probe needs to
be very carefully targeted on the next star to have any
chance of success: it needs to arrive within a few light
seconds over distances of several light years, or about
30 parts per billion, even allowing for a small amount
of course correction as the probe nears the target.
Launching randomly and hoping for a hit is not viable.
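To put a number on that targeting tolerance, taking 'a
few light seconds' as three and 'several light years' as
three, purely as illustrative values:

# Required navigation accuracy as a fraction of the distance flown.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 light seconds per light year
miss_light_seconds = 3.0                # allowed miss distance (illustrative)
range_light_years = 3.0                 # distance to the target (illustrative)

fraction = miss_light_seconds / (range_light_years * SECONDS_PER_YEAR)
print(f"allowed error ~ {fraction * 1e9:.0f} parts per billion")

which lands at around 30 parts per billion, as quoted.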
The probes would create an expanding sphere of visited
stars, so if each arrival despatched, say, three probes,
then each star farther out would have at least two sent
to it, even allowing for the increasing stellar density
towards the core and the gradually increasing area of
the sphere's surface. A small rate of lost probes would
therefore have no impact.
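A crude way to see why a branching factor of three is
enough in the long run is to compare the probes heading
outward each generation with the stars in the shell they
are aimed at. The stellar density and hop length below
are rough assumptions of mine, not measured values:

import math

DENSITY = 0.004   # stars per cubic light year (rough local figure, an assumption)
HOP = 5.0         # light years per generation, as above
BRANCH = 3        # probes despatched per arrival, losses ignored here

probes = 1
for gen in range(1, 11):
    probes *= BRANCH                          # probes aimed at the next shell out
    r_in, r_out = gen * HOP, (gen + 1) * HOP  # the shell those probes must cover
    shell_stars = DENSITY * 4 / 3 * math.pi * (r_out**3 - r_in**3)
    print(f"gen {gen:2d}: {probes:6d} probes for ~{shell_stars:5.0f} stars "
          f"(~{probes / shell_stars:5.1f} per star)")

The geometric growth soon swamps the roughly r-squared
growth of the shell, so the redundancy per target star
keeps improving; the first few shells come up short with
a fixed factor of three, which is where the initial
sphere of probes, or a higher build count per arrival,
makes up the difference.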
On arrival, the probe would need to use local materials
to build a comms system to report back to its source,
and could then also communicate with other nearby stars
to build a redundant network. The easiest supply would
be from low-gravity asteroids or comets, and the same
raw material would be used for the next generation of
probes.
Assuming it takes only a few decades to construct and
launch the next three probes, the time from one
generation to the next will be dominated by the travel
time, so the mean rate at which the radius of the
sphere increases would be around the probe speed. Near
the core, where the stars are closer together, the
build time becomes more significant, but that could be
compensated for by increasing the number of probes
manufactured by each arrival. The ratio of build time
to travel time is also likely to be smaller than the
uncertainty in our estimate of what speed will be
achievable with the technology of 2000 years from now,
since each generation can be built to the latest design
communicated from Earth in just a few years.
Taking an arbitrary diameter for the galaxy of 100,000
light years, at 0.25% of the speed of light it would take
around 40 million years to spread the network over it all.
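For completeness, the effect of the build time on the
expansion rate, and the 40 million year figure, both
drop straight out of the same assumptions (the 30 year
build time below is just 'a few decades' made concrete):

SPEED_C = 0.0025       # probe speed as a fraction of c
HOP_LY = 5.0           # light years per generation
BUILD_YEARS = 30.0     # build time per generation (an assumption)
GALAXY_LY = 100_000    # arbitrary diameter used above

travel_years = HOP_LY / SPEED_C                      # 2000 years per hop
eff_speed_c = HOP_LY / (travel_years + BUILD_YEARS)  # expansion rate of the sphere
slowdown = 100 * (SPEED_C - eff_speed_c) / SPEED_C

print(f"effective expansion speed {eff_speed_c:.5f} c "
      f"({slowdown:.1f}% below the probe speed)")
print(f"time to span {GALAXY_LY:,} ly: "
      f"{GALAXY_LY / eff_speed_c / 1e6:.1f} million years")

so the build time costs only a percent or two, and the
answer stays at roughly 40 million years.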
I'm not qualified to comment, but I'll note that a paper on this topic
has just been published!
There's a news item at
http://space.newscientist.com/articl...o-find-us.html
and the paper is at
http://arxiv.org/abs/astro-ph?papernum=0701238
I'm not qualified on much of the topic but I know a bit
about comms networks. The paper assumes 8 probes, each
carrying a fixed number of 8 smaller subprobes, and has
them return with the data instead of communicating it
back, which is a rather naive approach, to put it
mildly. I'm tempted to write
something a bit more credible if only to start a debate on
what /are/ reasonable assumptions for such modelling.
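For what it's worth, the sort of toy model I have in
mind would use Idgarad's A, B and C parameters with the
numbers above. Everything in the sketch below is an
assumption made for illustration, in particular the
uniform stellar density, which badly overcounts the real
galaxy; none of it is a result:

import math

# Toy generational model of a self-replicating probe network.
SPEED_C     = 0.0025    # probe speed, fraction of light speed
HOP_LY      = 5.0       # typical hop to the next target star (light years)
BUILD_YEARS = 30.0      # 'A': years to build and launch the next batch
BRANCH      = 3         # 'B': probes despatched per successful arrival
FAIL_RATE   = 0.01      # 'C': fraction of probes lost per hop
DENSITY     = 0.004     # stars per cubic light year, uniform (a gross simplification)
SPAN_LY     = 100_000   # distance the network must cover, as above

def stars_in_shell(r_in, r_out):
    return DENSITY * 4 / 3 * math.pi * (r_out**3 - r_in**3)

years, radius, in_flight, built = 0.0, 0.0, 1.0, 1.0
while radius < SPAN_LY:
    years += HOP_LY / SPEED_C + BUILD_YEARS    # one hop plus one build cycle
    radius += HOP_LY
    arrivals = in_flight * (1.0 - FAIL_RATE)   # expected survivors, not a random draw
    # Each arrival builds BRANCH probes, but there is no point launching more
    # than BRANCH per star in the next shell, so cap the fleet there.
    wanted = BRANCH * stars_in_shell(radius, radius + HOP_LY)
    in_flight = min(arrivals * BRANCH, wanted)
    built += in_flight

print(f"time to span {SPAN_LY:,} ly: {years / 1e6:.1f} million years")
print(f"probes built (very rough order of magnitude): {built:.1e}")

Run as it stands it lands on the same 40 million years
or so, with the probe count set almost entirely by the
assumed star count rather than by the failure rate C,
which rather supports the point that the assumptions are
what need debating.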
George