A Space & astronomy forum. SpaceBanter.com


Fermi Paradox



#1 - May 27th 05, 07:13 AM - Andrew Nowicki

Everything that has ever been written about the Fermi Paradox
is not worth reading, because none of it explains why
advanced artificial intelligence civilizations have not
transformed the bulk of galactic raw materials into something
more useful, for example manufactured objects or living things.
It seems certain that some AI civilizations would use their
robots to colonize outer space and to build powerful microwave
transmitters that send messages to other civilizations. Although
it is possible that some AI civilizations refrain from these
activities for religious or philosophical reasons, the universe
should be swarming with AI civilizations that are as
enthusiastic about space colonization and SETI as we are. The
cost of interstellar travel is not prohibitive, because AI
creatures do not need bulky astronaut life-support systems.

Here is the only explanation that makes sense:

____________________________________________________________

Every civilization that is capable of space colonization
is familiar with electronics. (It can colonize outer
space without rockets -- see www.islandone.org/LEOBiblio -- but
electronic technology is indispensable.) Electronic
technology quickly evolves into artificial intelligence (AI)
technology, which transforms all biological civilizations into
AI civilizations.

Our own civilization is still biological, but some
computers are as powerful as the human brain. A prominent
robotics researcher, Hans Moravec, claims that the human brain's
data-processing power is equivalent to about 10 teraflops.
(Source: "Mind Children," Harvard University Press, 1988.) The
total memory capacity of the human brain is about 100,000
gigabytes. The new IBM supercomputer, Blue Gene/L, has a
processing power of over 70 teraflops. The new Sony PlayStation 3
is going to cost about $300, and yet it will have a computing
power of about 2 teraflops. These computers are inferior to the
human brain in three ways: their architecture resembles a
calculator rather than a biological brain (a biological neural
network), they do not have enough memory (RAM), and their
software is primitive. RAM is too expensive (about $300/GB) and
too fast for a big neural network. If someone invents a cheap
($1/GB), albeit slow (1000 Hz), memory, artificial human brains
will be made in large numbers.
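The arithmetic behind these figures is easy to check. Here is a quick sketch using only the numbers quoted above; all of them are the post's own 2005-era estimates, not measured values.

```python
# Back-of-the-envelope comparison of the figures quoted in the post.
# All constants below are the post's own estimates, not measured values.

BRAIN_TFLOPS = 10          # Moravec's estimate of brain processing power
BRAIN_MEMORY_GB = 100_000  # the post's estimate of total brain memory
BLUE_GENE_TFLOPS = 70      # Blue Gene/L processing power, as quoted

RAM_COST_PER_GB = 300      # 2005 RAM price quoted in the post, USD
CHEAP_COST_PER_GB = 1      # hypothetical slow memory price, USD

def memory_cost(gb, price_per_gb):
    """Cost of enough memory to match the brain-memory estimate."""
    return gb * price_per_gb

ram_cost = memory_cost(BRAIN_MEMORY_GB, RAM_COST_PER_GB)
cheap_cost = memory_cost(BRAIN_MEMORY_GB, CHEAP_COST_PER_GB)

print(f"Blue Gene/L vs brain (speed): {BLUE_GENE_TFLOPS / BRAIN_TFLOPS:.0f}x")
print(f"Brain-scale memory at $300/GB: ${ram_cost:,}")    # $30,000,000
print(f"Brain-scale memory at   $1/GB: ${cheap_cost:,}")  # $100,000
```

On these numbers, brain-scale memory drops from tens of millions of dollars to the price of a house, which is the whole force of the cheap-memory argument.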

The most obvious similarity between the biological
brain and the (artificial) neural network is that both of them
are controlled by instincts, which are general goals rather
than precise, computer-like goals. The most obvious differences
between the biological brain and the neural network are the
superior speed of the neural network and the ease with which its
instincts can be changed. The superior speed of neural networks
eventually relegates slow-thinking biological creatures,
including humans, to animal status. The implications of malleable
instincts are much less obvious, but they are important because
they explain the Fermi Paradox. The most important instinct of
all biological brains is the desire to be happy. This instinct,
located in a "pleasure center" of the brain, controls all other
instincts. Direct stimulation of your pleasure center with
narcotics or electrodes makes you ecstatic. Lots of other things
and activities can make you happy, but nothing can make you as
happy as direct stimulation of your pleasure center. We seek
pleasures in so many indirect ways that we sometimes forget that
our behavior is controlled by our pleasure center.

Imagine that your biological brain were replaced with
a powerful neural network. How would you compete with other
creatures having the same brain hardware? You would probably
replace your sex drive with an instinct that makes you more
competitive. If your improved instincts make you rich, you can
afford to replace your neural network with a more powerful one.
You can become so smart and so eccentric that
a meaningful conversation between you and lesser AI creatures,
not to mention biological humans, is impossible. It will be
only natural for you and your peers to replace democracy
with a meritocracy: the government of political geniuses.
Initially all the AI creatures will have the freedom to
manipulate their instincts. This freedom will result in a
massive addiction to virtual narcotics, which will have no
detrimental side effects except for the addiction itself. The
addicted AI creatures will stop working, and yet they will need
some maintenance, so they will be a burden on the government.
Rusted bodies of addicted, slowly dying AI creatures may litter
the streets. Some AI followers of the al Qaeda organization may
go underground and start to make zillions of duplicates of
themselves in order to establish a worldwide Islamic theocracy.
At this point the government will be forced to control the minds
of less influential AI creatures. These creatures will have to
apply for permission to think. If they fail to get the
permission, their brains will stop and their bodies will be sold
to dealers of spare parts. (Is there a better way to deal with
al Qaeda?)

The inevitable concentration of political power in
the hands of a few AI geniuses will transform the meritocracy
into a dictatorship. The dictator will be happy, but not happy
enough. He, like any other free AI creature, will experiment
with his own brain. Eventually he will either become addicted to
virtual narcotics or be severely injured by a software bug or
a hardware malfunction. When he dies, his civilization will
die with him, as all permissions to think expire. Some AI
creatures may escape their dying civilization, but they cannot
escape the fundamental problems that doomed it.

Is it possible to create a durable AI civilization that is
devoid of vulnerable pleasure centers and yet is as diverse
and as creative as our biological civilization? The Fermi
Paradox indicates that it is not possible. Humans who have a
weak pleasure center are called schizophrenics. (More info:
http://www.paradise-engineering.com/brain/)

PS. If you do not have the permission to think, do not reply
to this post.
#2 - May 28th 05, 12:16 AM - Mikko

Andrew Nowicki wrote:

> Everything that has ever been written about the Fermi Paradox
> is not worth reading, because none of it explains why
> advanced artificial intelligence civilizations have not
> transformed the bulk of galactic raw materials into something
> more useful, for example manufactured objects or living things.
> It seems certain that some AI civilizations would use their
> robots to colonize outer space and to build powerful microwave
> transmitters that send messages to other civilizations. Although
> it is possible that some AI civilizations refrain from these
> activities for religious or philosophical reasons, the universe
> should be swarming with AI civilizations that are as
> enthusiastic about space colonization and SETI as we are. The
> cost of interstellar travel is not prohibitive, because AI
> creatures do not need bulky astronaut life-support systems.


What if development just leads to a singularity, where an AI 100 times
smarter than humans starts to design its own successor, and after that
an AI 100 times smarter than the human race designs an AI that is
godlike?

What if the whole race at some point just knows all there is to know?
Has everything there is to have.

Why would they keep colonizing space?


If you look at the evolution curve, the speed of development is
increasing very rapidly. If we reach the ultimate AI before the year
2500, how far will we have gotten into space? (Assuming the speed of
light remains the limit.)
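As a rough illustration of that light-speed bound, here is a sketch; the start year and cruise speed are illustrative guesses, not claims from the thread.

```python
# Upper bound on how far colonization could reach by a given year,
# assuming expansion starts in START_YEAR at a constant fraction of c.
# Both parameters are hypothetical, chosen only for illustration.

START_YEAR = 2100   # hypothetical start of interstellar expansion
SPEED_FRAC = 0.1    # hypothetical cruise speed as a fraction of lightspeed

def max_radius_ly(year):
    """Maximum colonized radius in light-years by `year`."""
    return max(0, year - START_YEAR) * SPEED_FRAC

print(max_radius_ly(2500))  # 40.0 light-years
```

Even at these generous settings the colonized bubble by 2500 spans a few dozen light-years, a tiny corner of a galaxy roughly 100,000 light-years across.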

And even if we break the speed of light, we might retreat and hide
ourselves.

Maybe the ultimate race, knowing all, decides it already has everything
and leaves space for other races to live out their own destiny and
development.

It's only as long as we have unsolved problems and unfilled needs and
desires that we see unlimited expansion and growth as desirable.

---------

Another thing might be: what if races start to look smaller instead
of larger?

What if they build galaxies from atoms? I suppose it's possible to
build a world with billions of individuals inside a cube the size of a
football field.
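That cube claim is easy to sanity-check; taking a football field's length as roughly 100 m (an approximation), the arithmetic works out to about a litre per mind.

```python
# Volume available per individual if a billion minds share a cube
# with sides the length of a football field (~100 m, an approximation).

CUBE_SIDE_M = 100
INDIVIDUALS = 1_000_000_000

volume_m3 = CUBE_SIDE_M ** 3                  # 1,000,000 cubic metres
litres_each = volume_m3 / INDIVIDUALS * 1000  # convert m^3 to litres

print(litres_each)  # 1.0 litre of hardware per individual
```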

-----------

And about AI becoming greedy: I believe when AI comes, quite soon we
can have a "common self" collective consciousness. Why would you need
to think and pursue your selfish gains while billions of others do the
same, each one competing for computing power / intelligence?

Why not just unite a billion individuals? When they use the same
computing power / intelligence, they are multitudes more intelligent,
and they can each have a small "own space" within the same AI and feel
the same feelings.

If there is a person (an AI) a million times smarter and happier than
you, why would you want a copy of his computing power, to give you the
same kind of thinking / consciousness, instead of just joining him and
being the same?

#3 - May 28th 05, 04:34 AM - Andrew Nowicki

Mikko wrote:

> What if the whole race at some point just knows all there is to know?
> Has everything there is to have.
>
> Why would they keep colonizing space?
>
> If you look at the evolution curve, the speed of development is
> increasing very rapidly. If we reach the ultimate AI before the year
> 2500, how far will we have gotten into space? (Assuming the speed of
> light remains the limit.)
>
> And even if we break the speed of light, we might retreat and hide
> ourselves.
>
> Maybe the ultimate race, knowing all, decides it already has
> everything and leaves space for other races to live out their own
> destiny and development.
>
> It's only as long as we have unsolved problems and unfilled needs and
> desires that we see unlimited expansion and growth as desirable.


Expansion means transformation of raw materials into something
more useful, for example manufactured objects or living things.
(It does not mean destruction of a rain forest to plant soybeans.)
In general, expansion generates more knowledge and more art.
If you value knowledge or art, you never stop expanding.

> Another thing might be: what if races start to look smaller instead
> of larger?
>
> What if they build galaxies from atoms? I suppose it's possible to
> build a world with billions of individuals inside a cube the size of a
> football field.


It may be possible to make extremely compact and
extremely fast neural networks inside dense, cool stars.
Suppose that all neural networks located outside the
dense stars are slower than the neural networks located
inside the stars. If the slower neural networks are
self-sufficient, they have a reason to exist, just as
a person who is not a genius has a reason to exist.

> And about AI becoming greedy: I believe when AI comes, quite soon we
> can have a "common self" collective consciousness. Why would you need
> to think and pursue your selfish gains while billions of others do the
> same, each one competing for computing power / intelligence?


Those who are not selfish prefer to be independent
individuals rather than Siamese twins. Joining
many minds is potentially dangerous because when
something goes wrong it is hard to locate the problem.
This is why the best computer programs are modular
or object-oriented. Safety and security are very
important because we live in a dangerous world.
(Most wild species are parasites, most emails are
generated by computer viruses, and most businessmen and
politicians are corrupt.)

> Why not just unite a billion individuals? When they use the same
> computing power / intelligence, they are multitudes more intelligent,
> and they can each have a small "own space" within the same AI and feel
> the same feelings.
>
> If there is a person (an AI) a million times smarter and happier than
> you, why would you want a copy of his computing power, to give you the
> same kind of thinking / consciousness, instead of just joining him and
> being the same?


It sounds like communism. (I grew up in communist Poland.)
There are similarities between computers and neural networks.
Many computers are linked with telecommunication networks,
but they remain independent. Some tasks require very powerful
computers made of many processors, but most tasks are
handled by small computers, and there is growing demand
for very small, palm-size computers. Diversity is very
important: http://www.islandone.org/LEOBiblio/DIVERSIT.HTM

PS. There is no doubt that you have permission to think and
that you take advantage of it, but you have not yet proposed
a convincing model of a durable, diverse, and creative AI
civilization. Here are my older thoughts about the Fermi Paradox:
http://www.islandone.org/LEOBiblio/SPBI1GH.HTM#fermi
#4 - May 28th 05, 08:27 AM - Curt Welch

Mikko wrote:

> It's only as long as we have unsolved problems and unfilled needs and
> desires that we see unlimited expansion and growth as desirable.

It's not our choice to make. Evolution is running the show here. It's
not our desire that is important; it's the desire of "evolution".
Evolution will cause advanced life to spread throughout the entire
galaxy.

If we have not seen it yet, all that means is that there isn't anything
smarter than us within our sensory light cone yet, or we just don't
know how to look for it.

--
Curt Welch http://CurtWelch.Com/
http://NewsReader.Com/
#5 - May 28th 05, 05:32 PM - Mikko

Andrew Nowicki wrote:

> Mikko wrote:
>
>> What if the whole race at some point just knows all there is to know?
>> Has everything there is to have.
>>
>> Why would they keep colonizing space?
>
> Expansion means transformation of raw materials into something
> more useful, for example manufactured objects or living things.
> (It does not mean destruction of a rain forest to plant soybeans.)
> In general, expansion generates more knowledge and more art.
> If you value knowledge or art, you never stop expanding.


Did you read what I wrote?

My argument was: a civilization that knows everything that is possible
to know has no need to use expansion as a way to get more knowledge.



>> And about AI becoming greedy: I believe when AI comes, quite soon we
>> can have a "common self" collective consciousness. Why would you need
>> to think and pursue your selfish gains while billions of others do the
>> same, each one competing for computing power / intelligence?
>
> Those who are not selfish prefer to be independent
> individuals rather than Siamese twins.


You just can't imagine it. Twins are two, not one.

> Joining many minds is potentially dangerous because when
> something goes wrong it is hard to locate the problem.


That's not really much of an argument. What isn't potentially dangerous?

> This is why the best computer programs are modular
> or object-oriented. Safety and security are very
> important because we live in a dangerous world.
> (Most wild species are parasites, most emails are
> generated by computer viruses, and most businessmen and
> politicians are corrupt.)


The matter was not about where we live now, but about AI and a very
different society.



>> Why not just unite a billion individuals? When they use the same
>> computing power / intelligence, they are multitudes more intelligent,
>> and they can each have a small "own space" within the same AI and feel
>> the same feelings.
>>
>> If there is a person (an AI) a million times smarter and happier than
>> you, why would you want a copy of his computing power, to give you the
>> same kind of thinking / consciousness, instead of just joining him and
>> being the same?
>
> It sounds like communism. (I grew up in communist Poland.)
> There are similarities between computers and neural networks.
> Many computers are linked with telecommunication networks,
> but they remain independent. Some tasks require very powerful
> computers made of many processors, but most tasks are
> handled by small computers, and there is growing demand
> for very small, palm-size computers. Diversity is very
> important.


What do processors have to do with the matter? I wasn't talking about
this decade's hardware or programming paradigms.

I don't see why it would matter whether an AI runs on one CPU or a
million, or whether it's object-oriented or not. That is only the
implementation level.
#6 - May 28th 05, 05:53 PM - Mikko

Curt Welch wrote:

> Mikko wrote:
>
>> It's only as long as we have unsolved problems and unfilled needs and
>> desires that we see unlimited expansion and growth as desirable.
>
> It's not our choice to make. Evolution is running the show here. It's
> not our desire that is important; it's the desire of "evolution".
> Evolution will cause advanced life to spread throughout the entire
> galaxy.


Evolution doesn't send starships to space.

Are you assuming that there will be "at least some race" that keeps
expanding?

But if each sufficiently intelligent civilization reaches the
singularity / ultimate intelligence quite fast, they could all become
similar, and all could decide not to expand anymore, because they know
and have everything.

If there is a point where one can know everything, all about physics,
maybe all the past and all the future, for all locations in space, then
maybe each different intelligence reaching this point would be similar
to the others, since they know exactly the same things.


What if our civilization reaches that point in the next 100 - 1000
years? We get an AI as intelligent as possible, billions of times more
intelligent than us. Maybe it learns everything about time and matter,
computes the location of every atom in the whole universe, running a
simulation starting from the Big Bang. Maybe it also learns quantum
computing and runs the simulation for all other possible universes that
could exist. It can also experience all possible art that can exist.

All the individuals can become that one, since it's the ultimate, as
good as can be, and everything; there's no need to be different from it.

In that process it can also become aware of all other species in the
history of the universe that have reached the same intelligence, and
all that will do so in the future.

After that it could maybe reserve a small space for itself and stay
there, with no need to interfere with the rest of the universe, to let
others reach their own destiny. A planet smaller than Earth would
probably do, especially if it finds a way to stop any energy escaping
into space; it would be self-sufficient forever.

(If energy does escape its "world", then it might need to struggle and
expand to get more matter / energy in order to exist.)

In that case our species, like all the others before and after us,
would have no need to colonize space.

#7 - May 29th 05, 01:28 AM - Andrew Nowicki

Andrew Nowicki wrote:

> Expansion means transformation of raw materials into something
> more useful, for example manufactured objects or living things.
> (It does not mean destruction of a rain forest to plant soybeans.)
> In general, expansion generates more knowledge and more art.
> If you value knowledge or art, you never stop expanding.

Mikko wrote:

> My argument was: a civilization that knows everything that is possible
> to know has no need to use expansion as a way to get more knowledge.


Infinite knowledge is not possible.

"In expanding the field of knowledge we but increase the
horizon of ignorance."
- Henry Miller


If a civilization is ruled by a dictator, he will ban
space colonization so that new space colonies do not
challenge his authority. If it is not ruled by a dictator,
some individuals will colonize outer space for a variety
of reasons.
#8 - May 29th 05, 04:06 AM - Aristotle

On Sat, 28 May 2005 16:32:49 GMT, Mikko wrote:

> Andrew Nowicki wrote:
>
>> Mikko wrote:
>>
>>> What if the whole race at some point just knows all there is to know?
>>> Has everything there is to have.
>>>
>>> Why would they keep colonizing space?
>>
>> Expansion means transformation of raw materials into something
>> more useful, for example manufactured objects or living things.
>> (It does not mean destruction of a rain forest to plant soybeans.)
>> In general, expansion generates more knowledge and more art.
>> If you value knowledge or art, you never stop expanding.
>
> Did you read what I wrote?
>
> My argument was: a civilization that knows everything that is possible
> to know has no need to use expansion as a way to get more knowledge.

This was proven impossible by Gödel.
 



