Ethics & The Future of Brain Research



 
 
#11
February 22nd 13, 06:53 PM, posted to sci.space.history
David Spain

On 2/21/2013 11:05 AM, Immortalist wrote:

DeadUsenet What Can I Say?


Not what can you say, but what can you do.

For example: try not to post material that is off-topic for
specific newsgroups.

Dave

#12
February 22nd 13, 09:11 PM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
casey

On Feb 23, 2:00 am, Dare wrote:
[...]

Is a feeling of identity or self related to experiencing Time?
What happens to "self" if there is no time...


Yes, I think the self does involve "experiencing time".

I suspect it also requires access to short-term memory.

Without the neural mechanisms to "experience time", that is, memory
of what just happened and prediction of what will happen
next, there would be no "observer".

My view on blindsight, where someone can post a letter
into a vertical or horizontal slot with no conscious
awareness of the slot's orientation, is that it involves a
memory-free visual reflex loop. Turn off the light and
they fail the task. Turn off the light for a normal subject
and they can remember (are conscious of) the orientation
of the slot and thus can continue to post the letter.







#13
February 22nd 13, 10:31 PM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
Howard Brazee

On Thu, 21 Feb 2013 18:15:10 -0800 (PST), bob haller
wrote:

sooner or later a computer will mimic a human brain, and likely
surpass it.

it's not a matter of if but when


Computers can already surpass human brains - for some tasks.

Remember when SF had stories about robots being chauffeurs? That
doesn't seem likely now - why build a humanoid car driver?

If we want to "surpass" the human brain, it is to achieve some
cognitive goal. There is no reason to assume such a design will be
humanoid at all.

--
Anybody who agrees with one side all of the time or disagrees with the
other side all of the time is equally guilty of letting others do
their thinking for them.
#14
February 22nd 13, 10:49 PM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
casey

On Feb 23, 8:31 am, Howard Brazee wrote:
On Thu, 21 Feb 2013 18:15:10 -0800 (PST), bob haller
wrote:

sooner or later a computer will mimic a human brain, and likely
surpass it.


it's not a matter of if but when


Computers can already surpass human brains - for some tasks.


It is the human tasks they cannot match that you need to look at.

We can build a machine to add up numbers very quickly but it is
we who work out what adding is, how to do it, and what to use
it for. The machine just goes through the motions without any
understanding or purpose behind its actions.



Remember when SF had stories about robots being chauffeurs? That
doesn't seem likely now - why build a humanoid car driver?

If we want to "surpass" the human brain, it is to achieve some
cognitive goal. There is no reason to assume such a design will be
humanoid at all.

--
Anybody who agrees with one side all of the time or disagrees with the
other side all of the time is equally guilty of letting others do
their thinking for them.



#15
February 22nd 13, 11:29 PM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
[email protected]

On Feb 22, 8:57 am, Immortalist wrote:
On Feb 22, 7:00 am, Dare wrote:

On 2/21/2013 7:36 PM, Immortalist wrote:


On Feb 21, 4:29 pm, Howard Brazee wrote:
On Thu, 21 Feb 2013 12:45:13 -0800 (PST), casey


wrote:
Something that would be good for science to answer.


If you found yourself in heaven with a heavenly body,
how would you know if it was you who lived that
physical life on Earth or if you simply had the
memories of that now-dead human?


If you assume that the five-year-old version of you was "you", despite
your being very, very different now, then we need to determine what
"you" means.


If the self is a series of clones throughout life, then there may be
no "version" of your self but instead just a "range" of neural
activities that are a sense of your self.


I concur on the (implied potential) range-of-activities meme. The
series-of-clones thing I disagree with; it implies that all cells (as
mentioned elsethread) in a tissue (and by implication the whole body)
get "turned over" every so many years *all at the same time*, which is
unreasonable.

We are about process, not state. A so-called state of mind is not a
photograph, it's a three-panel cartoon. Perception, "filter",
reaction. "filter" = particular set of "neural activities" in that
range.

Once those activities go
outside the range of your -selfing- you are not cloned during those
successions of neural events.


Well, a clone is (loosely speaking) an exact replica, but me right
now is not an exact replica of me ten, twenty etc. years ago. What
continues as "I"? I think it's just a particular constellation of
"things I'm good at" and "things I'm bad at" due to brain structure/
disposition(s) from genetics modulo diet, environment, socialization,
yada yada.

I agree with my pal Mahipal: "me" always changes.

As for "activities outside the range of [one's] -selfing-, I refer
you to Lovecraft's _At The Mountains Of Madness_.

from A Treatise of Human Nature Book I, Part 4, Section 6


SECTION VI: OF PERSONAL IDENTITY


There are some philosophers who imagine we are every moment intimately
conscious of what we call our self; that we feel its existence and its
continuance in existence; and are certain, beyond the evidence of a
demonstration, both of its perfect identity and simplicity. The
strongest sensation, the most violent passion, say they, instead of
distracting us from this view, only fix it the more intensely, and
make us consider their influence on self either by their pain or
pleasure. To attempt a further proof of this were to weaken its
evidence; since no proof can be derived from any fact of which we are
so intimately conscious; nor is there any thing of which we can be
certain if we doubt of this.


Unluckily all these positive assertions are contrary to that very
experience which is pleaded for them; nor have we any idea of self,
after the manner it is here explained. For, from what impression could
this idea be derived? This question it is impossible to answer without
a manifest contradiction and absurdity; and yet it is a question which
must necessarily be answered, if we would have the idea of self pass
for clear and intelligible. It must be some one impression that gives
rise to every real idea. But self or person is not any one impression,
but that to which our several impressions and ideas are supposed to
have a reference. If any impression gives rise to the idea of self,
that impression must continue invariably the same, through the whole
course of our lives; since self is supposed to exist after that
manner. But there is no impression constant and invariable. Pain and
pleasure, grief and joy, passions and sensations succeed each other,
and never all exist at the same time. It cannot therefore be from any
of these impressions, or from any other, that the idea of self is
derived; and consequently there is no such idea.


Well yeah, self-examination on the fly is difficult.

That's why we study other people.

From: http://www.wutsamada.com/alma/modern/humepid.htm


Is a feeling of identity or self related to experiencing Time?
What happens to "self" if there is no time...


Zen adepts claim that self vanishes without time-bound experience.

The second part of your question addresses issues relating to
consciousness and continuity. Can the activities of the brain that are
the self, if stopped, be started again? Would it be only a clone that
believes it is you, or have we always just been a bunch of clones that
produce this feeling of being one me? But as to this continuity dilemma
you raise: there are too many things and processes happening to give
some simple answer. Why would we believe that consciousness can or
cannot be stopped and then started in the first place? If the heart
stops, tissues die, but when we sleep consciousness seems to stop, so
simple comparisons will probably fail us. Religion and philosophy seem
to be the culprits that make us invent such ideas.


In sleep consciousness is altered; it does not stop. Look up lucid
dreaming and sleep learning for starters.

What if consciousness is full of stops and starts? Again time seems to
be necessary if consciousness is the same thing as activities in a
brain.


Consciousness seems to me to be more like a conversation between
different specialized wetware modules of the brain. It can be a
roaring rock-party babble or a low, indistinct mutter. If nobody at
a party has anything to say to anyone else, there's a lull, but not
really a stop. Same with our "selves".

...In a sparse distributed network, memory is a type of
perception. ... The act of remembering and the act of perceiving both
detect a pattern in a very large choice of possible patterns. ... When
we remember, we recreate the act of the original perception; that is,
we relocate the pattern by a process similar to the one we used to
perceive the pattern originally.


The stored patterns change over time as the physical substrate
they're "written" on (cerebral neurons and their interconnections)
changes.

Could all parts of our experience and our reasoning abilities be
very similar to a type of perception? The act of remembering and the
act of perceiving both detect a pattern in a very large choice of
possible patterns, and when we remember we recreate the act of the
original perception; that is, we relocate the pattern by a process
similar to the one we used to perceive the pattern originally,
triggering areas of the brain our senses would, in essence bypassing
the senses. If so, then it seems possible that most of our
experience works in a similar way.


Yes, of course. Some modules perceive sensory input, some only
perceive the output of other modules.
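
For what it's worth, the "relocating a pattern" idea above can be
sketched in code. Below is a rough, hypothetical Python toy along
the lines of Kanerva-style sparse distributed memory (not anything
described in this thread): a pattern is stored by nudging the
randomly placed locations near it, and recall pools the locations
near a noisy cue, so "remembering" literally re-runs a
perception-like matching step. All names and parameters here are
illustrative.

import numpy as np

rng = np.random.default_rng(0)
N_BITS = 256        # width of each binary pattern
N_LOCS = 2000       # number of fixed "hard locations"
RADIUS = 112        # Hamming radius that activates a location

# Random hard locations play the role of fixed neural addresses.
addresses = rng.integers(0, 2, size=(N_LOCS, N_BITS))
counters = np.zeros((N_LOCS, N_BITS), dtype=int)

def active(cue):
    # Locations whose address lies within RADIUS bits of the cue.
    dist = np.count_nonzero(addresses != cue, axis=1)
    return dist <= RADIUS

def write(pattern):
    # Storing = nudging every active location toward the pattern:
    # +1 for each 1-bit, -1 for each 0-bit.
    counters[active(pattern)] += 2 * pattern - 1

def read(cue):
    # Recall = pooling the active locations and thresholding,
    # i.e. re-detecting the pattern much as perception would.
    summed = counters[active(cue)].sum(axis=0)
    return (summed > 0).astype(int)

pattern = rng.integers(0, 2, size=N_BITS)
write(pattern)

noisy = pattern.copy()
flipped = rng.choice(N_BITS, size=20, replace=False)  # corrupt 20 bits
noisy[flipped] ^= 1

recalled = read(noisy)
print("bits wrong in cue:   ", np.count_nonzero(noisy != pattern))
print("bits wrong in recall:", np.count_nonzero(recalled != pattern))

Run it and the recall typically comes back with fewer wrong bits
than the corrupted cue that triggered it, which is the "memory as a
type of perception" point in miniature.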

Benjamin Libet famously suggested it takes about half a second for the
brain to get through all the processing steps needed to settle our
view of the moment just past. But this immediately raises the question
of why we don't notice a lag. How does anyone ever manage to hit a
tennis ball or drive a car? The answer is that we anticipate. We also
have a level of preconscious habit which "intercepts" stuff before it
reaches a conscious level of awareness. And yet it really does take
something like half a second to develop a fully conscious experience
of life. You can read about the cycle-of-processing story and its
controversies in the following....


The implication is that the whole brain "get(s) through all the
processing steps" at the same time. That's unreasonable since
different parts of the brain process information at different rates;
there's no computer-analogous "system clock" for organic brains.

If there is one thing that seems certain about consciousness it is
that it is immediate. We are aware of life's passing parade of
sensations — and of our own thoughts, feelings and impulses — at the
instant they happen. Yet as soon as it is accepted that the mind is
the product of processes taking place within the brain, we introduce
the possibility of delay. It must take time for nerve traffic to
travel from the sense organs to the mapping areas of the brain.


It also takes different amounts of time for each module to process
its allotment of data.

Worse, some data goes through more than one module, in series and/or
parallel, introducing more delays.

It must then take more time for thoughts and feelings about these
messages to propagate through the brain's maze of circuitry. If the
processing is complex — as it certainly must be in humans — then these
delays ought to be measurable, and even noticeable with careful
introspection.


It's worse; the delays can be negative. There's experimental
evidence that we start to perform physical responses based on sensory
inputs *before* the parts of the brain allegedly responsible for
mediating decisions do their thing. Clearly all our attempts at
modeling the mind are flawed.


Mark L. Fergerson
#16
February 23rd 13, 12:11 AM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
Howard Brazee

On Fri, 22 Feb 2013 13:49:43 -0800 (PST), casey
wrote:

On Feb 23, 8:31 am, Howard Brazee wrote:
On Thu, 21 Feb 2013 18:15:10 -0800 (PST), bob haller
wrote:

sooner or later a computer will mimic a human brain, and likely
surpass it.


it's not a matter of if but when


Computers can already surpass human brains - for some tasks.


It is the human tasks they cannot match that you need to look at.


Why? As I expounded below, being humanoid is only useful if that is
our goal.

We can build a machine to add up numbers very quickly but it is
we who work out what adding is, how to do it, and what to use
it for. The machine just goes through the motions without any
understanding or purpose behind its actions.



Remember when SF had stories about robots being chauffeurs? That
doesn't seem likely now - why build a humanoid car driver?

If we want to "surpass" the human brain, it is to achieve some
cognitive goal. There is no reason to assume such a design will be
humanoid at all.

--
Anybody who agrees with one side all of the time or disagrees with the
other side all of the time is equally guilty of letting others do
their thinking for them.



--
Anybody who agrees with one side all of the time or disagrees with the
other side all of the time is equally guilty of letting others do
their thinking for them.
#17
February 23rd 13, 12:22 AM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
casey

On Feb 23, 10:11 am, Howard Brazee wrote:
On Fri, 22 Feb 2013 13:49:43 -0800 (PST), casey

wrote:
On Feb 23, 8:31 am, Howard Brazee wrote:
On Thu, 21 Feb 2013 18:15:10 -0800 (PST), bob haller
wrote:


sooner or later a computer will mimic a human brain, and likely
surpass it.


it's not a matter of if but when


Computers can already surpass human brains - for some tasks.


It is the human tasks they cannot match that you need to look at.


Why? As I expounded below, being humanoid is only useful if that is
our goal.


Well, in order to do the things we do a machine needs a sensory
input and a motor output, regardless of how that is implemented,
but I was really talking about cognitive tasks - such as the
ability to invent and use mathematics. Machines that add are
not smarter, only faster. And a good memory is not the same as
the ability to make use of that memory in an intelligent way.

In other words, computers do not surpass human brains when it
comes to cognitive tasks; they *mindlessly* carry out tasks
invented by humans. We use them because they do it faster, not
because they do it smarter.




Anybody who agrees with one side all of the time or disagrees with the
other side all of the time is equally guilty of letting others do
their thinking for them.


#18
February 23rd 13, 04:15 AM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
Howard Brazee

On Fri, 22 Feb 2013 15:22:32 -0800 (PST), casey
wrote:


It is the human tasks they cannot match that you need to look at.


Why? As I expounded below, being humanoid is only useful if that is
our goal.


Well, in order to do the things we do a machine needs a sensory
input and a motor output, regardless of how that is implemented,
but I was really talking about cognitive tasks - such as the
ability to invent and use mathematics. Machines that add are
not smarter, only faster. And a good memory is not the same as
the ability to make use of that memory in an intelligent way.

In other words, computers do not surpass human brains when it
comes to cognitive tasks; they *mindlessly* carry out tasks
invented by humans. We use them because they do it faster, not
because they do it smarter.


Which doesn't mean those cognitive tasks have to be modeled after
human thinking. Just as smart cars don't have to be driven by
humanoid robots, and computers playing chess don't have to think the
way people think, there is no reason to suppose that the optimal
thinking machine finding the answer to life, the universe, and
everything has to be modeled on human thinking.

Unless humans have the ultimate perfect brains for all thinking tasks,
which obviously we don't.

--
Anybody who agrees with one side all of the time or disagrees with the
other side all of the time is equally guilty of letting others do
their thinking for them.
#19
February 23rd 13, 05:42 AM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
casey

On Feb 23, 2:15 pm, Howard Brazee wrote:
On Fri, 22 Feb 2013 15:22:32 -0800 (PST), casey

wrote:

It is the human tasks they cannot match that you need to look at.


Why? As I expounded below, being humanoid is only useful if that is
our goal.



Well, in order to do the things we do a machine needs a sensory
input and a motor output, regardless of how that is implemented,
but I was really talking about cognitive tasks - such as the
ability to invent and use mathematics. Machines that add are
not smarter, only faster. And a good memory is not the same as
the ability to make use of that memory in an intelligent way.


In other words, computers do not surpass human brains when it
comes to cognitive tasks; they *mindlessly* carry out tasks
invented by humans. We use them because they do it faster, not
because they do it smarter.



Which doesn't mean those cognitive tasks have to be modeled
after human thinking.


But the programs would have to be equivalent in some way to match
all the types of thinking humans are capable of.

Just as smart cars don't have to be driven by humanoid robots,
and computers playing chess don't have to think the way people
think,


Sure, but note that smart cars and chess programs follow the
steps *worked out by a human* with human goals. These programs
are at most reflex thinkers, like perhaps a very simple insect.
Playing chess may seem to require great intelligence, but in
fact all the machine does is carry out instructions *invented
by humans* that any human could carry out, given enough time,
without requiring any understanding of chess at all.
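
To make that concrete, here is a hypothetical toy sketch in Python
(not something from this thread) of the kind of rule-following
search a game program performs: plain minimax over a trivial
take-away game. Every step is an instruction a human wrote down and
could execute by hand with pencil and paper, just far more slowly;
the names and the toy game are illustrative only.

# Toy game: a heap of stones; each turn a player takes 1, 2 or 3;
# whoever takes the last stone wins.

def best_score(stones, maximizing):
    # Exhaustively apply the written-down rules. +1 means the
    # maximizing player ends up winning, -1 means losing.
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else +1
    moves = [m for m in (1, 2, 3) if m <= stones]
    scores = [best_score(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Pick the move whose resulting position scores best for us.
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: best_score(stones - m, False))

for heap in range(1, 11):
    print(heap, "stones -> take", best_move(heap))

The search never knows what "winning" means; it only enumerates the
possibilities a human told it to enumerate, which is the point
being made above.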

... there is no reason to suppose that the optimal
thinking machine finding the answer to life, the universe,
and everything has to be modeled on human thinking.


But it would be a good start. If you don't understand how
humans think, what chance have you of building a machine
that can think even better?

Unless humans have the ultimate perfect brains for all
thinking tasks, which obviously we don't.


For some tasks they are still better than any machine.

For a machine to go beyond human thinking abilities, I
would think it would have to evolve those abilities.

What machines can do better than us is carry out the tasks
we give them at great speed, which means they can do
things we can't, because we can't write that fast and
at this stage can't rewire our brains for those tasks.
Also, our neurons are slow compared with electronic
switching devices, which is a physical limitation, not
a cognitive limitation. We can process visual data very
fast because we have parallel wiring for that task.

So let us not confuse speed of execution with intelligence.
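
As a hypothetical illustration of that speed gap (a sketch, not a
benchmark), the Python below has the machine perform about ten
million additions in a fraction of a second; the task and its
meaning still come entirely from the human who wrote it.

import time

# The machine only executes; the human defined what a "sum" is
# and why it is worth computing.
numbers = list(range(10_000_000))

start = time.perf_counter()
total = sum(numbers)                 # about ten million additions
elapsed = time.perf_counter() - start

print(f"sum = {total}, computed in {elapsed:.3f} s")
# Doing the same additions by hand would take a person years.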


--
Anybody who agrees with one side all of the time or disagrees with the
other side all of the time is equally guilty of letting others do
their thinking for them.

#20
February 23rd 13, 07:44 AM, posted to alt.philosophy,rec.arts.sf.written,sci.space.history,sci.physics,alt.religion
Rod Speed

casey wrote
Howard Brazee wrote
casey wrote


It is the human tasks they cannot match that you need to look at.


Why? As I expounded below, being humanoid is only useful if that is
our goal.


Well, in order to do the things we do a machine needs a sensory
input and a motor output, regardless of how that is implemented,
but I was really talking about cognitive tasks - such as the
ability to invent and use mathematics. Machines that add are
not smarter, only faster. And a good memory is not the same as
the ability to make use of that memory in an intelligent way.


In other words, computers do not surpass human brains when
it comes to cognitive tasks; they *mindlessly* carry out tasks
invented by humans. We use them because they do it faster,
not because they do it smarter.


Which doesn't mean those cognitive tasks have to be modeled
after human thinking.


But the programs would have to be equivalent in some way
to match all the types of thinking humans are capable of.


Nope, they could just do a lot better, for example.

Just as smart cars don't have to be driven by humanoid robots,
and computers playing chess don't have to think the way people
think,


Sure, but note that smart cars and chess programs follow
the steps *worked out by a human* with human goals.


Not all programs are like that.

These programs are at most reflex thinkers, like perhaps a very
simple insect. Playing chess may seem to require great intelligence,
but in fact all the machine does is carry out instructions *invented
by humans* that any human could carry out, given enough time,
without requiring any understanding of chess at all.


There is no real 'understanding of chess' involved with even the
best chess masters.

... there is no reason to suppose that the optimal
thinking machine finding the answer to life, the universe,
and everything has to be modeled on human thinking.


But it would be a good start.


Not necessarily; it may be that an entirely different approach
would produce much better results.

If you don't understand how humans think what chance
have you of building a machine that can think even better?


By trying different approaches and seeing if they produce
an even better result than you can get with humans.

Unless humans have the ultimate perfect brains
for all thinking tasks, which obviously we don't.


For some tasks they are still better than any machine.


For now. It remains to be seen whether that will stay true.

For a machine to go beyond human thinking abilities
I would think they would have to evolve those abilities.


Nope, they can just head off with a completely different approach altogether.

What machines can do better than us is: Carry out tasks
we give them at great speed which means they can do
things we can't because we can't write that fast and
we at this stage can't rewire our brain for those tasks.


They can also do tasks which require much more
reliable memory than any human can have.

That's why navigation systems do much better than any
human ever can: they can use immense databases,
which produce a much better result when, say, producing
an optimum route to cover a very wide variety of places
that must be visited on a complex delivery run, etc.
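
As a rough, hypothetical sketch of that kind of routing (toy
numbers, nothing like a real navigation database), the Python below
brute-forces the best visiting order for a handful of delivery
stops; real systems work from vastly larger road data and smarter
algorithms, but the idea of searching over possible routes is the
same.

from itertools import permutations

# Toy symmetric distance table (km) between a depot and four stops.
# A real system would pull distances from a huge road database.
DIST = {
    ("depot", "A"): 4, ("depot", "B"): 7, ("depot", "C"): 3, ("depot", "D"): 6,
    ("A", "B"): 2, ("A", "C"): 5, ("A", "D"): 8,
    ("B", "C"): 6, ("B", "D"): 3,
    ("C", "D"): 4,
}

def dist(a, b):
    # The table is symmetric, so look up the pair in either order.
    return DIST.get((a, b), DIST.get((b, a), 0))

def route_length(order):
    # Depot -> stops in the given order -> back to the depot.
    legs = ["depot", *order, "depot"]
    return sum(dist(x, y) for x, y in zip(legs, legs[1:]))

def best_delivery_run(stops):
    # Brute force every visiting order; fine for a handful of stops,
    # which is exactly why big runs need better machinery than this.
    return min(permutations(stops), key=route_length)

best = best_delivery_run(["A", "B", "C", "D"])
print("best order:", " -> ".join(best), "| length:", route_length(best), "km")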

Also, our neurons are slow compared with electronic
switching devices, which is a physical limitation, not
a cognitive limitation. We can process visual data very
fast because we have parallel wiring for that task.


But it remains to be seen if we can do much better
with a machine that, say, uses measurement for facial
recognition when the image quality is poor, etc.

So let us not confuse speed of execution with intelligence.


Sure, but not all computing is about speed of execution.


 



