Rover brains?
  #21  
Old February 2nd 04, 10:08 AM
Jan C. Vorbrüggen
Default Rover brains?

For instance, you absolutely need to know how much time each task
will need to complete, but in Java you can't predict when the garbage
collection will run, making the whole system unpredictable.


Just for the record, there are garbage-collection algorithms that are
compatible with real-time systems.

Jan
  #22  
Old February 4th 04, 06:02 PM
Lex Spoon
Default Rover brains?

Kevin Willoughby writes:
A lot of research has been done on garbage collection. It is possible to
limit the CPU time consumed in garbage collection, allowing a real time
system (at the expense of perhaps requiring a bit more memory).


To be pedantic, RT garbage collectors cost *time*, not memory. Also,
they often require some awkward parameters to be set regarding the ratio
of various activities such as reading, writing, allocating new memory,
and so on, and if you get these parameters wrong then the garbage
collector loses its timing guarantee.
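
To make "awkward parameters" a bit more concrete, here is a rough sketch
(purely hypothetical names and fields, not any particular collector's API)
of the sort of figures a real-time collector has to be told about the
application. If the declared allocation rate understates what the program
actually does, the pause bound derived from these numbers quietly stops
holding.

  /* Hypothetical tuning block for a real-time collector -- illustrative only. */
  struct rtgc_params {
      unsigned max_live_bytes;     /* upper bound on live data at any instant   */
      unsigned max_alloc_rate;     /* bytes the application may allocate per ms */
      unsigned gc_work_per_alloc;  /* collector steps performed per allocation  */
      unsigned max_pause_us;       /* pause bound promised to the scheduler     */
  };

  /* If max_alloc_rate is understated, or gc_work_per_alloc is too small for
   * the true allocation rate, the collector falls behind the program and the
   * max_pause_us promise is no longer met. */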

But negatives aside, they can be much nicer than manual memory
management. Yet they get pooh-poohed by many computer people.

A valid reason might be that most real-time software is very simple,
and would not benefit from a garbage collector. That may well be
true; I don't know.

A common reason voiced is that the CPU usage is too much. But this is
bogus, IMO. CPU efficiency is the "launch mass" for computer people.
Fast CPUs are extremely cheap compared to the cost of a single
programmer's salary, and thus economically you should splurge on the
CPUs. The days are long gone when companies had *one* computer plus
a swarm of programmers hovering around it.



-Lex
  #23  
Old February 5th 04, 12:08 AM
Kevin Willoughby
Default Rover brains?

In article ,
says...
Kevin Willoughby writes:
A lot of research has been done on garbage collection. It is possible to
limit the CPU time consumed in garbage collection, allowing a real time
system (at the expense of perhaps requiring a bit more memory).


To be pedantic, RT garbage collectors cost *time*, not memory.


Not necessarily. Long experience has taught us that if you have the
absolute minimum amount of memory, you spend a lot of time doing garbage
collection. Having a bit of extra memory means the garbage collector
doesn't have to work as hard. More important, it allows keeping some
extra free memory available under all conditions, ensuring that the main
processing never has to wait for memory. Real-time programmers get
unhappy if their code has to endure unexpected waits. ("Fire the retro
rockets 15 seconds ago" isn't an acceptable result.)

It is possible to collect garbage as a background process. This isn't
new -- Dijkstra worked out the details for one approach back in the mid
1970s. If you have either excess CPU time or can afford a second CPU to
run concurrently, garbage collection need not take much extra time,
perhaps not any at all.
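
For reference, the approach Dijkstra and colleagues described (usually
called "on-the-fly" or tri-color collection) has the collector and the
main program cooperate: the program runs a small write barrier on every
pointer store so that concurrent marking never loses a live object. A
very rough sketch in C, with invented names and none of the paper's
actual correctness machinery:

  /* Tri-color marking, grossly simplified; illustrative, not flight code. */
  typedef enum { WHITE, GREY, BLACK } Color;  /* white: not yet seen,
                                                 grey:  seen, children pending,
                                                 black: fully scanned */
  typedef struct Cell {
      Color color;
      struct Cell *left, *right;
  } Cell;

  /* Write barrier: run by the main program on every pointer store, so the
   * concurrently running collector can't miss a newly reachable cell. */
  static void store_left(Cell *parent, Cell *child)
  {
      parent->left = child;
      if (child && child->color == WHITE)
          child->color = GREY;              /* shade the new target */
  }

  /* One unit of collector work: blacken a grey cell, shading its children. */
  static void collector_step(Cell *c)
  {
      if (c->color != GREY)
          return;
      if (c->left  && c->left->color  == WHITE) c->left->color  = GREY;
      if (c->right && c->right->color == WHITE) c->right->color = GREY;
      c->color = BLACK;
  }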


Also,
they often require some awkward parameters to be set regarding the ratio
of various activities such as reading, writing, allocating new memory,
and so on, and if you get these parameters wrong then the garbage
collector loses its timing guarantee.


Yep. As with any bit of engineering, if you misunderstand the
environment, your designs might not work well.


A valid reason might be that most real-time software is very simple,
and would not benefit from a garbage collector. That may well be
true; I don't know.


I get the impression this was once true but as time goes on the software
gets much more complicated. (Just like non-realtime software!)
--
Kevin Willoughby

Imagine that, a FROG ON-OFF switch, hardly the work
for test pilots. -- Mike Collins
  #24  
Old February 6th 04, 05:30 PM
Lex Spoon
Default Rover brains?

Kevin Willoughby writes:

In article ,
says...
Kevin Willoughby writes:
A lot of research has been done on garbage collection. It is possible to
limit the CPU time consumed in garbage collection, allowing a real time
system (at the expense of perhaps requiring a bit more memory).


To be pedantic, RT garbage collectors cost *time*, not memory.


Not necessarily.


You raise good issues, but it is important not to oversimplify.


Long experience has taught us that if you have the
absolute minimum amount of memory, you spend a lot of time doing garbage
collection. Having a bit of extra memory means the garbage collector
doesn't have to work as hard.


Right, GC in general requires more memory than manual allocation.
However, real-time GC does not take any more memory than any other GC.
(As far as I know, at least.) And to top it all off, many GC
strategies perform better as more memory is available; they often have
a performance factor related to the ratio of garbage to good stuff,
and more garbage is better.
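
As a rough illustration of that ratio effect (a textbook-style model, not
anything specific to this thread): a tracing collector does work roughly
proportional to the live data L on each cycle and recovers about H - L
bytes from a heap of size H, so the amortized collection cost per
allocated byte behaves something like

  \( \text{cost per allocated byte} \;\propto\; \frac{L}{H - L} \)

which shrinks as the heap grows relative to the live set: the more room
there is for garbage to accumulate, the cheaper each reclaimed byte gets.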



More important, it allows keeping some
extra free memory available under all conditions, ensuring that the main
processing never has to wait for memory.


Absolutely -- you can allocate some memory manually and some memory
under the GC.



Real-time programmers get
unhappy if their code has to endure unexpected waits. ("Fire the retro
rockets 15 seconds ago" isn't an acceptable result.)


Right, that's why you need a special kind of GC.



It is possible to collect garbage as a background process. This isn't
new -- Dijkstra worked out the details for one approach back in the mid
1970s.


This is roughly what we were talking about. You not only want to do
GC in parallel with the main work, but you want a *guarantee* that
memory will be available when it is needed. The guarantee is hard,
and requires the GC to be written very carefully. The research in
this area continues; if the 1970s paper you cite just says to run it
in a separate thread, then it is not the same thing. And anyway,
there is much newer work than this.

You almost certainly do not want to literally have a separate process
for GC. Processes can *possibly* simplify the code for any algorithm
(I see the opposite more often, due to locking issues), but they tend
to greatly complicate the analysis. Instead, the few systems I have
read about will do a little bit of GC at each of various memory
activities such as reading, writing, and creating new objects.
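
A toy version of that style, with invented names and numbers (nothing here
comes from an actual flight system): every allocation pays for itself with
a bounded amount of marking work. The STEPS_PER_ALLOC knob is exactly the
kind of ratio parameter mentioned earlier; set it too low for the program's
real allocation rate and the collector falls behind.

  #define STEPS_PER_ALLOC 4          /* marking steps performed per allocation */

  typedef struct Obj {
      struct Obj *refs[2];           /* outgoing references (fixed fan-out toy) */
      int marked;
  } Obj;

  static Obj *grey_stack[256];       /* objects seen but not yet scanned */
  static int  grey_top;

  /* One small, bounded unit of marking work. */
  static void gc_step(void)
  {
      if (grey_top == 0)
          return;                    /* nothing pending in this cycle */
      Obj *o = grey_stack[--grey_top];
      for (int i = 0; i < 2; i++) {
          Obj *r = o->refs[i];
          if (r && !r->marked) {
              r->marked = 1;
              if (grey_top < 256)
                  grey_stack[grey_top++] = r;
          }
      }
  }

  /* Allocation hook: do a little collection work, then hand out memory.
   * raw_alloc stands in for whatever underlying allocator is being used. */
  Obj *gc_alloc(Obj *(*raw_alloc)(void))
  {
      for (int i = 0; i < STEPS_PER_ALLOC; i++)
          gc_step();
      return raw_alloc();
  }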


If you have either excess CPU time or can afford a second CPU to
run concurrently, garbage collection need not take much extra time,
perhaps not any at all.


You probably don't want a second CPU, either, just like you don't want
to use separate processes. Just use one fast CPU for most problems, and
things will be much simpler.

Also, be careful what you mean by "extra time". Garbage collection
may or may not take more CPU than manual allocation. Real-time
garbage collection, however, is likely to require more time than any
other allocation scheme. But who cares; you can just use a faster
CPU.


-Lex
  #25  
Old February 6th 04, 05:38 PM
Lex Spoon
Default Rover brains?

rk writes:

Lex Spoon wrote:

A common reason voiced is that the CPU usage is too much. But this is
bogus, IMO. CPU efficiency is the "launch mass" for computer people.
Fast CPUs are extremely cheap compared to the cost of a single
programmer's salary, and thus economically you should splurge on the
CPUs. The days are long gone when companies had *one* computer plus
a swarm of programmers hovering around it.


But on many spacecraft, the cost of an extra computer is "expensive", and it's
worth the designers' effort to eliminate computers that aren't needed.


I wasn't proposing to use more computers, but faster ones.

This example was supposed to be illustrative. A few decades ago,
computers were expensive compared to programmers, and you tended to
have lots of programmers hovering around each computer. Try to
picture that in a modern office -- it's hard, isn't it! Nowadays it's
the other way around, with computers all over the place. CPU
time is now cheap compared to programmer time, and so the appropriate
design strategy is different: make things easy on the programmers, in
order to conserve your most valuable resource.



-Lex

  #26  
Old February 7th 04, 07:52 AM
Jonathan Griffitts
Default Rover brains?

In article , Lex Spoon writes
rk writes:


. .

But on many spacecraft, the cost of an extra computer is "expensive", and it's
worth the designers' effort to eliminate computers that aren't needed.


I wasn't proposing to use more computers, but faster ones.

This example was supposed to be illustrative. A few decades ago,
computers were expensive compared to programmers, and you tended to
have lots of programmers hovering around each computer. Try to
picture that in a modern office -- it's hard, isn't it! Nowadays it's
the other way around, with computers all over the place. CPU
time is now cheap compared to programmer time, and so the appropriate
design strategy is different: make things easy on the programmers, in
order to conserve your most valuable resource.


That example works fine for office computers, but for "embedded"
applications there are usually some extra constraints.

A faster CPU or larger memory probably uses extra power and generates
more heat. If you are really going upscale it may cost board space and
mass. It possibly generates more electrical noise. If IC geometry is
smaller it may translate to more radiation sensitivity. If you're
getting close to current state of the art, the fast processor may not
yet be qualified for high-reliability applications, or it may even be
buggy. Any of these factors could be critically important.

A faster CPU will usually also cost extra money. Parts cost may not be
so important in most spacecraft, but it can sure be a showstopper in some
applications.



You're thinking in terms of office computers, but we're discussing
deep-space probes. I find it particularly easy to picture a spacecraft
computer under development with "lots of programmers hovering around"
it.

The programming of embedded processors requires a different mindset, and
often needs "obsolete" skills like optimizing for speed and memory use.

--
Jonathan Griffitts
AnyWare Engineering Boulder, CO, USA
  #27  
Old February 8th 04, 06:52 PM
Lex Spoon
Default Rover brains?

Jonathan Griffitts writes:
That example works fine for office computers, but for "embedded"
applications there are usually some extra constraints.


Yes, I simplified. But all computers are getting cheaper, and even
with the extra costs you are likely to win by splurging on the CPU
usage.



A faster CPU or larger memory probably uses extra power and generates
more heat. If you are really going upscale it may cost board space and
mass. It possibly generates more electrical noise. If IC geometry is
smaller it may translate to more radiation sensitivity.


Yes, there are secondary effects to just increasing the CPU speed, but
at moderate levels most of these seem to boil down to extra $$ without
requiring extra design work.

But it really comes down to this: how much CPU usage are we talking
about? So long as it is way below the fastest CPUs around, then
splurging on CPU costs little and saves a lot of design time. I'd
guess that usually the CPU usage is not that high, especially when you
consider that what is high right now will be medium in two years and
will be low in four years. The CPU usage of the shuttle is
microscopic by today's standards.

FWIW, a quick Google scan turns up an article from 2001 claiming that
100 MIPS/mW is available "soon" (back in 2001):

http://www.electronicstalk.com/news/bop/bop102.html


How many MIPS does a rover use? How many watts are available?



If you're
getting close to current state of the art, the fast processor may not
yet be qualified for high-reliability applications, or it may even be
buggy. Any of these factors could be critically important.


Yes. If you are launching Crays then my argument breaks down.




-Lex
  #28  
Old February 9th 04, 09:08 PM
Jonathan Griffitts
Default Rover brains?

In article , Lex Spoon writes
Jonathan Griffitts writes:
That example works fine for office computers, but for "embedded"
applications there are usually some extra constraints.


Yes, I simplified. But all computers are getting cheaper, and even
with the extra costs you are likely to win by splurging on the CPU
usage.

. .

I don't think you have grasped the whole "embedded computer" mindset.
The "bloatware" philosophy is justifiable and economically acceptable
with the general-purpose computers you're familiar with, but this is a
different situation. Let me illustrate:

Your typical design requirement spec for the processor subsystem would
include something like:
- 100 square cm of board space is available.
- You may use a maximum of X Watts, but never for more than XX seconds
continuously, and only when the system state leaves you enough available
power, and the rest of the team will all frown at you every time you do
it.
- You may use a maximum of XXX milliWatts as an average power over X
minutes, again subject to available power.
- Standby power is XX microWatts.
- Thermal environment is . . . , radiation environment . . ., supply
voltage tolerance . . ., fault tolerance . . ., blah, blah, blah.
- Schedule is too fast to allow building or qualifying any custom
chips. All parts must be off-the-shelf.

Be sure the spec numbers are tight enough to give heartburn to the
hardware architect.

Now suppose you arrive at a meeting and say, "We need 50% extra CPU
speed and double the RAM to make the programmer's job easier, because
programmers are expensive and hardware is cheap." That argument may not
be persuasive.


If you're
getting close to current state of the art, the fast processor may not
yet be qualified for high-reliability applications, or it may even be
buggy. Any of these factors could be critically important.


Yes. If you are launching Crays then my argument breaks down.


I realize this comment was probably meant to be facetious, but you're
illustrating that mindset again. We are talking *chips* and you are
going from office computers to mainframes.

The situation I'm talking about is not limited to space probes, by the
way. Think about the constraints on the processors in disk drives, or
inkjet printers, or automobile engine controllers, or cell phones, or
digital hearing aids, pacemakers, . . . the list is endless. These
processors far outnumber office computers. For many of them, if you
insisted on adding $1 of parts cost to make the programming job easier,
you would be out of a job.

I'm not making this up. I've been involved in this sort of thing for
about 25 years. Even these days, for some jobs I only get 16K of ROM
and 1K of RAM, and like it!
--
Jonathan Griffitts
AnyWare Engineering Boulder, CO, USA
  #29  
Old February 11th 04, 08:30 PM
Lex Spoon
Default Rover brains?

Jonathan Griffitts writes:
I don't think you have grasped the whole "embedded computer" mindset.

[...]
Your typical design requirement spec for the processor subsystem would
include something like:
- 100 square cm of board space is available.
- You may use a maximum of X Watts, but never for more than XX seconds

[etc]

Right, but so long as the requirements are moderate, then increasing
these specs should simply require more of some components without
changing the overall design. It takes little design effort and only
moderate $$ to simply have *more* board space, or *more* shielding,
or *more* overall mass. It only becomes a problem if you increase the
requirements by so much that you need a totally new design approach to
power generation or shielding or whatever.

So long as the requirements stay modest -- and I know of no reasons
why they would not be -- then secondary effects just increase the cost
of components without increasing risk or affecting design time. But
I'd love to know more.



The situation I'm talking about is not limited to space probes, by the
way. Think about the constraints on the processors in disk drives, or
inkjet printers, or automobile engine controllers, or cell phones, or
digital hearing aids, pacemakers, . . . the list is endless.


Mass-produced items are different. For mass-produced items, the cost
of the components can make a big difference to profits. Ten cents per
wristwatch is a lot, while $10,000 per Mars rover is peanuts. Thus,
for a Mars rover, it is more important to decrease risk and to save
design time than it is to decrease the cost of components.

Imagine walking into a design meeting and saying you have a way to
save $10,000 in hardware cost, while costing only $10,000 in
development time and increasing the risk of failure by 1%. That would
be persuasive for hearing aids, but not for Mars rovers.



-Lex

  #30  
Old February 11th 04, 10:01 PM
Greg
Default Rover brains?

Jonathan Griffitts wrote in message
I'm not making this up. I've been involved in this sort of thing for
about 25 years. Even these days, for some jobs I only get 16K of ROM
and 1K of RAM, and like it!


I have done some work in this field also. Most consumer electronics is
very cost-driven, and size and battery life are important too.
Programmers' time hardly shows up in the total cost of production
because so many units are produced.

But maybe this is the problem: are we drawing on the wrong experience? It
may be better, in expensive one-off items like Spirit, to spend more on
hardware to reduce the risk of show-stopping bugs in the code.

But let's not go too far. Bloatware sux...

It should be noted that you can use quite advanced development tools
nowadays for this type of hardware, which makes the job a lot simpler.

Greg
 



