#31
#32
In article ,
Marc 182 wrote:
>>Right, but so long as the requirements are moderate, then increasing
>>these specs should simply require more of some components without
>>changing the overall design.  It takes little design effort and only
>>moderate $$'s to simply have *more* board space, or *more* shielding,
>>or *more* overall mass...
>
>No no no.  I don't design space probes, but it's obvious that this
>argument is not correct.  You don't get to increase these specs at all,
>they are constraints.

More precisely -- speaking as someone who does sometimes design these
things -- you don't get to increase those specs unless you can make a
*very* good case that the processor subsystem is uniquely deserving and
that the whole mission would benefit from increases there.

It's very rare that resources, especially mass and power, are abundant.
Much more common is to have strict upper limits set by outside
constraints (e.g., launcher payload capacity) and basic design (e.g.,
solar-array area), and to be under considerable pressure to get as much
as possible out of them, or to accomplish a mission that only barely
fits.  That being the case, any extra allocated to one subsystem has to
come out of another.

In this environment, bloatware is not acceptable.  Reasonably efficient
use of resources is mandatory, even if it takes somewhat more effort.
Although the precise tradeoff depends on the funding environment, it is
usually easier to add some more development effort than to argue with
the resource constraints.

>...A bigger launcher?  Now you're talking millions more to launch a
>bloated spacecraft...

This is one issue where compromises are sometimes preferable.  It does
sometimes happen that a spacecraft which would be a very tight fit on
(say) a Delta II launch will see an overall project cost saving by
moving up to (say) an Atlas, because the easier engineering saves enough
to pay for the difference in launch cost.  But it can be politically
difficult to negotiate such a change even if the numbers look favorable.
--
MOST launched 30 June; science observations running   | Henry Spencer
since Oct; first surprises seen; papers pending.       |
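To make that arithmetic concrete -- a toy sketch only, with invented
numbers that have nothing to do with any real mission -- the zero-sum
nature of a fixed mass and power budget looks like this:

    /* Hypothetical mass/power budget check.  All figures are made up
     * for illustration; real missions track many more resources. */
    #include <stdio.h>

    int main(void)
    {
        const double payload_mass_kg = 450.0;  /* fixed by the launcher    */
        const double power_budget_w  = 300.0;  /* fixed by the solar array */

        double booked_mass_kg = 430.0;         /* already allocated        */
        double booked_power_w = 285.0;

        double extra_mass_kg  = 12.0;          /* "more RAM, more shielding" */
        double extra_power_w  = 20.0;

        printf("mass margin:  %+.1f kg\n",
               payload_mass_kg - booked_mass_kg - extra_mass_kg);
        printf("power margin: %+.1f W\n",
               power_budget_w - booked_power_w - extra_power_w);
        /* Prints +8.0 kg and -5.0 W: the extra power has to come out of
         * some other subsystem, or the bigger computer doesn't fly. */
        return 0;
    }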
#33
>>Right, but so long as the requirements are moderate, then increasing
>>these specs should simply require more of some components without
>>changing the overall design.  It takes little design effort and only
>>moderate $$'s to simply have *more* board space, or *more* shielding,
>>or *more* overall mass.  It only becomes a problem if you increase the
>>requirements by so much that you need a totally new design approach to
>>power generation or shielding or whatever.
>
>No no no.  I don't design space probes, but it's obvious that this
>argument is not correct.  You don't get to increase these specs at all,
>they are constraints.

If the constraints are fixed, then possibly, though even then there may
well be enough space within the constraints to ease up on the
programmers and put in a garbage collector.

But why would the constraints be fixed?  Surely these probes go through
an initial design phase where it is sketched out how much power will be
generated, how much shielding there will be, how much mass is expected,
and so on.  During that initial sketch, I see no reason they could not
consider copious CPU power as a design option.

I am curious what field you work in where constraints are so tight?  In
university classes, the professors I've seen can barely get through a
description of the waterfall method without poking holes in it, and they
usually preface it with a disclaimer like "here's a simplistic model
that we will start with".

>Hire a team of GOOD programmers, much cheaper.

Yes, but that's an independent issue.  Anyway, if you want to have
really good programmers, it helps to give them less to do.  The more
work that must be done, the more programmers you need to hire, and the
harder it will be to find programmers of any particular productivity.

-Lex
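For what it's worth, the kind of bookkeeping a collector takes off the
programmers' hands looks roughly like this in C.  This is a toy sketch,
not flight code, and the structure and field names are invented:

    #include <stdlib.h>
    #include <string.h>

    struct frame {
        double *samples;
        char   *label;
    };

    /* Manual memory management: every early exit must undo whatever was
     * already acquired, and every caller must remember to call
     * frame_free() on every path. */
    struct frame *frame_new(size_t n, const char *label)
    {
        struct frame *f = malloc(sizeof *f);
        if (!f)
            return NULL;
        f->samples = malloc(n * sizeof *f->samples);
        if (!f->samples) {
            free(f);
            return NULL;
        }
        f->label = strdup(label);
        if (!f->label) {
            free(f->samples);
            free(f);
            return NULL;
        }
        return f;
    }

    void frame_free(struct frame *f)
    {
        if (f) {
            free(f->label);
            free(f->samples);
            free(f);
        }
    }

With automatic memory management, the error-path cleanup and frame_free()
simply go away: you allocate, and the collector reclaims whatever is no
longer reachable.  That is the kind of effort (and the kind of leak and
double-free bug) being traded against extra RAM and CPU.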
#34
#35
>>But why would the constraints be fixed?  Surely these probes go
>>through an initial design phase where it is sketched out how much
>>power will be generated, how much shielding there will be, how much
>>mass is expected, and so on.  During that initial sketch, I see no
>>reason they could not consider copious CPU power as a design option.
>
>I don't understand why you don't get this.  Constraints are fixed
>because you're launching a rocket.  The initial constraints are fixed
>by the mission science objectives and the budget.  Given those,
>probably the next step is picking a launcher, at which point your total
>payload mass is also fixed.  Your constraints are in place very early
>in the game.

At some point, someone has to make the decisions about what resources to
allocate where.  Someone has to decide how much risk is acceptable for
how much money.  Notice that this is really a design spectrum, and you
can move the decisions in either direction.  I have been talking about
spending more on parts in order to decrease risk and complexity and
development time.  You can also go the other way: you can have less RAM,
for example, or you can write your own specialized operating system.
The high-level designers have to choose a spot in this spectrum.  I am
suggesting that there may be a better sweet spot than is commonly
chosen.  And at any rate, the sweet spot moves as time goes on.  Good
rules of thumb from thirty years ago should be reevaluated as computers
get awesomely less expensive.

>>>Hire a team of GOOD programmers, much cheaper.
>>
>>Yes, but that's an independent issue.
>
>No, it's not independent.  Really good programmers can do wonders with
>hardware that wouldn't have impressed the early PC pioneers.  Although
>some programmers do mass quite a bit, you don't have to launch them
>anywhere.

I do not see the connection.  You want good programmers no matter what
else you do.  As an aside, I thoroughly disagree that good programmers
can save any project.  If there's any one factor that makes software
projects work or fail, it is management practices, i.e., software
engineering.  Poor programmers will still make a working and robust
program if they have good management; super programmers will still make
faulty programs or fail outright if they have poor management.  The poor
programmers will take longer to do it, but at least their program will
actually work correctly.  But to get back to the point, keep in mind
that the more programmers you use, the lower your percentage of really
good programmers will be.

>>Anyway, if you want to have really good programmers, it helps to give
>>them less to do.  The more work that must be done, the more
>>programmers you need to hire, and the harder it will be to find
>>programmers of any particular productivity.
>
>I'm sorry, but this is totally misguided.  You never want to give good
>programmers (or any good workers) less to do; it reduces productivity
>and annoys and/or bores them.

That is a possible issue, but not with the scale of changes I am
proposing.  Using automatic memory management will get rid of a small
percentage of the programming effort, perhaps on the order of 10%.  But
even if you did manage to massively decrease the programming effort, and
you kept all the programmers on staff for some reason, I am not sure
they would truly get bored.  All you have to do is turn them loose on
verification.  If you avoid 10,000 lines of unnecessary code, then the
programmers can instead write 10,000 lines of simulators and test cases.
Alternatively, they could use the saved time on checking, proof
checkers, or inspections.

In fact, I suspect many programmers would be *happier* this way.
Instead of writing 2000 lines of mediocre code, they could write 1000
lines of the most pristine code in the known universe.

>A small and simple flight computer means a small team working close to
>the hardware.

Well, simple is good, and small teams are good.  That's what I'm arguing
for.  However, working closer to the hardware often works against those
goals.  That's the problem.

>No bloatware required.

Nothing is *required*.  Inferior designs can still succeed.

Incidentally, the word "bloatware" is being thrown around inaccurately,
I think.  It's not exactly a technical term.  :-)  Where I see it used,
it refers to software that has a lot of features, not software that
requires a lot of resources.  MS Word is bloatware, but a simple rocket
simulator is not, even though the simulator can take 1000 times more CPU
cycles than MS Word.

-Lex
#36
Lex Spoon writes:
>Incidentally, the word "bloatware" is being thrown around inaccurately,
>I think.  It's not exactly a technical term.  :-)  Where I see it used,
>it refers to software that has a lot of features, not software that
>requires a lot of resources.

In my experience, it means the EXACT REVERSE of what you seem to think!
"Bloatware" is sloppily written software that consumes exponentially
more resources than it has ANY business needing.  Consider, for example,
the Micro$haft executable that, when analyzed, was found to consist of
over 95% totally non-executable code, i.e., code that could =NEVER= be
reached under _ANY_ program state --- which explains why its authors had
no problem hiding a five-minute flight-simulator video-game inside it as
an "Easter Egg"...

--
Gordon D. Pusch

perl -e '$_ = \n"; s/NO\.//; s/SPAM\.//; print;'
#37
Jonathan Griffitts wrote:
>Examples:
>
>- If you're designing a disk drive, the outside dimensions are
>  standardized and fixed.  You DON'T get any extra space or power.  It
>  won't be a viable product if it busts outside the "form factor".

Except height.  If the hard disk you come up with is 2.5x the height of
the present low-profile drives, people will still buy it if it is
otherwise desirable.  15K rpm disk drives used to consume an awful lot
of extra electricity compared to 5200 rpm drives.  Sure, there are
limits -- but most modern drives are not pushing those at all,
especially when compared to first-generation 8GB 7200 rpm hard disks.

>- If you're designing anything portable and battery powered, the
>  dimensions are almost always part of the basic product requirements.
>  In this case you also won't get any extra space or power.  Think
>  about the constraints on a digital hearing-aid or pacemaker (I've
>  worked on both of those!), or even a cell phone.
>
>- If you're working on office equipment, there's always pressure to
>  make things smaller.  Ease of maintenance also puts constraints on
>  size and shape of internal modules.

Except keyboards.  Actually, there seem to be cycles where first things
become smaller to fit in, which are then followed by a wave of
megalomania where everything gets larger.  Very observable in things
like copiers.

>- If you're doing consumer products, cost of manufacturing probably
>  dominates every design decision.  Often size is squeezed, too.

It apparently sometimes gets way too silly on the cost-cutting side, to
the point of ending up with seriously reduced specifications compared to
the original.

[snip]

>The "bloatware" philosophy works for PC and workstation software
>because of the peculiar economics of this time.  It's easy to cite
>cases where this is going overboard and resources are flagrantly
>wasted, because the designers are accustomed to think the CPU power and
>memory are free and nearly infinite.  When the advance of CPU and
>storage technology starts to level out, this mindset will probably have
>to change.

And even then, things are very often not as they seem to the casual
onlooker, who may well have a very bad grasp of how complex implementing
something is, or how complex it all ends up being once you actually
implement the rest of
off-the-top-of-my-head-this-is-how-you-do-the-first-10%.

--
Sander

+++ Out of cheese error +++
#38
Jonathan Griffitts writes:
>You ask "what field . . . where constraints are so tight?"  I'd say
>that almost every project has many hard, fixed constraints.

The argument is getting more and more about terminology, but please
recall that someone said something like "you cannot just increase power
-- it's a spec".  There's a difference between a hard spec and a design
attribute which is more important than usual.

>- If you're designing a disk drive, the outside dimensions are
>  standardized and fixed.  You DON'T get any extra space or power.  It
>  won't be a viable product if it busts outside the "form factor".

I'll give you the space, but not the power.  People routinely buy larger
power supplies in response to power-hungry components.

>- If you're designing anything portable and battery powered, the
>  dimensions are almost always part of the basic product requirements.
>  In this case you also won't get any extra space or power.  Think
>  about the constraints on a digital hearing-aid or pacemaker (I've
>  worked on both of those!), or even a cell phone.

Ah, you have worked on medical devices.  Yes, I could see the space
constraints here.  I don't think you can generalize so far, however.
For a walkman or a laptop, you really can trade off battery life versus
cost of manufacture, or battery life versus number of features, or
battery life versus speed.

>- If you're working on office equipment, there's always pressure to
>  make things smaller.  Ease of maintenance also puts constraints on
>  size and shape of internal modules.

I don't see space being constraining here; most office equipment seems
to be mostly empty space inside.  The usual size spec is that it is big
enough, not that it is small enough.

>- If you're doing consumer products, cost of manufacturing probably
>  dominates every design decision.  Often size is squeezed, too.

Yes, but these are not hard specs.  You can, for example, decide to make
it smaller while using more expensive components.  You can increase the
battery size to get rid of another component entirely.

>- If you're designing flight electronics for aerospace, you're often
>  required to fit into an odd shape and size (whatever room was left
>  over after the aerodynamics, mission payload, propulsion, fuel, crew,
>  etc. are accounted for).  You can't get extra space for electronics
>  if it would be at the expense of aerodynamic shape, etc.  The thermal
>  and power constraints will be defined by the rest of the system.
>  Maintainability and access will be an issue, too.

I grant this one.  It is interesting, though: you are making a small
part of a bigger system.  At some level, a level that is long past by
the time you come on the scene, there was a decision made about what the
appropriate spaces would be, and at that point the decision could have
been made differently.  If you are making NASA's next experiment, then
of course they will do it from scratch.  [...] Hubble, than for making
Hubble itself.

>Just like in space probes, if you ignore efficiency and use up extra
>space and power, something will have to trade off for it.  It may not
>be possible to make that tradeoff and still have a viable product.

Right.  I never said otherwise.  The question is, what is a good trade?

>The "bloatware" philosophy works for PC and workstation software
>because of the peculiar economics of this time.  It's easy to cite
>cases where this is going overboard and resources are flagrantly
>wasted, because the designers are accustomed to think the CPU power and
>memory are free and nearly infinite.  When the advance of CPU and
>storage technology starts to level out, this mindset will probably have
>to change.

Call it peculiar if you like, but CPU cycles and RAM really are cheap,
and not just for PCs.  Designers should recognize this.

Lex Spoon