#21
NASA orbit simulation software
Pat Flannery wrote:
:
: Fred J. McCall wrote:
: : Not necessarily, Pat. There are lots and lots of different factors
: : involved and different simulations may be better in different areas,
: : so that sometimes one simulation works better and sometimes another
: : does.
: :
: : So there may be no 'best' one and cherry picking from each one to
: : make a 'best' one is not a simple task.
:
: Well, one obvious way to check would be to have each of the programs
: predict exactly where a satellite will end up given its input data
: before it is launched, then compare that to what actually happens after
: it is launched. Do that for several launches, and you should be able to
: figure out which program gives the best data. If they are each
: predicting some aspect of its orbit better than the others... then it's
: time to write a new program that incorporates the best aspects of each
: of the competing programs and standardize on it.
:

You're not paying attention, Pat. "... and cherry picking from each one to make a 'best' one is not a simple task."

--
"The reasonable man adapts himself to the world; the unreasonable man persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." --George Bernard Shaw
#22
NASA orbit simulation software
OM wrote:
: ...Jorge, I thought Pat was really just joking with that one.

No, I was not joking, and here's why: if each of the NASA space centers were using multiple programs to determine how a satellite's orbit would end up... and they were all using the _same_ programs, that would be great, as they could spot something suspicious in the solution one program showed if it varied from the rest, and look into that aspect in more detail.

But having different space centers using different programs to try to figure out the same thing is bound to cause communications problems if one program comes up with a different solution than the others, and they want to figure out _why_ that occurred. Then people at two or more different NASA centers have to discuss matters in regard to two or more different programs that they each are not familiar with, as it's not the one they use at "their" space center.

At the very least that is going to cause inconvenience... at worst, it could cause misunderstandings between the two centers that could get a mission screwed up... in much the way the Mars Climate Orbiter mission did, when two separate teams had two different understandings of where the spacecraft was and where it was going in relation to the atmosphere of Mars.

Pat

=======================================
MODERATOR'S COMMENT:
(If anyone sees the previous message, which Pat has since cancelled, this is a correction to that message, not a duplicate.) GDM
#23
NASA orbit simulation software
Greg D. Moore (Strider) wrote:
: Sound German engineering?
: http://en.wikipedia.org/wiki/Curta_calculator
: I want one of these someday!

It sure would be fun to play with one, but for God's sake don't make the mistake of taking it apart to find out how it works... apparently everyone who ever did that never managed to get it back together again correctly, as it's about as complex as a Swiss watch internally. There's more data on them here:
http://www.vcalc.net/cu.htm
including cutaways of the inner workings and a simulator of one.

To get this back on-topic, it would be interesting to know what timeframe the orbit simulation programs that NASA uses come from. Are they fairly recent, or do they go back to the early days of computing, having since been adapted for today's computers?

You start figuring in all the variables that go into a satellite's ascent into orbit and its expected orbital elements over a period of time once it reaches that orbit... and it must be a real mess as far as variables to enter into the equations go. Drag caused by different atmospheric densities, due to temperature and humidity variations in the atmosphere during ascent, will affect it*; once it is on-orbit, its path will be changed by passing over areas of the Earth with subtly different gravity fields, and its altitude changed by variable drag resulting from the heating of the upper atmosphere by solar activity.

* Minuteman III MIRVs had three different settings to select from regarding the expected weather over their targets on arrival, so that they would use slightly different reentry paths in rain or snow than they would in clear weather.

Pat
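To put very rough numbers on the drag effect described above, here is a toy sketch. This is not any NASA code: the exponential-atmosphere figures and the ballistic coefficient are illustrative assumptions, and the per-orbit decay uses the standard first-order approximation for a circular orbit.

```python
import math

# Toy model: per-orbit semi-major-axis decay of a circular LEO orbit
# due to drag in an exponential atmosphere. All parameter values are
# illustrative assumptions, not mission data.
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3         # mean Earth radius, m

def atmos_density(alt_m, rho0=4e-12, h0=400e3, scale_h=60e3):
    """Exponential atmosphere: density (kg/m^3) at altitude alt_m.
    rho0 is an assumed density at reference altitude h0; scale_h is an
    assumed scale height. In reality both swell with solar activity,
    which is the 'variable drag from solar heating' mentioned above."""
    return rho0 * math.exp(-(alt_m - h0) / scale_h)

def decay_per_orbit(alt_m, cd_a_over_m=0.01):
    """Approximate change in semi-major axis per revolution (m), using
    the first-order result da = -2*pi*(Cd*A/m)*rho*a^2 for a circular
    orbit. cd_a_over_m (m^2/kg) is an assumed ballistic coefficient."""
    a = R_EARTH + alt_m
    rho = atmos_density(alt_m)
    return -2.0 * math.pi * cd_a_over_m * rho * a * a

for alt_km in (300, 400, 500):
    da = decay_per_orbit(alt_km * 1e3)
    print(f"{alt_km} km: ~{da:8.2f} m per orbit")
```

Even this crude model shows the point: decay rates change by an order of magnitude over a 100 km altitude band, so small differences in how two programs model density turn into large differences in predicted lifetime.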
#24
NASA orbit simulation software
Greg D. Moore (Strider) wrote:
: Umm, no. If they are using the same software and put in the same
: inputs, they'd see the same outputs.

Ah, but that's where the fun comes in... each center has to figure out those inputs on its own.

: That's how you find errors in the input data. Software is very
: deterministic that way. If they DO see something different, it's a
: hardware issue, not a software issue. If you really want to test the
: answer, you calculate it two DIFFERENT ways.

Yeah, but Jorge seemed to imply that each center had "its" favored program and wasn't going to use the one used by the other center(s). So if you end up with two different outcomes from two different programs, then you know something is obviously wrong, but what exactly, and in which program run? Was it a faulty input of information, or did the program deal with those particular inputs in a way that led to an incorrect answer? That's where the real mess would start, as center "A" doesn't have much experience with the program center "B" is using, and vice-versa.

: : But having different space centers using different programs to try
: : to figure out the same thing is bound to cause communications
: : problems if one program comes up with a different solution than the
: : others, and they want to figure out _why_ that occurred.
:
: You're assuming that they are trying to figure out the same thing
: (which, given that NASA is a bureaucracy, may be true, but is not
: generally the goal).

It might be a very good idea to have them try to figure out the same thing, and see if their answers agree. Extra math work is pretty cheap compared to the cost of a spacecraft and its booster.

A real-world example of this occurred in the Soviet space program. The math whiz-kids back in Moscow would send Baikonur the specs for what launch trajectory a booster was to use and what propellant load was needed for it to put its payload into the desired orbit.

As one launch approached, one of Korolev's assistants looked at the incoming figures on the amount of propellant to be loaded for a specific launch, and they didn't look right to him based on past experience with other launches and the weight of the payload to be carried. The payload would not reach orbit due to too little propellant being carried. He ran through the math several times and still kept coming up with a different answer than what Moscow was telling them, so he finally went to The Chief Designer and told him there was a problem.

Korolev asked him if he was _completely sure_ a mistake had been made, as one didn't question orders without a very good reason, and this could mean delaying the launch. The assistant stuck to his guns, and Korolev called Moscow and demanded that they double-check their math, although headquarters was not used to being talked back to.

The assistant felt like his head was on the chopping block as first one and then another hour passed with no return call from headquarters. Finally, a call came in... there had indeed been a "small" error in the amount of propellant to be loaded, and new figures were sent to Korolev... which agreed exactly with those the assistant had found with his math work.

Not only was the launch saved by the assistant's hunch and boldness in bringing it to Korolev's attention, but The Chief Designer now had something to rub Moscow's nose in if they ever gave him trouble about planning a mission. The assistant got his own private dacha out of that incident.

Having two separate teams figure out the same information in two different ways, via original information inputs that they themselves had to determine, would be a great idea; but each team should be completely conversant with the method the other team is using, so they can quickly find the problem if the two end results differ significantly.

Pat
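The kind of cross-check Korolev's assistant did can be sketched with the Tsiolkovsky rocket equation. Every number below is hypothetical, chosen only to show the shape of the check: compute the propellant a payload needs, then plug the *loaded* amount back in and see what delta-v it actually buys.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_needed(dv, isp, dry_mass):
    """Tsiolkovsky rocket equation solved for propellant mass:
    dv = Isp*g0*ln(m0/mf)  =>  m_prop = mf*(exp(dv/(Isp*g0)) - 1)."""
    return dry_mass * (math.exp(dv / (isp * G0)) - 1.0)

def dv_achieved(m_prop, isp, dry_mass):
    """Inverse check: the delta-v a given propellant load provides."""
    return isp * G0 * math.log((dry_mass + m_prop) / dry_mass)

# Hypothetical single-stage figures, for illustration only:
dv_required = 9300.0   # m/s to orbit, including gravity/drag losses
isp = 310.0            # s
dry_mass = 8000.0      # kg, stage structure plus payload

m_prop = propellant_needed(dv_required, isp, dry_mass)
print(f"propellant required: {m_prop:,.0f} kg")

# Suppose headquarters' figures were short by 10% (a made-up error):
loaded = 0.9 * m_prop
shortfall = dv_required - dv_achieved(loaded, isp, dry_mass)
print(f"delta-v shortfall with the short load: {shortfall:.0f} m/s")
```

The independent check costs a few lines of arithmetic; the error it catches costs a booster and a payload.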
#25
NASA orbit simulation software
kevin willoughby wrote:
: If two of three versions agree, that doesn't mean those two are
: correct.

Though in this case you do get feedback by observing where exactly the satellite's orbit ended up in reality versus what one or more programs predicted.

Subtle problems in a program may not manifest themselves fully until something in the program is pushed out to the edge of its abilities.* Tell the programs to figure out the launch trajectory into an orbit with an apogee of 500 miles and a perigee of 450 miles, and almost all of them will probably give you nearly the same answer. Toss one with an apogee of 1,000,000 miles and a perigee of 50 miles at them, have them figure in the influence of the gravity of the Moon and Sun during each orbit, as well as air drag at perigee and its influence on future orbits... and I'll bet different answers start to emerge from different programs.

* A tragic example of this was trying to use the "Crater" program to determine the probable impact damage Columbia suffered, even though "Crater" was never designed to deal with foam impacts of that severity.

Pat
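The two test orbits above make the point numerically. A sketch using the vis-viva equation for a point-mass Earth, which is exactly the approximation that breaks down for the extreme case, since it ignores the lunar/solar perturbations and drag just mentioned:

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3      # mean Earth radius, m
MI = 1609.344         # meters per statute mile

def orbit_stats(apogee_mi, perigee_mi):
    """Period (hours) and perigee speed from the vis-viva equation,
    v^2 = mu*(2/r - 1/a), treating Earth as an isolated point mass."""
    ra = R_EARTH + apogee_mi * MI
    rp = R_EARTH + perigee_mi * MI
    a = (ra + rp) / 2.0                          # semi-major axis
    period_h = 2.0 * math.pi * math.sqrt(a**3 / MU) / 3600.0
    v_peri = math.sqrt(MU * (2.0 / rp - 1.0 / a))
    v_escape = math.sqrt(2.0 * MU / rp)          # parabolic speed at rp
    return period_h, v_peri, v_escape

for name, ap, pe in (("500 x 450 mi", 500, 450),
                     ("1,000,000 x 50 mi", 1_000_000, 50)):
    hours, v, vesc = orbit_stats(ap, pe)
    print(f"{name}: period {hours:,.1f} h, perigee speed {v:,.0f} m/s "
          f"(escape speed at perigee: {vesc:,.0f} m/s)")
```

For the near-circular orbit the answer is benign. For the extreme one, the perigee speed comes out within a fraction of a percent of escape speed, so tiny differences in how each program models drag and third-body gravity swing the predicted orbit enormously.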
#26
NASA orbit simulation software
Pat Flannery wrote:
: It might be a very good idea to have them try to figure out the same
: thing, and see if their answers agree. Extra math work is pretty cheap
: compared to the cost of a spacecraft and its booster. A real-world
: example of this occurred in the Soviet space program.

It's a very nice real-world example, other than the niggling detail of having exactly nothing to do with the real world, and only being relevant to the nonexistent problem you are handwaving into existence.

You seem to have missed that the different centers have different software _because they have different needs_. A Shuttle will never crash because JPL neglected to check JSC's results. A Jovian probe will never miss because JSC neglected to check JPL's results.

D.
--
Touch-twice life. Eat. Drink. Laugh.
http://derekl1963.livejournal.com/
-Resolved: To be more temperate in my postings. Oct 5th, 2004 JDL
#27
NASA orbit simulation software
On May 19, 11:24 pm, Pat Flannery wrote:
: kevin willoughby wrote:
: : If two of three versions agree, that doesn't mean those two are
: : correct.
:
: Though in this case you do get feedback by observing where exactly the
: satellite's orbit ended up in reality versus what one or more programs
: predicted. Subtle problems in a program may not manifest themselves
: fully until something in the program is pushed out to the edge of its
: abilities.* Tell them to figure out the launch trajectory into an
: orbit with an apogee of 500 miles and a perigee of 450 miles and
: almost all of the programs will probably give you nearly the same
: answer. Toss one with an apogee of 1,000,000 miles and a perigee of 50
: miles at them and have them figure in the influence of the gravity of
: the Moon and Sun during each orbit, as well as air drag at perigee and
: its influence on future orbits... and I'll bet different answers start
: to emerge from different programs.

The physics of the scenario is pretty well known, and programmable; the accuracy you get depends on how many tinier and tinier factors are placed into the program. Also, skirting the atmosphere has a degree of randomness (a bell curve between error bars) because the atmosphere fluctuates unpredictably, but that is why course corrections are expected.

Even small things like a change in the mass of polar ice will affect ballistic trajectories; the resulting dispersion is, I think, what is referred to in mil-speak as CEP for ICBMs.

A famous NASA unprogrammed anomaly, where spacecraft navigation is concerned, is this one, involving a high-speed hyperbolic orbit:
http://en.wikipedia.org/wiki/Pioneer_anomaly

Regards
Ken S. Tucker
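On the CEP aside: for a circular bivariate normal miss distribution with per-axis standard deviation sigma, the circular error probable (the radius containing half the impacts) is CEP = sigma*sqrt(2 ln 2), about 1.1774*sigma. A sketch with an illustrative sigma, plus a Monte Carlo sanity check:

```python
import math
import random

def cep(sigma):
    """CEP for a circular bivariate normal miss distribution:
    the radius enclosing 50% of impacts is sigma*sqrt(2*ln 2)."""
    return sigma * math.sqrt(2.0 * math.log(2.0))

sigma = 200.0          # meters per axis; illustrative only
radius = cep(sigma)

# Monte Carlo check: draw independent Gaussian miss components in the
# downrange and crossrange directions and count impacts inside the CEP.
random.seed(1)
n = 100_000
hits = sum(
    math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma)) <= radius
    for _ in range(n)
)
print(f"CEP ~ {radius:.1f} m; fraction of impacts inside: {hits / n:.3f}")
```

The fraction inside should come out very close to 0.5, which is the definition of CEP.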
#28
NASA orbit simulation software
Pat Flannery wrote:
:
: Having two separate teams figure out the same information in two
: different ways, via original information inputs that they themselves
: had to determine, would be a great idea; but each team should be
: completely conversant with the method the other team is using so they
: can quickly find the problem if the two end results differ
: significantly.
:

No, this is pretty much what you do NOT want, since if Team A is intimately familiar with Approach X, that is going to condition their approach. You want the teams TOTALLY independent if you're trying to do something other than have someone 'check the math'. The latter can be done without having two groups and two approaches, but won't find some classes of errors.

--
"It's always different. It's always complex. But at some point, somebody has to draw the line. And that somebody is always me.... I am the law." -- Buffy, The Vampire Slayer
#29
NASA orbit simulation software
Derek Lyons wrote:
: It's a very nice real world example, other than the niggling detail of
: having exactly nothing to do with the real world and only being
: relevant to the nonexistent problem you are handwaving into existence.

Jorge R. Frank wrote:
: You might respond that the answer is for NASA to create some kind of
: "super-sim" that meets all the centers' requirements. But consider
: that NASA has been doing this work for five decades, and most of
: NASA's sims consist of legacy code going back 3-4 decades, largely
: coded in languages that don't facilitate modular re-use. At the time
: they were coded, there was no alternative to this. Each center came to
: have a workforce with expertise in the particular sims they were
: using.
:
: Today, there are computer languages that facilitate modular reuse, but
: the existing sims would have to be completely re-engineered from the
: ground up in those languages. While this would yield downstream
: benefits in code maintenance and reusability, the upfront cost of
: re-engineering the existing sims from the ground up - and then
: validating them - in order to take advantage of these languages would
: be colossal. And NASA operates in an annual-budget-constrained
: environment. It is therefore always cheaper to update the legacy code
: - especially since each center has people who know that legacy code
: intimately and can update it in a relatively efficient fashion -
: rather than start over from square one.
:
: The only opportunities for making the leap to newer software
: architectures come at program boundaries, which are fairly infrequent.
: While JSC would never consider tearing apart its existing shuttle sims
: so close to the end of the program, they do intend to use JPL's
: planetary ephemerides in its Constellation sims.

So the various centers have workforces that know their particular software very well indeed. But like I said, this means there is a real problem with exchanging data between the centers, as the people at the other center aren't going to know how the data was arrived at if it was derived from software they are unfamiliar with.

Back when the Constellation program first got started, we had someone working on it at NASA contact sci.space.history and ask us where he could find data on the Apollo and Mercury Launch Escape Systems. He wasn't having any luck finding detailed information on them, as all the mass of paperwork that NASA generated during those programs was piled away in vast numbers of boxes with almost no indexing.

NASA has a real problem with doing the simple, non-flashy stuff if it cuts even a little way into the budgets for The Big Plan. For a couple of million dollars, all that data could be converted into PDFs and indexed. For a few million more, standardized computer programs could be written for all the space centers that would allow each of them to do any sort of trajectory simulation they desired and be on the same page when it came to exchanging information... but "meat and potatoes" projects like that don't generate the big headlines the PAO wants. So even though they would be very useful down the line, they don't get funded - and things go along as they always have, using 50-year-old non-standardized trajectory programs while mice nibble away a little more Apollo paperwork by the day.

Pat
#30
NASA orbit simulation software
Pat Flannery wrote:
: kevin willoughby wrote:
: : If two of three versions agree, that doesn't mean those two are
: : correct.
:
: Though in this case you do get feedback by observing where exactly the
: satellite's orbit ended up in reality versus what one or more programs
: predicted.

That works, but it can be a *very* expensive way to debug a program...

--
Kevin Willoughby
It doesn't take many trips in Air Force One to spoil you. -- Ronald Reagan
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post
NASA orbit simulation software | João Gomes | Space Shuttle | 0 | May 14th 09 02:45 PM
NASA seeks volunteers for a spaceflight simulation | Jacques van Oene | Space Shuttle | 8 | December 21st 05 10:37 PM
NASA seeks volunteers for a spaceflight simulation | Jacques van Oene | Space Station | 8 | December 21st 05 10:37 PM
Shuttle re entry simulation software? | Christopher | Policy | 4 | October 14th 03 05:38 PM