Launch Failure Review for 2006



 
 
  #32
August 16th 06, 07:49 PM, posted to sci.space.policy
ed kyle

Rand Simberg wrote:
On 16 Aug 2006 09:39:19 -0800, in a place far, far away, Louis
Scheffer made the phosphor on my monitor glow in
such a way as to indicate that:

Substitute "Falcon" for plstkj, and "fail" for gnrxf, and you've got
the reasoning behind the 2/3 figure. You assume there is some underlying
failure rate 'p', and make your best guess as to what it is, based on what
you've seen.


I understand that. My problem is that there is an assumption of
independence, which is invalid, and that we have no other knowledge
about the system. If it were truly a black box, then I'd agree, but
it's not. You have a lot of smart and motivated people working to
make sure that the failure rate is on the order of a percent or two at
most, so I see no reason to assume that it's 67 percent. I guess my
point is that a Bayesian approach is not necessarily the best one.


The folks may be working to develop a vehicle with a 1-2% failure
rate, but to date they have demonstrated a 100% failure rate. Is
the true failure rate 100% then?

It is pretty safe to guess that it is lower than 100%, but how low?
Right now, the first-order Bayesian method gives a mean estimate of
67%. That estimate will not fall until SpaceX proves it should
by flying Falcon successfully. I think that is a fair method.
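
A minimal sketch of that first-order arithmetic, assuming Laplace's rule of
succession with a uniform prior (the Python below is illustrative only, not
anything from the thread or from SpaceX):

# Laplace's rule of succession: with a uniform Beta(1,1) prior on the
# failure rate p, the posterior after f failures in n launches is
# Beta(f + 1, n - f + 1), whose mean is (f + 1) / (n + 2).

def posterior_mean_failure_rate(failures, launches):
    """Posterior mean of the failure rate under a uniform prior."""
    return (failures + 1) / (launches + 2)

# One Falcon launch so far, one failure: the 67% figure quoted here.
print(posterior_mean_failure_rate(failures=1, launches=1))  # 0.666...

# The estimate falls only as successes accumulate; one failure followed
# by two successes drops it to 40%.
print(posterior_mean_failure_rate(failures=1, launches=3))  # 0.4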

As for the assumption of independence - SpaceX is still the
company building these rockets, right? The person or persons
who specified the failed hardware on the first rocket must also
have contributed to the design of the rest of the rocket. There
may very well be more still-to-be-discovered failure modes in
this machine.

Maybe they are improving the design before the next try.
Good. Now prove it.

- Ed Kyle

  #33
August 16th 06, 07:56 PM, posted to sci.space.policy
[email protected]

Dude, who cares what your name is; it doesn't really matter. You
cannot arbitrarily lift your intellect above everybody else's and
thereby declare that you do not have to cite your sources or references.
You, my friend, are no better than anybody else on this board, but you
have taken it upon yourself to act as the information "police" in
the name of science and attacked people for their posts. You can
challenge somebody on the merits of the science, and that will weed out
what is not true and what is, but interjecting fallacies of reasoning
or ad hominems does not make a valid challenge to someone's
conclusions. So ask for sources and citations, but when you are
pressed for citations or references for your own posts you cannot
insulate yourself from your own demands, or you are practicing
hypocrisy, not science. In the scientific world (and this is a
sci.space.policy group), when somebody makes uncorrelated, unvalidated
assertions and then attacks the person who is requesting clarification,
as you have, your actions may bring into question your ethics,
honesty, and integrity.
tom

Rand Simberg wrote:
On 16 Aug 2006 08:24:23 -0700, in a place far, far away, "Eric Chomko"
made the phosphor on my monitor glow in such
a way as to indicate that:

Hey randy,

That's not my name, you moron.


How about Ranty then?


Nope. Even farther off, moron.


  #34
August 16th 06, 08:03 PM, posted to sci.space.policy
Rand Simberg

On 16 Aug 2006 11:49:27 -0700, in a place far, far away, "Ed Kyle"
made the phosphor on my monitor glow in such a
way as to indicate that:

Rand Simberg wrote:
On 16 Aug 2006 09:39:19 -0800, in a place far, far away, Louis
Scheffer made the phosphor on my monitor glow in
such a way as to indicate that:

Substitute "Falcon" for plstkj, and "fail" for gnrxf, and you've got
the reasoning behind the 2/3 figure. You assume there is some underlying
failure rate 'p', and make your best guess as to what it is, based on what
you've seen.


I understand that. My problem is that there is an assumption of
independence, which is invalid, and that we have no other knowledge
about the system. If it were truly a black box, then I'd agree, but
it's not. You have a lot of smart and motivated people working to
make sure that the failure rate is on the order of a percent or two at
most, so I see no reason to assume that it's 67 percent. I guess my
point is that a Bayesian approach is not necessarily the best one.


The folks may be working to develop a vehicle with a 1-2% failure
rate, but to date they have demonstrated a 100% failure rate. Is
the true failure rate 100% then?


Of course not. The point is that it's foolish to attempt to come up
with a "rate" with a low number of samples (particularly with a sample
of one). A Bayesian analysis at this point is meaningless.
  #35
August 16th 06, 09:49 PM, posted to sci.space.policy
[email protected]


Rand Simberg wrote:
I guess my
point is that a Bayesian approach is not necessarily the best one.


I don't know if it's best, but it's really hard to fit a more accurate
model with little data.

One more accurate model might have N latent flaws, each capable of
killing a mission with some probability. I'd also assume that people
in this business are good enough to remove each flaw once it is found.
They also may fix some flaws found in other ways - by close calls, or
better analysis, or failures in other similar systems.

If what you have is lots of small flaws, then this looks a lot like the
Bayesian model, since finding and fixing one does not help the overall
failure rate much. This is probably reasonable for rockets with a long
launch history.

But new rockets may well contain big flaws. Assuming the folks who
design these things know what they are doing, and pay attention to the
lessons of previous failures, there should only be a small number of
these big flaws. In this case each failure may considerably *increase*
the odds of the next mission working, as that particular cause is
eliminated.

The problem is that you cannot fit this model with few examples.
Suppose the rocket starts life with M big flaws, each of which kills
the mission with prob 0.5. If M=1, the next launch will succeed 100%
of the time (minus the small random remaining failures). If M=10, then
nine flaws remain after the first one is fixed, and the next launch
succeeds only 0.5^9, or about 0.2%, of the time.
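
A minimal simulation sketch of this latent-flaw model, assuming (as the
paragraph above does) that the flaw behind the first failure is found and
fixed before the next flight; the 0.5 kill probability and the M=1 and M=10
cases come from the post, and the function names are illustrative:

import random

def next_launch_succeeds(big_flaws, kill_prob=0.5):
    # The flaw that caused the first failure is taken as found and fixed,
    # so big_flaws - 1 latent flaws remain, each killing the mission
    # independently with probability kill_prob.
    remaining = big_flaws - 1
    return all(random.random() > kill_prob for _ in range(remaining))

def success_rate(big_flaws, trials=100000):
    return sum(next_launch_succeeds(big_flaws) for _ in range(trials)) / trials

print(success_rate(big_flaws=1))   # ~1.0: the lone flaw was fixed
print(success_rate(big_flaws=10))  # ~0.002: nine flaws left, 0.5**9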

You are arguing that from experience, and your impressions of the
Falcon team, M is likely a small number, and hence the next launch is
more than 33% likely to succeed. But you could equally well argue this
is the first rocket, by a new team, trying for low cost, and hence M is
likely large, and failure on the next launch very likely.
Unfortunately, you cannot tell the difference between these two models
from the data so far, and the success probabilities spanned by plausible
values of M range from basically 0 to 100%. So a more accurate model is
not very helpful in predicting the chances of success on the next launch.

So I agree that the Bayesian model is not strictly correct, since it
assumes independence, which is certainly not true in practice. But it
may be the best that can be done with the very limited data available.

Lou Scheffer

  #36
August 16th 06, 10:08 PM, posted to sci.space.policy
Rand Simberg

On 16 Aug 2006 13:49:24 -0700, in a place far, far away,
made the phosphor on my monitor glow in such a way as
to indicate that:

You are arguing that from experience, and your impressions of the
Falcon team, M is likely a small number, and hence the next launch is
more than 33% likely to succeed. But you could equally well argue this
is the first rocket, by a new team, trying for low cost, and hence M is
likely large, and failure on the next launch very likely.


In the latter case, you have to argue that these smart people also
don't understand that launch failures are bad for business.

Unfortunately, you cannot tell the difference between these two models
from the data so far, and the success probabilities spanned by plausible
values of M range from basically 0 to 100%. So a more accurate model is
not very helpful in predicting the chances of success on the next launch.


Again, I don't make my probabilistic assessment from the data alone.

So I agree that the Bayesian model is not strictly correct, since it
assumes independence, which is certainly not true in practice. But it
may be the best that can be done with the very limited data available.


I disagree that the data is as limited as you and Ed state.
  #37
August 16th 06, 11:54 PM, posted to sci.space.policy
[email protected]

Rand Simberg wrote:
On 16 Aug 2006 13:49:24 -0700, in a place far, far away,
made the phosphor on my monitor glow in such a way as
to indicate that:

You are arguing that from experience, and your impressions of the
Falcon team, M is likely a small number, and hence the next launch is
more than 33% likely to succeed. But you could equally well argue this
is the first rocket, by a new team, trying for low cost, and hence M is
likely large, and failure on the next launch very likely.


In the latter case, you have to argue that these smart people also
don't understand that launch failures are bad for business.

Lots of smart people write software, too. And they understand, very
well, that failures are bad for business. Yet software, especially
new software, has lots of bugs, including more than a few fatal bugs.
This is due to lack of time allocated for testing, pure technical
inability to see all possible cases, schedule pressure from
competition, bank accounts draining more rapidly than desired, just
plain mistakes and bad decisions, and many other causes. All these
exist in the rocket community, as well.

So while you can *hope* that there are few bugs since the developers
plan on making money, you cannot infer that there *will* be few bugs.
The developers may know this, understand this, believe this, and they
are doubtless trying their best, but mere desire does not translate
into performance.

Lou Scheffer

  #38
August 17th 06, 12:02 AM, posted to sci.space.policy
Rand Simberg

On 16 Aug 2006 15:54:26 -0700, in a place far, far away,
made the phosphor on my monitor glow in such a way as
to indicate that:

In the latter case, you have to argue that these smart people also
don't understand that launch failures are bad for business.

Lots of smart people write software, too. And they understand, very
well, that failures are bad for business. Yet software, especially
new software, has lots of bugs, including more than a few fatal bugs.
This is due to lack of time allocated for testing, pure technical
inability to see all possible cases, schedule pressure from
competition, bank accounts draining more rapidly than desired, just
plain mistakes and bad decisions, and many other causes. All these
exist in the rocket community, as well.

So while you can *hope* that there are few bugs since the developers
plan on making money, you cannot infer that there *will* be few bugs.
The developers may know this, understand this, believe this, and they
are doubtless trying their best, but mere desire does not translate
into performance.


That's an interesting point, given how many software developers have
gone into the space hardware business. Hopefully the software
developers will recognize this before it wrecks their businesses...

I should note, based on insider info, that SpaceX does in fact
recognize this, if only because of the first failure...
  #39
August 17th 06, 02:39 PM, posted to sci.space.policy
ed kyle


Rand Simberg wrote:
On 16 Aug 2006 11:49:27 -0700, in a place far, far away, "Ed Kyle"
made the phosphor on my monitor glow in such a
way as to indicate that:

Rand Simberg wrote:
On 16 Aug 2006 09:39:19 -0800, in a place far, far away, Louis
Scheffer made the phosphor on my monitor glow in
such a way as to indicate that:

Substitute "Falcon" for plstkj, and "fail" for gnrxf, and you've got
the reasoning behind the 2/3 figure. You assume there is some underlying
failure rate 'p', and make your best guess as to what it is, based on what
you've seen.

I understand that. My problem is that there is an assumption of
independence, which is invalid, and that we have no other knowledge
about the system. If it were truly a black box, then I'd agree, but
it's not. You have a lot of smart and motivated people working to
make sure that the failure rate is on the order of a percent or two at
most, so I see no reason to assume that it's 67 percent. I guess my
point is that a Bayesian approach is not necessarily the best one.


The folks may be working to develop a vehicle with a 1-2% failure
rate, but to date they have demonstrated a 100% failure rate. Is
the true failure rate 100% then?


Of course not. The point is that it's foolish to attempt to come up
with a "rate" with a low number of samples (particularly with a sample
of one). A Bayesian analysis at this point is meaningless.


Though not as meaningful as it will be with more samples, I think
it still offers some information as-is. It provides a gross estimate
that allows comparison with other low-flight launchers.
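
A quick sketch of how rough that one-flight estimate still is, under the same
uniform prior (illustrative only, assuming scipy is available; none of this is
from the thread):

from scipy.stats import beta

# Posterior on the failure rate after 1 failure in 1 launch, starting
# from a uniform Beta(1, 1) prior: Beta(2, 1).
posterior = beta(a=2, b=1)

print(posterior.mean())          # 0.666..., the 67% figure in the thread
print(posterior.interval(0.95))  # roughly (0.16, 0.99): still very wide

# The same machinery gives a comparable gross estimate for any launcher
# with few flights: f failures in n launches -> Beta(f+1, n-f+1).
def posterior_for(failures, launches):
    return beta(a=failures + 1, b=launches - failures + 1)

print(posterior_for(failures=1, launches=3).mean())  # 0.4 after two successes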

And heck - if one of the next two launches fails, it will turn out to
be right!

- Ed Kyle

  #40
August 17th 06, 03:46 PM, posted to sci.space.policy
Jeff Findley


"Rand Simberg" wrote in message
...
That's an interesting point, given how many software developers have
gone into the space hardware business. Hopefully the software
developers will recognize this before it wrecks their businesses...

I should note, based on insider info, that SpaceX does in fact
recognize this, if only because ot the first failure...


Coming from the software business (finite element analysis software), I can
say that a lot of this depends on the organization: upper management's
emphasis, or lack thereof, on quality in the product gives you a good
indication of the order of magnitude of bugs you're going to find. Also,
the presence or absence of a dedicated software quality team should tell you
a lot.

The worst place to be is in a group where the same people who write the code
supposedly test the code. Even well-meaning employees will miss entire
classes of bugs if they test their own code.

Jeff
--
"They that can give up essential liberty to obtain a
little temporary safety deserve neither liberty nor
safety"
- B. Franklin, Bartlett's Familiar Quotations (1919)


 



