#151
greywolf42 wrote:
> Bjoern Feuerbacher wrote in message ...
>> George Dishman wrote:
>>> "greywolf42" wrote in message ...
>>> [snip]
>>>> Even plasma fireworks does not require the big bang. The latter
>>>> adds creation of space.
>>>
>>> Big bang describes the idea that at large scales, distances between
>>> objects are systematically increasing. It doesn't yet go back to
>>> creation, since the theories break down at least at the Planck time.
>>
>> I think he did not mean the initial creation of space here, but
>> wanted to say that the expansion of the universe required a
>> continuous "creation" of "new" space.
>
> [snip]
>
> I think we are all on the same page on this one. (Thank goodness.)

Good. And now please explain why continuous creation of new space is a
problem.

Bye,
Bjoern
#152
greywolf42 wrote:
> Bjoern Feuerbacher wrote in message ...
>> greywolf42 wrote:
>>> George Dishman wrote in message ...
>>> [snip stuff discussed elsewhere]
>>>> Ok, I think we have enough to try this with what you've said. The
>>>> aim is partly to show how Ned's test can be applied, and also to
>>>> give a starting point so that you can try to produce a
>>>> self-consistent model that explains the spectrum of the CMBR.
>>>
>>> Actually, no. The purpose of these threads was to see if Ned's
>>> complaints against tired light theories were valid. And since tired
>>> light theories don't need to explain the CMBR,
>>
>> Why not?
>
> Because the MBR is not part of the theory, of course. The fact that
> the BB assumes that the MBR is "relic radiation" does not mean that
> other theories have to explain it.

So tired light theories only attempt to explain red shift? They don't
try to model other cosmological observations?

>>> Ned's claims (based on cosmogenic CMBR) are not valid.
>>
>> What, exactly, do you mean by "cosmogenic"?
>
> Created by the origin of the cosmos.

Ned's example of a tired light theory does not use a CMBR which is
created by the origin of the cosmos.

[snip]

Bye,
Bjoern
#153
![]() "Bjoern Feuerbacher" wrote in message ... George Dishman wrote: "greywolf42" wrote in message ... [snip] Even plasma fireworks does not require the big bang. The latter adds creation of space. Big bang describes the idea that at large scales, distances between objects are systematically increasing. It doesn't yet go back to creation since the theories break down at least at the Planck time. I think he did not mean the initial creation of space here, but wanted to say that the expansion of the universe required a continuous "creation" of "new" space. Possibly, but I don't think it makes much sense to describe expansion this way. In the big bang version, photons can be thought of as being "stretched" during their journey by the same factor as space expands but "creation of new space" raises the prospect of a photon getting cut in half if it happened to straddle the location where a bit of "new space" was "created". Both pictures, cutting a piece of vacuum in two and putting more new vacuum inbetween or taking a piece of vacuum and stretching it sound equally odd. Just saying that material objects end up farther apart is true either way. I suspect red shift can be considered in terms of the relationship between worldlines and geodesics at the emitter and receiver again without worrying about what happens in flight (but I can't be sure of that as I haven't studied GR) but it makes more sense to me than thinking of photons as extended objects that get stretched along with the underlying "fabric of space". George |
#154
Joseph Lazio wrote in message ...
>>>>> "g" == greywolf42 writes:
>
>>> George Dishman wrote in message ...
>>>> First I'll draw together a few bits that I think sum up most of
>>>> the relevant parts of the discussion, though it's quite possible
>>>> I'll miss some.
>>
>> You did miss the fundamental point that the MBR in my favorite
>> theory comes from the antennae of our measuring devices.
>
> This was a favorite statement of Grote Reber, the first radio
> astronomer. I've never quite known what to make of it.

Thanks, I hadn't heard of Reber. Did he do any work with Bell Labs?

> Reber essentially invented the field of radio astronomy, so I'm
> inclined to take seriously any of his suggestions. On the other hand,
> he was wrong at times.

So are we all.

> Moreover, he never suggested a physical mechanism by which the MBR
> would be produced, and he knew as much about radio antennas as
> anybody.

One doesn't have to have a theory for the mechanism in order to
experimentally identify the difference between an internal signal and
an external signal.

> I also don't understand how this would explain observations of the
> temperature of the CMBR in other galaxies

Since we aren't in other galaxies, there are no such observations.
Claims otherwise are based on circular logic.

> nor how it would explain the SZ effect. (The antenna "knows" when we
> are looking at a cluster of galaxies and adjusts the resulting signal
> accordingly?)

Quite simply, the claimed observation "SZ effect" is an artifact of
circular theories and dedicated theorists.

As noted in recent posts, my understanding of the S-Z effect is that
the inspiration behind it is fine (if there IS a CMBR, then hot
electrons will distort the CMB spectrum toward the blue). The problem
arises in execution, where excessive zeal and sloppy terminology lead
one to hunt for minuscule reductions in the intensity of specific MBR
wavelengths. Literally dozens of experiments were done that "should
have been" of sufficient precision -- but all they found was noise. A
few more recent experiments have "removed systematic errors" by
computer processing, and claim resolutions below the physical
resolutions of the apparatus.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}
#155
George Dishman wrote in message ...
> "greywolf42" wrote in message ...
>> George Dishman wrote in message ...
>> [snip]
>>> If the temperature of the aether at 1000 AU is T, that at 2000 AU
>>> should be very close to (but slightly less than) T/sqrt(2), since
>>> the power from a black body radiator is proportional to T^4.
>>
>> Oops, here is your error. The aether temperature is not merely a
>> function of the local addition of energy from starlight degradation.
>> (Temperature is a function of E, not of dE.) Starlight degradation
>> is a minuscule contribution to the pre-existing aether energy
>> density. ...
>
> Ah, the penny drops, thanks.

You're welcome. One more side issue down.

> Eddington showed energy in starlight was equivalent to circa 2.8K,
> but only a small fraction of that transfers to the aether.

Actually, all of it would transfer back to the aether -- eventually.
Only a fraction from any one source would transfer at any given dV.

> Also, even if the aether temperature is much higher than 2.8K, your
> previous comments would get round this if the transfer of energy to
> electrons is slow compared to the re-radiation as thermal energy by
> the electrons. The total power transferred just needs to match that
> radiated at 2.8K.

You have the general idea. Now I'm going to add one more bit of
confusion. Thermal radiation from non-fusing sources in thermal
equilibrium, such as planetary bodies and cooled collapsed objects,
will eventually be radiating just as much power as they absorb from
gravitation. And that thermal radiation will eventually be re-absorbed
into the aether. Which is the part of my model that I believe you now
understand.

But since we were mentioning starlight, we have to touch on the issue
of fusion inside stars. Fusion energy is a temporary additional
source. So the amount of heat leaving a fusing star will actually be
above that received from the gravitational force. Until it burns out
and cools down. The original source of the fusion energy would come
from the original formation of protons (and anti-protons) out of the
original aether. And the origin of the aether (and universe) is as yet
undescribed by the model.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}
#156
Joseph Lazio wrote in message ...
>>>>> "g" == greywolf42 writes:
>
>>>>> Looking at Table 4, we see that the error bars on all the
>>>>> measurements are substantial fractions (...) the claimed
>>>>> "measured" value.
>>>>
>>>> Which clearly demonstrates my point about noise processing.
>>>
>>> No, it doesn't. A basic aspect of signal processing is dealing with
>>> and extracting signals from data streams for which the
>>> signal-to-noise ratio is less than unity.
>>
>> That's if you know that you have a signal. Because you sent one.
>
> I suspect that there are lots of people (not astronomers) who spend
> time processing data streams to see if signals are present. Indeed, I
> suspect that the entire field of signal processing would be a lot
> more boring if one could only process data streams for which one
> knows a signal to be present.

Your suspicions are both unfounded and irrelevant.

>> If you feed such algorithms random noise, they will still provide
>> you with an appearance of a signal.
>
> I'm sure some algorithms can fail in that manner. There are many
> algorithms in signal processing, though. I can think of some simple
> cases where your statement is easily false.

I don't know of any that don't produce false signals. They all are
designed to enhance slight deviations. However, I do know that the
algorithms used in WMAP, for example, are false-signal producers. In
discussions with Ted Bunn, we found that the algorithms could produce
false signals. But since WMAP assumes there *are* signals, Ted figured
it doesn't matter. Which is called circular logic.

> But, to address the issue in your case, we'd have to look at the
> specific algorithms used in the experiments in which you claim to
> have found such signals.
>
>>> Indeed, a simple example is estimating the mean and uncertainty in
>>> the mean from a set of data.
>>
>> Yes. And a sample of random noise *will* give you a mean and an
>> uncertainty in the mean. It doesn't mean that you have a real
>> signal.
>
> You don't specify the kind of random noise to which you're referring,
> but, yes, random noise can have a mean. Perhaps the most basic is the
> normal distribution, which is specified completely by its mean and
> variance. So? Without a better specified problem, your objection is
> somewhat meaningless.

But I specified the problem. The signals claimed are below the
physical resolution of the detector.

> If I estimate the mean and the uncertainty in the mean from a set of
> data, find that the mean is consistent with zero, and conclude that
> there is no signal present, how is that a problem?

That isn't a problem. But it is not the case under discussion.

> Conversely, if I estimate the mean and its uncertainty, find that it
> is not consistent with zero, and conclude that there is a signal,
> what have I done wrong?

That would depend upon how you "found" that the mean and uncertainty
were not "consistent with zero." If -- as in the case under discussion
-- you were claiming a result below the resolution of the detector,
then you would be wrong.

>>> In the basic case of data with approximately equal uncertainties,
>>> the mean is given by (1/N)*sum{x}, where {x} are the data and N is
>>> the total number of data, and the uncertainty in the mean is given
>>> by s/sqrt(N), where s is the uncertainty in measuring the
>>> individual values of x.
>>
>> But you don't know the "uncertainty" in measuring the individual
>> values of a set of data beforehand. You may know the theoretical
>> precision of the apparatus.
>
> Funny. I seem to recall an exchange between you and Ted Bunn in which
> Ted was expressing concern about the uncertainty in the data, whereas
> you were quite confident in the results.

Odd that you'd recall something like this and not bother to find the
reference.

> More generally, of course, knowing the uncertainties in the
> individual data is one of the great challenges in experimental and
> observational sciences.

That phrase indicates that someone is challenged in this arena, all
right. First, I think you mean "individual datum". Data is plural. And
an individual datum has no statistical uncertainty. Groups of
measurements of a single parameter (i.e. data) have a resulting
statistical uncertainty. You are confusing theoretical precision with
experimental uncertainty.

> There are ways to estimate the uncertainties (for instance,
> calculating the standard deviation of the data),

That is the *only* way to get statistical uncertainty.

> but they make certain assumptions,

There are no specific assumptions at all. Except that one is not
varying any independent parameters (i.e. you are measuring the same
thing).

> and one wants to be careful to check that those assumptions are
> valid.

And -- in the case under discussion -- "they" are not checking the
assumptions. I see you have no substantive counter-argument to the
specific case. Merely retreats to generalities.

> However, it's also one of the reasons your glib dismissals of
> experimental results that you don't like ring so hollow among those
> of us with experience in data processing.

Non sequitur combined with special pleading.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}
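For readers following the statistics, here is a minimal sketch, with
synthetic data and made-up numbers, of the estimator Lazio describes:
the mean (1/N)*sum{x} and its uncertainty s/sqrt(N), applied to a data
stream whose per-sample signal-to-noise ratio is far below unity:

```python
# Synthetic demo: a constant signal of 0.5 buried in Gaussian noise of
# sigma = 20 (SNR << 1 per sample).  The mean of N samples still has
# uncertainty s/sqrt(N), so the offset is recoverable.
import numpy as np

rng = np.random.default_rng(1)
x = 0.5 + rng.normal(0.0, 20.0, 100_000)    # data = signal + noise

mean = x.mean()
sigma_mean = x.std(ddof=1) / np.sqrt(x.size)
print(f"mean = {mean:.3f} +/- {sigma_mean:.3f} "
      f"({mean / sigma_mean:.1f} sigma from zero)")
# Pure noise (signal = 0) would typically give a mean within about one
# sigma_mean of zero; here the offset comes out at roughly 8 sigma.
```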
#157
Greg Hennessy wrote in message ...
> In article , greywolf42 wrote:
>>>> Have you even *READ* the COBE papers?
>>>> http://aether.lbl.gov/www/projects/c...r_final_apj.ps
>>>
>>> You snipped the part where COBE found variations in the data,
>>> something you claimed it didn't find.
>>>
>>> Probably he simply did not understand that your quote addressed the
>>> issue.
>>
>> Feel free to provide the excerpt showing that the claimed results (1
>> in 100,000) were below the physical resolution of the instruments (1
>> in 10,000).
>
> The claimed result was 13 microK. The physical resolution of the
> instrument was 4 microK.

COBE's detectors don't read in microK. They read in intensity at a
series of microwave wavelengths. And these two statements weren't in
the quote about COBE. Which was:

==================
We have analyzed the first year of data from the Differential
Microwave Radiometers (DMR) on the Cosmic Background Explorer (COBE).
The data show the dipole anisotropy, Galactic emission, and instrument
noise, and detect statistically significant (7 sigma) structure that
is well-described as scale-invariant fluctuations with a Gaussian
distribution. The major portion of the observed structure cannot be
attributed to known systematic errors in the instrument, artifacts
generated in the data processing, or known Galactic emission. The
structure is consistent with a thermal spectrum at 31, 53, and 90 GHz,
as expected for cosmic microwave background anisotropy. The rms sky
variation, smoothed to a total 10 deg FWHM Gaussian, is 30 +- 5 microK
for Galactic latitude |b| > 20 deg data with the dipole anisotropy
removed. The rms cosmic quadrupole amplitude is 13 +- 4 microK. The
angular auto-correlation of the signal in each radiometer channel and
cross-correlation between channels are consistent and give an angular
power-law spectrum with index n = 1.1 +- 0.5, and an
rms-quadrupole-normalized amplitude of 16 +- 4 microK (Delta T / T ~ 6
x 10^-6). These features are in accord with the Harrison-Zel'dovich
(scale-invariant, n = 1) spectrum predicted by models of inflationary
cosmology. The low overall fluctuation amplitude is consistent with
theoretical predictions of the minimal level of gravitational
potential variations that would give rise to the observed present-day
structure.
==================

Your claims (and Bjoern's) are false.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}
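As a quick arithmetic check on the abstract quoted above (a sketch
only; the 2.73 K mean temperature is taken from elsewhere in the
thread, not from the quote itself), the quoted rms-quadrupole-
normalized amplitude of 16 microK does reproduce the quoted Delta T /
T of about 6 x 10^-6:

```python
# Consistency check using only the numbers discussed above.
dT = 16e-6     # rms-quadrupole-normalized amplitude from the abstract (K)
T = 2.73       # CMB mean temperature (K), as used elsewhere in the thread
print(f"Delta T / T = {dT / T:.1e}")   # ~5.9e-06, matching the quoted 6e-6
```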
#158
George Dishman wrote in message ...
> "greywolf42" wrote in message ...
>> George Dishman wrote in message ...
>
> most quotes snipped, one moved later
>
>>> "greywolf42" wrote:
>>>> George Dishman wrote:
>>>> [snip]
>>>>> If he can come up with one that explains the spectrum of the CMBR
>>>>
>>>> Electron vortex noise from the aether. A local effect due to
>>>> electrons bound in hydrogen gas.
>
> your comment on CMBR relevance snipped but addressed at the end
>
> -:-
>
>>> Postulates: The CMBR is produced by "Electron vortex noise from the
>>> aether, a local effect due to electrons bound in hydrogen gas."
>>
>> This is not a "postulate" of tired light theory. It is not even a
>> postulate of my current favorite theory. It is an unavoidable
>> consequence of the aether-matter model that I favor.
>
> It is just what you said, quoted above.

The quote above never mentions "postulates". It is a conclusion, not
an assumption (postulate).

>>> The electrons are producing a blackbody spectrum at an equivalent
>>> temperature of roughly 2.73K.
>>
>> Slight correction: the electron impedance noise gives a signature
>> that is equivalent to 2.73 (or 2.81) K.
>
> I don't see the difference (unless you mean the specific
> temperature). Does this "electron vortex noise" have a blackbody
> spectrum or not?

Yes, the spectrum mimics the blackbody shape. No, the source is not
the temperature of the electron.

>>> The solar system is moving through this hydrogen
>>
>> No such assumption is needed. Under any version. The EM waves
>> (photons) emitted are based in the aether. Thus, it is motion
>> through the aether that is important. Not motion relative to
>> hydrogen.
>
>>> and as a result there is a Doppler effect which produces the cosmic
>>> dipole moment.
>>
>> The MBR dipole moment comes from the motion of the detectors through
>> the aether. The MBR moves within the local aether fluid. See my
>> other post.
>
> If the hydrogen (or whatever is the source of the radiation) isn't
> moving relative to the detector then you don't get a Doppler effect
> and you don't explain the dipole.

That's relativity, not aether theory. The electrons themselves are
distorted by their motion through the aether. Hence, so is the
emission.

>>> This "electron hum" is produced everywhere roughly uniformly as the
>>> electron density does not affect the emitted intensity.
>>
>> The electron hum is produced mainly within the antennae of the MBR
>> detectors.
>
> Sorry, it's not possible to explain the dipole in that case. I'm not
> pushing the strawman, feel free to change your suggested source in
> any way you think can explain the dipole.

You are incorrect. However, I must apologize for overly simplifying my
responses -- to the point where I was apparently not clear.

>>> due to grey dust or other possible causes of extinction.
>>
>> Wherever did you get this one?
>
> I missed a quote, it was in one of your recent posts. I'll try to
> find it if you like.

You have a short memory. That was you on 12/27: "Extinction is
discussed by Perlmutter as 'grey dust'." Extinction is a totally
separate concern. (As it is in all astrophysics.) Black body radiators
are also perfect absorbers. Did you have a relevant point to make?

> -:-
>
>>> To analyse the above using Ned's test,
>>
>> Correction: *YOUR* test. This isn't Ned's test.
>
> IMO it is.

We've been through that, and agreed that I will listen to *your*
version. Oh, and you aren't addressing my model -- which is an
artifact of the matter of which our detectors are constructed. But go
ahead with your "distant-source" origin analysis. If valid, it can be
applied to other (as yet unknown) theories of the origin of the
(C)MBR.

>>> we split the universe around the solar system into thin, concentric
>>> spherical shells of thickness dR at radius R. The surface area of
>>> each shell can be thought of as composed of many small cells of
>>> volume dV, and the number of such cells is R^2*dR/dV. Ignoring
>>> tired light energy loss, the amount of radiation we receive from
>>> each cell is proportional to R^-2 (inverse square law) and
>>> proportional to dV, hence the total rate of photons from each shell
>>> is independent of R.
>>
>> Assuming that radiation source density is constant in all dV,
>> throughout the universe. If we limit the region of analysis to a
>> thin shell at distance R_0, then you assume that the radiation
>> source density is constant throughout the shell.
>
> I would have thought that too, but you said:
>
> "greywolf42" wrote in message ...
>>> George Dishman wrote in message ...
>>> [snip]
>>> The extra factor to be taken into account in this case would be
>>> the electron density.
>>
>> Nope. Electron density wouldn't change anything.
>
> Again, I'm not pushing the strawman, correct me if that is in error.

Ned's theory requires an electron density for cosmic origin. Your
theory requires an electron density for emission at an intermediate
location. Mine does not ... because it is a signal internal to the
antenna.

>>> However, depending on the "cross section" of an electron, it may be
>>> only a fraction k of the amount that would be emitted by a solid
>>> (opaque) surface. This factor k is adjustable. The total radiation
>>> we receive is then the sum of the photons from all the shells,
>>> however each photon will be measured at a frequency and energy
>>> which has been reduced from that at which it was transmitted by the
>>> tired light effect.
>>>
>>> http://www.astro.ucla.edu/~wright/tiredlit.gif
>>>
>>> Looking at Ned's graph, the local (z ~ 0) electron hum would be
>>> measured as the black line other than being scaled down by the
>>> factor k.
>>
>> Since the k applies to all source densities (distant and local), the
>> "k" factor here will be a wash.
>
> I'm not familiar with that term, what do you mean?

"k" is *your* term, above. (Because you have to determine the source
density from what you measured.)

> The factor is present in the equations but may be able to be
> determined empirically, is that what you meant by "a wash"?

Close enough.

>>> However, to that we must add contributions from greater distances
>>> since there is no appreciable extinction. First think of a series
>>> of shells at z=0.1, z=0.2, etc. Each would produce a curve similar
>>> to the black line with the same peak intensity but with the peak
>>> frequency moved to the left.
>>
>> And down. Each shell would result in the same (black) curve.
>
> No, each shell would produce a different red curve depending on the
> source temperature, the distance (hence z), the k factor for that
> shell, and the integral of k for all shells closer to us which will
> partially hide more distant shells.

Why are you now assuming that the temperatures are different at each
source?

> For the peak of each curve to match the black curve, each shell must
> be at a temperature of (1+z)T, so the farther back into the past you
> look, the higher the temperature.

But we don't need each individual curve to match the black curve. If
you are arguing against your uniform-external-source model, you need
to show that the integrated signature doesn't match the shape of the
received curve. (Of course, you'd also have to justify your
Earth-centered universal temperature distribution.) What would work
better against that strawman is an integration over constant density
and constant temperature sources.

>>> The total would then be the sum of an infinite series of such
>>> curves. It should be clear that essentially the total observed
>>> curve becomes something like a straight line to the left (lower
>>> frequency) of the locally generated peak. Of course the series of
>>> discrete shells is an approximation as the source is continuous, so
>>> to find the real prediction let dR tend to zero and integrate
>>> instead of summing. The overall intensity of the curve can be
>>> adjusted by changing k, but the intensity will always be too high
>>> at frequencies below the peak for a blackbody. In fact I don't
>>> think you will get a peak at all.
>>
>> That is because you have assumed (incorrectly, I believe) that the
>> frequency shifts without losing energy. The problem is (I think)
>> that you have tried to do your integration in the "per nu"
>> expression. Energy is lost per photon. Not per unit frequency. This
>> changes the results of the integration.
>
> I hope my other post cleared that up. The peaks are equal if you
> allow for the energy loss due to graphing against frequency. They are
> not equal if you don't.

A substantive physics test will not rely upon the type of graphing
used.

> Ned's graph is correct for tired light.

Here you violate your own rule that Ned's test / *your* test can only
be used against a single theory at a time. Yet here you throw out
another blanket claim that "tired light" is disproved.

> I've spent too much time putting this together and I don't want to
> spend more time doing the integration, I think I've said enough so
> you can if you wish.

I think we both see the intent of your test.

> The point is that, with the stated strawman, the observed spectrum
> will not match a blackbody.

Actually, the observed spectrum looks like it will match a blackbody
-- because you have tried to work within a "per nu" function. While
the strawman requires decay in the photon -- not wavelength.

> No, tired light reduces the energy by (1+z) while the
> Stefan-Boltzmann Law increases the total power by (1+z)^4, leaving
> the discrepancy of (1+z)^3. That is the point of Ned's page.

Based on what assumption of source density? You keep ignoring this
question.

> So the question is can you change the strawman, or indeed discard it
> entirely and replace it with a real tired light theory, and show that
> you can then match the observed spectrum while still explaining the
> dipole?

Again, tired light theories have no need to explain the MBR spectrum.
The question before you is whether your disproof of the strawman is
valid. Then you can keep it in your pocket for use if anyone ever
proffers a combined MBR / tired light theory that presumes that matter
within space is uniform and gives rise to the MBR constantly
throughout the universe.

> Yes and no. Think back to how we got the value of 0.024% per Mpc for
> mu in the tired light theory. It comes from the observed redshift
> versus distance.

Of starlight, yes.

> Now note that in the Plasma Fireworks model, some of the redshift is
> due to motion

Yes.

> so the amount of energy loss due to tired light would be less

Yes.

> hence mu would have a smaller value

Yes.

> which we could find if we could separate out the motion part.

Which we can't, of course, without other theoretical calculations.

> If objects were moving apart fast enough, that could explain all the
> redshift and hence mu would be zero.

Yes. You can have a PF model without tired light. And you can have a
tired light model without PF. Redshift-distance alone cannot determine
which is real.

> Since the volume of any region of space increases as the cube of its
> dimensions, that reduces the photon density and hence the intensity.

You've fallen into the BB assumptions again. Photon density remains
constant per unit volume as you arbitrarily expand the volume of space
you are considering in your "region". Unless you expand space (which
is a BB assumption). PF does not expand space. TL does not expand
space.

> If there is no motion

If there is no motion, then there is no PF model. Why are you throwing
around these self-contradictory arguments? Have you now abandoned
discussion of the pure PF model and gone back to pure tired light?

> then there will be an error in intensity of (1+z)^3, while if the
> motion causes expansion by a scaling of exactly 1+z then the result
> exactly matches a black body. Intermediate amounts of motion would
> give an intermediate intensity factor.

Total non sequitur. That's the BB model again.

> In other words, the amount by which the observed intensity deviates
> from that of a black body is an indirect measure of the value of mu,
> and if there is no difference then mu=0, and that means light doesn't
> tire.

Mu is based on starlight ... not the CMBR.

>> I don't currently know of any such theories.
>
> Which is what I said a long time ago, I'm prepared to consider tired
> light theories but I don't know of any that can explain the dipole
> and the spectrum of the CMBR.

TIRED LIGHT THEORIES HAVE NO NEED TO EXPLAIN BIG BANG ASSUMPTIONS!
Cosmic origin of the MBR is a BB assumption. It does not exist in any
tired light theory.

> BB does explain them and tired light could, in theory, also occur in
> a BB universe,

That is a combination that we have not specifically discussed. But
what you are attempting to do is arbitrarily force TL into a BB
cosmos. If you have one, you don't *need* the other.

> but the expansion scales as (1+z)^3, which means that mu has the
> empirical value of 0 to the limit of the resolution of our
> measurements.

Total non sequitur. You have been claiming that you approach this as a
"test" that can only be applied to a specific theory at a time. Now
you slip back into several universal claims that "tired light" is
disproved. Even when you aren't using any specific tired light theory,
and won't identify your assumptions (intensity of source and source of
temperature distribution) on the strawman theory you claim you are
using. Which happens to disagree with all tired light theories that I
know.

I see no need to go 'round the barn anymore on this one. I believe
we've hit everything at least once.

--
greywolf42
ubi dubium ibi libertas
{remove planet for return e-mail}
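To make the shell argument concrete, here is a minimal numerical
sketch of the strawman George describes, under its stated assumptions:
uniform source density, a single source temperature T = 2.73 K in
every shell, an energy loss of a factor (1+z) per photon, no expansion
and no extinction. All parameter values and variable names are
illustrative, not part of either poster's model. Per unit observed
frequency, each shell's contribution works out to a Planck curve with
its peak moved to lower frequency but its peak height unchanged, as in
Ned's graph, and the sum develops the low-frequency excess George
predicts:

```python
# Sum of tired-light-shifted blackbody shells vs. a single blackbody,
# under the strawman's assumptions (illustrative values throughout).
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
T = 2.73                                 # source temperature in every shell (K)

def planck(nu):
    """Blackbody spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

nu = np.logspace(9.5, 12, 300)           # observed frequencies, ~3 GHz..1 THz
zs = np.arange(0.0, 5.0, 0.01)           # shells out to z = 5

# Each shell delivers the same photon rate (inverse square vs. shell area
# cancel); tired light maps emitted frequency nu*(1+z) to observed nu, and
# per unit observed frequency the contribution reduces to B(nu*(1+z), T).
summed = sum(planck(nu * (1 + z)) for z in zs) * 0.01

single = planck(nu)
i = np.argmin(abs(nu - 1e10))            # compare the shapes at 10 GHz
print("peak-normalized intensity at 10 GHz:")
print("  single blackbody :", (single / single.max())[i])
print("  summed shells    :", (summed / summed.max())[i])
# The summed spectrum is far brighter below the peak than any blackbody of
# any temperature: that mismatch is the point of the test.
```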
#159
"g" == greywolf42 writes:
g Joseph Lazio wrote in message g ... g You did miss the fundamental point that the MBR in my favorite g theory comes from the antennae of our measuring devices. This was a favorite statement of Grote Reber, the first radio astronomer. I've never quite known what to make of it. g Thanks, I hadn't heard of Reber. Did he do any work with Bell g Labs? IIRC, no. Reber essentially invented the field of radio astronomy, so I'm inclined to take seriously any of his suggestions. On the other hands, he was wrong at times. [...] Moreover, he never suggested a physical mechanism by which the MBR would be produced, and he knew as much about radio antennas as anybody. g One doesn't have to have a theory for the mechanism, in order to g experimentally identify the difference between an internal signal g and an external signal. I just read the Penzias & Wilson (1965) paper and an associated Penzias & Wilson (1965) paper. From that, my understanding is that they did distinguish between an internal signal and an external signal. Specifically, they were able to show that, whatever the signal is, it must be entering through the antenna. It is not generated within the electronics at the backend of the antenna. I also don't understand how this would explain observations of the temperature of the CMBR in other galaxies g Since we aren't in other galaxies, there are no such observations. g Claims otherwise are based on circular logic. I'm disinclined to believe proofs by assertion. In lieu of some concrete statements based on papers cited to you, I stand by my objection. nor how it would explain the SZ effect. (...) g Quite simply, the claimed observation "SZ effect" is an artifact of g circular theories and dedicated theorists. g As noted in recent posts, my understanding of the S-Z effect is g that the inspiration behind the S-Z effect is fine (...). The g problem arises in execution. Where excessive zeal and sloppy g terminology leads one to hunt for miniscule reductions in intensity g of specific MBR wavelengths. Literally dozens of experiments were g done that "should have been" sufficient precision -- but all they g found was noise. A few more recent experiments have "removed g systematic errors" by computer processing. And claim resolutions g below the physical resolutions of the apparatus. You haven't demonstrated to me either that you understand the S-Z effect nor that you understand signal processing. Therefore, I stand by objections. -- Lt. Lazio, HTML police | e-mail: No means no, stop rape. | http://patriot.net/%7Ejlazio/ sci.astro FAQ at http://sciastro.astronomy.net/sci.astro.html |
#160
"g" == greywolf42 writes:
g Joseph Lazio wrote in message g ... A basic aspect of signal processing is dealing with and extracting signals from data streams for which the signal-to-noise ratio is less than unity. g That's if you know that you have a signal. Because you sent one. I suspect that there are lots of people (not astronomers) who spend time processing data streams to see if signals are present. Indeed, I suspect that the entire field of signal processing would be a lot more boring if one could only process data streams for which one knows a signal to be present. g Your suspicions are both unfounded and irrelevant. I leave it to the reader to determine whether s/he can imagine anybody who might be interested in processing data streams to see if signals are present. [...] Indeed, a simple example is estimating the mean and uncertainty in the mean from a set of data. g Yes. And a sample of random noise *will* give you a mean and an g uncertainty in the mean. It doesn't mean that you have a real g signal. You don't specify the kind of random noise to which you're referring, but, yes, random noise can have a mean. Perhaps the most basic is the normal distribution, which is specified completely by its mean and variance. So? Without a better specified problem, your objection is somewhat meaningless. g But I specified the problem. The signals claimed are below the g physical resolution of the detector. Greg H. has disputed that point, at least as it refers to COBE. [...] Conversely, if I estimate the mean and its uncertainty, find that it is not consistent with zero, and conclude that there is a signal, what have I done wrong? g That would depend upon how you "found" that the mean and g uncertainty were not "consistent with zero." If -- as in the case g under discussion -- you were claiming a result below the resolution g of the detector -- then you would be wrong. Yes, when you start trying to nitpick my statements like this, it leads me to believe that you don't understand what I'm describing. This is Data Analysis 101. Let your detector be anything you want it to be. Let it measure temperature on the sky, volts out of a voltmeter, whatever. If you take a long data stream from it, you can easily measure well below the "resolution" of the detector. More generally, of course, knowing the uncertainties in the individual data is one of the great challenges in experimental and observational sciences. g That phrase indicates that someone is challenged in this arena, all g right. First, I think you mean "individual datum". Data is g plural. Yes, datum is made plural in a manner like that of a second declension Latin noun, IIRC. However, in English, terms should agree in number. As I wrote "uncertainties," the modifying prepositional phrase must agree in number. Is the grammar lesson over yet? g And an individual datum has no statistical uncertainty. Groups g of measurements of a single parameter (i.e. data) have a resulting g statistical uncertainty. Yes, but why are you restricting just to "statistical uncertainty"? I made the more general statement that knowing (estimating would have probably been better) the uncertainties is challenging. There certainly can be a statistical uncertainty associated with measurements, but there can be other kinds, too. (Oh, yeah, I was also taught not to begin a sentence with a conjunction.) There are ways to estimate the uncertainties (for instance, calculating the standard deviation of the data), g That is the *only* way to get statistical uncertainty. 
What if there's some systematic effect? What if the process is not well described by a gaussian random noise process, in which case the common method of calculating the standard deviation doesn't produce a meaningful measurement of the variance of the underlying distribution? -- Lt. Lazio, HTML police | e-mail: No means no, stop rape. | http://patriot.net/%7Ejlazio/ sci.astro FAQ at http://sciastro.astronomy.net/sci.astro.html |
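A minimal sketch of the point at issue here, with made-up numbers: a
detector whose output is quantized in 4-unit steps, but whose
per-sample noise exceeds the step, still yields a mean far more
precise than one step after a long data stream. This is ordinary
dithering plus averaging, an illustration only, not a model of any
particular instrument:

```python
# Synthetic demo: recovering a 1.3-unit signal from a detector whose
# output is quantized in 4-unit steps, via a long noisy data stream.
import numpy as np

rng = np.random.default_rng(0)
signal, step, noise = 1.3, 4.0, 10.0     # all values illustrative

N = 1_000_000
raw = signal + rng.normal(0.0, noise, N)
reported = step * np.round(raw / step)   # detector reports multiples of 4

mean = reported.mean()
err = reported.std(ddof=1) / np.sqrt(N)
print(f"{mean:.3f} +/- {err:.3f}")       # ~1.300 +/- 0.010
# The noise dithers the quantization, so averaging resolves the signal
# far below the 4-unit "resolution" of a single reading.
```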