Ballistic Theory, Progress report...Suitable for 5yo Kids



 
 
  #51  
July 15th 05, 08:50 PM
George Dishman


"bz" wrote in message
98.139...
"George Dishman" wrote in
:

....
I agree.

....
likewise.

....
"The large bandwidth of femtosecond pulses
causes experimental difficulties."


I am not surprised. Rapidly keying a radio transmitter also creates
difficulties. Part of the problem is that a high Q circuit element tends
to 'ring'.


See below.

Chopping a pure sinewave creates sidebands
hence increases the bandwidth.


Quite true.... especially if the chopping isn't done at the time of zero
crossing.


Chopping is a severe form of amplitude modulation
so you can think of the process as if you were
multiplying the sine wave with a digital waveform.
That is the same as heterodyning the Fourier
Transform of the chopping waveform with the sine
wave which acts as a carrier.
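George's multiplication picture is easy to put numbers on. A minimal
sketch in Python (my own illustrative frequencies, assuming NumPy is
available): chop a 1 kHz sine with a 50 Hz square gate and the spectrum
shows the carrier plus sidebands offset by the odd harmonics of the gate.

import numpy as np

fs = 100_000                                    # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)          # 1 kHz 'carrier'
gate = (np.sin(2 * np.pi * 50 * t) > 0) * 1.0   # 50 Hz square chopper

spectrum = np.abs(np.fft.rfft(carrier * gate))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Strong components appear at 1000 Hz and at 1000 +/- 50, +/- 150, ... Hz:
# the carrier heterodyned with the odd harmonics of the square wave.
print(freqs[spectrum > 0.1 * spectrum.max()])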

The antenna would also need to be low Q and non reactive so that current
and voltage would be in phase.


The chopping creates sidebands. The Q of the
circuit then acts as a filter which reduces
the sidebands. You might think you could get
arbitrarily narrow bandwidth and short bursts
but the ringing of the circuit will extend the
burst and the higher the Q, the longer it rings.
In fact you can imagine that the early part of
a burst starts the circuit ringing and the
interference between that and the latter part
of the pulse is what cancels it out and creates
the filter action.
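A rough sketch of that ring-down in numbers (f0 and the Q values are
made up): the amplitude envelope of a resonator decays as
exp(-pi*f0*t/Q), so a burst gets stretched by roughly Q/pi carrier
cycles, and the higher the Q, the longer it rings.

import math

f0 = 1.0e6                          # resonant frequency, Hz
for Q in (10, 100, 1000):
    tau = Q / (math.pi * f0)        # 1/e time of the amplitude envelope
    print(f"Q={Q:5d}: rings for ~{tau * 1e6:7.2f} us"
          f" (~{tau * f0:5.0f} carrier cycles)")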

Think of a
Fourier analysis of the chopping waveform.
Now I would think a single photon cannot have
a bandwidth


I agree.

but if you take a single photon
from a stream with a wide bandwidth, then
that would translate into uncertainty about
the energy of the particular photon.


right.

On the other hand, if you have a narrow bandwidth beam of photons and you
'chop' it, into small slices, mechanically, I am NOT sure that we would
generate sidebands, like 'normal' modulation would. [how does one photon
know that those ahead of it or behind it have been absorbed?]


You were talking of pulse lengths of a couple
of cycles. Compared to laser coherence lengths
of metres, we are talking of letting through
a tiny sample of one photon :-)

OK, since they are particles, the way I expect
that to work is that you get a fractional
probability that the photon makes it through
the shutter.

If we chopped it fine enough, we should have a single photon, of known
energy/wavelength/frequency. We would almost certainly NOT know its exact
position, however. I think time would be the expression of uncertainty.


The relevant factors are dE*dt or dp*dx of course.

If a shutter is used and is open for a very short
time then you know t and x very accurately so dE
and dp become poorly defined. Of course both depend
on the frequency of the photon so I expect a side
effect of the shutter operation would be to scatter
the photons that get through in some way that adds
a random factor to the energy/momentum and hence
broadens the linewidth. However, I haven't used
lasers in thirty years and never worked with very
short pulses so I'm guessing. Perhaps the paper will
clue me in a bit when I get a chance to read it.
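To put rough numbers on dE*dt (my figures, not from the thread): take a
633 nm photon and shutter times from nanoseconds down to femtoseconds,
and compare the implied energy spread with the photon energy.

import math

h = 6.626e-34                       # Planck's constant, J*s
hbar = h / (2 * math.pi)
c = 3.0e8                           # m/s

photon_energy = h * c / 633e-9      # red HeNe-like light, ~3.1e-19 J

for dt in (1e-9, 1e-12, 1e-15):     # ns, ps, fs shutter times
    dE = hbar / (2 * dt)            # minimum energy spread
    print(f"dt = {dt:.0e} s -> dE/E ~ {dE / photon_energy:.1e}")

# The femtosecond case gives dE/E of roughly 0.2, i.e. the 'large
# bandwidth of femtosecond pulses' quoted earlier in the thread.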

George


  #52  
July 15th 05, 10:37 PM
bz

"George Dishman" wrote in news:db93ro$ogf$1
@news.freedom2surf.net:

On the other hand, if you have a narrow bandwidth beam of photons and you
'chop' it, into small slices, mechanically, I am NOT sure that we would
generate sidebands, like 'normal' modulation would. [how does one photon
know that those ahead of it or behind it have been absorbed?]


You were talking of pulse lengths of a couple
of cycles.


Of LESS than a couple of cycles.

Compared to laser coherence lengths
of metres, we are talking of letting through
a tiny sample of one photon :-)


I see no reason for a photon to be longer than one cycle.

Coherence length isn't the length of the photons, it tells us how big a chunk
of light is 'phase, frequency, polarization, coherent'.

It is in some sense 'the length of the cavity' or at least it is clearly
related to the length of the laser cavity. The longer the cavity, the longer
the coherence length.

My impression is that it usually represents the stimulated emission from a
single pass through the laser cavity.

OK, since they are particles, the way I expect
that to work is that you get a fractional
probability that the photon makes it through
the shutter.


Why do you think that a single photon must be longer than one cycle?

If we chopped it fine enough, we should have a single photon, of known
energy/wavelength/frequency. We would almost certainly NOT know its
exact position, however. I think time would be the expression of
uncertainty.


The relevant factors are dE*dt or dp*dx of course.

If a shutter is used and is open for a very short
time then you know t and x very accurately


If I don't know, within a small fraction of a cycle, when the photon is, then
I don't know 't' very accurately.

so dE
and dp become poorly defined. Of course both depend
on the frequency of the photon so I expect a side
effect of the shutter operation would be to scatter
the photons that get through in some way that adds
a random factor to the energy/momentum and hence
broadens the linewidth. However, I haven't used
lasers in thirty years and never worked with very
short pulses so I'm guessing. Perhaps the paper will
clue me in a bit when I get a chance to read it.


In the work we did with the optogalvanic effect induced by dye laser pulses
in plasma, we were not working with single photons, nor with extremely short
pulses. That was in the early 90s. I also worked with YAG and CO2 lasers in
the early 70s, using them to cut aluminum oxide and to adjust resistors to
value. One CO2 laser was 50 W, CW, the other was 500 W, CW. The YAGs were
much lower average power and pulsed.

There is a paper
http://jchemed.chem.wisc.edu/JCEWWW/...Pub.html#ref16
that I disagree with. They appear to believe that photons consist of
wavetrains that are millions of cycles long.

I see no reason for Radio Frequency Photons to be any different from light
photons as to the number of cycles per photon.

If that is true, *and* IF they were right THEN there would be no way for me
to key a 1.8 MHz transmitter at 30 wpm [where keying rate is about 12 dots
per second]. I know for a fact that transmitters operating at much lower
frequencies (in the long wave marine band between 200 and 500 kHz) have been
operated with keying speed much higher than 30 wpm.

Since transmitters operating at much lower frequencies are regularly keyed at
much higher switching rates, their claims of millions of cycles per photon
[if RF and Light photons are similar] are clearly false.
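The arithmetic is easy to check, assuming standard Morse timing (a dot
lasts 1.2/wpm seconds, so 30 wpm gives 40 ms dots):

carrier_hz = 1.8e6               # 160 m band transmitter
dot_seconds = 1.2 / 30           # standard dot length at 30 wpm
print(carrier_hz * dot_seconds)  # 72000.0 cycles per dot

Tens of thousands of cycles per dot, in line with bz's ~74,000 figure
and far short of 'millions'.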

--
bz

please pardon my infinite ignorance, the set-of-things-I-do-not-know is an
infinite set.

remove ch100-5 to avoid spam trap
  #53  
July 16th 05, 12:17 PM
George Dishman


"bz" wrote in message
98.139...
"George Dishman" wrote in news:db93ro$ogf$1
@news.freedom2surf.net:

On the other hand, if you have a narrow bandwidth beam of photons and
you
'chop' it, into small slices, mechanically, I am NOT sure that we would
generate sidebands, like 'normal' modulation would. [how does one photon
know that those ahead of it or behind it have been absorbed?]


You were talking of pulse lengths of a couple
of cycles.


Of LESS than a couple of cycles.

Compared to laser coherence lengths
of metres, we are talking of letting through
a tiny sample of one photon :-)


I see no reason for a photon to be longer than one cycle.


That was my original point. Interference effects
still exist when the path length is many wavelengths
and they affect the probability of a single photon
arriving at a location.

Coherence length isn't the length of the photons, it tells us how big a
chunk
of light is 'phase, frequency, polarization, coherent'.


Yes, but it also affects single photons.

It is in some sense 'the length of the cavity' or at least it is clearly
related to the length of the laser cavity. The longer the cavity, the
longer
the coherence length.

My impression is that it usually represents the stimulated emission from a
single pass through the laser cavity.


I don't think lasers with 50m coherence are
necessarily 25m long.

OK, since they are particles, the way I expect
that to work is that you get a fractional
probability that the photon makes it through
the shutter.


Why do you think that a single photon must be longer than one cycle?


I think it is a 'point' particle (which might
be a string at the Planck length).

However I also know that a single photon in
the double slit experiment has a negligible
probability of hitting a point where the
path difference is 10.5 wavelengths if the
coherence length is 1000 wavelengths.

If we chopped it fine enough, we should have a single photon, of known
energy/wavelength/frequency. We would almost certainly NOT know its
exact position, however. I think time would be the expression of
uncertainty.


The relevant factors are dE*dt or dp*dx of course.

If a shutter is used and is open for a very short
time then you know t and x very accurately


If I don't know, within a small fraction of a cycle, when the photon is,
then
I don't know 't' very accurately.


"very accurately" is not well defined ;-) The
smaller dt or dx, the larger dE or dp, but
Planck's Constant is very small.

so dE
and dp become poorly defined. Of course both depend
on the frequency of the photon so I expect a side
effect of the shutter operation would be to scatter
the photons that get through in some way that adds
a random factor to the energy/momentum and hence
broadens the linewidth. However, I haven't used
lasers in thirty years and never worked with very
short pulses so I'm guessing. Perhaps the paper will
clue me in a bit when I get a chance to read it.


In the work we did with the optogalvanic effect induced by dye laser
pulses
in plasma, we were not working with single photons, nor with extremely
short
pulses. That was in the early 90s. I also worked with YAG and CO2 lasers
in
the early 70s, using them to cut aluminum oxide and to adjust resistors to
value.


Neat, I used matched pairs of laser trimmed devices
in an instrumentation amp design many years ago.

One CO2 laser was 50 W, CW, the other was 500 W, CW. The YAGs were
much lower average power and pulsed.

There is a paper
http://jchemed.chem.wisc.edu/JCEWWW/...Pub.html#ref16
that I disagree with. They appear to believe that photons consist of
wavetrains that are millions of cycles long.


Fascinating. It's something I'll have to study
a bit though. Thanks again!

I see no reason for Radio Frequency Photons to be any different from light
photons as to the number of cycles per photon.

If that is true, *and* IF they were right THEN there would be no way for
me
to key a 1.8 MHz transmitter at 30 wpm [where keying rate is about 12 dots
per second].


Pardon? Data at 12 dots per second is only 24Hz so
could be transmitted on a 30Hz carrier never mind
anything in the MHz. Have you lost a factor of 10^6?

I know for a fact that transmitters operating at much lower
frequencies (in the long wave marine band between 200 and 500 kHz) have
been
operated with keying speed much higher than 30 wpm.


http://www.fas.org/man/dod-101/navy/...cmp/part07.htm

"The ELF frequencies used, in the 40–80 Hz range, were
selected for their long range signal propagation (i.e.,
global) and ability to penetrate seawater to depths
several hundred feet below the surface."

It is keyed fairly slowly, I believe, but the
VLF systems are keyed at 50 bps and in theory
so could the ELF be, although the BER would be
dreadful (25 Hz modulation on a 40 Hz carrier
giving a band from 15 Hz to 65 Hz).

Since transmitters operating at much lower frequencies are regularly keyed
at
much higher switching rates, their claims of millions of cycles per photon
[if RF and Light photons are similar] are clearly false.


Shannon's Theorem requires a certain bandwidth
to convey the data. Bandwidth translates to
uncertainty of the energy of any particular
photon so knowing 't' to the accuracy of a bit
duration (the photon is transmitted when the
key is on) limits knowledge of the energy but
roughly to the same as the bandwidth.

Basically I am saying Shannon's Theorem in
the classical view is related to Heisenberg's
Uncertainty in the quantum view, though that
sounds rather grandiose.
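A minimal numeric sketch of that correspondence (illustrative values
only): keying at B bits per second needs a bandwidth of order B Hz, a
frequency spread df means an energy spread dE = h*df, and knowing the
arrival time to a bit period gives dt ~ 1/B, so the product lands at
about h, the right order for the uncertainty relation.

h = 6.626e-34           # Planck's constant, J*s

bit_rate = 50.0         # the VLF keying rate mentioned above, bps
df = bit_rate           # order-of-magnitude bandwidth, Hz
dt = 1.0 / bit_rate     # timing knowledge, s

print(f"dE*dt = {h * df * dt:.3e} J*s (compare h = {h:.3e} J*s)")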

George


  #54  
July 16th 05, 03:05 PM
bz

"George Dishman" wrote in
:


"bz" wrote in message
98.139...
"George Dishman" wrote in news:db93ro$ogf$1
@news.freedom2surf.net:

On the other hand, if you have a narrow bandwidth beam of photons and
you
'chop' it, into small slices, mechanically, I am NOT sure that we
would generate sidebands, like 'normal' modulation would. [how does
one photon know that those ahead of it or behind it have been
absorbed?]

You were talking of pulse lengths of a couple
of cycles.


Of LESS than a couple of cycles.

Compared to laser coherence lengths
of metres, we are talking of letting through
a tiny sample of one photon :-)


I see no reason for a photon to be longer than one cycle.


That was my original point. Interference effects
still exist when the path length is many wavelengths
and they affect the probability of a single photon
arriving at a location.

Coherence length isn't the length of the photons, it tells us how big a
chunk
of light is 'phase, frequency, polarization, coherent'.


Yes, but it also affects single photons.


How?


It is in some sense 'the length of the cavity' or at least it is
clearly related to the length of the laser cavity. The longer the
cavity, the longer
the coherence length.

My impression is that it usually represents the stimulated emission
from a single pass through the laser cavity.


I don't think lasers with 50m coherence are
necessarily 25m long.


Of course not. The cavity with mirrors happens to be [is carefully adjusted
to be] the right length so that photons can make several trips. Thermal
instability, vibrations, and probably many other effects reduce the
coherence length from infinity. I would imagine that whenever a
spontaneous decay takes place, throwing in a photon that is traveling in
the right direction but out of coherence with the current crowd of photons,
the odd photon starts picking up 'buddies'.

The probability of this happening will determine the average coherence
length.
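The standard link between the two quantities is L_c = c/df: the more
often spontaneous emission scrambles the phase, the wider the effective
linewidth df and the shorter the coherence length. A sketch with
hypothetical linewidths:

c = 2.998e8                  # m/s
for df in (1e6, 1e8, 1e10):  # hypothetical laser linewidths, Hz
    print(f"df = {df:.0e} Hz -> L_c ~ {c / df:.3f} m")

# 1 MHz linewidth -> ~300 m; 10 GHz -> ~3 cm.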


OK, since they are particles, the way I expect
that to work is that you get a fractional
probability that the photon makes it through
the shutter.


Why do you think that a single photon must be longer than one cycle?


I think it is a 'point' particle (which might
be a string at the Planck length).

However I also know that a single photon in
the double slit experiment has a negligible
probability of hitting a point where the
path difference is 10.5 wavelengths if the
coherence length is 1000 wavelengths.


How do you define coherence length for a photon?
Is your statement from experimental data? I would like to read about the
experiment.

I think that there may be some effect due to the thermal phonons of the
slits interacting with the electron clouds at the edge of the slit and
deflecting photons passing close to the edge.

If we chopped it fine enough, we should have a single photon, of
known energy/wavelength/frequency. We would almost certainly NOT know
its exact position, however. I think time would be the expression of
uncertainty.

The relevant factors are dE*dt or dp*dx of course.

If a shutter is used and is open for a very short
time then you know t and x very accurately


If I don't know, within a small fraction of a cycle, when the photon
is, then
I don't know 't' very accurately.


"very accurately" is not well defined ;-) The
smaller dt or dx, the larger dE or dp, but
Planck's Constant is very small.


Right.

so dE
and dp become poorly defined. Of course both depend
on the frequency of the photon so I expect a side
effect of the shutter operation would be to scatter
the photons that get through in some way that adds
a random factor to the energy/momentum and hence
broadens the linewidth. However, I haven't used
lasers in thirty years and never worked with very
short pulses so I'm guessing. Perhaps the paper will
clue me in a bit when I get a chance to read it.


In the work we did with the optogalvanic effect induced by dye laser
pulses in plasma, we were not working with single photons, nor with
extremely short pulses. That was in the early 90s. I also worked with
YAG and CO2 lasers in the early 70s, using them to cut aluminum oxide
and to adjust resistors to value.


Neat, I used matched pairs of laser trimmed devices
in an instrumentation amp design many years ago.


We did active trimming of some of the resistors we made at Sprague,
trimming until the pulse width or gain or whatever was correct.

One CO2 laser was 50 W, CW, the other was 500 W, CW. The YAGs were
much lower average power and pulsed.

There is a paper
http://jchemed.chem.wisc.edu/JCEWWW/...Pub.html#ref16
that I disagree with. They appear to believe that photons consist of
wavetrains that are millions of cycles long.


Fascinating. It's something I'll have to study
a bit though. Thanks again!

I see no reason for Radio Frequency Photons to be any different from
light photons as to the number of cycles per photon.

If that is true, *and* IF they were right THEN there would be no way
for me
to key a 1.8 MHz transmitter at 30 wpm [where keying rate is about 12
dots per second].


Pardon? Data at 12 dots per second is only 24Hz so
could be transmitted on a 30Hz carrier


Right. But look at the size of a 30 Hz photon!

The idea was to falsify their thesis that photons were 'millions of cycles
long'. At 1.8 MHz each dot is 74000 cycles long. Much less than 'millions'.

never mind anything in the MHz. Have you lost a factor of 10^6?


No, just falsifying their thesis. Putting an upper bound on photon size,
direct from my own 160 meter transmitter.

I know for a fact that transmitters operating at much lower
frequencies (in the long wave marine band between 200 and 500 kHz) have
been
operated with keying speed much higher than 30 wpm.


http://www.fas.org/man/dod-101/navy/...cmp/part07.htm

"The ELF frequencies used, in the 40–80 Hz range, were
selected for their long range signal propagation (i.e.,
global) and ability to penetrate seawater to depths
several hundred feet below the surface."

It is keyed fairly slowly, I believe, but the
VLF systems are keyed at 50 bps and in theory
so could the ELF be, although the BER would be
dreadful (25 Hz modulation on a 40 Hz carrier
giving a band from 15 Hz to 65 Hz).


I have heard stories of what can happen when they try to key the ELF
transmitter at a high keying rate. The antenna SWR goes up rapidly as you
get away from its design frequency. With millions of watts of power, the
keying waveform has to be carefully shaped, or all hell breaks loose.

Since transmitters operating at much lower frequencies are regularly
keyed at
much higher switching rates, their claims of millions of cycles per
photon [if RF and Light photons are similar] are clearly false.


Shannon's Theorem requires a certain bandwidth
to convey the data. Bandwidth translates to
uncertainty of the energy of any particular
photon so knowing 't' to the accuracy of a bit
duration (the photon is transmitted when the
key is on) limits knowledge of the energy but
roughly to the same as the bandwidth.

Basically I am saying Shannon's Theorem in
the classical view is related to Heisenberg's
Uncertainty in the quantum view, though that
sounds rather grandiose.


That sounds rather reasonable to me. I bet they have been compared before.

We should be able to place a rather specific upper bound on photon length
from ELF keying rate information. Of course, ELF communications might be
considered as 'nearfield' and thus the creation of actual 3747 km long
photons might not be very efficient.
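For what it's worth, 3747 km is just the wavelength at the 80 Hz top of
the ELF band quoted earlier:

c = 299_792_458.0       # m/s
print(c / 80.0 / 1e3)   # ~3747.4 km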


--
bz

please pardon my infinite ignorance, the set-of-things-I-do-not-know is an
infinite set.

remove ch100-5 to avoid spam trap
  #55  
July 17th 05, 11:36 AM
George Dishman


"bz" wrote in message
98.139...
"George Dishman" wrote in
:


"bz" wrote in message
98.139...
"George Dishman" wrote in news:db93ro$ogf$1
@news.freedom2surf.net:

On the other hand, if you have a narrow bandwidth beam of photons and
you
'chop' it, into small slices, mechanically, I am NOT sure that we
would generate sidebands, like 'normal' modulation would. [how does
one photon know that those ahead of it or behind it have been
absorbed?]

You were talking of pulse lengths of a couple
of cycles.

Of LESS than a couple of cycles.

Compared to laser coherence lengths
of metres, we are talking of letting through
a tiny sample of one photon :-)

I see no reason for a photon to be longer than one cycle.


That was my original point. Interference effects
still exist when the path length is many wavelengths
and they affect the probability of a single photon
arriving at a location.

Coherence length isn't the length of the photons, it tells us how big a
chunk
of light is 'phase, frequency, polarization, coherent'.


Yes, but it also affects single photons.


How?


I don't know how; I only know that it does, for the
reason above: interference effects affect the
probability of photon distribution even with
multi-wavelength path differences.

It is in some sense 'the length of the cavity' or at least it is
clearly related to the length of the laser cavity. The longer the
cavity, the longer
the coherence length.

My impression is that it usually represents the stimulated emission
from a single pass through the laser cavity.


I don't think lasers with 50m coherence are
necessarily 25m long.


Of course not.


Then I misunderstood what you meant when you
said coherence length "usually represents the
stimulated emission from a single pass through
the laser cavity." Even a two-way pass would
need a cavity half the coherence length.

The cavity with mirrors happens to be [is carefully adjusted
to be] the right length so that photons can make several trips. Thermal
instability, vibrations, and probably many other effects reduce the
coherence length from infinity. I would imagine that whenever a
spontaneous decay takes place, throwing in a photon that is traveling in
the right direction but out of coherence with the current crowd of
photons,
the odd photon starts picking up 'buddies'.

The probability of this happening will determine the average coherence
length.


I don't see a problem with that though my
knowledge is limited.

OK, since they are particles, the way I expect
that to work is that you get a fractional
probability that the photon makes it through
the shutter.

Why do you think that a single photon must be longer than one cycle?


I think it is a 'point' particle (which might
be a string at the Planck length).

However I also know that a single photon in
the double slit experiment has a negligible
probability of hitting a point where the
path difference is 10.5 wavelengths if the
coherence length is 1000 wavelengths.


How do you define coherence length for a photon?


I'm not sure you can, but you can define the
average coherence length for a stream. It would,
I suggest, be something like the path length
difference at which the probability function
was reduced by a certain amount. Think of the
point in an interference pattern where the
contrast ratio between light and dark fringes
is half the central value. It takes more than
one photon to produce a distribution of course.
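That definition is easy to sketch numerically. Assuming, for simplicity,
a Lorentzian line, for which fringe visibility falls off as
exp(-dL/L_c) with path difference dL:

import math

L_c = 1000.0                        # coherence length in wavelengths
for dL in (10.5, 100.0, 1000.0, 3000.0):
    print(f"dL = {dL:6.1f} wavelengths ->"
          f" visibility ~ {math.exp(-dL / L_c):.3f}")

# At 10.5 wavelengths the contrast is still ~99%, so a dark-fringe
# point there really does stay at near-zero probability; only out
# toward 1000+ wavelengths do the fringes wash out.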

Is your statement from experimental data? I would like to read about the
experiment.


Nothing special, this is the first hit I got
on Google:

http://tinyurl.com/cknu7

I think that there may be some effect due to the thermal phonons of the
slits interacting with the electron clouds at the edge of the slit and
deflecting photons passing close to the edge.


Could be, I'm not saying I know the mechanism
but experiments like that above tell us
something about single photons that seems to
contradict the idea that it can be a single
cycle or something close to that.

I see no reason for Radio Frequency Photons to be any different from
light photons as to the number of cycles per photon.

If that is true, *and* IF they were right THEN there would be no way
for me
to key a 1.8 MHz transmitter at 30 wpm [where keying rate is about 12
dots per second].


Pardon? Data at 12 dots per second is only 24Hz so
could be transmitted on a 30Hz carrier


Right. But look at the size of a 30 Hz photon!

The idea was to falsify their thesis that photons were 'millions of cycles
long'. At 1.8 MHz each dot is 74000 cycles long. Much less than
'millions'.

never mind anything in the MHz. Have you lost a factor of 10^6?


No, just falsifying their thesis. Putting an upper bound on photon size,
direct from my own 160 meter transmitter.


Ah, I get it. For my argument you would need
very large slits! However I agree with what
you say above, radio photons have to behave
the same as light so I think a pure CW signal
would give many fringes in the interference
pattern but keying it with a PRBS would
eliminate the fringes when the path length
difference to a receiving antenna via the
slits is comparable to the wavelength of
the keying rate.

Shannon's Theorem requires a certain bandwidth
to convey the data. Bandwidth translates to
uncertainty of the energy of any particular
photon so knowing 't' to the accuracy of a bit
duration (the photon is transmitted when the
key is on) limits knowledge of the energy but
roughly to the same as the bandwidth.

Basically I am saying Shannon's Theorem in
the classical view is related to Heisenberg's
Uncertainty in the quantum view, though that
sounds rather grandiose.


That sounds rather reasonable to me. I bet they have been compared before.

We should be able to place a rather specific upper bound on photon length
from ELF keying rate information. Of course, ELF communications might be
considered as 'nearfield' and thus the creation of actual 3747 km long
photons might not be very efficient.


I still think it's easier with a laser; doing
Young's Slits at ELF presents some interesting
engineering challenges if we are to observe the
contrast ratio from the centre to a million
fringes either side :-o

George


  #56  
July 17th 05, 03:58 PM
bz

"George Dishman" wrote in
:


"bz" wrote in message
98.139...
"George Dishman" wrote in
:


"bz" wrote in message
98.139...
"George Dishman" wrote in
news:db93ro$ogf$1 @news.freedom2surf.net:

On the other hand, if you have a narrow bandwidth beam of photons
and you
'chop' it, into small slices, mechanically, I am NOT sure that we
would generate sidebands, like 'normal' modulation would. [how does
one photon know that those ahead of it or behind it have been
absorbed?]

You were talking of pulse lengths of a couple
of cycles.

Of LESS than a couple of cycles.

Compared to laser coherence lengths
of metres, we are talking of letting through
a tiny sample of one photon :-)

I see no reason for a photon to be longer than one cycle.

That was my original point. Interference effects
still exist when the path length is many wavelengths
and they affect the probability of a single photon
arriving at a location.

Coherence length isn't the length of the photons, it tells us how big
a chunk
of light is 'phase, frequency, polarization, coherent'.

Yes, but it also affects single photons.


How?


I don't know how; I only know that it does, for the
reason above: interference effects affect the
probability of photon distribution even with
multi-wavelength path differences.


I thought that the distribution of the double slit pattern depended on the
wavelength of the photon, not the coherence length of the laser. One can even
get a double slit pattern from an incoherent source, such as a lightbulb with
a band pass filter.

I know of one place that coherence length is important: laser holography. I
wanted to build a color holographic camera, using 3 laser diodes. During the
research I did on that project I found that it would be useless for taking
holographic pictures of anything at a distance greater than the coherence
length of the diodes. This ruled out a portable color camera.

I still am not sure that the coherence length affects interference patterns
for single photon experiments.

It is in some sense 'the length of the cavity' or at least it is
clearly related to the length of the laser cavity. The longer the
cavity, the longer
the coherence length.

My impression is that it usually represents the stimulated emission
from a single pass through the laser cavity.

I don't think lasers with 50m coherence are
necessarily 25m long.


Of course not.


Then I misunderstood what you meant when you
said coherence length "usually represents the
stimulated emission from a single pass through
the laser cavity." Even a two-way pass would
need a cavity half the coherence length.


http://www.holo.com/holo/book/book6&7.html

The cavity with mirrors happens to be [is carefully adjusted
to be] the right length so that photons can make several trips. Thermal
instability, vibrations, and probably many other effects reduce the
coherence length from infinity. I would imagine that whenever a
spontaneous decay takes place, throwing in a photon that is traveling
in the right direction but out of coherence with the current crowd of
photons,
the odd photon starts picking up 'buddies'.

The probability of this happening will determine the average coherence
length.


I don't see a problem with that though my
knowledge is limited.


By the way, did you know that you [anyone] can build a lensless laser that
uses the nitrogen in the air at atmospheric pressure?
http://repairfaq.ece.drexel.edu/sam/lasercn2.htm

OK, since they are particles, the way I expect
that to work is that you get a fractional
probability that the photon makes it through
the shutter.

Why do you think that a single photon must be longer than one cycle?

I think it is a 'point' particle (which might
be a string at the Planck length).

However I also know that a single photon in
the double slit experiment has a negligible
probability of hitting a point where the
path difference is 10.5 wavelengths if the
coherence length is 1000 wavelengths.


How do you define coherence length for a photon?


I'm not sure you can, but you can define the
average coherence length for a stream.


right.

It would,
I suggest, be something like the path length
difference at which the probability function
was reduced by a certain amount. Think of the
point in an interference pattern where the
contrast ratio between light and dark fringes
is half the central value. It takes more than
one photon to produce a distribution of course.


At least two? [quite a few more]

Is your statement from experimental data? I would like to read about
the experiment.


Nothing special, this is the first hit I got
on Google:

http://tinyurl.com/cknu7


A good dual-slit, single-photon experiment, but I see nothing about coherence
length having an effect on the pattern.


I think that there may be some effect due to the thermal phonons of the
slits interacting with the electron clouds at the edge of the slit and
deflecting photons passing close to the edge.


Could be, I'm not saying I know the mechanism
but experiments like that above tell us
something about single photons that seems to
contradict the idea that it can be a single
cycle or something close to that.


Unless, in passing through the slit, it influences the various vibrations in
the structures of the slits, kind of like seismic waves, passing through the
earth, cause measurable effects at a distance.

I see no reason for Radio Frequency Photons to be any different from
light photons as to the number of cycles per photon.

If that is true, *and* IF they were right THEN there would be no way
for me
to key a 1.8 MHz transmitter at 30 wpm [where keying rate is about 12
dots per second].

Pardon? Data at 12 dots per second is only 24Hz so
could be transmitted on a 30Hz carrier


Right. But look at the size of a 30 Hz photon!

The idea was to falsify their thesis that photons were 'millions of
cycles long'. At 1.8 MHz each dot is 74000 cycles long. Much less than
'millions'.

never mind anything in the MHz. Have you lost a factor of 10^6?


No, just falsifying their thesis. Putting an upper bound on photon
size, direct from my own 160 meter transmitter.


Ah, I get it. For my argument you would need
very large slits! However I agree with what
you say above, radio photons have to behave
the same as light so I think a pure CW signal
would give many fringes in the interference
pattern but keying it with a PRBS would
eliminate the fringes when the path length
difference to a receiving antenna via the
slits is comparable to the wavelength of
the keying rate.


Single photon RF experiments should produce similar results to single photon
light experiments. A single photon, at say 100 GHz, is 0.3 cm in wavelength.
It has 6.6e-23 joules of energy (making it hard to detect one, but perhaps in
a cryogenic chamber, it could be done).

A 5 mW transmitter puts out 7.5e19 photons per second. In a single cycle,
7.54e8 photons are emitted at that power level.

Assuming we could switch the transmitter on and off (or switch antenna and
dummy load) at zero crossing, fast enough to pass only 1 cycle to the
antenna, we would need to switch at a 10 ps interval.

That should be possible. At 5 mW, it should give us 7.5e8 photons that are
frequency and phase coherent, and, I predict, no keying transients.

In any case, we should be able to determine the maximum length of a photon.
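The figures above check out; a short sketch re-deriving them (constants
rounded):

h = 6.626e-34                   # Planck's constant, J*s
c = 3.0e8                       # m/s
f = 100e9                       # carrier frequency, Hz
P = 5e-3                        # transmitter power, W

E_photon = h * f                                     # ~6.6e-23 J
print(f"wavelength      : {c / f * 100:.2f} cm")     # 0.30 cm
print(f"photons per sec : {P / E_photon:.2e}")       # ~7.5e19
print(f"photons / cycle : {P / E_photon / f:.2e}")   # ~7.5e8 in 10 ps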

Shannon's Theorem requires a certain bandwidth
to convey the data. Bandwidth translates to
uncertainty of the energy of any particular
photon so knowing 't' to the accuracy of a bit
duration (the photon is transmitted when the
key is on) limits knowledge of the energy but
roughly to the same as the bandwidth.

Basically I am saying Shannon's Theorem in
the classical view is related to Heisenberg's
Uncertainty in the quantum view, though that
sounds rather grandiose.


That sounds rather reasonable to me. I bet they have been compared
before.

We should be able to place a rather specific upper bound on photon
length from ELF keying rate information. Of course, ELF communications
might be considered as 'nearfield' and thus the creation of actual 3747
km long photons might not be very efficient.


I still think it's easier with a laser; doing
Young's Slits at ELF presents some interesting
engineering challenges if we are to observe the
contrast ratio from the centre to a million
fringes either side :-o


Even if we lived on Jupiter, it would be difficult to do.

We need to do that one in interstellar space.

Even at a slightly more manageable size, like 3 millimeter microwaves, it
would present some problems. But it would be interesting to try.




--
bz

please pardon my infinite ignorance, the set-of-things-I-do-not-know is an
infinite set.

remove ch100-5 to avoid spam trap
  #57  
July 17th 05, 08:44 PM
Paul B. Andersen

Henri Wilson wrote:
Definition of the BaT: "Light initially moves at c wrt its source".

If a remote light source emits a pulse of light towards a target observer
moving relatively at v1, then, from the point of view of a third observer O3,
the 'closing speed' of that pulse towards the first observer is c+v1.

For another target observer moving at v2, the closing speed is seen as c+v2.
Here is the experimental setup:

S_._._._._._._.p_._._._._._._.v1T1_._._
v2T2



O3

O3 sets up a line of equally separated clocks which measure the speed of a
light pulse emitted by S towards T1 and T2. O3 also measures the speed of T1
and T2 towards S. The readings enable him to calculate the different 'closing
speeds' between the pulse and T1 and the pulse and T2.

I understand that SRians agree on this.

The principle of relativity says it matters not whether the source or target is
considered as moving. Therefore, the above considerations hold just as well for
differently moving sources.

Thus, for a particular target, the 'closing speed' of light from relatively
moving sources is c+v3, c+v4, etc., as seen by O3.

Consider a star of constant brightness moving in some kind of orbit.
From O3's POV, light emitted at different times of (its) year will have
different 'closing speeds' towards any particular target (unless the orbit
plane is normal).
For illustration purposes, let the star emit equally spaced and identical
pulses of light as it orbits. Thus, from O3's POV, some pulses will tend to
catch up with others. Some will tend to move further away. The O3 will detect
bunching and separation at certain points along the light path. Fast pulses
will eventually overtake slow ones if no target intervenes.

Armed with this knowledge, O3 will reason that any target observer will receive
pulses from the star at different rates. This can only mean that OT will, in
reality, perceive the observed brightness of any (intrinsically stable) star in
orbit to be varying cyclically over the star's year, by an amount that will
depend on the distance to the star.
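The mechanism described here is straightforward to simulate. A minimal
sketch under the stated ballistic assumption (all orbital numbers are
hypothetical): each pulse travels at c plus the line-of-sight velocity
at emission, and over a long enough path the arrival gaps bunch, spread,
and can even invert.

import math

c = 3.0e8                   # m/s
v = 3.0e4                   # orbital speed, m/s
P = 3.15e7                  # orbital period, s (~1 year)
D = 9.5e17                  # distance to observer, m (~100 ly)

emit_times = [n * (P / 1000) for n in range(2000)]  # steady pulse train
arrivals = sorted(t + D / (c + v * math.sin(2 * math.pi * t / P))
                  for t in emit_times)

gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
# Unequal gaps = bunching: the received rate (hence apparent brightness)
# varies even though the source emits perfectly steadily.
print(f"min gap {min(gaps):.3e} s, max gap {max(gaps):.3e} s")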

There are thousands of known stars that exhibit this type of very regular
brightness variation. Most of their brightness curves can be matched by my
variable star simulation program:
www.users.bigpond.com/hewn/variablestars.exe


We both know that you have tested your program only once,
namely on HD80715.
What was the result, Henri?

Everybody, notice his answer. :-)

Note: Einstein's unproven claim that the target observer will always MEASURE
the speed of the incoming pulses as being c is completely irrelevant to this
argument.

The BaT acknowledges the existence of extinction and that 'local aether frames'
may exist in the vicinity of matter. These may determine local light speeds.





HW.
www.users.bigpond.com/hewn/index.htm

Sometimes I feel like a complete failure.
The most useful thing I have ever done is prove Einstein wrong.


No progress, then.

Paul
  #58  
July 18th 05, 01:33 AM
Henri Wilson

On Sun, 17 Jul 2005 21:44:08 +0200, "Paul B. Andersen"
wrote:

Henri Wilson wrote:
Definition of the BaT: "Light initially moves at c wrt its source".

If a remote light source emits a pulse of light towards a target observer
moving relatively at v1, then, from the point of view of a third observer O3,
the 'closing speed' of that pulse towards the first observer is c+v1.

For another target observer moving at v2, the closing speed is seen as c+v2.
Here is the experimental setup:

S_._._._._._._.p_._._._._._._.v1T1_._._
v2T2



O3

O3 sets up a line of equally separated clocks which measure the speed of a
light pulse emitted by S towards T1 and T2. O3 also measures the speed of T1
and T2 towards S. The readings enable him to calculate the different 'closing
speeds' between the pulse and T1 and the pulse and T2.

I understand that SRians agree on this.

The principle of relativity says it matters not whether the source or target is
considered as moving. Therefore, the above considerations hold just as well for
differently moving sources.

Thus, for a particular target, the 'closing speed' of light from relatively
moving sources is c+v3, c+v4, etc., as seen by O3.

Consider a star of constant brightness moving in some kind of orbit.
From O3's POV, light emitted at different times of (its) year will have
different 'closing speeds' towards any particular target (unless the orbit
plane is normal).
For illustration purposes, let the star emit equally spaced and identical
pulses of light as it orbits. Thus, from O3's POV, some pulses will tend to
catch up with others. Some will tend to move further away. O3 will detect
bunching and separation at certain points along the light path. Fast pulses
will eventually overtake slow ones if no target intervenes.

Armed with this knowledge, O3 will reason that any target observer will receive
pulses from the star at different rates. This can only mean that OT will, in
reality, perceive the observed brightness of any (intrinsically stable) star in
orbit to be varying cyclically over the star's year, by an amount that will
depend on the distance to the star.

There are thousands of known stars that exhibit this type of very regular
brightness variation. Most of their brightness curves can be matched by my
variable star simulation program:
www.users.bigpond.com/hewn/variablestars.exe


We both know that you have tested your program only once,
namely on HD80715.
What was the result, Henri?

Everybody, notice his answer. :-)


The program relies on the concept of 'closing speed of light', as defined by
SR.
How COULD it be wrong?


Note: Einstein's unproven claim that the target observer will always MEASURE
the speed of the incoming pulses as being c is completely irrelevant to this
argument.

The BaT acknowledges the existence of extinction and that 'local aether frames'
may exist in the vicinity of matter. These may determine local light speeds.





HW.
www.users.bigpond.com/hewn/index.htm

Sometimes I feel like a complete failure.
The most useful thing I have ever done is prove Einstein wrong.


No progress, then.

Paul



HW.
www.users.bigpond.com/hewn/index.htm

Sometimes I feel like a complete failure.
The most useful thing I have ever done is prove Einstein wrong.
  #59  
July 18th 05, 06:35 PM
Paul B. Andersen

Henri Wilson wrote:
On Sun, 17 Jul 2005 21:44:08 +0200, "Paul B. Andersen"
wrote:


Henri Wilson wrote:

There are thousands of known stars that exhibit this type of very regular
brightness variation. Most of their brightness curves can be matched by my
variable star simulation program:
www.users.bigpond.com/hewn/variablestars.exe


We both know that you have tested your program only once,
namely on HD80715.
What was the result, Henri?

Everybody, notice his answer. :-)



The program relies on the concept of 'closing speed of light', as defined by
SR.
How COULD it be wrong?


See? :-)

Henri Wilson won't tell us what the result was
the one time he tested his program with measured data
of a known binary.

Paul
  #60  
July 18th 05, 11:11 PM
Henri Wilson

On Mon, 18 Jul 2005 19:35:14 +0200, "Paul B. Andersen"
wrote:

Henri Wilson wrote:
On Sun, 17 Jul 2005 21:44:08 +0200, "Paul B. Andersen"
wrote:


Henri Wilson wrote:

There are thousands of known stars that exhibit this type of very regular
brightness variation. Most of their brightness curves can be matched by my
variable star simulation program:
www.users.bigpond.com/hewn/variablestars.exe

We both know that you have tested your program only once,
namely on HD80715.
What was the result, Henri?

Everybody, notice his answer. :-)



The program relies on the concept of 'closing speed of light', as defined by
SR.
How COULD it be wrong?


See? :-)

Henri Wilson won't tell us what the result was
the one time he tested his program with measured data
of a known binary.


All that beer hasn't cured your tendency to rave.


Paul



HW.
www.users.bigpond.com/hewn/index.htm

Sometimes I feel like a complete failure.
The most useful thing I have ever done is prove Einstein wrong.
 



