A Space & astronomy forum. SpaceBanter.com


The Gravitational Instability Cosmological Theory



 
 
  #1  
Old August 31st 04, 02:35 AM
Br Dan Izzo

The Gravitational Instability Cosmological Theory

Saturday, August 21, 2004
The Gravitational Instability Cosmological Theory on the Formation of
the Universe
Sent to: Science Magazine, Aug 2, 2004


When the Universe started to fall

(1) The expansion of the universe is a result of the "heat"
contained therein;

(2) The source of the "heat" is the cosmic microwave background
radiation at 3 kelvin, wherein;

(3) The microwave electromagnetic-nuclear energy was formed as a
result of the interaction of two different static gravitational
vacuum fields, causing gravitational instability and motion, void of
matter, at this time,

wherein; static gravitational field (1) began to go into "motion".

Therefore, only (2) static gravitational vacuum fields alone, being
void of E=MC^2, could create E=MC^2 and the matter of the Universe.

When did motion first start?

Science knows the formation of matter in our universe was caused by
the forces of the universe.

These forces are:

(1) The Force of Gravity

(2) The Force of Electro Magnetism

(3) The Strong Nuclear Force

(4) The Weak Nuclear Force

At some point in time, motion within the universe had to begin.
The paradox would be: what force could cause motion to begin without
moving in its present space-time?

The Gravitational Instability Cosmological Theory was developed from
and is rooted in the Einstein Steady State Theory and the
Bondi-Gold-Hoyle Steady State Theory, wherein the Steady State Theory
holds that the universe contains more protons than electrons, which
create dust particles; that galaxies formed in their current
locations; and that cosmic matter is recycled therein at the
center-of-the-galaxy furnace.
------------
Q: When did this motion start?
A: If a neutral particle is able to resist the universal motion, in
theory that particle would go back in time. Going back in time, the
neutral particle would then enter into (1) of the (2) motionless,
static gravity vacuum fields void of motion and cause an imbalance
and gravitational instability, and this interaction would create
motion and energy particles.
Q: What causes a gravitational static vacuum field in the first
place?
A: Pressure force is used to create a vacuum on Earth; perhaps an
exotic something 100,000 times weaker than the force of gravity
decays, causing a static, motionless gravity vacuum field.

Theory by

Rev Daniel Izzo July 2002
512 Onondaga Ave
Syracuse, NY 13207


posted by Rev Dan Izzo @ 1:52 AM

3 Comments:
At 2:16 AM, Rev Dan Izzo said...
Subj: The Steady State Theory versus The Big Bang Theory /
Cosmological alternatives

Alternative Theory (2)


The Steady State Galaxy Theory

by R.Rufus Young

Last revised Dec 30,1996

An Alternative To

The Big Bang Theory

INDEX


Introduction
Basic Operation of Galaxies
Mass and Energy
Shape of Galaxies
Red Shift
Microwave Background Radiation
Entropy
Hydrogen-Helium Ratio
Quasars
Summary

Introduction

The purpose of this Web Page is to show that the Steady State Galaxy
Theory can provide an alternative to the Big Bang Theory in explaining
the universe around us. It covers the operation of Galaxies and shows
that they recycle both Matter and Energy and are able to carry on
indefinitely. It also explains the Shape of Galaxies, Red Shift,
Microwave Background Radiation, Entropy and the Hydrogen-Helium Ratio.

If the reader takes an open-minded approach and looks at all aspects
of the material presented here before reaching any conclusions, it
will, at least, provide them with some food for thought.

Basic Operation of Galaxies

At the center of each galaxy is a neutroid which acts to constantly
recycle all the matter and energy in the galaxy. This neutroid is
similar to a neutron star but is very much larger and has reached a
size where the pressure and temperature at its surface are great
enough to generate a nuclear fusion process. In the areas of the
neutroid's magnetic poles, the products of fusion are trapped by the
magnetic field and are pushed out along the magnetic field by the
pressure of the nuclear fusion process going on below. This results in
a column of material composed of hydrogen, helium and other light
elements being ejected at each of the neutroid's two magnetic poles.
This material moves out from the neutroid at essentially constant
velocity until it reaches a point where the magnetic field is no
longer strong enough to control it. Once free of the magnetic field
the material then continues under its own momentum to travel to the
outer edge of the galaxy before starting to fall back toward the
neutroid.

This process enables the neutroid to eject matter from itself and
results in jets of hydrogen and helium ions being produced at each of
the neutroid's two magnetic poles. The larger the neutroid becomes,
the greater the size and velocity of its jets. This becomes a stable
and self-limiting process where the amount of material attracted to
the neutroid will be equal to the amount of material expelled at its
magnetic poles. Eventually if too much material is added to the
system, the velocity of the material being ejected from the magnetic
poles will be sufficient for it to escape from the system altogether,
thus limiting the total mass the system can accumulate. This process
forms the basis of operation of all galaxies. The size and shape of
galaxies are determined by the size of the neutroid at their center
and its rate and plane of rotation. In the case of our own galaxy (The
Milky Way) these jets have sufficient momentum to carry the material
out to 100,000 light years distance from the center.

As the jets of gas stream out from the Neutroid, large clouds of it
condense and form the stars which are predominately located in the
spiral arms of the Galaxies. These stars eventually burn up their
Hydrogen fuel and in the process create the other heavier elements we
find in the universe, all the while continuing to travel to the outer
edge of the galaxy. It has probably been at least 10 Billion years
since the material of which our solar system is composed was initially
ejected from the neutroid. It is now located about 2/3rds the distance
to the edge of the galaxy, but since it is constantly decelerating it
will take it another 20 billion years to reach its maximum distance
from the neutroid. The total transit time from when material is
ejected from the neutroid at the center of the Milky Way to when it
returns to the neutroid will be about 60 Billion years.
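The timeline stated above is at least internally consistent, which a quick sketch can confirm (the figures are the essay's own claims, not measured values):

```python
# Sanity check of the transit timeline claimed above (the essay's
# figures): material ejected ~10 billion years ago, another 20 billion
# years to maximum distance, and a symmetric fall back to the neutroid.
elapsed_gyr = 10      # time since ejection (claimed)
remaining_gyr = 20    # time left to reach maximum distance (claimed)

outbound_gyr = elapsed_gyr + remaining_gyr   # one-way trip to the apex
round_trip_gyr = 2 * outbound_gyr            # fall back mirrors the climb

print(outbound_gyr, round_trip_gyr)  # 30 60
```

The 60-billion-year round trip quoted in the text is just twice the 30-billion-year outbound leg.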

Although the material ejected by the neutroid appears to travel in a
spiral arc, in actual fact it is travelling in a straight radial line
out from the neutroid and will eventually travel back along the same
radial path to the neutroid. To help visualize this process, imagine
setting up two super cannons, each on opposite sides of the earth at
the equator and each pointing straight up and each capable of firing a
projectile with sufficient velocity that it will take 12 hours to
reach the top of its trajectory. Now, fire a projectile from each
cannon every hour for 12 hours and plot the position of each
projectile at the end of the 12 hours. The result, as shown in figure
1, will be two spiral arms much like the Galactic arms are shaped.
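The cannon thought experiment can be sketched numerically. A minimal simulation (normalized units with g = 1, launch sites advancing 15 degrees per hour with the earth's rotation) computes the positions of one cannon's projectiles at hour 12; altitudes fall off with firing order while launch angles advance, tracing the spiral-arm locus described above:

```python
import math

# Normalized ballistic model of the two-cannon thought experiment:
# apex time T = 12 h, so launch speed v0 = g*T, and altitude after
# flight time t is h(t) = v0*t - 0.5*g*t**2.  The equatorial launch
# site rotates with the earth, 15 degrees per hour.
g, T = 1.0, 12.0
v0 = g * T

def altitude(t):
    """Altitude of a projectile t hours after launch (0 <= t <= 2T)."""
    return v0 * t - 0.5 * g * t * t

# Positions at hour 12 of the projectiles fired at hours 0..11 from
# cannon A (cannon B is identical, offset 180 degrees: the second arm).
arm_a = []
for k in range(12):
    angle = math.radians(15 * k)   # launch direction at firing time
    r = altitude(12 - k)           # projectile k has flown 12-k hours
    arm_a.append((angle, r))

# The earliest-fired projectile is highest (at its apex); later ones
# sit lower at progressively advanced angles -- a spiral arm.
radii = [r for _, r in arm_a]
assert radii == sorted(radii, reverse=True)
print(round(radii[0], 1), round(radii[-1], 1))  # 72.0 11.5
```

Plotting `arm_a` in polar coordinates (and its 180-degree mirror for the second cannon) reproduces the two-armed figure the text describes.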

If we continue the experiment for another 3 hours and draw a new plot,
figure 2, we find that the first projectiles that were fired have now
passed the peak of their altitude and have started to fall back to
earth, and the whole spiral pattern appears to have rotated
counterclockwise 45 degrees. However, the only change in the positions
of the No. 1 projectiles has been to move slightly closer to the earth
along a radial line; they will continue falling back to earth along
the same radial path and will impact the earth 24 hours after
being fired. They do not themselves travel in a spiral path around the
earth although the loci of their instantaneous positions forms a
spiral which appears to be rotating.

Figure 3 represents a typical small galaxy which is composed of 3
parts, (a) a Central Core (Area 1), (b) 2 Jets of material being
ejected from the core (Areas 1 to 2), and (c) Spiral Arms (Areas 2 to
3). The Central Core consists of a neutroid at the center and an
obscuring mass of material trapped in the Neutroid's magnetic field.
The areas from 1 to 2 are gigantic jets of gas which are being ejected
by the Neutroid and are contained within its magnetic field. Star
formation occurs in these areas. At point 2 the magnetic field of the
Neutroid weakens to the extent that it no longer constrains the
material within it and as the material continues to move outward it
will now trace a spiral arc as per the previous illustrations in Figs.
1 & 2. At point 3 the hydrogen fuel has been consumed and although the
remains of the burned out stars are still there they become invisible
dark matter as they continue to travel to the top of their trajectory
and then fall back to the Neutroid.

Thus, the galaxies form huge recycling systems which will carry on
indefinitely.

Hydrogen, helium and other light elements are ejected from the
Neutroid.
Clouds of this material condense to form stars which emit energy and
in the process form heavier elements.
These stars eventually exhaust their fuel and die. In the process many
of these stars will explode as supernovas. The heavier elements which
we find in our solar system are the remnants from these dead stars.
All this material will travel to the outer edge of the galaxy and will
then start falling back in toward the neutroid.
Upon hitting the neutroid, the force of the impact will be great
enough that the atoms of heavier elements will be split apart and the
temperature and pressure will be great enough that this incoming
matter will be converted to neutrons.
In the areas of the neutroid's magnetic poles, a nuclear fusion
reaction will take place that forces streams of material to be
expelled, thus completing the cycle.

(return to index)

Mass and Energy

Einstein showed that mass and energy are related by the formula
E=MC^2. What this famous formula says is that what we call the mass of
a particle is really nothing more than a measure of the sum total of
all forms of energy associated with that particle. The various forms
of energy include potential energy, kinetic energy, chemical energy,
nuclear binding energy, etc. Of these various forms of energy,
potential energy is the most important and accounts for the largest
part of the mass of particles which constitute our immediate
environment.

When a particle is in a deep gravitational well, such as in the case
of particles that make up the neutroid at the center of galaxies, they
have very little potential energy, and hence, very little rest mass. As
they are pushed out from the neutroid their potential energy and hence
their rest mass is increased dramatically. When these particles
eventually fall back into the neutroid, this potential energy is
converted to kinetic energy and results in the particles making up the
neutroid having very little rest mass but a tremendous amount of
kinetic energy.

This combination of low rest mass and high kinetic energy prevents the
neutroid from collapsing into a black hole, as has been speculated by
many scientists. This combination also makes it relatively easy for a
nuclear fusion process to push material out from the neutroid in the
area of the neutroid's magnetic poles.

(return to index)

Shape of Galaxies

The Concept of the Steady State Galaxy as put forth above can account
for the shape of all galaxies we see in the universe. As explained
above, the spiral is the basic shape of galaxies. The exact shape will
be determined by the size of the neutroid, the tilt of its magnetic
axis with respect to its axis of rotation, and its rate of rotation.

Our Milky-Way is typical of large mature galaxies in which it takes
many billions of years for the magnetic poles to make one revolution.
As well, the hydrogen ejected at the magnetic poles has sufficient
velocity to reach a distance of 100,000 light-years from the Neutroid
and it takes it tens of billions of years to reach that distance. If
the rate of rotation of the magnetic poles of the Neutroid were much
greater in relation to the velocity of the hydrogen jets, the spiral
arms would overlap and become indistinct, thus forming an ELLIPTICAL
Galaxy. If the magnetic axis were slightly less than 90 degrees with
respect to the axis of rotation, a thicker galaxy would result.

BAR Galaxies are small galaxies in which the hydrogen fueling the
Stars is all consumed before the Stars can escape the magnetic field
of the Neutroid's magnetic poles.

Many galaxies such as M104(NGC4594) exhibit a very prominent dust lane
about their edge. This is a feature that is difficult to explain using
presently accepted theories but is to be expected in some types of
galaxies under the steady state galaxy theory.

(return to index)

Red Shift

The Big Bang Theory was originally proposed in order to explain the
'RED Shift' of light received by us from distant galaxies. Light
received from distant stars can be broken down and analyzed as to its
spectral content. It has been found that stars of a similar size and
age produce identical spectral patterns which are related to their
atomic composition. However, it was also found that the wavelength of
the light from distant galaxies was increased in proportion to their
distance from us. Scientists have interpreted the cause of this effect
as a doppler shift, meaning that it is caused by the distant galaxies
moving away from us, i.e. the expanding universe. This doppler shift
is the same as one hears standing near a railway track when a train
passes blowing its whistle: as the train passes by, the sound of its
whistle appears to drop in frequency.
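The doppler interpretation described above follows the standard textbook relation: for a source receding at speed v much less than c, the observed wavelength is stretched by a factor (1 + v/c), giving a redshift z = v/c. A minimal sketch:

```python
# Classical doppler redshift for a receding source:
# lambda_obs = lambda_emit * (1 + v/c), so z = v/c for v << c.
C = 299_792.458  # speed of light, km/s

def doppler_z(v_kms):
    """Redshift of a source receding at v_kms km/s (non-relativistic)."""
    return v_kms / C

# A galaxy receding at ~3000 km/s shows z ~ 0.01: every spectral line
# arrives stretched by about one percent.
print(round(doppler_z(2997.92458), 3))  # 0.01
```

This is the effect the Big Bang interpretation attributes to expansion; the essay goes on to argue for a different cause.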

In reality the universe we live in is not expanding and is in a steady
state where its matter and energy are being constantly recycled. The
so-called Red Shift is caused by other factors. We know from a branch
of Physics known as Quantum Mechanics that the energy of a photon of
light is defined by the equation E = hν, where E is the energy of the
photon, h is Planck's constant and ν is its frequency. If for any
reason energy is lost from a photon, its frequency will decrease in
accordance with this equation.
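The relation invoked here is easy to make concrete. Under the essay's tired-light assumption, a photon that loses a fraction f of its energy has its frequency lowered and its wavelength stretched by E = hν, so the apparent redshift would be z = f/(1 - f). This sketch only illustrates the stated relation, not its physical correctness:

```python
# E = h*nu: a photon's frequency is proportional to its energy, and
# its wavelength inversely so.  Under the essay's assumption that a
# photon loses fraction f of its energy in transit:
#   lambda_obs = lambda_emit / (1 - f)
#   z = lambda_obs / lambda_emit - 1 = f / (1 - f)
def tired_light_z(f):
    """Apparent redshift if a photon has lost fraction f of its energy."""
    return f / (1.0 - f)

print(tired_light_z(0.5))   # 1.0  (half the energy lost -> z = 1)
print(tired_light_z(0.75))  # 3.0  (the "3/4 lost" case quoted later)
```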

Scientists do not as yet have a good understanding of the nature of a
photon as to whether it is a particle or a wave, or some combination
of both. Although experiments done by Michelson and Morley and others
have been interpreted to rule out the existence of a universal aether,
this is by no means certain. Scientists can't measure what happens to
a photon over a period of a minute, let alone what happens to it over
a period of a billion years. Based on current knowledge, there is no
way scientists can state with absolute certainty that photons do not
lose energy over time.

The mechanism for the loss of energy by photons over time is still
unclear. It could be by interaction of the photon with the stray atoms
of hydrogen which are dispersed throughout intergalactic space. It is
well known that photons do exert 'radiation pressure' on particles
they encounter and if pressure is exerted, then energy must be
transferred. Another possibility is that there is indeed an aether
which absorbs some energy over time and reradiates it as a black body
radiator having a temperature of 2.8 degrees K. One thing that is
clear is that the radiation density of the starlight photons which
leave our own galaxy is equal to the radiation density of the Microwave
Background radiation which is received by our galaxy. This fact is
probably more than a coincidence and is an indication that the
starlight radiation is being converted by some unknown process to the
Microwave Background radiation. It is every bit as reasonable to
assume that the Red Shift is caused by loss of energy of the photon
over time as it is to assume that it is caused by a doppler effect.

Because of the downshifting in the frequency of light for whatever
reason, there is a limit to how far it is possible to image distant
galaxies. The actual universe will be far larger than we can imagine
or detect and will probably be infinite in size.

(return to index)

Microwave Background Radiation

A second argument which has been made to support the Big Bang Theory
is the microwave background radiation. COBE has shown that the
spectrum of the Microwave Background Radiation (MBR) is that of an
ideal Black Body Radiator having a temperature of about 2.8 degrees K.
It has also shown that this radiation has a Redshift/Blueshift to it,
indicating that the earth is moving about 300 km/s relative to the
shell of matter that emitted the radiation. Since this speed is too
great for the earth's movement within the Milky Way galaxy, it
indicates that the source is outside our galaxy and that our galaxy is
moving in relation to that source.

As indicated in the previous section dealing with redshift, the
starlight photons radiated by galaxies gradually lose energy through
some unknown process which then reradiates this energy as the
Microwave Background Radiation. The wavelength of the photons of the
MBR, at the peak of the spectrum radiation curve, will be about 1mm.
Since the rate of loss of energy by photons will be inversely
proportional to the wavelength of those photons, and since the MBR
photons have a wavelength of more than a thousand times that of
visible light, the percentage loss of energy by the MBR photons will
be at a rate of over one thousand times less than that of a visible
photon. (If it takes a visible photon 15 billion years to lose 3/4 of
its energy, then it would take an MBR photon 15,000 billion years to
lose 3/4 of its energy.) It follows that since MBR photons have a
range of travel of more than one thousand times that of visible light
photons, they are also a thousand times more likely to encounter a
galaxy and be absorbed by the matter of that galaxy than a visible
light photon would be.

Thus, energy is radiated by galaxies in the form of starlight photons.
Energy from these photons is gradually converted to MBR photons. These
MBR photons are eventually absorbed by some other galaxy.

Since the intensity of the microwave background radiation will be
relatively constant throughout the universe (assuming an infinite
steady state universe), the amount of energy a galaxy will absorb from
it will be proportional to the size of that galaxy. The amount of
energy a galaxy radiates is also proportional to its size, thus an
equilibrium will be reached where a galaxy will receive as much energy
in the form of MBR photons as it itself radiates in the form of
starlight photons.

(return to index)

Entropy

A third argument that has been put forward in support of the Big Bang
Theory is entropy: it is argued that the universe must eventually run
down into a state of thermal equilibrium. Energy exists
in various forms such as atomic binding energies, thermal energy,
potential and kinetic energy, etc., all of which are associated with
matter, or it exists in photons which have been radiated by matter and
will eventually be reabsorbed by matter. Under the Steady State Galaxy
Theory as put forth above, since all matter in a Galaxy is recycled
through the Neutroid on a regular basis, all energy contained by that
matter is also recycled at the same time and, thus, the universe does
not run down into a state of thermal equilibrium.

There is a perception that energy only flows from hot bodies to cooler
ones. This is not true for radiant energy. The MBR photons which
exhibit the characteristics of a 2.8 degree black body radiator do get
absorbed by the much hotter material which makes up the galaxies. The
critical factor which determines the direction of net flow of radiant
energy is not the relative temperatures of the bodies but the energy
densities they produce. In the case of our universe, the MBR radiation
has an energy density equal to the starlight radiation energy density
emitted by the galaxies. Thus, there is an equilibrium condition where
galaxies receive as much energy in the form of MBR Radiation as they
radiate in the form of Starlight Radiation and there will be no net
flow of energy from the galaxies to the material in intergalactic
space.
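The energy density of a 2.8 K black-body field, which this argument turns on, can be computed from the standard radiation-constant formula u = aT^4. This is a textbook calculation; the claimed equality with the galaxies' starlight energy density is the essay's assertion, not something the calculation establishes:

```python
# Energy density of black-body radiation: u = a * T^4, where
# a = 4*sigma/c is the radiation constant.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 2.99792458e8         # speed of light, m/s
A_RAD = 4 * SIGMA / C    # radiation constant, J m^-3 K^-4

def blackbody_energy_density(temp_k):
    """Energy density (J/m^3) of black-body radiation at temp_k kelvin."""
    return A_RAD * temp_k ** 4

u = blackbody_energy_density(2.8)
print(f"{u:.2e} J/m^3")  # ~4.6e-14 J/m^3
```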

(return to index)

Hydrogen-Helium Ratio

A fourth argument which has been used to support the Big Bang theory
is that it would account for the abundance of helium we find in the
universe. The amount of helium present (24%) cannot be accounted for
by star production and according to Gamow it was generated by the Big
Bang.

Under the Steady State Galaxy theory, the nuclear fusion process which
is expelling the material from the neutroid would generate large
amounts of helium as well as other light elements and is the source of
the excess helium found in the universe.

(return to index)

Quasars

The latest Hubble pictures of quasars show that they are associated
with galaxies and in most cases there is evidence that these galaxies
have recently collided with other galaxies.

In normal galaxies, the neutroid at their center is obscured by a halo
of material trapped in the neutroid's magnetic field. In the case of
quasars, this halo of material has been temporarily destroyed by the
collision with another galaxy and we are seeing the bare neutroid
which is, as expected, extremely energetic.

(return to index)

Summary

The Steady State Galaxy Theory as put forth above can provide the
basis for the operation of the Universe as it is seen to exist. It can
not only account for the shape of all galaxies we see in the universe,
which is something no other theory proposed so far can accomplish, but
it can also explain the existence of quasars.

As more data is gathered by the Hubble Space Telescope and other
sources, it is becoming increasingly clear that the Big Bang theory
cannot account for the universe around us. I believe the Steady
State Galaxy Theory as presented here can provide the basis of an
alternative to the Big Bang Theory.

(return to index) (return to top of page)

Other Interesting Papers

For a historical perspective of the Big Bang Theory see Keith Stein's
Essay "The Big Bang Myth"

"Endless, Boundless, Stable Universe" by Grote Reber -a pioneer in the
field of Radio Astronomy.

"Dark Matter" and "Hubble's Constant in Terms of the Compton Effect"
by John Kierein



Please E-mail me your comments and suggestions.


Last revised Dec 30,1996.
Copyright R.Rufus Young 1996 all rights reserved.
-----------------------------------------------------------------------------------------------------------------------------------


At 2:32 AM, Rev Dan Izzo said...
VORTEX

Liquid - Gravity Induced Vortex

A plug is pulled under a contained volume of liquid.
The liquid above the discharge opening starts dropping down through
the opening, creating a lower-pressure column of liquid.
Pascal's rule of pressure in fluids says that the static pressure
within the whole volume, exerted sideways around this dropping column,
will attempt to fill into the dropping water column, creating an
inverted cone of flow toward the discharge.
The surface plane of the liquid develops a hollow due to the pressure
drop above the discharge.
If the liquid column is in a sufficient ratio to the opening diameter,
gravitational acceleration at the surface hollow tends to outrun the
discharge rate, and any slightest impetus causes the liquid sliding
down the slope of the depression to spin. A vortex develops more
easily in a conical vessel than in any other vessel, because it is at
the surface hollow that gravitational acceleration causes the liquid
to slide into the falling liquid column.
The static pressure orthogonal to the vortex's vertical axis acts on
the column as the total static pressure (area x force). Vortex spin
translates this pressure into accelerating "orbital" speed of the
column. (fig 1)
The circular motion component is triggered by external influences,
such as the rotation of the earth, or by general liquid flow, if any.
The circular motion within a vortex redirects the strictly centripetal
horizontal pressure component in the liquid, side-railing it off the
vertical axis of the vortex and accelerating the vortex's circular
motion. The static pressure in the liquid surrounding the vortex
becomes realized in the circular motion of the vortex. Therefore, the
horizontal vector of static pressure acts as a vortex spin
accelerator. The dynamic relations within a vortex cause a greater
transfer of gravitational energy into liquid circular motion than can
be accounted for by strictly downward static pressure calculations. If
the liquid did not spin, the horizontal component of static pressure
would act on the vortex axis symmetrically, and it would not be
realized as spin motion.


In plain terms: while the orthogonal vector of static pressure from
the surrounding volume acts on the column along the "surface" area of
the whole column, it accelerates the spin of the column with total
force equal to the static pressure drop per unit of area at the
particular depth, summed over all these units. There are many more
units (let's say mm^2) on the surface of the water column than in the
cross section of the discharge hole.
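The area comparison above is simple geometry. For an idealized cylindrical column of radius r and depth h over a hole of the same radius, the lateral surface (over which, in the essay's picture, the horizontal static pressure acts) exceeds the discharge cross-section by a factor 2h/r. The cylinder idealization is my assumption; the essay does not specify a shape:

```python
import math

# Idealized cylindrical water column above a discharge hole of the
# same radius: compare the lateral ("surface") area of the column
# with the cross-sectional area of the hole, as the text does.
def area_ratio(radius_m, depth_m):
    """Lateral surface area of the column divided by hole cross-section."""
    lateral = 2 * math.pi * radius_m * depth_m
    cross_section = math.pi * radius_m ** 2
    return lateral / cross_section   # simplifies to 2*depth/radius

# A 2 cm radius hole under 40 cm of water: the side area of the
# column is 40 times the area of the hole.
print(area_ratio(0.02, 0.40))  # 40.0
```

Note this establishes only the area ratio the paragraph describes; it says nothing about the energy claims drawn from it later.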

In the simplest terms, the area across which the static pressure acts
on a vortex, accelerating its liquid (or fluid) into spin, is
substantially greater than the cross-sectional area of the discharge
hole. The acceleration of the falling column comes from the vertical
acceleration of the column. In other words, the "horizontal"
rotational component of gravitational acceleration of the liquid
surrounding the vortex speeds up the circular component of water flow
around the vortex. The vertical gravitational acceleration (drop) of
water within the column is caused by vertical attraction on that
column only. The vertical component of static pressure within the
vortex is not lessened by all that much, as can be seen from the depth
and size of the depression of the vortex at the surface.

As the column progresses in its vertical motion down, the energy of
vortex circular speed is also progressively translated into the energy
of vertical speed of the column. If the circular component of the
vortex speed at the bottom of a discharge were used for energy
production along with the falling column kinetic energy, these two
components combined would supply greater total energy output than is
needed for lifting the same liquid volume (per unit of time) to the
original height.

In other words, we would be milking total static pressure exerted by
gravitation on liquid volume from a much greater area than discharge
hole area without having to pay the cost of returning that volume back
to the height across that same area. It seems obvious from the above
that static pressure, or tension of a field, can be translated into
circular motion of a medium.

Vortex phenomenon is the proof that field "static potential" can be
utilized for energy extraction. Vortex phenomenon also proves that the
so-called "static potential" energy of fields is caused by dynamic
energy flows. This does not violate any laws of physics, it is a law
of physics.

The discharge and the surrounding containment have to be regular in
shape; otherwise more numerous and chaotic turbulences within the
fluid accelerating through the discharge break the vortex symmetry and
hinder its progress. Regular does not mean a perfect cone, but a shape
mimicking the structure of natural turbulence. This shape is a
somewhat ropy-walled parabolic cone.

VORTICES
The circumferential speed of liquid molecules within a vortex
increases as the molecules approach the central axis of the vortex.
Therefore, it is clear that the farther a molecule is from this axis,
the lower its orbital speed.
Any object within such a vortex, including the liquid molecules, spins
counter to the spin of the vortex, as its outer orbital speed is
slower than its inner speed relative to the axis of the vortex. (A
planetary gear, or a ball in a bearing, represents such a counter
spin.)
Unless other forces are present, any small vortex within a major
vortex counter-spins.
Some claim that such a phenomenon has never been observed in nature.
Crap. It is a readily observable phenomenon on rivers. Any larger
vortex in a cove of a river bend has these counter-spinning satellite
vortices present. They are usually transient, but are readily
observable in nature.

This principle has a severe impact on the coalescing theory of
planetary systems as well as galactic systems. It is obvious that
planets should spin counter to the planetary system. Since not all of
them do, and in fact the majority do not, it is obvious that the spin
of planets and stars has its cause in the behavior of magnetic and
electric fields, rather than being a remainder of gas-cloud vortex
motion within the solar system's general vortex.

Any and every energy flow through a restriction under an orthogonal
pressure of a field tends to develop a vortex structure. It does not
matter whether that flow is liquid, gaseous, or what we consider to be
a field. The acceleration gain in a vortex is a utilizable phenomenon.
It is one of the phenomena which counteract thermal entropy in the
universe. It concentrates the rate of flow through a restriction,
utilizing the dispersed energy of a general field.

Any and every so-called massive particle contains a few geometries of
vortexes (Nucleon). The main, dense flows can be considered to be
magnetic field compounds of the dispersed electric field components.
Electric field components qualify for a sort of aether, but not the
chaotic aether of the past, and not exactly liquid-like in the sense
of water in the sea. Its liquid is comparable to water flow in rivers.

S.D.K. November 18, 2000

FIELD FORCES



We have a few principles of the induction of attractive and repulsive
force.

The primary (snake propagation) has been described in my original Tour
the Force

The secondary attractive force acts between two reciprocating counter
flows of positive and negative components of the primary as well as
the secondary gravitational field. The two flows, passing each other,
form vortexes on their common side. Their mutual propagation along the
long axis is caused by the primary principle, but the friction between
these two flows creates vortexes between them, slowing their
common-side fringe down as opposed to their far-side fringe.

If you care for a graphic description, look up some photos of
Jupiter's atmosphere. It moves in strips of counter-flowing gas
streams, and these photos will do better than anything I can draw.

It is this slowing down of the near-side fringe which attracts
parallel paths together. Once we have fringes on two or more sides of
a single thread of a path, created by the proximity of other paths, we
get a general field-strength gradient toward the mother body wherever
a mother body is present in space: a star, a planet, a wire.

Why wire? Look through a fly screen at some really colorful
background. Autumn leaves on maple trees will do rather well. You will
find out what I am talking about when you realize that the blurry
effect you can see is caused by the "curvature of space" around the
wires. That curvature is discriminate, and it may resonate, although
you would not see it. Such a resonance is behind the effect of Young's
double-slit experiment. Yes, light is a sort of wave, but the
explanation of Young's double-slit experiment is not a proof of it. It
was a partly lucky and partly unlucky mistake.

Why do the strings rubbing against each other by their fringes not
stop like any other frictional system? The energy of the path has
nowhere to dissipate, so it does not dissipate anywhere. That energy
can be and is used once a gravitational field accelerates a solid body
relative to the wavy pattern of the universe. It is used, or better
said converted, while accelerating a fluid vortex spin and free fall.
It gets transformed into higher orders of energy, be it thermal or
kinetic, of what we call the particulate, but other than that, it has
nowhere to go. Its soliton turbulences are stable in the sense that
they pass energy one to another. The universal gravitational field is
the ultimate storehouse of energy, from which all higher forms of
energy arise and to which all higher forms of energy return.

We have learned how to convert thermal energy to mechanical energy and
to electric dipole energy and back to some degree. Now we should learn
how to convert gravitational energy to thermal, mechanical or electric
dipole energy.

S.D.K. 14. April 2001

FIELDS 6
MAGNETIC LINES OF FORCE

Spacing of magnetic lines of force depends on at least two criteria.

The first one is the size of the iron-shaving particulate. The finer
the particulate, the finer the line-of-force structure.

The second one is the intensity of the magnetic field. As pointed out
in TTF, when we steadily increase the amperage in a DC conductor, the
lines of force around the conductor contract and new lines are added
from the iron dust at the margin around the conductor. On the other
hand, when we steadily decrease the amperage in a DC conductor, the
lines of force spread and the margin around the conductor collects
the iron dust.

When we are in the process of placing a steel object between two
magnets which are in attractive orientation and, let's say, 3" apart,
the original lines of force between the two magnets distort so that
they concentrate on the steel object. When we line up magnets in
attractive orientation with spaces in between, or steel objects
between two magnets in attractive orientation, the lines of force join
all the poles of the magnets or all the steel pieces between the
magnets.

When we curve a set of magnets in attractive orientation, with some
spacing in between, so that the magnets form a circle, all lines of
force join into the circle of the poles and the external lines of
force (the donut) disappear.

The above (and much more) points out that iron as well as magnets are
able to attract, concentrate and lead a magnetic field along any
steel, iron or other ferromagnetic structure or permanent-magnet
structure. The lines of force are created by iron particulate of any
size, but the size of the particulate decides how far apart the lines
can be before the space between the lines begins to fill with magnetic
field again.

When we stick two parallel rows of steel posts into the ground in even
an approximate N-S orientation, we create at least a partial gap in
the magnetic field of the earth within the aisle between the rows of
steel posts. The post spacing within a row should be less than the
distance between posts across the aisle.

The same working arrangement can be created with the help of
electromagnets, again arranged into a double row, with all their poles
oriented in one general geographic direction and complying with the
geomagnetic field's polarity orientation, because the electromagnets
will again tie the geomagnetic lines of force (actually create them)
and concentrate the geomagnetic field into lines of force.

RELEVANCY

The relevancy of this comes out when we dig into the stone-levitation
story from the Middle East, which states:

"First, a 'magic papyrus' (paper) was placed under the stone to be
moved. Then the stone was struck with a metal rod that caused the
stone to levitate and move along a path paved with stones and fenced
on either side by metal poles. The stone would travel along the path,
wrote Al-Masudi, for a distance of about 50 meters and then settle to
the ground. The process would then be repeated until the builders had
the stone where they wanted it."

The same relevancy comes up when we dig into stories about Edward
Leedskalnin and the artifacts left at his death at Coral Castle. Ed
was the only man in the West in modern times who had been able to
handle monoliths without the use of heavy machinery.

Leedskalnin had no outside source of hydropower, yet he had electrical
installations on his property. He was able to handle blocks of coral
stone up to 29 tons in weight, as evidenced by his work, for anyone to
see. He seems to have used grids of copper wires and other electrical
devices to help him with his work. He has also been reported to "sing"
to his stones while working with them. When we read through
Leedskalnin's articles on magnetism, we can get the hang of what his
idea behind his stone levitation was, especially when we learn that
his work gloves and boots had sheet-metal pieces attached to them.
Unfortunately, he left no instruction manual, and what he never
revealed has to be found out.

It appears that a steel-, magnet- or electromagnet-induced geomagnetic
field gap is not strictly necessary in order to achieve stone
levitation, because another modern-era report, from Tibet, does not
include any steel or electric apparatus; nevertheless, there are
possibly other ways in which magnetic field gaps can be created.
Sound, on the other hand, is always reported as a factor.

Note, just about all the references needed can be found on KeelyNet.
(See Links)

SDK 7 August 2001

FIELDS 7

THE PRIMARY FIELD NETWORK

The "free" space of the universe is interwoven with uncountable paths
of electric communication among the celestial bodies of the primary
field. The polarity of each path of the primary field is steady and
looped on the quark, nucleon, atom, molecule, planetary, galactic and
eventually universal scale. Each quark and antiquark along a single
path is strung on that path like a bead on a string. The permanent
induction of such a path may join countless quarks of alternating
electric polarity in countless bodies, and the path is an integral
part of those quarks. This two-way communication of flows, with the
quark knots on them, can be compared to beads strung on a double-
threaded string. I will assign red color to the positive-"charge"
quark and blue color to the negative-"charge" quark. Figure 1 shows
only one quark of a pair, for simplicity.

Fig 1



The paths' polarity directions between more than two bodies are not
unidirectional, even for a single circular path. The curvature of the
path is not caused by inertia. It is caused by the directionality of
the quark orientation at the point of exit and entry. See TTF2/FIELDS
5 for the cause of the mutual adherence of the two directions of a
single path of electric-force exchange. The whole loop can be
perceived as separate strings as well as a single string.

I have to create a term for the intersection related to the wave
function of a path. The term is "null axis point".

Fig 2



The paths of electric communication intersect in free space either
actively, at null axis points, or passively, at any other points on
the paths. Figure 3 shows an idealized planar arrangement. The
sinusoids themselves will be distorted where the paths' standing waves
compound to some degree.

Fig 3



Once we understand that the path is a flow of something, and that the
wave of this flow is static or standing, like the riverbed of the
Mississippi River, we do not yet have to take into account any
frequencies of the paths when they cross each other. The important
condition is that any orthogonal paths intersect at their null axis
points, tying orthogonal paths together into a network. Every two-path
null-point intersection generates turbulence between their four flows,
somewhat like the cloverleaf intersection on a freeway. The stability
of this turbulence is conditioned by the spatial frequency of the wave
components of the paths. If the intersecting paths have a harmonic
relationship, which fits into the curved length of the turbulence (the
cloverleaf loop), the turbulence will be stable. If the two
intersecting paths have disharmonic spatial frequencies, the
turbulence will oscillate at best, and will alternately fall apart and
re-establish at worst.

There are stable and unstable turbulences around the null-axis-point
intersections of the field network, holding the network together, some
in a transient manner and some in a stable manner. The same is valid
for the structure of the nucleon, but it is not valid for the
structure of the emitted electron's thermal phase.

When the primary gravitational field network gets disturbed at any
point, it behaves as a three-dimensional net. That does not mean that
its structure follows three axes in a Euclidean cubic arrangement. It
only means that space is filled throughout with this network. The
geometry of the network structure itself is multidirectional and
constantly shifting. The directions of the paths within the network
are just about as numerous as the paths themselves.

When we consider the field of a single charged spherical body, its
geometry seems purely radial, i.e. scalar. When we consider the
geometry of the field of two reciprocating (opposite-polarity) bodies,
it changes quite drastically. The cause of the scalar field of a
single charged body lies in the induction taking place between the
body and the air molecules, water-vapor molecules, earth molecules and
whatever other molecules, or better said their component quarks, all
around the so-called charged body. In practice, there is no scalar
field. A scalar field is a theoretical idealization of crooked natural
geometry. A perfect scalar field would require a perfect charged
sphere within another perfect sphere (of perfect material), in whose
dead center the charged sphere would be placed. The outer sphere would
have to be perfectly isolated from the rest of the universe; otherwise
it would induce its induced polarity toward the outside, becoming a
charged body to the outside, and the ideal theoretical scalar field
would become the practical crooked field. In reality, the inner
charged body actually becomes electrically neutral. (Courtesy Joe
Hiding)

Anyway, the network can obviously be shaped and disturbed and induced
and manipulated, as long as we know what we are dealing with and what
we are doing with it. The notion that light is an electromagnetic
phenomenon equivalent to radio waves and microwaves is incorrect. The
experimentation of Nikola Tesla in Colorado has clearly shown that
repeated manipulation of the geometry of the general field causes a
general wave disturbance throughout the network of the gravitational
field. This disturbance is a real longitudinal wave generated within
the gravitational network. Light, on the other hand, is a progressive
unification of electric path flows into a local magnetic flow.

The speed of light, the speed of gravitation and the speed of radio
waves are interdependent because the wavelengths are interdependent.
When you look back to the volleyball-net analogy, you can see that the
transverse wave of a single net string and the longitudinal wave of
the whole net depend on each other in some ratio, whatever that ratio
may be. It is a bit confusing to recognize what is a longitudinal wave
within the whole network and what is a transverse wave in it. A
disturbance which propagates in one direction as a longitudinal wave
causes a transverse wave in orthogonal directions, and vice versa. Our
concept of the transverse and the longitudinal is derived from our
string and spring experimenting, which limits our perception to the
behavior of the string or the spring. We tear phenomena out of their
context and study them out of their context. Then we grossly err in
applying the newly derived concepts (experimentally as well as
mentally confirmed within artificially imposed limits) to the general
behavior of the limitless universe.

S.D.K. 14. April 2001

FIELDS 5










This site is dedicated to ideas. Some are mine, some belong to
others. Any and all of the information on this site is as is. If you
disagree with anything here, be aware that I also disagree with a lot
of things.

BY S.D.K.




TOUR THE FORCE

Is a series of closely interrelated documents outlining the problems
with the currently established interpretations of the behavior of the
most fundamental physical phenomena, like heat, light, electric
current, etc. It does not argue with the established mathematical
processes (so-called mathematical theories), as most are reasonably
valid generalizations of the functions of particular natural forces.
It argues with the concepts of why things behave the way they do, and
with their causality and their geometrical as well as functional
relationships, not with how much they behave.

Tour the Force contains a somewhat outdated line of deduction about
what force phenomena really are all about and what their mutual
relations are. The particulate causality of gravitational force and
other force fields as such had to give way to the simpler concept of a
wave relationship of energy flows along waves. Yet this original Tour
the Force has its relevancy in paving the road to understanding Tour
the Force 2.

GISMOS

Contains an assortment of descriptions of, and comments on, a few
man-made contraptions which were or are claimed to work as intended,
as undependable as some may be.

TOUR THE FORCE 2

Contains updates to the original Tour the Force. This part is in
development, and I am uploading new documents as I manage to solve the
different parts of the overall puzzle and put its documents into a
reasonable form. My original Tour the Force is a prerequisite to
understanding Tour the Force 2.

EXPERIMENTS

Contains assorted bits and pieces of little-known knowledge about
anomalous experiments done by "less" learned folks. Some have my
explanations, and all of them stress the need of humanity as such to
resist the dogma of the established authority on truth.

IDEAS

Contains assorted ideas and experiences. Some of the ideas have the
potential to eventually move to Experiments, once conclusively
performed.

LINKS

Scientific as well as not so scientific references. They should be
understood as pointers in directions of possible research, not as
exhaustive sources of information. Lots of interesting stuff, lots of
garbage. You have to do your own research and sorting out. Good luck.

VICTOR S. GREBENNIKOV

English translation of the original Russian text.

PATENTS

My first patenting experience concerns a simple brushless alternator.
This attempt went to hell due to the bottomless pockets of my former
patent lawyers (a translation of the English description into legalese
worth close to C$5,500, submission fees extra). You can view the
patent application here, as it was submitted to the US patent office
by my ex-patent agents. The patent has fallen into the public domain
due to insufficient funds.

My second patent experience concerns a very simple and very effective
air (gas) dryer, so far applied only to compressed-air systems. I
applied for patent registration and filed the (Canadian) application
myself, according to Canada Patent Office instructions. The
application was accepted and cost C$150.00 plus registered mail. I
have not quite revealed the whole patent here, but you can find its
general description, and experience with its performance, here.





At 2:40 AM, Rev Dan Izzo said...
The Decay of the False Vacuum
Written by Sten Odenwald

Copyright (C) 1983 Kalmbach Publishing. Reprinted by permission


In the recently developed theory by Steven Weinberg and Abdus Salam
that unifies the electromagnetic and weak forces, the vacuum is not
empty. This peculiar situation comes about because of the existence of
a new type of field, called the Higgs field. The Higgs field has an
important physical consequence, since its interaction with the W+, W-
and Z particles (the carriers of the weak force) causes them to gain
mass at energies below 100 billion electron volts (100 GeV). Above
this energy they are quite massless, just like the photon, and it is
this characteristic that makes the weak and electromagnetic forces so
similar at high energy.
On a somewhat more abstract level, consider Figures 1 and 2,
representing the average energy of the vacuum state. If the universe
were based on the vacuum state in Figure 1, it is predicted that the
symmetry between the electromagnetic and weak interactions would be
quite obvious. The particles mediating the forces would all be
massless and behave in the same way. The corresponding forces would be
indistinguishable. This would be the situation if the universe had an
average temperature of 1 trillion degrees, so that the existing
particles collided at energies of 100 GeV. In Figure 2, representing
the vacuum-state energy for collision energies below 100 GeV, the
vacuum state now contains the Higgs field, and the symmetry between
the forces is suddenly lost or 'broken'. Although at low energy the
way in which the forces behave is asymmetric, the fundamental laws
governing the electromagnetic and weak interactions remain inherently
symmetric. This is a very remarkable and profound prediction, since it
implies that certain symmetries in Nature can be hidden from us but
are there nonetheless.

During the last 10 years physicists have developed even more powerful
theories that attempt to unify not only the electromagnetic and weak
forces but the strong nuclear force as well. These are called the
Grand Unification Theories (GUTs), and the simplest one known was
developed by Howard Georgi, Helen Quinn, and Steven Weinberg and is
called SU(5) (pronounced 'ess you five'). This theory predicts that
the nuclear and 'electroweak' forces will eventually have the same
strength, but only when particles collide at energies above 1 thousand
trillion GeV, corresponding to the unimaginable temperature of 10
thousand trillion trillion degrees! SU(5) requires exactly 24
particles to mediate forces, of which the 8 massless gluons of the
nuclear force, the 3 massless intermediate vector bosons of the weak
force and the single massless photon of the electromagnetic force are
12. The remaining 12 represent a totally new class of particles called
Leptoquark bosons, which have the remarkable property that they can
transform quarks into electrons. SU(5) therefore predicts the
existence of a 'hyperweak' interaction; a new fifth force in the
universe! Currently, this force is 10 thousand trillion trillion times
weaker than the weak force but is nevertheless 100 million times
stronger than gravity. What would this new force do? Since protons are
constructed from 3 quarks, and since quarks can now decay into
electrons through the hyperweak interaction, SU(5) predicts that
protons are no longer the stable particles we have always imagined
them to be. Crude calculations suggest that they may have half-lives
between 10^29 and 10^33 years. An immediate consequence of this is
that, even if the universe were destined to expand for all eternity,
after 'only' 10^32 years or so all of the matter present would
catastrophically decay into electrons, neutrinos and photons. The Era
of Matter, with its living organisms, stars and galaxies, would be
swept away forever, having represented but a fleeting episode in the
history of the universe. In addition to proton decay, SU(5) predicts
that at the energy characteristic of the GUT transition we will see
the effects of a new family of particles called supermassive Higgs
bosons, whose masses are expected to be approximately 1 thousand
trillion GeV! These particles interact with the 12 Leptoquarks and
make them massive, just as the Higgs bosons at 100 GeV made the W+, W-
and Z particles heavy. Armed with this knowledge, let's explore some
of the remarkable cosmological consequences of these exciting
theories.
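The proton-lifetime figures above lend themselves to a quick check.
The sketch below is my addition, not the article's; it assumes simple
exponential decay and uses the article's crude half-life range:

```python
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    """Fraction of protons surviving after t_years of exponential decay."""
    return 0.5 ** (t_years / half_life_years)

# Article's range of crude half-life estimates: 1e29 to 1e33 years.
# How much matter is left after 'only' 1e32 years?
for half_life in (1e29, 1e33):
    left = fraction_remaining(1e32, half_life)
    print(f"half-life {half_life:.0e} yr -> fraction surviving: {left:.3g}")
```

With a 10^29-year half-life essentially nothing survives (0.5 raised
to the 1000th power), while at the long end of the range, 10^33 years,
about 93 percent of protons are still intact after 10^32 years; the
catastrophic-decay scenario assumes the shorter lifetimes.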

The GUT Era

To see how these theories relate to the history of the universe,
imagine, if you can, a time when the average temperature of the
universe was not the frigid 3 K it is today but an incredible 10
thousand trillion trillion degrees (10^15 GeV). The 'Standard Model'
of the Big Bang tells us this happened about 10^-37 seconds after
Creation. The protons and neutrons that we are familiar with today
hadn't yet formed, since their constituent quarks interacted much too
weakly to permit them to bind together into 'packages' like neutrons
and protons. The remaining constituents of matter, electrons, muons
and tau leptons, were also massless and traveled about at essentially
light speed; they were literally a new form of radiation, much like
light is today! The 12 supermassive Leptoquarks as well as the
supermassive Higgs bosons existed side by side with their
anti-particles. Every particle-antiparticle pair that was annihilated
was balanced by the resurrection of a new pair somewhere else in the
universe. During this period, the particles that mediated the strong,
weak and electromagnetic forces were completely massless, so that
these forces were no longer distinguishable. An inhabitant of that age
would not have had to theorize about the existence of a symmetry
between the strong, weak and electromagnetic interactions; this
symmetry would have been directly observable, and furthermore, fewer
types of particles would exist for the inhabitants to keep track of.
The universe would actually have been much simpler then!
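The equivalence used above between 10^15 GeV and 10 thousand trillion
trillion (10^28) degrees is just E = kT in disguise. A one-line
conversion, sketched here with the standard value of Boltzmann's
constant in GeV per kelvin (my addition, not part of the article):

```python
K_BOLTZMANN = 8.617e-14  # Boltzmann constant, in GeV per kelvin

def energy_to_temperature(energy_gev: float) -> float:
    """Temperature (kelvin) at which typical particle energies reach energy_gev."""
    return energy_gev / K_BOLTZMANN

# GUT scale: 1e15 GeV corresponds to roughly 1e28 K.
print(f"{energy_to_temperature(1e15):.1e} K")
```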

As the universe continued to expand, the temperature continued to
plummet. It was suggested by Dimitri Nanopoulos and Steven Weinberg in
1979 that one of the supermassive Higgs particles may have decayed in
such a way that slightly more matter was produced than anti-matter.
The remaining, evenly matched pairs of particles and anti-particles
then annihilated to produce the radiation that we now see as the
'cosmic fireball'.

Exactly what happened to the universe as it underwent the transitions
at 10^15 and 100 GeV, when the forces of Nature suddenly became
distinguishable, is still under investigation, but certain tantalizing
descriptions have recently been offered by various groups of
theoreticians working on this problem. According to studies by Alan
Guth, Steven Weinberg and Frank Wilczek between 1979 and 1981, when
the GUT transition occurred, it occurred in a way not unlike the
formation of vapor bubbles in a pot of boiling water. In this analogy,
the interiors of the bubbles represent the vacuum state in the new
phase, where the forces are distinguishable, embedded in the old
symmetric phase, where the nuclear, weak and electromagnetic forces
are indistinguishable. Inside these bubbles the vacuum energy is of
the type illustrated by Figure 2, while outside it is represented by
Figure 1. Since we are living within the new phase, with its four
distinguishable forces, this has been called the 'true' vacuum state.
In the false vacuum state the forces remain indistinguishable, which
is certainly not the situation that we find ourselves in today!

Cosmic Inflation

An exciting prediction of Guth's model is that the universe may have
gone through at least one period in its history when the expansion was
far more rapid than predicted by the 'standard' Big Bang model. The
reason for this is that the vacuum itself also contributes to the
energy content of the universe, just as matter and radiation do;
however, the contribution is in the opposite sense. Although gravity
is an attractive force, the vacuum of space produces a force that is
repulsive. As Figures 1 and 2 show, the minimum energy state of the
false vacuum at 'A', before the GUT transition, is at a higher energy
than the true vacuum state at 'B', after the transition. This energy
difference is what contributes to the vacuum energy. During the GUT
transition period, the positive pressure due to the vacuum energy
would have been enormously greater than the restraining pressure
produced by the gravitational influence of matter and radiation. The
universe would have inflated at a tremendous rate, the inflation
driven by the pressure of the vacuum! In this picture of the universe,
Einstein's cosmological constant takes on a whole new meaning, since
it now represents a definite physical concept: it is simply a measure
of the energy difference between the true and false vacuum states
('B' and 'A' in Figures 1 and 2) at a particular time in the history
of the universe. It also tells us that, just as in de Sitter's model,
a universe where the vacuum contributes in this way must expand
exponentially in time, and not linearly as predicted by the Big Bang
model. Guth's scenario for the expansion of the universe is generally
called the 'inflationary universe', due to the rapidity of the
expansion, and represents a phase that will end only after the true
vacuum has supplanted the false vacuum of the old, symmetric phase.

A major problem with Guth's original model was that the inflationary
phase would have lasted for a very long time, because the false vacuum
state is such a stable one. The universe becomes trapped in the
cul-de-sac of the false vacuum state, and the exponential expansion
never ceases. This would be somewhat analogous to water refusing to
freeze even though its temperature has dropped well below 0
Centigrade. Recent modifications to the original 'inflationary
universe' model have resulted in what is now called the 'new'
inflationary universe model. In this model, the universe does manage
to escape from the false vacuum state and evolves in a short time to
the familiar true vacuum state.

We don't really know exactly how long the inflationary phase may have
lasted, but the time required for the universe to double its size may
have been only 10^-34 seconds. Conceivably, this inflationary period
could have continued for as 'long' as 10^-24 seconds, during which
time the universe would have undergone 10 billion doublings of its
size! This is a number that is truly beyond comprehension. As a
comparison, only 120 doublings are required to inflate a hydrogen atom
to the size of the entire visible universe! According to the
inflationary model, the bubbles of the true vacuum phase expanded at
the speed of light. Many of these had to collide when the universe was
very young in order that the visible universe appear so uniform today.
A single bubble would not have grown large enough to encompass our
entire visible universe at this time; a radius of some 15-20 billion
light years. On the other hand, the new inflationary model states that
even the bubbles expanded in size exponentially, just as their
separations did. The bubbles themselves grew to enormous sizes, much
greater than the size of our observable universe. According to
Albrecht and Steinhardt of the University of Pennsylvania, each bubble
may now be 10^3000 cm in size. We should not be too concerned about
these bubbles expanding at many times the speed of light, since their
boundaries do not represent a physical entity. There are no electrons
or quarks riding some expanding shock wave. Instead, it is the
non-material vacuum of space that is expanding. The expansion velocity
of the bubbles is not limited by any physical speed limit like the
velocity of light.
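Both doubling counts quoted above follow from simple arithmetic. A
sketch (the atom and universe sizes are standard round figures I have
supplied, not the article's):

```python
import math

# Ten billion doublings: the 'long' inflationary period (1e-24 s)
# divided by the doubling time (1e-34 s).
doublings = 1e-24 / 1e-34
print(f"doublings during inflation: {doublings:.0e}")

# About 120 doublings stretch a hydrogen atom (~1e-10 m across) to the
# scale of the visible universe (~1e26 m): solve 2**n = 1e26 / 1e-10.
n = math.log2(1e26 / 1e-10)
print(f"atom-to-universe doublings: {n:.0f}")
```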

GUMs in GUTs

A potential problem for cosmologies that have phase transitions during
the GUT Era is that a curious zoo of objects could be spawned if
frequent bubble mergers occurred, as required by Guth's inflationary
model. First of all, each bubble of the true vacuum phase contains its
own Higgs field, having a unique orientation in space. It seems likely
that no two bubbles will have their Higgs fields oriented in quite the
same way, so that when bubbles merge, knots will form. According to
Gerard 't Hooft and Alexander Polyakov, these knots in the Higgs field
are the magnetic monopoles originally proposed 40 years ago by Paul
Dirac, and there ought to be about as many of these as there were
bubble mergers during the transition period. Upper limits to their
abundance can be set by requiring that they do not contribute to
'closing' the universe, which means that for particles of their
predicted mass (about 10^16 GeV) they must be 1 trillion trillion
times less abundant than the photons in the 3 K cosmic background.
Calculations based on the old inflationary model suggest that these
GUMs (Grand Unification Monopoles) may easily have been as much as 100
trillion times more abundant than the upper limit! Such a universe
would definitely be 'closed', and moreover would have run through its
entire history, between expansion and recollapse, within a few
thousand years. The new inflationary universe model solves this 'GUM'
overproduction problem, since we are living within only one of these
bubbles, now almost infinitely larger than our visible universe. Since
bubble collisions are no longer required to homogenize the matter and
radiation in the universe, very few, if any, monopoles would exist
within our visible universe.
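The monopole bookkeeping above reduces to one multiplication; a sketch
using only the article's own numbers:

```python
# Closure bound: at most one ~1e16-GeV monopole per 1e24 photons in the
# 3 K background ("1 trillion trillion times less abundant").
UPPER_LIMIT_PER_PHOTON = 1e-24

# Old inflationary model: up to "100 trillion times" over the limit.
OVERSHOOT = 1e14

predicted = UPPER_LIMIT_PER_PHOTON * OVERSHOOT
print(f"old-model abundance: {predicted:.0e} monopoles per photon")
```

That is roughly one monopole per ten billion photons, which is why the
old model's monopole-laden universe recollapses within a few thousand
years.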

Horizons

A prolonged period of inflation would have had an important influence
on the cosmic fireball radiation. One long-standing problem in modern
cosmology has been that all directions in the sky have the same
temperature to an astonishing 1 part in 10,000. When we consider that
regions separated by only a few degrees in the sky have only recently
been in communication with one another, it is hard to understand how
regions farther apart than this could be so similar in temperature.
The radiation from one of these regions, traveling at the velocity of
light, has not yet made it across the intervening distance to the
other, even though the radiation may have started on its way when the
universe first came into existence. This 'communication gap' would
prevent these regions from ironing out their temperature differences.

In the standard Big Bang model, as we look back to earlier epochs from
the present time, the separations between particles decrease more
slowly than their horizons shrink. Neighboring regions of space at the
present time become disconnected, so temperature differences are free
to develop. Eventually, as we look back to very ancient times, the
horizons are so small that every particle existing then literally
fills the entire volume of its own observable universe. Imagine a
universe where you occupy all of the available space! Prior to the
development of the inflationary models, cosmologists were forced to
imagine an incredibly well-ordered initial state where each of these
disconnected domains (some 10^86 in number) had nearly identical
properties, such as temperature. Any departure from this situation at
that time would have grown into sizable temperature differences in
widely separated parts of the sky at the present time. Unfortunately,
some agency would have had to set up these finely tuned initial
conditions by violating causality. The contradiction is that no force
may operate by transmitting its influence faster than the speed of
light. In the inflationary models this contradiction is eliminated,
because the separation between widely scattered points in space
becomes almost infinitely small compared to the size of the horizons
as we look back to the epoch of inflation. Since these points are then
within each other's light horizons, any temperature difference would
have been eliminated immediately, since hotter regions would now be in
radiative contact with colder ones. With this exponentially growing de
Sitter phase in the universe's early history, we now have a means of
resolving the horizon problem.

Instant Flat Space

Because of the exponential growth of the universe during the GUT Era,
its size may well be essentially infinite for all 'practical'
purposes. Estimates by Albrecht and Steinhardt suggest that each
bubble region may have grown to a size of 10^3000 cm by the end of the
inflationary
period. Consequently, the new inflationary model predicts that the
content of the universe must be almost exactly the 'critical mass'
since the sizes of each of these bubble regions are almost infinite in
extent. The universe is, for all conceivable observations, exactly
Euclidean (infinite and flat in geometry) and destined to expand for
all eternity to come. Since we have only detected at most 10 percent
of the critical mass in the form of luminous matter, this suggests
that 10 times as much matter exists in our universe than is currently
detectable. Of course, if the universe is essentially infinite this
raises the ghastly spectre of the eventual annihilation of all organic
and inorganic matter some 10^32 years from now because of proton
decay.

In spite of its many apparent successes, even the new inflationary
universe model is not without its problems. Although it does seem to
provide explanations for several cosmological enigmas, it does not
provide a convincing way to create galaxies. Those fluctuations in the
density of matter that do survive the inflationary period are so dense
that they eventually collapse into galaxy-sized black holes! Neither
the precise way in which the transition to ordinary Hubble expansion
occurs nor the duration of the inflationary period is well
determined.

If the inflationary cosmologies can be made to answer each of these
issues satisfactorily we may have, as J. Richard Gott III has
suggested, a most remarkable model of the universe where an almost
infinite number of 'bubble universes' each having nearly infinite
size, coexist in the same 4-dimensional spacetime; all of these bubble
universes having been brought into existence at the same instant of
creation. This is less troublesome than one might suspect since, if
our universe is actually infinite as the available data suggests, so
too was it infinite even at its moment of birth! It is even
conceivable that the universe is 'percolating' with new bubble
universes continually coming into existence. Our entire visible
universe, out to the most distant quasar, would be but one
infinitesimal patch within one of these bubble regions. Do these
other universes have galaxies, stars, planets and living creatures
statistically similar to those in our universe? We may never know.
These other universes, born of the same paroxysm of Creation as our
own, are forever beyond our scrutiny but obviously not our
imaginations!

Beyond The Beginning...

Finally, what of the period before Grand Unification? We may surmise
that at higher temperatures than the GUT Era, even the supermassive
Higgs and Leptoquark bosons become massless and at long last we arrive
at a time when the gravitational interaction is united with the weak,
electromagnetic and strong forces. Yet, our quest for an understanding
of the origins of the universe remains incomplete since gravity has
yet to be brought into unity with the remaining forces on a
theoretical basis. This last step promises to be not only the most
difficult one to take on the long road to unification but also appears
to hold the greatest promise for shedding light on some of the most
profound mysteries of the physical world. Even now, a handful of
theorists around the world are hard at work on a theory called
Supergravity which unites the force carriers (photons, gluons,
gravitons and the weak interaction bosons) with the particles that
they act on (quarks, electrons etc). Supergravity theory also predicts
the existence of new particles called photinos and gravitinos. There
is even some speculation that the photinos may fill the entire
universe and account for the unseen 'missing' matter that is necessary
to give the universe the critical mass required to make it exactly
Euclidean. The gravitinos, on the other hand, prevent calculations
involving the exchange of gravitons from giving infinite answers for
problems where the answers are known to be perfectly finite. Hitherto,
these calculations did not include the effects of the gravitinos.

Perhaps during the next decade, more of the details of the last stage
of Unification will be hammered out at which time the entire story of
the birth of our universe can be told. This is, indeed, an exciting
time to be living through in human history. Will future generations
forever envy us our good fortune, to have witnessed in our lifetimes
the unfolding of the first comprehensive theory of Existence?

In the Mandelbrot set, nature (or is it mathematics) provides us with
a powerful visual counterpart of the musical idea of 'theme and
variation': the shapes are repeated everywhere, yet each repetition is
somewhat different. It would have been impossible to discover this
property of iteration if we had been reduced to hand calculation, and
I think that no one would have been sufficiently bright or ingenious
to 'invent' this rich and complicated theme and variations. It leaves
us no way to become bored, because new things appear all the time, and
no way to become lost, because familiar things come back time and time
again. Because of this constant novelty, this set is not truly fractal
by most definitions; we may call it a borderline fractal, a limit
fractal that contains many fractals. Compared to actual fractals, its
structures are more numerous, its harmonies are richer, and its
unexpectedness is more unexpected.
-- Benoit Mandelbrot






MANDELBROT SET



As mentioned earlier, no matter what the value of the complex
parameter c is, in the iteration of the complex quadratic map there is
a unique trapping set Tc and a corresponding escape set Ec. The Julia
set (Jc) is the boundary between the set Tc and the set Ec. The
Mandelbrot set is an answer to the following kind of enquiry: of the
infinite number of possible Julia sets that exist, is there any
organizing principle that classifies them?

The key results for this classification of Julia sets were already
there in the works of Julia and Fatou who knew about the topological
dichotomy in the Julia set. The result states that for any choice of
the complex parameter c the associated Julia set Jc and the trapping
set Tc are either topologically connected (severely deformed circles)
or totally disconnected (generalized Cantor dust like).

This was indeed the key result that led Mandelbrot, in 1979, to
visualize a set in the complex parameter space c which is now called
the Mandelbrot set. The Mandelbrot set consists of all values of c
that have connected Julia sets. Picking a value of c outside the
Mandelbrot set and iterating the equation to obtain Jc for that
particular choice of c gives a disconnected Julia set.

Note that, important as it is, the classification of Julia sets into
connected and disconnected sets still doesn't allow one to visualize
the shape of the set of points, in the parameter space, for which the
Julia set is connected. The genius lay in realizing the interrelation
between the above-mentioned dichotomy and the long-term behavior of
the critical point.

The computer graphical rendering of the Mandelbrot set is made
possible by an important fact: the trapping set Tc is connected if and
only if the critical orbit is bounded. This fact makes it possible to
draw a portrait of the Mandelbrot set.

For each complex number c, a sequence of iterates Zn is defined by
equation (3). The complex number c is a member of the Mandelbrot set
if and only if |Zn| remains finite for all values of n. The bars
indicate the magnitude of Zn, given by |Zn| = sqrt(Xn^2 + Yn^2), where
Xn is the real component and Yn the imaginary component of Zn. A point
in the complex parameter space is colored white if the orbit is
unbounded for that particular value of c and black if the orbit is
bounded.

The figure shown below is the Mandelbrot set (in black). It extends
from the cusp of the cardioid at Re c = 0.25 to the tip of the tail at
Re c = -2 along the real axis and from Im c = -1.25 to Im c = 1.25
along the imaginary axis.




Monochrome Mandelbrot Set Portrait

The basic algorithm to generate the Mandelbrot set is as follows. For
each pixel c, start with Z = 0. Iterate the above equation up to N
times, exiting if |Z| gets large. If you finish the loop, the point is
probably inside the Mandelbrot set. If you exit, the point is outside
and can be colored according to how many iterations were completed.
You can exit if |Z| > 2, since once Z gets this big it will go to
infinity. The maximum number of iterations, N, can be selected as
desired, for instance 200. Larger values of N will give sharper detail
but take longer.
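The escape-time loop just described can be sketched in a few lines of Python. The grid dimensions, the '#' glyph, and the function names are illustrative choices, not from the text; the bailout |Z| > 2 and the default N = 200 are as stated above.

```python
def mandelbrot_iterations(c, max_iter=200):
    """Return the number of iterations before |Z| exceeds 2, or max_iter
    if the orbit stays bounded (the point is probably in the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

def render(width=80, height=40, max_iter=200):
    """Monochrome text portrait: '#' for (probably) inside, ' ' for outside.
    Covers Re c in [-2, 0.5] and Im c in [-1.25, 1.25], as in the figure."""
    rows = []
    for j in range(height):
        im = 1.25 - 2.5 * j / (height - 1)
        row = []
        for i in range(width):
            re = -2.0 + 2.5 * i / (width - 1)
            c = complex(re, im)
            inside = mandelbrot_iterations(c, max_iter) == max_iter
            row.append('#' if inside else ' ')
        rows.append(''.join(row))
    return '\n'.join(rows)

portrait = render(width=40, height=20, max_iter=60)  # small demo grid
```

Printing `portrait` gives a coarse ASCII version of the monochrome figure; larger grids and larger N sharpen the outline at the cost of run time, exactly as the text notes.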

A note about why we start from Z0 = 0: zero is the critical point of
the Mandelbrot equation given by (2), that is, a point where
d/dZ (Z^2 + c) = 0. Critical points are important because of a result
of Fatou: every attracting cycle (Tc) of a polynomial or rational
function attracts at least one critical point. Thus, testing the
critical point shows whether there is any stable attracting cycle. For
equations with multiple critical points, all the critical points must
be tested.



DETAILS



For the sake of clarity, the largest cardioid (heart-shaped) central
region of the Mandelbrot set will be referred to as the main body of
the Mandelbrot set (M1 -- the region labeled 1 in figure (3) below).
All other pieces attached to the main body will be referred to as
buds. The largest bud attached to the main body (along the real axis)
will be called the M2 bud (the bud labeled 2 in figure (3) below). The
main body of the Mandelbrot set intersects the real axis at
Re c = 0.25 and Re c = -0.75. Extending the stability analysis
discussed for the logistic equation, it is easy to see that the fixed
point of the complex quadratic iterator is stable along the real axis
precisely on the interval mentioned above.
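The stability claim can be checked numerically. A minimal sketch, assuming the usual multiplier criterion |2z*| < 1 at the fixed point z* = (1 - sqrt(1 - 4c))/2 of z -> z^2 + c (the function names are illustrative):

```python
import cmath

def fixed_point_multiplier(c):
    """Multiplier 2*z_star at the fixed point z_star = (1 - sqrt(1 - 4c))/2
    of z -> z^2 + c; the fixed point is attracting when |2*z_star| < 1."""
    z_star = (1 - cmath.sqrt(1 - 4 * c)) / 2
    return 2 * z_star

def is_fixed_point_stable(c):
    """True when the fixed point of z -> z^2 + c is attracting."""
    return abs(fixed_point_multiplier(c)) < 1

# On the real axis the fixed point is stable precisely on (-0.75, 0.25):
assert all(is_fixed_point_stable(c) for c in (-0.74, 0.0, 0.24))
assert not any(is_fixed_point_stable(c) for c in (-0.76, 0.26))
```

At the endpoints c = 0.25 and c = -0.75 the multiplier has modulus exactly 1 (the fixed point is indifferent), which is the condition used below to outline the main body.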

The determination of the boundary of the main body of the Mandelbrot
set relies on the observation that, for any value of the complex
parameter picked from within the main body, the corresponding Julia
set is the boundary between the escape set and the trapping set of the
stable fixed point of the quadratic map. The boundary of the main body
is therefore the locus of points (in the parameter space) for which
the fixed point is indifferent, that is, for which the modulus of the
derivative of the map at the fixed point is exactly equal to 1. Using
this fact one can determine an explicit expression for the outline of
the M-set's main body.

If z is the fixed point of the complex quadratic map, it follows that
z satisfies the equation z^2 - z + c = 0. The derivative of the map at
the fixed point z is given by 2z, which in polar coordinates can be
expressed as 2z = r*e^(i*phi). Combining these two equations and
solving for c, we obtain

c = (r/2)*e^(i*phi) - (r^2/4)*e^(2*i*phi)
(1)

Note that for values r < 1 the above equation determines points inside
the main body of the Mandelbrot set, and r = 1 gives the boundary of
M1. The above equation is the parametrization of a curve in the
complex plane for 0 <= phi < 2*pi. It is explicitly seen to be the
equation of a cardioid when expressed as

Re c = cos(phi)/2 - cos(2*phi)/4

Im c = sin(phi)/2 - sin(2*phi)/4
(2)

by equating the real and imaginary parts of the equation.
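Equation (1) with r = 1 can be checked against the two real-axis intersections quoted earlier (Re c = 0.25 at the cusp and Re c = -0.75). A short sketch; the function name is illustrative:

```python
import cmath
import math

def main_cardioid_boundary(phi, r=1.0):
    """c = (r/2) e^(i phi) - (r^2/4) e^(2 i phi); r = 1 traces the
    boundary of M1, r < 1 gives points in its interior."""
    return (r / 2) * cmath.exp(1j * phi) - (r ** 2 / 4) * cmath.exp(2j * phi)

# The two real-axis intersections quoted in the text:
assert abs(main_cardioid_boundary(0.0) - 0.25) < 1e-12         # the cusp
assert abs(main_cardioid_boundary(math.pi) - (-0.75)) < 1e-12  # where the M2 bud attaches
```

Sampling phi over [0, 2*pi) and plotting the resulting points reproduces the cardioid outline of figure (4).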

It turns out that at the parameter values phi = 2*pi/k, where
k = 2, 3, 4, 5, ..., one of the main buds of the Mandelbrot set is
attached to the M1 set. Moreover, the period of the attracting cycles
that belong to these buds is given by the number k in 2*pi/k. There is
also another amazing fact about the arrangement of the buds: two given
buds of periods p and q on the cardioid determine the period of the
largest bud in between them as p + q. (This is illustrated for the
case p = 2 and q = 3 in figure (3) below.) Similar rules hold for buds
on buds.




Figure 3: The buds of the Mandelbrot set corresponding to Julia sets
that bound the basins of attraction (trapping sets) of periodic
orbits. The numbers in the figure indicate the periods of these
orbits.


Figure 4: The plot of equation (2), which defines the boundary of the
main body (M1) of the Mandelbrot set. The numbers indicate the
periodicity of the buds attached to the main body and the points where
they attach.



The two remarkable properties above, concerning the periodicity of the
buds, motivated indexing the buds attached to the main body of the
M-set as Mn. Thus, from the above argument, the period-2 bud is
attached at angle phi = pi (setting k = 2 in phi = 2*pi/k); similarly,
the period-3 bud is attached at phi = 2*pi/3 (120 degrees), and so on.
Figure (4) above shows the buds of the Mandelbrot set corresponding to
Julia sets that bound basins of attraction of periodic orbits. The
numbers in the figure indicate the periods of these orbits.



MATHEMATICAL MODEL OF CHOLESTEROL BIOSYNTHESIS REGULATION IN THE CELL

*Ratushny A.V., Ignatieva E.V., Matushkin Yu.G., Likhoshvai V.A.

Institute of Cytology and Genetics SB RAS, Russia



*Corresponding author,

Keywords: gene network, cholesterol, regulation, mathematical model,
computer analysis

Resume

Motivation:

An adequate mathematical model of the complex nonlinear gene network
regulating cholesterol synthesis in the cell is necessary for
investigating its possible modes of function and for determining
optimal strategies for its correction, including therapeutic ones.

Results:

A dynamic model of the gene network regulating cholesterol synthesis
in the cell is constructed. The model is described in terms of
elementary processes (biochemical reactions). The optimal set of model
parameters is determined. Patterns of the system's behavior under
different conditions are simulated numerically.

Introduction

Cholesterol, an amphipathic lipid, is an essential structural
component of cell membranes and outer lipoprotein layer of blood
serum. In addition, cholesterol is a precursor of several other
steroids, namely, corticosteroids, sex hormones, bile acids, and
vitamin D. Cholesterol is synthesized in many tissues from acetyl-CoA
and its main fraction in blood serum resides with low-density
lipoproteins (LDL). Free cholesterol is removed from the tissues with
involvement of high-density lipoproteins (HDL) and transported to the
liver to be transformed into bile acids. Its major pathological role
is in serving as a factor causing atherosclerosis of vital cerebral
arteries, heart muscle, and other organs. Typical of coronary
atherosclerosis is a high ratio of LDL to HDL cholesterol [Marry R. et
al., 1993]. Haploid and diploid versions of a dynamic model of the
gene network regulating cholesterol synthesis in the cell are
constructed in this work. The models are described in terms of
elementary processes (biochemical reactions). The optimal set of model
parameters, allowing the calculations to comply with the published
experimental data, is determined through numerical experiments.
Patterns of the system's dynamic behavior under different conditions
are simulated numerically. The results obtained are compared with the
available experimental data.

Cholesterol biosynthesis and its regulation

Approximately half of the cholesterol amount present in the organism
is formed through biosynthesis (about 500 mg/day) [Marry R. et al.,
1993], while the other half is consumed with food. The main part of
cholesterol is synthesized in the liver (~ 80% of the total
cholesterol produced), intestines (~ 10%), and skin (~ 5%) [Klimov &
Nikul'cheva, 1999].

Acetyl-CoA is the source of all the carbon atoms composing the
cholesterol molecule. The main stages of cholesterol biosynthesis are
described in the GeneNet database.

Cholesterol regulates its own synthesis and the synthesis of LDL
receptors at the level of transcription through a negative feedback
mechanism [Wang et al., 1994]. A decrease in the cell cholesterol
content stimulates SRP (sterol regulated protease)- catalyzed
proteolysis of the N-terminal fragment of SREBP (sterol regulatory
element- binding protein), bound to the endoplasmic reticulum (ER)
membrane. On leaving the ER membrane, SREBP migrates to the cell
nucleus to bind the so-called sterol regulatory element (SRE),
residing in the promoter of the receptor gene, thereby switching on
the receptor synthesis. In addition, SREBP activates the gene of
synthase of hydroxymethyl glutaryl (HMG)-CoA reductase [Klimov &
Nikul'cheva, 1999], as well as the syntheses of farnesyl diphosphate
synthase and squalene synthase. Several studies have demonstrated a
rather fast effect of cholesterol on the reductase activity,
unexplainable by a mere effect on the rate of enzyme synthesis.
HMG-CoA reductase may
be either active or inactive. Phosphorylation- dephosphorylation
reactions provide for the transitions from one state into the other
[Marry R. et al., 1993].

The main factors affecting the cholesterol balance at the cell level
[Marry R. et al., 1993] are shown in Fig. 1.



Figure 1. Factors affecting the cholesterol balance at the cell level:
C, cholesterol; CE, cholesterol esters; ACAT, acyl-CoA:cholesterol
acyltransferase; LCAT, lecithin:cholesterol acyltransferase; A1,
apoprotein A1; LDL, low density lipoproteins; VLDL, very low density
lipoproteins; HDL, high density lipoproteins; (-), inhibition of
cholesterol synthesis; and (+), ACAT activation [Marry R. et al.,
1993].

Cell cholesterol content increases if (1) specific LDL receptors bind
cholesterol-containing lipoproteins; (2) cholesterol-containing
lipoproteins are bound without receptors; (3) free cholesterol,
contained in cholesterol-rich lipoproteins is bound by cell membranes;
(4) cholesterol is synthesized; and (5) cholesterol ester hydrolase-
catalyzed hydrolysis of cholesterol esters takes place.

Cell cholesterol content decreases if (1) cholesterol passes from
membranes into cholesterol-poor lipoproteins, in particular LDL3 or
LDL synthesized de novo (lecithin:cholesterol acyltransferase promotes
this transition); (2) ACAT-catalyzed cholesterol esterification takes
place; and (3) cholesterol is used for synthesizing other steroids, in
particular, hormones or bile acids in the liver [Marry R. et al.,
1993].

Methods and algorithms

A generalized chemical kinetic approach [Bazhan et al., 1995] was used
for the simulation. A blockwise formalization was used; that is, each
process is separated into an individual block and described
independently of the other processes. A block is a simulation quantum,
and its formal structure is completely described with the following
three vector components: (1) X, the list of dynamic variables; (2) P,
the list of constants; and (3) F, the type of the right-hand side of
the system dX/dt = F(X, P), determining the rule by which these
dynamic variables change with time. Four types of blocks are used to
describe the processes in the model, namely:



Successive application of the blockwise approach to the description of
biological systems is based on summing the rates of the elementary
processes when uniting them into a general scheme of the simulated
object. Gear's method [Gear, 1971] was used for numerical integration
of the set of differential equations.
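Gear's method belongs to the backward-differentiation-formula (BDF) family of implicit integrators suited to stiff kinetics like these. As an illustration only (this is not the authors' code; the function names and the toy test problem are invented for the sketch), here is the simplest member of that family, BDF1 (backward Euler), with the implicit step solved by Newton iteration:

```python
def backward_euler(f, dfdy, y0, t0, t1, n_steps):
    """Integrate dy/dt = f(t, y) from t0 to t1 with n_steps BDF1 steps.
    Each step solves y_next = y + h * f(t_next, y_next) by Newton
    iteration, using dfdy (the derivative of f with respect to y).
    Scalar version; real kinetic models use the vector analogue."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        t_next = t + h
        y_next = y                       # initial Newton guess
        for _ in range(20):
            g = y_next - y - h * f(t_next, y_next)   # residual
            dg = 1.0 - h * dfdy(t_next, y_next)      # residual derivative
            y_next -= g / dg
            if abs(g) < 1e-12:
                break
        t, y = t_next, y_next
    return y

# Stiff linear decay dy/dt = -1000*y: explicit Euler would need h < 0.002
# to stay stable, while BDF1 stays stable at any step size.
y_end = backward_euler(lambda t, y: -1000 * y, lambda t, y: -1000.0,
                       1.0, 0.0, 0.1, 200)
assert 0 <= y_end < 1e-3  # exact solution e^(-100) is essentially zero
```

Higher-order BDF formulas (the ones actually associated with Gear's name) reuse several previous solution points per step but share this implicit structure.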

Results

Mathematical model

The haploid mathematical model of intracellular cholesterol
biosynthesis regulation comprises 65 kinetic blocks, 40 dynamic
variables, and 93 reaction constants. The diploid model comprises 72
kinetic blocks, 44 dynamic variables, and 130 reaction constants.
Experimental data, partially listed in the table below, were used for
the initial evaluation of certain parameters of the enzymatic
reactions in the system.

Table. Some constants of enzyme reactions

Enzyme | Substrate | Organism [reference] | Organ | Kc (sec^-1) | Km (mM)
HMG-CoA reductase | HMG-CoA | Rattus norvegicus [Gil et al., 1981] | Liver | 980 | (-)
HMG-CoA reductase | HMG-CoA | Rattus norvegicus [Kleinsek & Porter, 1979] | Liver | (-) | 0.0169
HMG-CoA reductase | HMG-CoA | Rattus norvegicus [Sugano et al., 1978] | Intestine | (-) | 0.0417
HMG-CoA synthase | Acetyl-CoA / Acetoacetyl-CoA | Gallus gallus (hen) [Reed et al., 1975] | Liver | (-) / (-) | 0.1-0.7 / 0.005
HMG-CoA synthase | Acetyl-CoA | Homo sapiens [Rokosz et al., 1994] | Adrenal | (-) | 0.029
Acetoacetyl-CoA thiolase | Acetoacetyl-CoA / CoA | Bos taurus (calf) [Huth et al., 1975] | Liver | (-) / (-) | 0.01 / 0.025
Acetoacetyl-CoA thiolase | Acetoacetyl-CoA / CoA | Gram-negative bacteria [Kim & Copeland, 1997] | (-) | 2.38e+4 / 2.38e+4 | 0.042 / 0.056
Presqualene synthase | Farnesyl diphosphate | Saccharomyces cerevisiae (yeast) [Sasiak & Rilling, 1988] | (-) | (-) | 0.03
Geranyltransferase | Geranyl PP / Isopentyl PP | Homo sapiens [Barnard & Popjak, 1981] | Liver | 40.7 / 40.7 | 4.4e-4 / 9.4e-4
Lanosterol synthase | (R,S)-squalene-2,3-oxide | Saccharomyces cerevisiae [Balliano et al., 1992] | (-) | (-) | 0.035
ACAT-1 | Oleoyl-CoA / Cholesterol | Homo sapiens (Cricetulus griseus) [Chang et al., 1998] | Ovary | (-) | 7.4e-3
Bile acid hydrolase | Taurocholate | Lactobacillus sp. (bacteria) [Lundeen & Savage, 1990] | (-) | 1900 | 0.76

Other published data were used for evaluating parameters of the model,
in particular [Klimov & Nikul'cheva, 1999]:

Fasting LDL concentration in adult human blood serum CLDL = 200-300
mg/dl.

The average number of unesterified and esterified cholesterol
molecules per LDL particle QUEC = 475 and QEC = 1310.

LDL half-life in blood of healthy humans t1/2 = 2.5 days; therefore,
kLDLutil = ln(2)/t1/2 = 3.21*10^-6 sec^-1.

Total number of LDL receptors per cell at 37°C QLDLR = 15,000-
70,000.

Lifespan of LDL receptors t = 1-2 days; therefore, kLDLRutil = 1/t ~
7.72*10^-6 sec^-1.

LDL receptor recyclization span t ~ 20 min.
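The two rate constants above follow directly from the quoted turnover times; a minimal sketch (the 1.5-day value is an assumption, the midpoint of the quoted 1-2 day lifespan, chosen because it reproduces the stated 7.72*10^-6 sec^-1):

```python
import math

DAY = 86400.0  # seconds

def rate_from_half_life(t_half_sec):
    """First-order rate constant k = ln(2) / t_1/2."""
    return math.log(2) / t_half_sec

def rate_from_lifespan(t_sec):
    """k = 1 / t for a mean lifespan t."""
    return 1.0 / t_sec

k_LDL_util = rate_from_half_life(2.5 * DAY)   # LDL half-life of 2.5 days
k_LDLR_util = rate_from_lifespan(1.5 * DAY)   # receptor lifespan (midpoint)

assert abs(k_LDL_util - 3.21e-6) < 1e-8       # matches the text's value
assert abs(k_LDLR_util - 7.72e-6) < 1e-8      # matches the text's value
```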

The values of the remaining parameters of the model were determined
through numerical experiments.



Figure 2. Kinetics of main components of the system regulating
cholesterol biosynthesis in the cell.

Results of calculations

The results obtained while simulating the cell response to a twofold
increase in LDL particle content in blood serum (Fig. 2, b) illustrate
the model performance. The number of receptors bound to LDL increases
(d); the number of unbound receptors decreases (e). Intracellular concentrations of free
cholesterol (a) and its esters (c) increase. Free cholesterol binds
the protease (SRP), preventing SREBP-1 formation (f). Productions of
enzymes involved in the internal cellular cholesterol synthesis
(HMG-CoA reductase; g), LDL receptors, and intermediate
low-molecular-weight components (mevalonic acid, h; squalene, i) are
stopped. Cholesterol concentration in the cell is decreasing. No
further influence on the system provided, it returns to the initial
state. A complete recovering requires about 15 h.

In the future, we plan to perform computer simulation of the
recombination process in a diploid cell by modelling interactions
between alleles of the genes responsible for cholesterol biosynthesis.

Acknowledgments

The authors are grateful to Galina Chirikova for translation of the
manuscript into English and to N.A. Kolchanov for fruitful
discussions. The work was supported by the National Russian Program
"Human Genome" (No. 106) and the Integrational Science Project of SB
RAS "Modelling of basic genetical processes and systems".

References

R. Marry, D. Grenner, P. Meies, V. Roduell, "Human Biochemistry",
Moscow, "Mir", (1993).
A.N. Klimov and N.G. Nikul'cheva, "Lipid and Lipoprotein Metabolism
and Its Disturbances" St. Petersburg: Piter Kom. (1999).
X. Wang, R. Seto, M. S. Brown et al., "SREBP-1, a membrane-bound
transcription factor released by sterol-regulated proteolysis" Cell,
77, 53 (1994). UI: 94208061
S.I. Bazhan, V.A. Likhoshvai and O.E. Belova, "Theoretical Analysis of
the Regulation of Interferon Expression during Priming and Blocking"
J. Theor. Biol., 175, 149 (1995).UI: 96007769
C. W. Gear, "The automatic integration of ordinary differential
equations", Communs ACM, 14, 176 (1971).
G. Gil, M. Sitges, and F.G. Hegardt, "Purification and properties of
rat liver hydroxymethylglutaryl coenzyme A reductase phosphatases"
Biochim. Biophys. Acta, 663, No. 1, 211 (1981). UI: 82044857
D.A. Kleinsek, J.W. Porter, "An alternate method of purification and
properties of rat liver 3-hydroxy-3-methylglutaryl coenzyme A
reductase" J. Biol. Chem., 254, No. 16, 7591 (1979).UI: 79239331
M. Sugano, H. Okamatsu, and T. Ide, "Properties of
3-hydroxy-3-methylglutaryl-coenzyme A reductase in villous and crypt
cells of the rat small intestine" Agr. Biol. Chem., 42, No. 11, 2009
(1978).
W.D. Reed, K.D. Clinkenbeard, and M.D. Lane, "Molecular and catalytic
properties of mitochondrial (ketogenic) 3-hydroxy-3-methylglutaryl
coenzyme A synthase of liver" J. Biol. Chem., 250, No. 8, 3117
(1975).UI: 75133544
L.L. Rokosz, D.A. Boulton, E.A. Butkiewicz, G. Sanyal, M.A. Cueto,
P.A. Lachance, and J.D. Hermes, "Human cytoplasmic
3-hydroxy-3-methylglutaryl coenzyme A synthase: expression,
purification, and characterization of recombinant wild-type and Cys129
mutant enzymes" Arch. Biochem. Biophys., 312, No. 1, 1 (1994).UI:
94304197
W. Huth, R. Jonas, I. Wunderlich, and W. Seubert, "On the mechanism of
ketogenesis and its control. Purification, kinetic mechanism and
regulation of different forms of mitochondrial acetoacetyl-CoA
thiolases from ox liver" Eur. J. Biochem., 59, No. 2, 475 (1975).UI:
76091931
S.A. Kim and L. Copeland, "Acetyl coenzyme A acetyltransferase of
Rhizobium sp. (Cicer) strain CC 1192" Appl. Environ. Microbiol., 63,
No. 9, 3432 (1997).
K. Sasiak and H.C. Rilling "Purification to homogeneity and some
properties of squalene synthetase" Arch. Biochem. Biophys., 260, No.
2, 622 (1988).UI: 88132877
G.F. Barnard and G. Popjak, "Human liver prenyltransferase and its
characterization" Biochim. Biophys. Acta, 661, No. 1, 87 (1981).UI:
82046705
G. Balliano, F. Viola, M. Ceruti, L. Cattel, "Characterization and
partial purification of squalene-2,3-oxide cyclase from Saccharomyces
cerevisiae" Arch. Biochem. Biophys., 293, No. 1, 122 (1992).UI:
92117685
C.C.Y. Chang, C.-Y.G. Lee, E.T. Chang, C.J. Cruz, M.C. Levesque,
T.-Y. Chang "Recombinant acyl-CoA:cholesterol acyltransferase-1
(ACAT-1) purified to essential homogeneity utilizes cholesterol in
mixed micelles or in vesicles in a highly cooperative manner" Journal
of Biological Chemistry, 273(52), 35132 1998.UI: 99074294
S.G. Lundeen and D.C. Savage, "Characterization and purification of
bile salt hydrolase from Lactobacillus sp. strain 100-100" J.
Bacteriol., 172 No. 8, 4171 (1990).UI: 90330517

Vacuum Energy Density, or How Can Nothing Weigh Something?
Recently two different groups have measured the apparent brightness of
supernovae with redshifts near z = 1. Based on these data, the old
idea of a cosmological constant is making a comeback.

Einstein Static Cosmology
Einstein's original cosmological model was a static, homogeneous model
with spherical geometry. The gravitational effect of matter caused an
acceleration in this model which Einstein did not want, since at the
time the Universe was not known to be expanding. Thus Einstein
introduced a cosmological constant into his equations for General
Relativity. This term acts to counteract the gravitational pull of
matter, and so it has been described as an anti-gravity effect.

Why does the cosmological constant behave this way?

This term acts like a vacuum energy density, an idea which has become
quite fashionable in high energy particle physics models since a
vacuum energy density of a specific kind is used in the Higgs
mechanism for spontaneous symmetry breaking. Indeed, the inflationary
scenario for the first picosecond after the Big Bang proposes that a
fairly large vacuum energy density existed during the inflationary
epoch. The vacuum energy density must be associated with a negative
pressure because:

The vacuum energy density must be constant because there is nothing
for it to depend on.
If a piston capping a cylinder of vacuum is pulled out, producing more
vacuum, the vacuum within the cylinder then has more energy which must
have been supplied by a force pulling on the piston.
If the vacuum is trying to pull the piston back into the cylinder, it
must have a negative pressure, since a positive pressure would tend to
push the piston out.
The magnitude of the negative pressure needed for energy conservation
is easily found to be P = -u = -rho*c^2, where P is the pressure, u is
the vacuum energy density, and rho is the equivalent mass density
using E = m*c^2.

But in General Relativity, pressure has weight, which means that the
gravitational acceleration at the edge of a uniform density sphere is
not given by

g = GM/R^2 = (4*pi/3)*G*rho*R

but is rather given by

g = (4*pi/3)*G*(rho + 3P/c^2)*R

Now Einstein wanted a static model, which means that g = 0, but he
also wanted to have some matter, so rho > 0, and thus he needed P < 0.
In fact, by setting

rho(vacuum) = 0.5*rho(matter)

he had a total density of 1.5*rho(matter) and a total pressure of
-0.5*rho(matter)*c^2, since the pressure from ordinary matter is
essentially zero (compared to rho*c^2). Thus rho + 3P/c^2 = 0 and the
gravitational acceleration was zero,

g = (4*pi/3)*G*(rho(matter) - 2*rho(vacuum))*R = 0

allowing a static Universe.
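The cancellation can be verified numerically; a sketch in arbitrary units (the choices G = c = R = rho(matter) = 1 are illustrative):

```python
import math

def gravitational_acceleration(rho_matter, rho_vacuum, G=1.0, R=1.0, c=1.0):
    """g = (4*pi/3) * G * (rho + 3P/c^2) * R, with P = -rho_vacuum * c^2
    and the pressure of ordinary matter neglected, as in the text."""
    rho = rho_matter + rho_vacuum
    P = -rho_vacuum * c ** 2
    return (4 * math.pi / 3) * G * (rho + 3 * P / c ** 2) * R

assert abs(gravitational_acceleration(1.0, 0.5)) < 1e-12  # Einstein's static balance
assert gravitational_acceleration(1.0, 0.0) > 0           # matter alone: deceleration
assert gravitational_acceleration(1.0, 1.0) < 0           # vacuum-dominated: acceleration
```

The same function makes the instability discussed next easy to see: perturbing rho_matter downward (expansion) while holding rho_vacuum fixed flips g negative, driving further expansion.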

Einstein's Greatest Blunder
However, there is a basic flaw in this Einstein static model: it is
unstable - like a pencil balanced on its point. For imagine that the
Universe grew slightly: say by 1 part per million in size. Then the
vacuum energy density stays the same, but the matter energy density
goes down by 3 parts per million. This gives a net negative
gravitational acceleration, which makes the Universe grow even more!
If instead the Universe shrank slightly, one gets a net positive
gravitational acceleration, which makes it shrink more! Any small
deviation gets magnified, and the model is fundamentally flawed.

In addition to this flaw of instability, the static model's premise of
a static Universe was shown by Hubble to be incorrect. This led
Einstein to refer to the cosmological constant as his greatest
blunder, and to drop it from his equations. But it still exists as a
possibility -- a coefficient that should be determined from
observations or fundamental theory.

The Quantum Expectation
The equations of quantum field theory describing interacting particles
and anti-particles of mass M are very hard to solve exactly. With a
large amount of mathematical work it is possible to prove that the
ground state of this system has an energy that is less than infinity.
But there is no obvious reason why the energy of this ground state
should be zero. One expects roughly one particle in every volume equal
to the cube of the particle's Compton wavelength, which gives a vacuum
density of

rho(vacuum) = M^4*c^3/h^3 = 10^13 [M/proton mass]^4 gm/cc

For the highest reasonable elementary particle mass, the Planck mass
of 20 micrograms, this density is more than 10^91 gm/cc. So there must
be a suppression mechanism at work now that reduces the vacuum energy
density by at least 120 orders of magnitude.
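The estimate can be reproduced to order of magnitude with rounded constants. A sketch only: whether one uses h or h-bar, and where factors of 2*pi land, shifts the proton-mass figure by a couple of orders of magnitude, so only the exponents are meaningful here.

```python
import math

c = 2.998e8           # speed of light, m/s
h = 6.626e-34         # Planck constant, J*s
m_proton = 1.673e-27  # kg
m_planck = 2.18e-8    # kg (about 20 micrograms, as stated above)

def vacuum_density_gcc(mass_kg):
    """One particle of mass M per Compton-wavelength-cubed volume:
    rho = M^4 * c^3 / h^3, converted from kg/m^3 to g/cc."""
    return mass_kg ** 4 * c ** 3 / h ** 3 * 1e-3

rho_proton = vacuum_density_gcc(m_proton)  # within a few orders of the quoted 10^13
rho_planck = vacuum_density_gcc(m_planck)  # "more than 10^91 gm/cc"
assert rho_planck > 1e91

# Suppression needed to reach today's critical density (~1e-29 g/cc):
orders = math.log10(rho_planck / 1e-29)
assert orders > 119  # "at least 120 orders of magnitude"
```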
A Bayesian Argument
We don't know what this mechanism is, but it seems reasonable that
suppression by 122 orders of magnitude, which would make the effect of
the vacuum energy density on the Universe negligible, is just as
probable as suppression by 120 orders of magnitude. And 124, 126, 128
etc. orders of magnitude should all be just as probable as well, and
all give a negligible effect on the Universe. On the other hand
suppressions by 118, 116, 114, etc. orders of magnitude are ruled out
by the data. Unless there are data to rule out suppression factors of
122, 124, etc. orders of magnitude then the most probable value of the
vacuum energy density is zero.

The Dicke Coincidence Argument
If the supernova data and the CMB data are correct, then the vacuum
density is about 75% of the total density now. But at redshift z=2,
which occurred 11 Gyr ago for this model if Ho = 65, the vacuum energy
density was only 10% of the total density. And 11 Gyr in the future
the vacuum density will be 96% of the total density. Why are we alive
coincidentally at the time when the vacuum density is in the middle of
its fairly rapid transition from a negligible fraction to the dominant
fraction of the total density? If, on the other hand, the vacuum
energy density is zero, then it is always 0% of the total density and
the current epoch is not special.
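The percentages in this paragraph follow from the fact that the matter density scales as (1+z)^3 while the vacuum density stays constant. A small sketch, using the OmegaM = 0.25, lambda = 0.75 values quoted later on this page:

```python
# Vacuum fraction of the total density as a function of redshift,
# for a flat model with Omega_M = 0.25, lambda = 0.75.
def vacuum_fraction(z, omega_m=0.25, omega_lambda=0.75):
    """Matter density scales as (1+z)^3; the vacuum density is constant."""
    matter = omega_m * (1 + z)**3
    return omega_lambda / (matter + omega_lambda)

f_now = vacuum_fraction(0)  # 0.75 today
f_z2 = vacuum_fraction(2)   # 0.10 at z = 2, as the text states
```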

What about Inflation?
During the inflationary epoch, the vacuum energy density was large:
around 10^71 gm/cc. So in the inflationary scenario the vacuum energy
density was once large, and then was suppressed by a large factor. So
non-zero vacuum energy densities are certainly possible.

Observational Limits
Solar System
One way to look for a vacuum energy density is to study the orbits of
particles moving in the gravitational field of known masses. Since we
are looking for a constant density, its effect will be greater in a
large volume system. The Solar System is the largest system where we
really know what the masses are, and we can check for the presence of
a vacuum energy density by a careful test of Kepler's Third Law: that
the period squared is proportional to the distance from the Sun cubed.
The centripetal acceleration of a particle moving around a circle of
radius R with period P is

a = R*(2*pi/P)^2

which has to be equal to the gravitational acceleration worked out
above:
a = R*(2*pi/P)^2 = g = GM(Sun)/R^2 - (8*pi/3)*G*rho(vacuum)*R

If rho(vacuum) = 0 then we get
(4*pi^2/GM)*R^3 = P^2

which is Kepler's Third Law. But if the vacuum density is not zero,
then one gets a fractional change in period of
dP/P = (4*pi/3)*R^3*rho(vacuum)/M(Sun) = rho(vacuum)/rho(bar)

where the average density inside radius R is rho(bar) =
M/((4*pi/3)*R^3). This can only be checked for planets where we have an
independent measurement of the distance from the Sun. The Voyager
spacecraft allowed very precise distances to Uranus and Neptune to be
determined, and Anderson et al. (1995, ApJ, 448, 885) found that dP/P
= (1+/-1) parts per million at Neptune's distance from the Sun. This
gives us a Solar System limit of
rho(vacuum) = (5+/-5)*10^-18 < 2*10^-17 gm/cc
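That limit can be reproduced from dP/P = rho(vacuum)/rho(bar) with a few standard constants; Neptune's distance of about 30.1 AU is an assumed round value, not from the original text:

```python
# Translate the Anderson et al. dP/P = (1 +/- 1) ppm at Neptune into a
# vacuum-density limit via dP/P = rho(vacuum)/rho(bar).  CGS units.
import math

M_sun = 1.989e33   # g
AU = 1.496e13      # cm
R = 30.1 * AU      # Neptune's distance from the Sun (assumed round value)

rho_bar = M_sun / ((4 * math.pi / 3) * R**3)  # mean density inside R
rho_vac_limit = 1e-6 * rho_bar                # dP/P = 1 ppm gives ~5e-18 g/cc
```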


The cosmological constant will also cause a precession of the
perihelion of a planet. Cardona and Tejeiro (1998, ApJ, 493, 52)
claimed that this effect could set limits on the vacuum density only
ten or so times higher than the critical density, but their
calculation appears to be off by a factor of 3 trillion. The correct
advance of the perihelion is 3*rho(vacuum)/rho(bar) cycles per orbit.
Because the ranging data to the Viking landers on Mars is so precise,
a very good limit on the vacuum density is obtained:

rho(vacuum) < 2*10^-19 gm/cc


Milky Way Galaxy
In larger systems we cannot make part per million verifications of the
standard model. In the case of the Sun's orbit around the Milky Way,
we only say that the vacuum energy density is less than half of the
average matter density in a sphere centered at the Galactic Center
that extends out to the Sun's distance from the center. If the vacuum
energy density were more than this, there would be no centripetal
acceleration of the Sun toward the Galactic Center. But we compute the
average matter density assuming that the vacuum energy density is
zero, so to be conservative I will drop the "half" and just say

rho(vacuum) < (3/(4*pi*G))*(v/R)^2 = 3*10^-24 gm/cc

for a circular velocity v = 220 km/sec and a distance R = 8.5 kpc.
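Plugging those numbers into the formula, with standard CGS values for G and the kiloparsec:

```python
# Galactic limit on the vacuum density: rho < (3/(4*pi*G)) * (v/R)^2,
# for the Sun's orbit with v = 220 km/s and R = 8.5 kpc.  CGS units.
import math

G = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
kpc = 3.086e21  # cm
v = 220e5       # 220 km/s in cm/s
R = 8.5 * kpc

rho_limit = (3 / (4 * math.pi * G)) * (v / R)**2  # ~3e-24 g/cc
```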

Large Scale Geometry of the Universe
The best limit on the vacuum energy density comes from the largest
possible system: the Universe as a whole. The vacuum energy density
leads to an accelerating expansion of the Universe. If the vacuum
energy density is greater than the critical density, then the Universe
will not have gone through a very hot dense phase when the scale
factor was zero (the Big Bang). We know the Universe went through a
hot dense phase because of the light element abundances and the
properties of the cosmic microwave background. These require that the
Universe was at least a billion times smaller in the past than it is
now, and this limits the vacuum energy density to

rho(vacuum) < rho(critical) = 8*10^-30 gm/cc
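The quoted critical density follows from rho(critical) = 3*H0^2/(8*pi*G), using the Ho = 65 km/sec/Mpc adopted earlier on this page:

```python
# Critical density rho(critical) = 3 * H0^2 / (8 * pi * G), CGS units,
# for H0 = 65 km/s/Mpc.
import math

G = 6.674e-8    # cm^3 g^-1 s^-2
Mpc = 3.086e24  # cm
H0 = 65e5 / Mpc # Hubble constant in s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)  # ~8e-30 g/cc
```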

The recent supernova results suggest that the vacuum energy density is
close to this limit: rho(vacuum) = 0.75*rho(critical) = 6*10^-30 gm/cc.
The ratio of rho(vacuum) to rho(critical) is called lambda. This
expresses the vacuum energy density on the same scale used by the
density parameter Omega. Thus the supernova data suggest that lambda =
0.75. If we use OmegaM to denote the ratio of ordinary matter density
to critical density, then the Universe is open if OmegaM + lambda is
less than one, closed if it is greater than one, and flat if it is
exactly one. If lambda is greater than zero, then the Universe will
expand forever unless the matter density OmegaM is much larger than
current observations suggest. For lambda greater than zero, even a
closed Universe can expand forever.



The figure above shows the regions in the (OmegaM, lambda) plane that
are suggested by the current data. The green region in the upper left
is ruled out because there would not be a Big Bang in this region,
leaving the CMB spectrum unexplained. The red and green ellipses with
yellow overlap region show the LBL team's allowed parameters (red) and
the Hi-Z SN Team's allowed parameters (green). The blue wedge shows
the parameter space region that gives the observed Doppler peak
position in the angular power spectrum of the CMB. The purple region
is consistent with the CMB Doppler peak position and the supernova
data. The big pink ellipse shows the possible systematic errors in the
supernova data.


The figure above shows the scale factor as a function of time for
several different models. The colors of the curves are keyed to the
colors of the circular dots in the (OmegaM, lambda) plane Figure. The
purple curve is for the favored OmegaM = 0.25, lambda = 0.75 model.
The blue curve is the Steady State model, which has lambda = 1 but no
Big Bang.

Because the time to reach a given redshift is larger in the OmegaM =
0.25, lambda = 0.75 model than in the OmegaM = 1 model, the angular
size distance and luminosity distance are larger in the lambda model,
as shown in the space-time diagram below:




The OmegaM = 1 model is on the left, the OmegaM = 0.25, lambda = 0.75
model is on the right. The green line across each space-time diagram
shows the time when the redshift was z = 1, which corresponds
approximately to the most distant of the supernovae observed to date.
Using a ruler you can see that the angular size distance to z = 1 is
1.36 times larger in the right hand diagram, which makes the observed
supernovae 1.84 times fainter (0.66 magnitudes fainter).
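A quick check of that arithmetic, using inverse-square dimming and the standard magnitude definition:

```python
# A luminosity distance larger by a factor of 1.36 dims a source by
# 1.36^2 ~ 1.85 in flux, i.e. 2.5*log10(1.36^2) ~ 0.67 magnitudes.
import math

ratio = 1.36
flux_factor = ratio**2             # how many times fainter
dm = 2.5 * math.log10(flux_factor) # magnitude difference
```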

Conclusion
In the past, we have had only upper limits on the vacuum density and
philosophical arguments based on the Dicke coincidence problem and
Bayesian statistics that suggested that the most likely value of the
vacuum density was zero. Now we have the supernova data that suggests
that the vacuum energy density is greater than zero. This result is
very important if true. We need to confirm it using other techniques,
such as the MAP satellite which will observe the anisotropy of the
cosmic microwave background with angular resolution and sensitivity
that are sufficient to measure the vacuum energy density.



© 1998-2002 Edward L. Wright. Last modified 2-Nov-2002
-----------------------------------------------------




Cosmic radio signals can be polarized at 91 MHz (FM), 160 MHz (VHF),
and 610 MHz (channel 78 UHF TV) by keying a CB microphone over a radio
receiver set on these radio channels with your home equipment.
These are known cosmic radio sources from outer space, from the
Annual Review of Astronomy and Astrophysics 1966, editor Leo Goldberg.

Facts: a lot of the static snow that you receive on your non-cable UHF
TV is cosmic radio signals; many elements naturally emit radio pulses
when excited.

You can polarize these signals with a CB radio microphone by keying
the transmitting CB microphone over the speaker of a receiving radio
set at 91 MHz (91 FM) or 160 MHz (160 VHF radio), and transmit the
spacey sound you hear to a receiving TV set at channel 78 UHF TV. Then
you will see an ATT type of symbol, and see the oscillations and
fluctuations of the cosmic radio signal that has just been polarized.

Personally I think 160 VHF radio is artificially generated, since it
oscillates oddly. If intelligent life has learned to generate radio
signals within our galaxy, the odds are in our favor that we are
receiving them, as they receive our signals from 50+ years ago; that
energy is bouncing off our heads now.

I have received a strange CBS eye symbol on my TV after I did this. It
wasn't CBS's, though: it was gold on the edge, with a green center eye
and purple round about the eye. But before this happened, a circular
rainbow image formed; then out of the cloud the CBS eye appeared. The
eye looked like a lizard's eye. Real spooky.

I called CBS in NY, and they don't know why they picked that symbol.

After some research: there was a "CBS" electron gun made for TV
picture tubes in the 1950s, so maybe back in the early days of
television, TV engineers must have received this same signal.

God Bless You

Br Dan Izzo
512 Onondaga Ace
Syracuse, NY 13207

1-315-472-5088

