15th Annual Conference of the NPA, April 2008, University of New Mexico, Albuquerque, NM

The Problem with Theoretical Physics
By Roger A. Rydin
Associate Professor Emeritus of Nuclear Engineering, University of Virginia

Abstract: The Editor of the Australian magazine Cosmos asked, “Is it time to call a spade a spade, and admit that theoretical physics is heading down the wrong track?” That issue included a feature article on inflation and string theory, along with others on Stephen Hawking, the Standard Model of particle physics, and dark matter and energy. All pointed to various failures of accepted theory to adequately predict experimental results, even after years of trying. This paper discusses what nuclear engineer Roger Rydin and some of his associates in the Natural Philosophy Alliance (NPA) have to say about critical cosmological data that is being ignored, about obvious errors in relativity theory, and about new particle and gravitational theories that might point the way to new directions in physics. All of these NPA authors point to theoretical failures in the Big Bang model, to unwarranted corrections made to experimental cosmological data to make it fit the Big Bang model, and to conceptual errors in particle physics brought about by including relativistic theories and corrections that do not adequately explain all the experimental data. Many of these new ideas represent simpler theoretical models that explain more experimental data, and hence meet the essence of Occam’s Razor as areas worth developing.

Introduction

A recent issue of Cosmos [1] discussed the apparent failure of string theory, after more than 20 years of development, to elucidate how the universe may have begun, to begin to unify the four forces, or to reconcile the Standard Model with Quantum Physics. The data on galaxy rotation that call for the existence of dark matter, and the data on Type Ia supernova thermonuclear explosions that lead to the idea of dark energy, should make us wonder why 95% of all the necessary matter in the universe is as yet undetected. Perhaps we need to examine experimental data and theories that disagree with what has become physics dogma in order to see where and when we may have taken a wrong turn.

One basic problem is that a consensus on the Big Bang model was reached in approximately 1970. However, astronomers have been taking more and more data over the past 40 years using better and more sophisticated data gathering and analysis techniques, and the new data raises questions about whether consensus was reached too soon. We need to look at this data critically instead of trying to rationalize it away.

This paper treats four specific areas: 1) General Relativity problems; 2) Standard Model problems; 3) Gravity and related Dark Matter and Energy problems; and 4) Deep Redshift Survey implications.


General Relativity

At the May 2007 meeting of the NPA at the University of Connecticut, Robert Heaston [2] gave a paper analyzing the history of Einstein’s development of General Relativity. He used correspondence and other sources to pinpoint when each of the equations was developed over a ten-year period. He discovered that Einstein made a conceptual error when he replaced actual dimensions by setting the speed of light c equal to unity (others also set Planck’s constant h, Boltzmann’s constant k, and the gravitational constant G equal to unity). The effect of this substitution was the creation of a singularity, which has subsequently been used for the Big Bang beginning and for black hole analyses.

Heaston defines a parameter Eta, η = GM/(Rc²), which measures how much mass M occupies a region of radius R and thus characterizes the size of relativistic effects on a body. Eta = infinity corresponds to a singularity. But if the dimensions are retained in General Relativity, Heaston shows that Eta has a limit of 1.0, at which point mass must simultaneously convert to energy. On this scale, Eta ~ 0.3 corresponds to a neutron star or pulsar, and Eta ~ 0.5 corresponds to a black hole. There is some experimental evidence for quark stars with measured densities greater than a pulsar’s, which would occupy the range in between.
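To give a feel for the scale of Eta, the short Python sketch below evaluates GM/(Rc²) for a few objects; the masses and radii are rough textbook values chosen for illustration and are not taken from Heaston’s paper.

```python
# Rough scale of eta = GM/(R c^2) for a few objects.
# Masses and radii below are approximate illustrative values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def eta(mass_kg, radius_m):
    """Heaston's dimensionless parameter eta = GM/(R c^2)."""
    return G * mass_kg / (radius_m * c**2)

objects = {
    "Sun":                     (1.0 * M_SUN, 6.96e8),   # ~2e-6
    "neutron star (1.4 Msun)":  (1.4 * M_SUN, 1.0e4),    # roughly 0.2-0.3
    # At the Schwarzschild radius R = 2GM/c^2, eta is exactly 0.5.
    "black hole (10 Msun)":    (10 * M_SUN, 2 * G * 10 * M_SUN / c**2),
}

for name, (m, r) in objects.items():
    print(f"{name}: eta = {eta(m, r):.3g}")
```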

Peter Erickson [3], in his new book, makes a persuasive argument that there is no such thing as space-time, but rather that space is absolute, where motion can curve in space, and time is the present instant moving on to the next. Erickson claims that Euclid was right in his definition of a straight line and its meaning, and that mathematicians have corrupted this with set theory, manifolds, spaces, and other constructs. Constantin Antonopoulos [4] makes a corresponding case that there is a logical inconsistency in the ideas of time “beginning” and of space “expanding”. Perhaps Heaston should turn his attention to the left-hand side of Einstein’s General Relativity equations to see if the tensor relations of space-time make sense.

The theoretical consequences of Heaston’s discovery of a size limit for a mass are significant. If there is no singularity, then the theory of inflation from a singularity, string theory as a way to make matter, and the Big Bang itself are in trouble. If there is no singularity, then black holes are finite, with all the mass located within the event horizon where light does not escape. Remember, even though light is trapped, gravity does escape, so light (which obeys Maxwell’s equations) is not analogous to gravity, and gravity may very well propagate much faster than the speed of light, as claimed by Tom Van Flandern [5], who says this is apparent from celestial mechanics measurements. If there is no singularity, then Hawking’s miniature black holes are a mathematical illusion.

Standard Model of Particle Physics

Roland Dishington [6] discusses problems with classical electromagnetic theory caused by the assumption of a point particle. He says, “Almost all the shortcomings of classical E&M can be traced to a breakdown in its handling of particle structure. The combination of Dirac's equations and Maxwell's equations to form quantum electrodynamics in the Standard Model has exactly the same problems. For about 102 years, these problems have been left uncorrected. The rush to quantum mechanics brought serious work on them to a crawl. There were three things classical E&M did wrong: 1) It used point particles; 2) It defined electric and magnetic energy incorrectly (and often attributed a particle's kinetic energy to its magnetic field); and 3) The Poynting theorem failed to correctly describe energy flow except for radiation.” Dishington uses a simple finite spherical potential distribution to correct these difficulties, and points out that the static potential φ and the vector potential A should be considered to be the fundamental electromagnetic fields. This derivation seems to correspond to Francisco Muller’s measured magnetically shielded motor/generator results [7].

Stephan Gift has analyzed the Standard Model and concludes that there are too many independent elementary fermions [8, 9]. He introduces a new gauge particle called the fluon, using the singlet state, which promotes a flavor change. His model requires that non-leptonic strange-particle decay occur via the new gauge particle and not by the weak interaction, and it provides an interesting explanation for the longstanding CP violation problem.

Specifically, Gift claims that the three classes of quarks may be fluon-excited states of each other (charm and top are excited states of up; strange and bottom are heavier carbon copies of down), which might explain a broad range of poorly understood non-leptonic interaction phenomena. The same would hold for the leptons, which the model proposes are not elementary but are made up of quarks. The new particle accelerators that produce collisions may simply provide the energy to raise these particles to the next excited state, and not prove that they are new independent particles.

A consequence of Gift’s model is that the weak interaction conserves flavor, as it does color. He reduces the number of elementary particles from 60 to 22 with his theory, and in so doing explains a number of experimental results that are not explained by the Standard Model.

At a previous meeting of the NPA at the University of Tulsa in 2005, Charles Lucas presented a new model of all the elementary particles, based on a three-level intertwining charged-fiber torus configuration, plus electromagnetic equations and combinatorial geometry [10]. Granted, the Standard Model has had many successes, but it does not explain everything well. The new Lucas model [11] for the first time predicts all of the magic numbers for closed, tightly bound neutron and proton shells in nuclei (see Table 1). It quantitatively predicts all of the spins and nuclear magnetic moments for the isotopes, whereas the standard nuclear quantum shell model gives errors of 20 to 40% versus experiment. There is a configuration for every elementary sub-nuclear particle, and these configurations give all of the decomposition products in the same charged-fiber notation [12]. Finally, the charged-fiber representations of the families of particles like neutrinos show a basic structure to which added fibers from excitation make up the next member in the family. This supports Gift’s contention about excited states instead of independent particles.


Table 1. Lucas’ Magic Number Model for the Nucleus

(Shell ring counts: 2, 8, 18, 18, 32, 50)

Magic Number (Total)    Rings in Filled Shells
          2             2
          8             8
         20             2 + 18
         28             2 + 8 + 18
         50             18 + 32
         82             32 + 50
        126             8 + 18 + 18 + 32 + 50

Lucas applies a similar electric and magnetic field model for atoms, based upon spherical symmetry subject to magnetic dipole forces between the electrons, and predicts the Periodic Table as shown in Table 2. The same model explains the geometric configurations of complicated molecules [11].

Table 2. Lucas’ Periodic Table of the Atoms

Note that the Lanthanide Series begins when the N shell fills and the Actinide Series begins when the O shell fills.

Atomic #Z    Electron Shells (K  L  M  N  O  P  Q)
      2      2
     10      2  8
     18      2  8  8
     36      2  8 18  8
     54      2  8 18 18  8
     86      2  8 18 32 18  8
    118      2  8 18 32 32 18  8
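As a quick arithmetic check on the two tables above, the short Python sketch below verifies that each listed group of shell occupancies sums to the stated magic number (Table 1) or noble-gas atomic number (Table 2); the groupings are simply transcribed from the tables as reconstructed here, not taken from an independent source.

```python
# Check that the shell occupancies in Tables 1 and 2 sum to the stated totals.

# Table 1: nuclear magic numbers and the rings in their filled shells.
magic_shells = {
    2:   [2],
    8:   [8],
    20:  [2, 18],
    28:  [2, 8, 18],
    50:  [18, 32],
    82:  [32, 50],
    126: [8, 18, 18, 32, 50],
}

# Table 2: noble-gas atomic numbers and electron shell occupancies (K..Q).
noble_gas_shells = {
    2:   [2],
    10:  [2, 8],
    18:  [2, 8, 8],
    36:  [2, 8, 18, 8],
    54:  [2, 8, 18, 18, 8],
    86:  [2, 8, 18, 32, 18, 8],
    118: [2, 8, 18, 32, 32, 18, 8],
}

for table in (magic_shells, noble_gas_shells):
    for total, shells in table.items():
        assert sum(shells) == total, (total, shells)

print("All shell sums match the stated totals.")
```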

The primary conclusion is that the Standard Model has some theoretical problems and limitations that prevent it from predicting a great deal of experimental data accurately. Simpler models using newer concepts are able to predict physical correspondences, like the magic numbers, that have not been predicted before, and to match experimental data more accurately. If neutrinos are truly excited states of one another, then the so-called neutrino oscillation theory, in which one type of neutrino spontaneously changes into another, cannot be true, because energy and momentum would be needed to make an excited state. Hence, this explanation for the Sun’s neutrino deficit would be false.

Gravity and Dark Energy

The justification for the existence of dark matter is to explain the rotation of spiral galaxies, where the outer stars apparently move faster than predicted by Newtonian gravity. The rotation curves have been explained ad hoc by Mordehai Milgrom’s MOND model [13]: if gravity saturates at a low value and then varies as 1/r, the rotation curves can be matched. However, there was no theory to back this assertion.
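To see why a 1/r acceleration law flattens rotation curves, note that for a circular orbit v² = a·r; Newtonian gravity (a ∝ 1/r²) then gives v ∝ 1/√r, whereas an acceleration falling only as 1/r gives a constant v. The Python sketch below illustrates this with an arbitrary point mass and an arbitrary saturation radius; it is only a schematic of the 1/r idea described above, not Milgrom’s actual MOND formula or Cahill’s fit.

```python
import numpy as np

# Circular-orbit speed from v^2 = a(r) * r, for two toy acceleration laws.
G = 6.674e-11
M = 1e41          # illustrative galaxy-scale mass, kg (assumed value)
KPC = 3.086e19    # one kiloparsec in metres

def v_newton(r):
    """Newtonian: a = GM/r^2  ->  v = sqrt(GM/r), which falls with radius."""
    return np.sqrt(G * M / r)

def v_one_over_r(r, r_sat=5 * KPC):
    """Toy law: Newtonian inside r_sat, then a ~ 1/r outside -> flat v."""
    a = np.where(r < r_sat, G * M / r**2, G * M / (r_sat * r))
    return np.sqrt(a * r)

for r_kpc in (2, 5, 10, 20, 40):
    r = r_kpc * KPC
    print(f"r = {r_kpc:>3} kpc: v_Newton = {v_newton(r)/1e3:6.1f} km/s, "
          f"v_1/r = {v_one_over_r(r)/1e3:6.1f} km/s")
```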

Reginald Cahill has published a new theory of gravity [14], in which Newtonian gravity acquires an extra term in some situations that is proportional to the fine structure constant,

α = e²/(ħc) ≈ 1/137.036.

The theory is supported by borehole gravity data measured down a few kilometers into the Earth, and by measured ratios of the masses of about 20 galaxies to the masses of their central black holes. Using this theory, Cahill gets an almost perfect match to the measured rotation curve of a spiral galaxy. Since the fine structure constant contains Planck’s constant, this also gives gravity a quantum aspect that may explain the almost instant propagation of gravity.
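For reference, a minimal Python check of the numerical value is shown below, using the SI form α = e²/(4πε₀ħc), which is equivalent to the Gaussian-units expression e²/(ħc) quoted above; the constants are standard approximate values.

```python
import math

# Fine structure constant in SI form: alpha = e^2 / (4*pi*eps0*hbar*c).
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J s
c    = 2.99792458e8         # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = {alpha:.9f}")     # ~0.007297353
print(f"1/alpha = {1/alpha:.3f}")   # ~137.036
```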

[Figure omitted; from Cahill [14].]

The justification for dark energy is about 100 supernova Type Ia bursts, whose magnitudes fall uniformly above the Hubble redshift-brightness curve by a small amount. Theories that assume the Big Bang model based on General Relativity is correct then predict that the expansion must be accelerating, and that this must be due to antigravity or dark energy. We should note, however, that Hubble’s Law is a straight-line fit to Cepheid “standard candle” data with a spread of 10% or more [15]. A straight line like Hubble’s Law is the solution to an extremely simple first-order differential equation, dy/dx = constant, which does not contain any physics at all, since there are no dependences on any other parameters. Dark energy is therefore based on the small deviations of a small data set from a redshift-brightness fit that does not contain any real theory of the universe, but assumes that the Big Bang model is correct. Furthermore, Eric Lerner [16] contends that such brightness data has been improperly corrected as a function of redshift, on the assumption that space is expanding according to the Big Bang model.

A consequence of Cahill’s theory is that dark matter does not need to exist at all to explain galaxy rotation. A consequence of the supernova Type Ia data is that dark energy may not exist either, because the deviations from Hubble’s Law are meaningless.

Galaxy Distributions

Finally, we come to sets of astronomical data that have been measured by teams of astronomers for about 20 years, whose implications have been ignored by theorists because they simply can’t explain them using Big Bang theory. These are the deep pencil redshift measurements by Broadhurst, Koo et al. [17]. There are six different traverses done by two different telescopes. One is called the NS traverse, which goes perpendicular to the plane of the Milky Way to a distance of about 5 billion light years in each direction. The number of galaxies in each direction in a small pencil cone a few arc minutes in diameter is simply counted and plotted as a function of redshift. A second set of traverses at 45 degrees to the NS survey is called the SA68 and anti-SA68 directions. The final set is 45 degrees to the NS set and 90 degrees to the SA68 set, called the Hercules and anti-Hercules directions. The data have not been corrected for cone spread, where the volume sampled increases with distance as r².
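The cone-spread correction mentioned above is straightforward geometry: a pencil beam of solid angle Ω samples a volume of roughly Ω·r²·Δr in a distance bin of width Δr centred at r, so raw counts must be divided by that volume to become densities. A minimal Python sketch of the arithmetic follows, using an assumed beam width and made-up counts purely for illustration.

```python
import numpy as np

# Convert raw pencil-beam galaxy counts per distance bin into number densities
# by dividing by the sampled shell volume, Omega * r^2 * dr.
ARCMIN = np.radians(1.0 / 60.0)
beam_diameter = 3 * ARCMIN                      # assumed pencil-beam width
omega = np.pi * (beam_diameter / 2) ** 2        # solid angle of the cone, sr

dr = 100.0                                      # bin width, Mly
r_centers = np.arange(50.0, 1000.0, dr)         # bin centres, Mly
raw_counts = np.random.poisson(5, size=r_centers.size)  # made-up counts

shell_volumes = omega * r_centers**2 * dr       # Mly^3 sampled per bin
density = raw_counts / shell_volumes            # galaxies per Mly^3

for r, n, rho in zip(r_centers[:5], raw_counts[:5], density[:5]):
    print(f"r = {r:5.0f} Mly: counts = {n:2d}, density = {rho:.3e} / Mly^3")
```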

[Figure omitted; pencil-beam survey counts versus redshift, from Koo et al. [17].]


The amazing thing about all this data is that it is periodic, with a period of about 400 million light years. It is also damped with distance on the uncorrected scale, and very damped when corrected for cone spread. Furthermore, the data in all six directions have the same period. In fact, there is a great deal of astronomical data showing the same periodicity, including periodicity of voids [18]. Roger Rydin has digitized some of this data and performed auto- and cross-correlations that were reported at an NPA meeting held at the University of Connecticut in 2003 [19]. The net result is that all of the data are highly correlated, meaning they have a common cause. Furthermore, such data can only represent a spherically symmetric distribution with a center no more than 70 or so million light years from Earth in the direction of Virgo. The Virgo supercluster contains the only six galaxies with measured blueshifts. This region also seems to be the geometric origin of the Cosmic Microwave Background, and as described by Lawrence Krauss [20], there is a related problem for Big Bang cosmology in explaining the Dipole Anomaly with its quadrupole and octupole moments, and in explaining large-scale drifts of superclusters.
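In outline, the auto-correlation analysis referred to above is a standard signal-processing step: bin the counts by distance, subtract the mean, correlate the series with itself over a range of lags, and look for a peak at a nonzero lag, whose position estimates the repeating spacing. The Python sketch below illustrates the generic procedure on synthetic counts with a built-in 400 million light year period; it is not a reproduction of the actual analysis in [19].

```python
import numpy as np

# Generic auto-correlation of a binned galaxy-count series, to locate a
# repeating spacing.  The input here is synthetic: a 400 Mly periodicity
# plus Poisson noise, purely to illustrate the procedure.
dr = 50.0                                   # bin width, Mly
r = np.arange(0.0, 5000.0, dr)              # distance bins, Mly
expected = 5.0 * (1.0 + 0.8 * np.cos(2 * np.pi * r / 400.0))
counts = np.random.poisson(expected)

x = counts - counts.mean()                  # remove the mean before correlating
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]                               # normalize so acf[0] == 1

# The first local maximum at a nonzero lag estimates the period.
peaks = [i for i in range(1, acf.size - 1) if acf[i - 1] < acf[i] > acf[i + 1]]
if peaks:
    print(f"Estimated period ~ {peaks[0] * dr:.0f} Mly")
```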

Mathematical analysis of the Sloan SDSS galaxy survey, presented by Rydin in Tulsa in 2006 [21], indicates that when the 3D data are plotted in a 2D pie diagram, the density of galaxies decreases approximately as 1/r² from Earth. Annular regions of high density are visible at an approximate spacing of about 400 million light years, where the first region north is called the Great Wall. If galaxy counts can be taken as a surrogate for the density of the universe, then the density is definitely not constant but decreases from Earth.

[Figure omitted; Sloan SDSS survey pie diagram with circles sketched in.]


The consequence for the Big Bang is that this data falsifies it. Einstein’s General Relativity model assumes that the universe is uniform and isotropic, and that it expands uniformly. The data indicate that the universe is non-uniform and that the expansion is spherical from an origin. Therefore, General Relativity does not apply to this problem. Periodicity is not predicted at all, but some theorists like Kirilova and Chizhov [18] are working with baryogenesis models based on General Relativity that might produce periodicity, although in 3D, corresponding to Voronoi foam-like soap bubbles.

Conclusions

Information and theories developed by NPA members indicate that theoretical physics indeed made a wrong turn 100 years ago, and that error has been compounded by ignoring new theories and failing to heed experimental warning signs that there were serious problems with existing theories. Instead of waiting another 20 years for new experiments to hopefully confirm aspects of Big Bang theory, find dark matter and dark energy, and finish the Standard Model by unifying the forces and finding the Higgs boson, we should critically examine the data we now have in abundance. This data may tell us where we made wrong turns, and lead us in new directions. In order to do this, some closed minds have to open up and admit that all is not well in theoretical physics. Data has to be explained and not simply rationalized. And new theories like some of the ones discussed above need to be considered as replacements for the old ones that do not work very well after years of trying.

Roger Rydin is a graduate of MIT and Associate Professor Emeritus of Nuclear Engineering at the University of Virginia.

References

1) Tim Dean, Editor, “Untying the Strings”, Cosmos, Issue 16, p. 8, July 2007.

2) Robert J. Heaston, “Reconstruction of the Derivation of the Einstein Field Equations of General Relativity”, Proceedings of the 14th Natural Philosophy Alliance International Conference, May 21 - 25, 2007, at University of Connecticut, Storrs, CT, to be published, 2007.

3) Peter F. Erickson, Absolute Space, Absolute Time, & Absolute Motion, Xlibris Corporation, 2006.

4) Constantin Antonopoulos, “The Expanding Universe: A Treatise on How Something Can Be Larger Than It Is”, Proceedings of the 13th Natural Philosophy Alliance International Conference, April 3 - 7, 2006, at University of Tulsa, Tulsa, OK, Vol. 3, No. 1, pp. 1 – 8, 2006.

5) Tom Van Flandern, “Does the Universe Have a Speed Limit?, Part 1”, Meta Research Bulletin, Vol. 14, No. 4, pp. 49 - 62, December 15, 2005.

6) Roland H. Dishington, “Advances in Main Line Electromagnetism”, http://www.lafn.org/~bd261/, 2007.


7) Francisco J. Muller, “Electromagnetic Induction and Motor Action Across Field-Free Regions”, Proceedings of the 13th Natural Philosophy Alliance International Conference, Vol. 3, No. 2, April 3 - 7, 2006, at University of Tulsa, Tulsa, OK, 2006.

8) Stephan J. G. Gift, “Model of Fundamental Particles I, Quarks”, Physics Essays, Vol. 17, No. 1, pp. 3 – 13, 2004.

9) Stephan J. G. Gift, “Model of Fundamental Particles II, Leptons”, Physics Essays, Vol. 17, No. 2, pp. 117 – 132, 2004.

10) Charles W. Lucas, Jr., “A Classical Electromagnetic Theory of Elementary Particles – Part 1, Introduction”, Foundations of Science, Vol. 7, No. 4, pp. 1 – 8, November 2004.

11) Charles W. Lucas, Jr., “A Classical Electromagnetic Theory of Everything”, Proceedings of the 13th Natural Philosophy Alliance International Conference, April 3 - 7, 2006, at University of Tulsa, Tulsa, OK, Vol. 3, No. 1, pp. 142 – 158, 2006.

12) Charles W. Lucas, Jr., “A Classical Electromagnetic Theory of Elementary Particles – Part 2, Intertwining Charge Fibers”, Foundations of Science, Vol. 8, No. 2, pp. 1 – 16, May 2005.

13) Adam Frank, “Gravity’s Gadfly, Mordehai Milgrom”, Discover, pp. 32 – 37, August 2006.

14) Reginald T. Cahill, “Black Holes in Elliptical and Spiral Galaxies and in Globular Clusters”, Progress in Physics, Vol. 3, pp. 51 – 56, October 2005.

15) Roger A. Rydin, “A Case Against Tired Light and the Big Bang”, Proceedings of the 14th Natural Philosophy Alliance International Conference, May 21 - 25, 2007, at University of Connecticut, Storrs, CT, to be published, 2007.

16) Eric J. Lerner, “Evidence for a Non-Expanding Universe: Surface Brightness Data From HUDF”, http://www.bigbangneverhappened.org/, First Crisis in Cosmology Conference CCC-I, Monção, Portugal, June 23 - 25, 2005.

17) David C. Koo, N. Ellman, R.G. Kron, J.A. Munn, A.S. Szalay, T.J. Broadhurst, and R.S. Ellis, "Deep Pencil-Beam Redshift Surveys as Probes of Large Scale Structures", Astronomical Society of the Pacific, Conference Series, Vol. 51, 1993, and S.R. Majewski, class notes, Department of Astronomy, University of Virginia, March 1996.

18) D.P. Kirilova and M.V. Chizhov, "Large Scale Structure and Baryogenesis", XXIst Moriond Astrophysics Meeting - Les Arcs, March 10-17, 2001.

19) Roger A. Rydin, “Cross Correlation of Deep Redshift Galactic Pencil Survey Data”, Proceedings of the 10th Annual Natural Philosophy Alliance International Conference, June 9 - 13, 2003 at University of Connecticut, Storrs, CT, in Journal of New Energy, Vol. 7, No. 3, pp 139-143, 2003.

20) Lawrence Krauss, Quintessence: The Mystery of Missing Mass in the Universe, Basic Books, New York, NY, Parts II-IV, pp. 82 – 85, 2000.

21) Roger A. Rydin, “Experimental Evidence that the Density of the Universe is Not Constant”, Proceedings of the 13th Natural Philosophy Alliance International Conference, Vol. 3, No. 2, April 3 - 7, 2006, at University of Tulsa, Tulsa, OK, 2006.