
License: CC Attribution-ShareAlike License


- 2. Note to my readers: You can access this essay and my other essays directly instead of through this website, by visiting the Amateur Scientist Essays website at the following URL: https://sites.google.com/site/amateurscientistessays/ You are free to download and share all of my essays without any restrictions, although it would be very nice to credit my work when making direct quotes.
- 3. I'm not a true scientist, but an engineer (retired) by trade. There's an old story about the engineer and the guillotine. During the French Revolution, a priest, a drunkard, and an engineer were condemned to be executed by the guillotine. The priest was the first to be put to death. When the blade was released it almost reached the bottom, but somehow it got stuck a few inches above the priest's neck. This was taken as a divine augury of the priest's innocence, so he was set free. The same thing happened with the drunkard, and he was also released. Then it was the engineer's turn. He asked to be placed on the guillotine facing upward without a blindfold. The engineer's unusual request was granted, and as the executioner raised the blade for the third time, the engineer shouted, “Hey, I think I see what the problem is!” Engineers aren't known for our political survival skills, but we sure know how to troubleshoot a technical problem. I'm quite familiar with “pure” science, having learned higher mathematics, physics, chemistry, and many other scientific subjects while pursuing a couple of engineering degrees. One of my favorite hobbies is poring over as many books and articles on relativity, quantum physics, and cosmology as I can get my hands on, and trying to understand how reality “works.” I call it solving the reality riddle. Despite all the progress that science has made, it got stuck only inches away from solving it – just like the figurative guillotine blade. As an engineer, I think I see what the problem is. Science used to be inextricably linked to philosophy. Isaac Newton, perhaps the greatest scientist and mathematician of all time, spent most of his time working on astrology, alchemy, and Bible prophecy.1 I always considered Albert Einstein to be an even greater philosopher than a scientist. By all accounts, he was just a mediocre mathematician, which undoubtedly hindered his early scientific career.
Settling for a job as a patent clerk, he stole time from his daily chores to tackle relativity. I think Einstein's trouble with mathematics brought out his true genius. His famous thought experiments used images rather than mathematical formulas. Using this technique, his fertile imagination enabled him to see beyond the 19th century paradigm that inhibited many of the great scientific minds of that era who were far more mathematically competent than Einstein. Niels Bohr, the father of the Copenhagen interpretation of quantum physics, was another truly great philosopher/scientist/genius. He and Einstein held very different views on the true nature of reality and they had heated arguments over them, yet they remained close friends throughout their lives. J. Robert Oppenheimer was steeped in math and physics, but he also had a keen interest in philosophy and eastern religion.2 It's too bad there aren't many great philosopher/scientists around today. Newton showed us that equations describe nature, although nobody knows why nature should follow mathematical rules. James Clerk Maxwell gave us beautiful mathematical equations that revealed a fundamental truth about nature that led Einstein to invent special relativity. Beginning in the late 19th century, physics became increasingly dominated by mathematics. Many of today's physicists seem to be primarily mathematicians with scientific leanings. I think that is the crux of the problem. I have nothing against math personally; I used it all the time in my work. I can appreciate the seductive appeal of its abstract beauty, but scientists shouldn't follow the mathematicians blindly, especially when equations lead in the wrong direction or in too many directions at the same time. String theory is currently leading science in some of those directions. I admit I understand nothing about string theory beyond what I've read in the popular literature. 
I've read that string theory is extremely difficult to master, but it has produced the most beautiful mathematics ever. I don't doubt that for an instant, but shouldn't string theorists have come up with at least one testable theory after 30 years of hard work? I'll return to this topic later on.
1 He was interested in knowing when the world would end. He worked out the math, and it told him the End would come in 2060. But he was a rational man who hedged his bets, so he said it might also come after 2060.
2 Oppenheimer could actually read Sanskrit. He said he recalled lines from the Bhagavad Gita while watching the first atomic fireball rise over the desert near Alamogordo, NM. Enrico Fermi was too preoccupied with timing the arrival of the shockwave and calculating the kilotonnage of the blast to think about Hindu scripture. He wasn't much into it anyway.
- 4. Peeling away the layers of the problem, we see that modern physics embraces dual theories: general relativity and quantum mechanics. General relativity is a classical theory in the tradition of Newton and Maxwell, with the underlying premise that every phenomenon in the universe has a cause that is explained by universal laws expressed in equations. It conforms to the 19th century belief that the universe is like a giant machine that works in the same predictable manner as a clock or a steam engine. Everything that has happened, is happening, or ever will happen is completely determined by the initial state of the machine. When Albert Einstein proposed relativity, other scientists who were mired in 19th century orthodoxy thought he was a radical. He wasn't; he pretty much stuck with Newton's classical universe, except that he merged space and time into a space-time continuum. Everything else remained smooth, predictable, and infinitely divisible into smaller parts, just like in Newton's world. I think those underlying premises are completely wrong. Quantum physics agrees with me: everything is jagged, unpredictable, and lumpy. It's almost as if God refuses to do calculus and prefers to count and roll dice instead. Why place money on quantum physics? Because when the classical view opposes the quantum view, the quantum view is always proven right by experiment. More on that a bit later. Now, I don't dispute that most predictions based on relativity have proven to be accurate in the everyday world. Time dilation and E = mc², predicted by special relativity, are proven facts. GPS navigation is only possible by allowing for time dilation and the fact that gravity slows clocks, as predicted by general relativity.
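The GPS claim can be made concrete with a back-of-the-envelope calculation. The sketch below is my own illustration (not from the essay), using textbook first-order approximations for the two competing effects on a GPS satellite clock:

```python
import math

# Physical constants and geometry (SI units)
G = 6.674e-11        # gravitational constant
M = 5.972e24         # mass of Earth, kg
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth's radius, m
r_gps = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

# Orbital speed for a circular orbit
v = math.sqrt(G * M / r_gps)

# Special relativity: the moving satellite clock runs SLOW by ~v^2/(2c^2)
sr_rate = -v**2 / (2 * c**2)

# General relativity: the clock higher in the gravity well runs FAST
gr_rate = G * M / c**2 * (1 / R_earth - 1 / r_gps)

day = 86400
print(f"SR:  {sr_rate * day * 1e6:+.1f} microseconds/day")
print(f"GR:  {gr_rate * day * 1e6:+.1f} microseconds/day")
print(f"Net: {(sr_rate + gr_rate) * day * 1e6:+.1f} microseconds/day")
# Net comes out to roughly +38 microseconds/day; uncorrected, that drift
# would wreck GPS position fixes within minutes.
```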
Other phenomena, such as gravitational lensing of light and the precession of Mercury's orbit, closely match Einstein's own predictions. Cosmologists use general relativity to model the birth and evolution of the universe; however, I have this nagging suspicion that those models aren't quite right. Infinities and violations of causation are some of the problems I see with them. Other cracks are beginning to show in the relativistic edifice as well. Take, for example, the fact that galaxies seem to rotate significantly faster than general relativity predicts. Cosmologists have filled in that crack with dark matter. The problem is that dark matter can't be explained in the current version of the standard model based on quantum field theory. Also, even after adding dark matter to our galaxy, there is another gravitational anomaly much closer to home: the so-called Pioneer anomaly. NASA launched the Pioneer 10 and 11 interplanetary space probes in 1972 and 1973 to explore the outer planets of the solar system. By now, these probes are in deep space well beyond the orbit of Pluto, still moving away from the sun, and still sending radio signals back to earth. Analysis of these signals shows that the rates of deceleration from the sun for both spacecraft exceed the deceleration calculated from the current theory of gravity by about 10⁻⁹ m/s². The Ulysses and Galileo spacecraft show similar anomalies. Can astrophysicists fix this anomaly by adding a halo of dark matter around the sun, or is this another indication that the current theories of gravity need to be revisited and maybe even revised? Then there is the problem with the rate of expansion of the universe. Solutions to the field equations of general relativity produce dynamic universes that expand or contract. Expanding universes will either stop expanding and then start to contract, or they will expand forever; the rate of expansion decreases in either case.
Astronomers have found, however, that the rate of expansion is actually increasing, which contradicts theory. To remove this contradiction without abandoning general relativity, cosmologists invented dark energy, a kind of anti-gravity force that pushes space-time apart. The problem with dark energy is that it confronts quantum physicists with a new form of energy requiring the addition of a new force carrier particle to an already overcrowded table of fundamental particles. Along with dark matter, it means they will have to revamp quantum field theory. I'm not saying they won't pull that off, but it seems to indicate that something is amiss. Finally, there is the problem of time travel. If relativity teaches us anything, it's that the principle of causation is inviolate. In fact, it's a main premise underlying special relativity; otherwise the equations
- 5. are meaningless. Unfortunately, some of the solutions of the general relativity field equations allow backward time travel. The avant-garde mathematician Kurt Gödel created an entire model universe, based on general relativity, that violates causality. Various hypothetical time machines have been constructed on paper by imagining extremely long and massive cylinders that rotate rapidly in 4-dimensional Einsteinian space-time. Now, I'm fairly open-minded when it comes to time travel (it provides wonderfully entertaining plot lines); however, the engineer within me sees a problem when a theory makes predictions that violate one of its own fundamental principles. I think I can see what the problem is: when space and time are combined into a continuum, gravity can twist it to such an extent that space and time are interchanged. This makes for great sci-fi movies, but I'm afraid it creates some serious paradoxes. I suspect backward time travel also violates the second law of thermodynamics (see Appendix D). Maybe liberating time from its space-time prison would avoid these paradoxes. Now let's turn back to the conflict between quantum mechanics and relativity. Over decades, Albert Einstein and Niels Bohr engaged in a friendly sparring match about the true nature of reality. Einstein never denied that quantum effects are real, but he remained true to his classical roots and never accepted Bohr's belief in indeterminacy. Einstein remained ever faithful to his clockwork universe, and assumed that events have hidden causes even when they appear random. He thought a universe that violates causation is absurd, and he kept challenging Bohr with clever and inventive thought experiments that attempted to prove the falsehood of the Copenhagen interpretation. Einstein, along with two Princeton University colleagues, Boris Podolsky and Nathan Rosen, presented one of his most clever thought experiments in a paper published in 1935.
That paper came to be known as the EPR paradox, after the authors' initials. The paper said that quantum mechanics is “incomplete” because without hidden variables it could not satisfactorily explain (at least in the minds of the authors) certain changes that occur simultaneously in two remote systems that are quantum-mechanically entangled. Einstein, Podolsky, and Rosen didn't explain how the hidden variables operate, but they insisted they are the only way to avoid violating causation. Following the publication of the EPR paradox, there was a collective shoulder shrug by quantum physicists. To borrow a phrase from modern software engineers, quantum physics “just works,” so why bother explaining why? They just continued with their research as if nothing had happened. Bohr sort of defended himself by publishing a response to the EPR paper, offering what some might call a hand-waving argument using the Heisenberg uncertainty principle. At any rate, nobody could envision an experiment that would prove whether Einstein or Bohr was right. Until 1964, that is. John Bell, a brilliant Irish mathematician/philosopher/physicist, took a fresh look at the EPR paradox and came up with an elegant, profound, yet simple type of experiment that would show whether the idea of hidden variables holds water. If local hidden variables exist, then results from those experiments would obey statistical inequalities, called Bell's inequalities. If Bell's inequalities are violated, then the idea of hidden variables is false. His proof was published in 1964, but there was no technology available to carry out the kinds of experiments proposed in his paper. Science had to wait for technology to catch up, and Alain Aspect and others were finally able to carry them out in the 1980s.
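The size of the violation Bell predicted can be sketched numerically. This is my own illustration (not from the essay), using the standard quantum-mechanical correlation for two entangled spin-1/2 particles, E(a, b) = -cos(a - b): local hidden variables cap the CHSH combination of four correlations at 2, while quantum mechanics reaches 2*sqrt(2):

```python
import math

# Quantum correlation for spin-1/2 particles in the singlet state,
# measured along directions a and b (angles in radians)
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH measurement angles that maximize the quantum violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~= 2.828, above the local-hidden-variable bound of 2
```

Aspect's experiments measured values close to this quantum prediction, well above the classical bound of 2.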
Their results did violate Bell's inequalities, thus proving EPR wrong and validating Bohr's Copenhagen interpretation.3 In fact, the evidence was so overwhelming that if Einstein were still alive in the 1980s, he surely would have conceded that Bohr was right all along. The point I'm trying to make with this long-winded historical detour is this: if scientists want to merge quantum physics with general relativity, they should build a new theory on quantum principles, and not the other way around. Quantum mechanics really does describe how the universe works, even if it seems counterintuitive or violates classical principles. Quantum physicists should give up trying to duplicate Einstein's field equations from quantum mechanics. By that I mean they should stop using a 4-dimensional space-time continuum as the starting point.
3 This topic is covered in much more detail in Appendix A.
- 6. As an electrical engineer, I totally get why Einstein introduced the 4-dimensional space-time continuum, a.k.a. Minkowski space, in special relativity. It was a handy mathematical device that provided a convenient way to work out the “currency exchange rate” between units of space and units of time. Assigning real numbers to three dimensions of space and imaginary numbers to time, and combining all of these numbers into a single entity called space-time, makes the mathematics clean, elegant, and beautiful. It also makes the important formula E = mc² emerge naturally, which is kind of cool. So it's not hard for me to understand why Einstein fell in love with Minkowski space and why others were seduced by it also. However, it's a mistake to conflate an elegant mathematical technique that works for a special case with reality itself, and then apply that technique to everything. Engineers use similar mathematical devices we know aren't real; we use them simply because they work. Charles Steinmetz was the first person to represent voltages and electrical currents as complex numbers. No sane engineer actually believes that voltages and currents have real and imaginary parts – it's just math. We use Steinmetz's methods because they're easy to work with; the imaginary parts of the complex numbers take care of the phase angles of sine functions in a natural way that just rolls right through the calculations. Doing analysis of AC circuits would be extremely difficult, if not impossible, without using complex numbers. But whenever there's a case that can't be solved that way, I forget the whole idea of complex voltages and currents in a heartbeat and use a different method instead.
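Steinmetz's trick is easy to demonstrate. In this sketch (the component values are arbitrary examples of mine, not from the essay), a sinusoidal source and an R-L impedance become complex numbers, and Ohm's law carries through unchanged:

```python
import cmath
import math

# A 120 V (RMS), 60 Hz source driving a resistor in series with an inductor
f = 60.0
omega = 2 * math.pi * f
R = 10.0      # ohms
L = 0.02      # henries

V = 120 + 0j               # voltage phasor, taken as the reference (angle 0)
Z = R + 1j * omega * L     # complex impedance: resistance plus j*omega*L
I = V / Z                  # Ohm's law works on phasors exactly as on scalars

# The magnitude is the RMS current; the angle is the phase lag behind V
print(f"|I| = {abs(I):.2f} A at {math.degrees(cmath.phase(I)):.1f} degrees")
# About 9.58 A at -37 degrees: the current lags the voltage, as expected
# for an inductive load.
```

The phase angle falls out of the arithmetic automatically; that is the whole appeal of the method.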
The question Einstein et al. posed in the title of their EPR paper was, “Can [a] quantum-mechanical description of physical reality be considered complete?” It would also be fair to ask, “Can a relativistic description of reality using Minkowski space be considered complete?” Quantum mechanics also starts out with that framework. In fact, the famous Feynman diagrams, used to explain particle interactions, show particles zooming along “world lines” in an abbreviated 2-dimensional version of Einstein's space-time continuum. Why do quantum physicists keep using Einstein's model of space with time woven into it, filled with fields that are smooth, continuous, and infinitely divisible into smaller parts? It's very clear that reality is lumpy and discontinuous. You are probably asking how an engineer with only an elementary grasp of general relativity could have the audacity to question its validity, when every renowned physicist for the past 100 years has said it is simply beyond question. Richard Feynman, who was no slouch when it came to creating theories, once said, “It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong.” I'm not saying relativity is wrong exactly; it just isn't complete. It's not obvious to most physicists that general relativity is incomplete, because it works for them most of the time. But Newton's theory of gravity also works in very many cases, and yet everyone agrees it's incomplete. Okay, so when doesn't relativity work? Well, I cited a couple of examples earlier where gravity is stronger over large distances than it ought to be and where the expansion of the universe is a bit quirky. Sure, you can plug dark matter and dark energy into relativity and make it work, but doing that just seems too ad hoc for my taste.
String theory was an attempt by physicists to start over with a clean sheet of paper, realizing there are irreconcilable differences between general relativity and quantum mechanics. Furthermore, although quantum field theory and the standard model make successful predictions, they have too many arbitrary constants that can't be explained from first principles. What was needed was a whole new set of first principles where both quantum mechanics and gravity would emerge together from the theory. This was a very sensible approach, and I applaud it. Unfortunately, I think string theory made some faulty assumptions from the get-go by repeating some of Einstein's mistakes about the nature of time and space. Instead of just three spatial dimensions represented by real numbers with time pointing in the imaginary direction, string theory uses nine spatial dimensions with time pointing in the imaginary direction. Why a total of 10 dimensions? Well, just because it makes the math work. Okay, I get the math part, but why do we keep insisting that space and time are smooth and continuous instead of jagged and lumpy, and why must we always merge space and time into a continuum? If time and space
- 7. are really separate things, maybe it's wrong to keep modeling them as a continuum. Furthermore, unless I'm way off target, string theory has hidden variables, and the Bell-type EPR experiments proved beyond a doubt that there are no local hidden variables in the real universe. Another problem is that string theory is infinitely more complicated than the existing theories. That would be okay if it produced new results that aren't found in existing theories. Unfortunately, it hasn't. Something can be factually true and beautiful without being useful; engineers call such a thing “a solution in search of a problem.” String theory may be 100% mathematically consistent and full of beauty and elegance, but I'm afraid it's leading us down a rabbit hole. Or many rabbit holes, to be exact. It seems there are at least 10⁵⁰⁰ different string theories, each one describing a different reality.4 With general relativity being fundamentally flawed because it's deterministic, with quantum field theory having too many arbitrary constants that can't be explained from first principles, and with string theorists seemingly chasing tangents, where does science go for answers? Maybe we should put all the complicated math aside for a while and just look at nature. I mean really look at it. The answer to the reality riddle may be hiding in plain sight – everywhere. The late Benoit Mandelbrot was the father of Mandelbrot sets, commonly known as fractals. Using very simple mathematical formulas that feed output variables back in as new inputs over and over again, some amazingly complex, beautiful, and bizarre structures emerge seemingly out of nowhere. The functions that generate fractals are not complicated: a basic one is z_new = z_old² + c, where z is a complex variable and c is a complex constant. When the real and imaginary parts of the z-values in the set are plotted as points on an x-y plane, they form 2-dimensional fractal patterns with structure.
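That feedback iteration really is only a few lines of code. Here is a minimal membership test of my own (an illustrative sketch): iterate z_new = z_old² + c starting from z = 0 and check whether the orbit escapes to infinity.

```python
# A point c belongs to the Mandelbrot set if the iteration z -> z**2 + c,
# started at z = 0, stays bounded. A standard shortcut: once |z| exceeds 2,
# the orbit is guaranteed to escape.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c          # the feedback step: output becomes input
        if abs(z) > 2:
            return False       # escaped, so c is outside the set
    return True                # stayed bounded for max_iter steps

print(in_mandelbrot(0 + 0j))   # True:  the origin never moves
print(in_mandelbrot(-1 + 0j))  # True:  the orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1 + 0j))   # False: the orbit blows up 0, 1, 2, 5, 26, ...
```

Testing every point on a grid and coloring the escapes produces the familiar fractal pictures.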
Solid 3-D fractal shapes, called Mandelboxes and Mandelbulbs, can also be generated from simple functions that employ feedback.5 However, you don't need complex numbers to generate fractals. Fractal shapes are much more than quirky blobs. Fractal objects seem to be ubiquitous in nature, such as breaking ocean waves, veins in a leaf, heads of broccoli, and galaxy disks. In fact, many of the 3-D “special effects” seen in modern animated movies employ computer-generated fractals that look exactly like landscapes, fires, oceans, and other natural objects. Are the similarities between fractal structures and things seen everywhere in nature mere coincidences, or is there some deeper truth behind them? John Wheeler took Bohr's interpretation of quantum theory to a whole new level, stating that nothing in the universe even exists until it is observed: we live in a universe created by consensus among observers.6 I'm not sure I'd go as far as Wheeler, but it's obvious that there isn't much matter to be found in the material universe. Most of what we humans consider “solid” – including atoms – consists almost entirely of empty space. What little “matter” can be found in all that empty space seems to consist of quantum states, which are bits of information. Everything that fills the universe, all the fundamental particles and force carriers, the atoms, molecules, and so forth up the chain, are made out of information. But there's more: empty space isn't really empty. A vacuum is filled with “virtual” particles that constantly enter and exit reality. We know those “virtual” particles are present because
4 That's unimaginably many theories; it's equal to a googol raised to the 5th power, in case you're interested. In fact, that number seems pretty much like infinity to me, and here's how I finally got my head around it: Suppose computer engineers design a supercomputer that can examine a trillion string theories each second to find the right one. If the computer runs 24/7 for a trillion years, the computer could examine around 3.15 × 10³¹ string theories. Now that's also a pretty big number. How big? Imagine owning a swimming pool 15 meters wide by 30 meters long by 2 meters deep – smaller than Olympic-size, but still pretty nice. The number of string theories the computer examined would be more than the number of water molecules in your swimming pool. Now, if you take the original number of string theories and subtract that very large number of string theories examined by the supercomputer over a trillion years, you still end up with almost 10⁵⁰⁰ theories left to examine. The computer didn't even make a tiny dent in the total. Infinity minus something huge equals infinity. That's why I'm not putting my money on anyone finding the right string theory.
5 The Internet is replete with beautiful computer-generated animations of Mandelbrot solids. You can Google them using the keywords “mandelbulb video,” or you can download some free Mandelbulb 3D graphics software and create your own animations.
6 Human beings get most of our information through our eyes. Dogs get most of their information through their noses, which makes me wonder what a doggie universe created by canine consensus is like.
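The arithmetic in footnote 4 checks out. This quick sanity check is my own (assumptions: one year is about 3.156 × 10⁷ seconds, and the pool holds 15 × 30 × 2 = 900 cubic meters of water):

```python
# Theories examined: a trillion per second for a trillion years
theories_per_sec = 1e12
seconds = 1e12 * 3.156e7                  # a trillion years, in seconds
examined = theories_per_sec * seconds     # ~3.16e31 theories

# Water molecules in the 15 m x 30 m x 2 m pool
volume_m3 = 15 * 30 * 2                   # 900 cubic meters
grams = volume_m3 * 1e6                   # 1 m^3 of water is about 1e6 g
molecules = grams / 18.015 * 6.022e23     # moles of H2O times Avogadro

print(f"theories examined: {examined:.2e}")    # ~3.16e31
print(f"pool molecules:    {molecules:.2e}")   # ~3.0e31
# And 10**500 minus ~1e31 is still, for all practical purposes, 10**500.
```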
- 8. they exert pressures on metal plates suspended in a vacuum that can be measured experimentally. These “virtual” particles also must have quantum states, so even empty space is filled with information. Alternatively, we could switch that around and say the “virtual” particles carry quantum information about empty space itself. In other words, we live in a dataverse.7 Data processing and computations are performed by combining bits of information according to certain rules to produce new bits of information. Alan Turing came up with the concept of a universal computer in 1936 that did nothing more than read bits of information from a strip of paper moving back and forth, replacing the bits on the strip with new bits by following a set of logical rules. A Turing machine could perform any task that a modern digital computer is capable of. Quantum interactions do essentially the same thing: interacting particles exchange quantum information about themselves, reading bits and writing new ones. If quantum interactions are equivalent to data processing on a microscopic level, perhaps these processes also generate Mandelbrot sets, either by accident or by design, applying the basic “and,” “or,” and “not” logical operations on the quantum states themselves. Maybe our universe is just the totality of countless Mandelbrot-set-producing processes. In fact, maybe our entire universe actually is a Mandelbrot set. If all of this sounds crazy, consider some of the properties of Mandelbrot sets. Fractals have a property called self-similarity, meaning that the pattern of the whole is repeated by similar patterns everywhere, over and over, at ever smaller scales. Most physicists are coming around to the idea that the universe is essentially non-local, where everything is interconnected, and the self-similarity property also results in interconnections among all parts of the set.
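Turing's strip-of-paper computer, mentioned above, is simple enough to sketch. This toy machine (my own illustration, not a full universal machine) does exactly the read-bits/write-bits loop the text describes: it scans a tape, rewrites each symbol by rule, and halts at the blank cell.

```python
# A toy Turing machine. The rule table maps (state, symbol read) to
# (symbol to write, head movement, next state).
def run(tape, rules, state="scan"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write      # write the new bit over the old one
        pos += move            # move the head along the strip
    return "".join(tape)

# Rules for a bit inverter: flip every bit, stop at the blank marker '_'
rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}
print(run("10110_", rules))    # -> 01001_
```

Swapping in a different rule table gives a different computation; that interchangeability is what makes the scheme universal.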
Mandelbrot sets also have characteristically jagged and discontinuous features extending down to the smallest imaginable scales, echoing the lumpiness found in the quantum world and throughout the universe at large. Finally, fractal geometry has dimensionality with peculiar properties. The Hausdorff dimensions of common geometric objects are integers: 0 for a point, 1 for a line, 2 for a triangle, 3 for a solid sphere. The Hausdorff dimension of a Euclidean space equals the number of orthogonal directions in that space. On the other hand, fractals have non-integer dimensions that are smaller than the dimensions of the Euclidean spaces they fit into. For example, a Sierpinski triangle is a fractal that fits into a 2-dimensional plane, and its Hausdorff dimension is approximately 1.58. Fractal solids, like a Mandelbulb, have dimensions between 2 and 3. We know that over very large distances gravity seems to deviate from both Newton's inverse square law and solutions to general relativity's field equations. Physicists use dark matter to explain away that discrepancy, but suppose space itself doesn't have fixed Euclidean-type dimensions. Instead, suppose space itself is a Mandelbrot set with Hausdorff dimensions that approach 3 at small scales and decrease over very large scales. If we picture gravity as a classical gravitational field8, the field should be more confined in a lower-dimension space than it would be in three dimensions. Motion and inertia would also deviate from classical laws in such a space. Could that explain those discrepancies astronomers are seeing, as well as the Pioneer anomaly? Variable dimensionality would even affect the way the universe at very great distances is perceived through telescopes. This essay has been very much a work in progress. The first version was six pages long, plus a couple of appendices. The main essay abruptly ended on Page Six because I simply ran out of clever things to say.
Since then, I've published several more versions, adding several more appendices on some related topics, and expanding the entire essay to about 27 pages. After thinking about the reality riddle some more, it occurred to me that I needed to fix the abrupt ending and wrap things up with a few more concluding remarks. I hope these remarks, which follow here, will finalize what I needed to say. Looking over the landscape of different “realities” that science has postulated, it seems to me that all of them are really just models that map data that Nature presents to us through our senses and instruments
7 Appendix B discusses Information, Entropy, and Meaning from the standpoint of modern information theory. It needs to be pointed out that, according to thermodynamics, entropy always increases. Structure, meaning, and intelligence won't emerge spontaneously on their own unless some organizing Process or Principle is causing them to emerge.
8 This is meant only as a conceptual visualization. I don't know whether or not gravity has any actual fields.
- 9. into alternative spaces. Engineers use remapping techniques all the time. For example, when analyzing the normal and shear stresses in I-beams and other structures for computing strains, civil engineers map these stresses into a 2-dimensional space known as Mohr's circle, named after its inventor, Christian Otto Mohr. Data are mapped into a fictional space represented by Mohr's circle, and the analysis is done there. The analytical results – data – are mapped back into the space of the physical structure. Electrical engineers use similar techniques. Ideally, 3-phase electric power networks should be balanced. When circuits are unbalanced, which is often the case, they are very difficult to analyze. Charles Fortescue came up with a way, called symmetrical components (a method Edith Clarke later refined and popularized), to decompose an unbalanced, physical 3-phase circuit into three idealized, perfectly-balanced 3-phase circuits, where the analysis can be carried out much more easily. The analytical results are then mapped back into the unbalanced physical circuit to obtain useful values. The extra steps of mapping and remapping the data are well worth the effort, because they greatly reduce the overall amount of computational work. Science is confronted with time and simultaneity issues because of the constraints imposed by causation and the finite speed of light. In response, Albert Einstein proposed a method to map data into an alternative 4-dimensional space where he could work around those constraints and make calculations. This is known as the theory of relativity. Confronted with the problem of uncertainty, science invented quantum physics, which has taken on various forms: the wave function, invented by Erwin Schrödinger, and quantum field theory, developed by Richard Feynman and others. All of these techniques have survived into the 21st century because they have been largely successful – within prescribed limits.
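The symmetrical-components remapping is compact enough to show directly. In this sketch of mine (the phasor magnitudes and angles are hypothetical example values), three unbalanced voltage phasors are split into zero-, positive-, and negative-sequence components using the rotation operator a = 1∠120°:

```python
import cmath
import math

# The "a" operator rotates a phasor by +120 degrees
a = cmath.exp(2j * math.pi / 3)

def symmetrical_components(Va, Vb, Vc):
    V0 = (Va + Vb + Vc) / 3                  # zero sequence
    V1 = (Va + a * Vb + a * a * Vc) / 3      # positive sequence
    V2 = (Va + a * a * Vb + a * Vc) / 3      # negative sequence
    return V0, V1, V2

# A mildly unbalanced set of phase voltages (hypothetical values, volts)
Va = 100
Vb = 90 * cmath.exp(-2j * math.pi / 3)
Vc = 95 * cmath.exp(+2j * math.pi / 3)

V0, V1, V2 = symmetrical_components(Va, Vb, Vc)
print(f"|V0| = {abs(V0):.1f}  |V1| = {abs(V1):.1f}  |V2| = {abs(V2):.1f}")
# A perfectly balanced set would give |V0| = |V2| = 0; the small nonzero
# values here measure the imbalance. Summing V0 + V1 + V2 recovers Va.
```

Each sequence set is balanced, so each can be analyzed with the simple single-phase methods; the results are then recombined to describe the real, unbalanced circuit.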
However, there is a consensus among physicists that none of the existing theories are complete and none are able to describe reality in total. That's why there are so many ongoing efforts to solve the reality riddle by finding the one theory, the one model, the one technique that can do it all. Scientists have become very inventive lately. In his book The End of Time, Julian Barbour explores an alternative space known as Platonia, where all possible configurations of the universe are mapped as single points. Platonia has countless dimensions, and each point in this space represents an entire universe. The one feature of Platonia that makes it unique is that time does not exist in it. Barbour says our entire reality – our entire past, present and future – are mapped as single points in Platonia. In fact he says every possible past, present, and future are mapped as single points. The engineer in me sees nothing wrong with this, as long as we view it as just an alternative way of arranging the data that are presented to us in our reality. But is Platonia reality, or just an alternative way of thinking about it? The same thing could be said of string theories or any of the other current models of reality. As long as these theories can consistently map data from “our world” into an alternative space and back again, all of them might be equally valid.9 But do any of them actually solve the reality riddle? And if some data are fundamentally unknowable to us because of uncertainty, can the riddle be solved at all? Some of the current candidate theories may be plausible but they cannot be proved or disproved by experiment or observation. Included in this group are Platonia, the Many Worlds Theory, the Eternal Inflation Theory, and the Anthropic Principle. Any of them might explain why things are the way they are, but none of them yields to the scientific method. Maybe science alone can't provide an answer. 
I've included some appendices at the end of this essay about topics I find interesting. I hope you'll find them interesting too. 9 Data mapping seems to be the common feature among all theories. In the final analysis, I think we'll discover that our “reality” consists of only data. The dataverse concept – Wheeler's “it from bit” – has received harsh criticism from certain parties who don't consider information “real” because it lacks material substance. However, Schrödinger's wave function, ψ, also lacks material substance, and yet quantum physicists consider ψ to be very real.
Appendix A – The EPR Paradox and Bell's Inequality

Bell's inequality is an amazingly powerful, elegant, and yet simple theorem. It was published by John Bell in 1964 as a belated response to the Einstein-Podolsky-Rosen (EPR) paper published in 1935. According to EPR theory, quantum mechanics is deterministic, even if we're unable to ascertain the mechanisms that make it so, at least for the time being. Hidden variables are cause-and-effect mechanisms we don't see that keep the universal clockwork running. Bell's theorem proposed experiments that would disprove the existence of hidden variables if certain statistical inequalities are violated. There are several ways that such an experiment can be set up. I chose one that's different from Bell's original idea, but it's simple to understand and doesn't require much math in order to work out the results. Here it is: Pairs of entangled photons are created and are sent in opposite directions toward separate laboratories where Alice and Bob10 perform experiments on the photons as they arrive. Their two labs are many feet apart, and no communication is allowed to take place between them while the experiments are performed. The photon pairs are initially unpolarized, meaning that each photon has a 50% chance of passing through a polarizing filter oriented at an arbitrary angle. If a photon passes through, it becomes polarized at that angle. Moreover, if either of the entangled photons passes through a filter, both it and its entangled partner will be polarized at the same filter angle. It is the instantaneous polarization of both photons by one filter that would have given Einstein heartburn; this instantaneous action at a distance was at the heart of the EPR paradox. Now immediately before the arrival of each of their entangled photons, Alice and Bob both randomly select one of three polarizing filters. Filter 1 is oriented at 0°, Filter 2 is oriented at 120°, and Filter 3 is oriented at 240°.
There are photon detectors behind Alice's and Bob's filters that record whether or not photons pass through their filters. Each time a photon is scheduled to arrive, Alice and Bob randomly choose a filter and record which filter was chosen and whether or not a photon is detected. After doing this billions of times, Alice and Bob compare their notes of each experiment and count how many times they are in agreement; i.e., when both of them observed a photon or neither of them observed one in a given pair. The experiments where Alice and Bob chose the same filters will always agree, so those results contain no information and they are discarded. Only those experiments where different filters were chosen are compared. Bell's inequality proves that if different filters were chosen and hidden variables directed the photons to go or not go through the filters, Alice and Bob will be in agreement for at least ⅓ of the pairs of entangled photons. However, if there are no hidden variables, then there will be agreement for ¼ of the pairs of entangled photons. It's easy to show why there is a ¼ agreement probability if there are no hidden variables. Suppose Alice randomly selects Filter 1 and her photon is the first one measured. It has a 50% chance of passing through the filter. If it does, it becomes polarized at 0°, and immediately so does its partner over in Bob's lab. Now if Bob chooses Filter 2, his 0° photon has a 25% chance of passing through that filter. On the other hand, if Alice's photon doesn't pass through her filter, then it becomes polarized at 90°, and immediately so does its partner over in Bob's lab. Now Bob's 90° photon has a 75% chance of passing through his Filter 2. Therefore, Bob's photon has a 25% chance of not passing through the filter. In order for Bob and Alice to agree, either both photons pass through their respective filters, or neither of them does.
Whether Alice's photon passes through her filter or not, Alice's and Bob's results will agree in 25% of all cases. You can repeat the calculation by assuming Bob's photon arrives first, or use any combination of filters, and the result will still be 25%. It's slightly harder to show that hidden variables raise the probability of agreement to at least ⅓. In 10 Alice and Bob are undoubtedly the most famous pair of experimental scientists in the world. They perform almost all of the experiments requiring teamwork that are found in the scientific literature.
essence, there are a total of eight possible ways that hidden variables can predetermine whether a photon will go through Filters 1, 2, and 3. Let a plus sign represent a hidden instruction to go through a filter, and a minus sign represent a hidden instruction not to go through a filter. Arranging the plus and minus signs in the order of Filters 1, 2, and 3, there are eight possible sets of hidden variables: (+++), (++-), (+-+), (+--), (-++), (-+-), (--+) and (---). Now look at each set. There are three pairs of hidden variables in each set that show how photons behave when Alice and Bob use different filters. The first set consists of three pairs that are all alike, (+ +), meaning Alice and Bob will be in 100% agreement if their photons are imprinted with that set of hidden variables. The last set also consists of three pairs that are all alike, (- -), meaning that there's 100% agreement with that set too. Each of the other six sets of hidden variables has one pair out of three that is either (+ +) or (- -), so agreement will occur ⅓ of the time with those sets. So the minimum rate of agreement among all eight sets is ⅓. It's important to remember that according to EPR, the same set of hidden variables must be imprinted on both photons of each photon pair because both photons must do exactly the same things when they are tested with the same filters. We don't know for certain which set of hidden variables is imprinted on any given pair – they're hidden after all – but no matter which set is actually imprinted on them, Alice's and Bob's experiments will agree at least ⅓ of the time. Using Bell's air-tight chain of logic, if Alice's and Bob's results agree less than ⅓ of the time, then the EPR hidden variable theory is false. Fortunately, the gap between a ¼ agreement rate and a ⅓ agreement rate is large enough to provide an unambiguous verdict if the experiment is carried out properly and it shows a violation of the inequality.
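Both counting arguments can be checked mechanically. The sketch below is my own verification, not part of the essay: it enumerates the eight hidden-variable instruction sets, and separately computes the quantum prediction by modeling a blocked photon as taking the perpendicular polarization and applying Malus's law (probability of passing = cos² of the angle between the polarization and the filter).

```python
import math
from fractions import Fraction
from itertools import product

ANGLES = [0.0, 120.0, 240.0]  # the three filter orientations, in degrees
PAIRS = [(i, j) for i in range(3) for j in range(3) if i != j]

# Hidden variables: each photon pair carries one of eight instruction sets,
# +1 meaning "pass filter n" and -1 meaning "don't pass filter n".
def hv_agreement(instructions):
    """Fraction of different-filter pairs on which Alice and Bob agree."""
    agree = sum(1 for i, j in PAIRS if instructions[i] == instructions[j])
    return Fraction(agree, len(PAIRS))

hv_rates = [hv_agreement(s) for s in product([+1, -1], repeat=3)]
print(min(hv_rates))  # 1/3 -- no instruction set can agree less often

# Quantum mechanics: Alice's unpolarized photon passes with probability 1/2.
# Passing polarizes both photons at her filter angle; blocking polarizes
# them perpendicular to it. Bob's photon then obeys Malus's law.
def qm_agreement(i, j):
    d = math.radians(ANGLES[j] - ANGLES[i])
    agree_if_pass = math.cos(d) ** 2         # Bob's photon also passes
    agree_if_block = 1.0 - math.sin(d) ** 2  # Bob's photon is also blocked
    return 0.5 * agree_if_pass + 0.5 * agree_if_block

qm_avg = sum(qm_agreement(i, j) for i, j in PAIRS) / len(PAIRS)
print(round(qm_avg, 4))  # 0.25 -- below the hidden-variable bound of 1/3
```

The enumeration reproduces the ⅓ floor exactly (the all-plus and all-minus sets agree 100% of the time; the other six agree ⅓ of the time), while the quantum calculation lands at ¼ for every choice of different filters.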
Yet as simple as the experiment is in principle, it is technically very difficult to carry out. Pairs of entangled photons must be generated with a high level of reliability. If any pairs of photons or individual photons go missing, or photon pairs become disentangled in transit to Alice or Bob, this will skew the statistical results. The polarizing filters must be nearly 100% efficient, and the random filter selections in Alice's and Bob's labs must be exquisitely timed and synchronized with the arrival of each photon. Finally, both experiments must start and finish in less time than it would take a signal to travel between Alice's and Bob's labs at light speed. The last requirement makes sure that no hidden communication at or below light speed can take place between the photons. The technology to fulfill Bell's experimental requirements did not exist in 1964, but sensitive EPR experiments were performed years later. Alain Aspect accomplished this feat in the 1980s using polarized light. His statistical results violated Bell's inequality with a high degree of certainty, proving conclusively that no local hidden variables exist. This was the final nail in the coffin of the classical-deterministic interpretation of quantum mechanics outlined in the EPR paper. Since the 1980s, less rigorous EPR demonstrations using polarized light have been carried out in college undergraduate physics labs that show effects similar to the full-blown experiments. Looking back at the EPR paper, I think Niels Bohr could have easily dismissed Einstein's paradox by pointing out that the collapse of a quantum wave function over extended distances does not involve communication of any kind, and therefore it cannot violate causation. Modern communication theory wasn't invented until the late 1940s, so neither Einstein nor Bohr could have realized in 1935 that communication requires an exchange of information.
Since Particle A and Particle B are in the same quantum state, there is literally no information that Particle A could communicate about its own quantum state to Particle B or vice versa, so no communication can take place between them. (This is why engineers can't exploit the EPR paradox to create an intergalactic instantaneous communication system. Darn.) Richard Feynman said, “If you think you understand quantum mechanics, you don't understand quantum mechanics.” Einstein thought he understood quantum mechanics from a classical-deterministic viewpoint. Clearly he didn't understand it completely. I don't know if Bohr understood it either, but he probably came a lot closer than Einstein did.
Appendix B – Entropy, Information, and Meaning

Prior to Claude Shannon's work on the subject in the 1940s, there wasn't a good definition of what information is. People had a fairly good intuitive idea of what it means to communicate, but there wasn't any formal language to express it. It turns out that there is a connection between information and entropy, a concept that was already fairly well understood from thermodynamics. Statistical thermodynamics defines the entropy of a system as equal to the Boltzmann constant times the natural logarithm of the total number of microstates of the system. Let's see how that relates to information. Suppose I flip a trick two-headed coin, and I inform you that heads came up. How much information did I convey? According to information theory, the answer is 0. You already knew the coin would come up heads without me even telling you, because it's a trick coin. Now suppose I flip an ordinary coin, and I inform you that heads came up. According to information theory, I conveyed exactly one bit of information. This is because the number of bits of information I gave you concerning the state of the coin is equal to the base 2 logarithm of the number of possible ways the coin might have landed. Since there are two possible ways the coin might have landed, the number of bits equals the base 2 logarithm of 2, which is equal to one bit. Now suppose I hold a deck of 52 cards that is thoroughly shuffled. I draw a card from the top of the deck and inform you that it's the deuce of clubs. How many bits of information did I convey to you? Well, I didn't tell you everything about the deck, only what the first card was. There are 52 possible cards I could have drawn, so the amount of information is the base 2 logarithm of 52, or log2 52, which happens to be somewhere between 5 and 6 bits (5.7 bits to be more precise). Next, suppose I keep drawing cards off the deck and tell you what each card is. There are 52!
possible ways a shuffled deck could be arranged. This is an enormous number of ways, so giving you precise information about the order of the deck conveys a lot more information than only telling you what the first card is. The amount of information I gave you is log2 52!. If you don't like dealing with factorials, it can be computed as log2 52 + log2 51 + log2 50 + … + log2 1 = 225.581 bits, which is about the same amount of information that's conveyed by the results of 226 coin tosses. (If you want to know how many ways the deck can be arranged, you can work backwards from the number of bits. It equals 2^225.581 = 8.0658 × 10⁶⁷ ways.) Finally, suppose I purchase a new deck of cards with all of the cards in ascending order. I draw all the cards from the deuce of clubs through the ace of spades and tell you which cards I draw. I think you'll know by now that me telling you the precise order of the cards conveys 0 bits of information, because you know that every new deck of cards is arranged exactly the same way. Only after shuffling the cards will any information be contained in that deck. So increasing entropy also increases information. To put the icing on the cake, Shannon had to account for the fact that information content depends not only on the total number of states, but on their probabilities. When we send a message in English, the letter “e” contains less information than the letter “z” because “e” occurs much more often in English. Suppose each of the 26 letters in the alphabet has a probability of occurring equal to pn. The information conveyed by a single occurrence of the nth letter is –log2 pn bits, and its average contribution to the information of a message is –(pn log2 pn) bits (note the minus signs in front of those expressions). This is the gist of information theory and the connection with entropy. Shannon stated flatly that information equals entropy. As a side note, the second law of thermodynamics states that the entropy of an isolated system can never decrease over time.
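The card-deck arithmetic above is easy to reproduce. This is just a check of the quoted numbers using Python's math library, nothing more:

```python
import math

bits_one_card = math.log2(52)                             # one draw from 52
bits_full_deck = sum(math.log2(k) for k in range(1, 53))  # log2(52!)
arrangements = 2 ** bits_full_deck                        # back to a count

print(round(bits_one_card, 1))   # 5.7
print(round(bits_full_deck, 3))  # 225.581
print(f"{arrangements:.4e}")     # 8.0658e+67
```

Summing the logarithms instead of computing 52! directly is the same trick the text describes, and it sidesteps the enormous intermediate factorial.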
This suggests that the total amount of information in the universe is increasing over time and a universe constructed from information would have to expand.11 11 According to the holographic universe model, the total amount of information contained within a volume is limited by the number of bits that can be encoded on the surface surrounding the volume, which is equal to ¼ of the number of Planck areas on that surface, where a Planck area is equal to 2.6 × 10⁻⁶⁶ cm². Conversely, the amount of information about the universe that an observer can receive is limited by the surface area through which the information passes. The human
This seems very neat and tidy, but it isn't complete. Quantum states fit into Shannon's definition of information perfectly because they're so simple; binary states like spin up versus spin down, positive charge versus negative charge, etc. The second law of thermodynamics should quickly drive a quantum universe into chaos. How did structure and meaning arise in such a place? We need to examine the concept of meaning a little further. For example, you may be tempted to compute the information contained in a message expressed in English by simply adding up the bits contained in each letter of the message. That would take into account the probabilities of the individual letters, but not the order in which they appear. The sentence “the cow jumped over the moon” is more meaningful than the string of characters “ehw ejo toprhv emtd eon coum” although both sets contain the same number of bits according to Shannon's definition. So it appears that actual words have much more inherent meaning than the individual letters that make up the words, and word order and context are more meaningful still. Obviously, a Shakespeare sonnet has more meaning than the same words jumbled up. If a sentence in English is translated into Chinese with precisely the same meaning, will both sentences convey the same amount of information? According to Shannon's definition of information, they probably won't, because Chinese characters have different probabilities than English words. Clearly then, information and meaning are two different things. Even with a solid mathematical definition of information in hand, a similar statistical definition of meaning seems to be quite elusive. How can meaning emerge in a universe built upon simple quantum states? Here's where Mandelbrot sets may play a role. Fractals have complex patterns that mysteriously emerge from very simple mathematical expressions.
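As a concrete illustration (mine, not the essay's) of how intricate structure falls out of a trivial rule, here is the Mandelbrot membership test: iterate z → z² + c and ask whether z stays bounded. The rule is one line; the boundary it generates is endlessly intricate.

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to belong to the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # escaped -- definitely not in the set
            return False
    return True          # never escaped within max_iter steps

print(in_mandelbrot(0j))      # True  (the origin never escapes)
print(in_mandelbrot(1 + 0j))  # False (1 -> 2 -> 5 -> ... escapes quickly)
```

Coloring each point of the complex plane by how quickly it escapes produces the famous fractal images, all from that single quadratic map.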
If meaning can be ascribed to fractals, it comes from those patterns. A universe structured like a Mandelbrot set could derive meaning from elementary quantum states that individually carry no more meaning than letters in a random character string. In an English sentence, it's the pattern of those letters that gives it meaning. Likewise, pattern, structure, and meaning in a Mandelbrot universe could emerge from what seems to be quantum uncertainty and chaos. Of course, Mandelbrot sets aren't the only way this could come about. There is an entire class of systems that display self-organized criticality (SOC) that has been studied extensively by Per Bak, Chao Tang, and Kurt Wiesenfeld. One of the features of SOC is an “attractor.” If points get close enough to the attractor, they remain close to it even if they are disturbed. There is one class of attractors known as “strange” attractors, because they have non-integer dimensions – these are the fractals discussed earlier – but there are other types as well. Three conditions must exist in a system in order for SOC to arise: non-equilibrium, extended degrees of freedom, and non-linearity. Reality undoubtedly possesses the first two; the universe is not (yet) in a state of thermodynamic equilibrium, and there are spatial dimensions to provide at least three degrees of freedom. Whether reality possesses the necessary amount of non-linearity is an open question. There is some evidence that SOC is a feature of our universe. The prevalence of fractal-type geometric forms, mentioned earlier, and the ubiquitous “pink noise” (or 1/f noise) that permeates reality are good indications. There is a good Wikipedia article on pink noise that gives many examples found in nature. The bottom line is that although entropy usually connotes randomness and chaos, there seems to be an underlying self-organizing principle at work in the universe.
Unfortunately, although Bak et al. identified three necessary conditions for SOC, the sufficient conditions have not been found. Maybe when that question is answered, we'll finally be able to solve the reality riddle. brain has a surface area around 1,500 – 2,000 cm². So the amount of information about the universe that is available to the human brain has a theoretical limit of around 2 × 10⁶⁸ bits, which is quite a lot of information. The point is that there is still a certain amount of information that is “unknowable” to any observer. Maybe the role of space and time is to cordon off unknowable information while letting in the information that is important to the observer. See Appendix C.
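The brain-surface estimate in footnote 11 checks out arithmetically. The sketch below just redoes the division, using the footnote's quoted Planck area and the upper end of the quoted brain-area range (both numbers are the footnote's, not independently verified here):

```python
# Holographic bound: at most 1/4 bit per Planck area on the bounding surface.
planck_area_cm2 = 2.6e-66  # one Planck area, as quoted in the footnote
brain_area_cm2 = 2000.0    # upper end of the 1,500 - 2,000 cm^2 range

max_bits = brain_area_cm2 / (4 * planck_area_cm2)
print(f"{max_bits:.1e}")   # 1.9e+68 -- about 2e68 bits, as the footnote says
```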
Appendix C – A Few Comments About Space-time

Albert Einstein developed the theory of special relativity using a very simple set of principles. First, the theory only applies to special cases involving uniform, non-accelerating motion; hence the word special. Second, there is no such thing as absolute motion in a universal frame of reference. All motion is relative; hence the word relativity. Physicists sometimes forget these first two principles, as we will see shortly. Third, the speed of light is the same for every frame of reference that's in uniform motion with respect to all other frames of reference. As surely as night follows day, these three principles have direct consequences: when clocks and distances are observed from different frames of reference in uniform relative motion to each other, the measurements are different. Distances appear foreshortened in the direction of relative motion, and clocks in relative motion appear to slow down. That's it. Everything I've just stated above can be described by simple mathematical formulas that any high school algebra student can understand. But Einstein wanted to use something that all frames of reference could agree on. He used Minkowski space, which wraps space and time in a neat package that is invariant (doesn't change) as seen from frames of reference in uniform motion. Minkowski space has three spatial dimensions like the Euclidean (flat) variety used in Newtonian physics, and a fourth dimension is stuck in there as j·c·t. Here j is the square root of -1 (an “imaginary number”), c is the speed of light, and t is time, so this fourth dimension also has spatial characteristics, although it points in the imaginary direction. Voilà! Space and time are now combined into a single invariant entity called space-time. Mathematically, distances are foreshortened and clocks slow down while their spatial coordinates change, just like with the simple high school algebraic formulas.
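Those "high school algebra" formulas are the Lorentz factor and its two consequences, length contraction and time dilation. A minimal sketch, with an assumed example speed of 0.6c (my number, chosen for round arithmetic):

```python
import math

def gamma(beta):
    """Lorentz factor for an object moving at v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

beta = 0.6               # 60% of light speed
g = gamma(beta)

print(round(g, 3))           # 1.25
print(round(10.0 / g, 3))    # 8.0  -- a 10 m rod, measured in flight
print(round(1.0 * g, 3))     # 1.25 -- seconds elapsed per tick of a moving clock
```

A moving rod measures shorter by the factor γ, and a moving clock's ticks stretch by the same factor; at everyday speeds β is tiny and γ is indistinguishable from 1, which is why Newtonian physics works so well.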
It is here where I think relativity ran aground. Einstein took his 4-dimensional space-time literally. According to the literal interpretation, all objects in the universe are traveling along “world lines” through space-time at the speed of light. In other words, everything is in absolute motion with respect to the universal space-time frame of reference. Excuse me?? Remember what the second fundamental principle of special relativity said? It said there is no such thing as absolute motion or a universal frame of reference. That violates the “relative” part of special relativity, which is Fallacy No. 1. Fallacy No. 2 is using Minkowski space to model cases where special relativity doesn't even apply. Take the so-called twin paradox (which isn't really a paradox at all). In this thought experiment, there is stay-at-home Alice and her traveling twin brother Bob. Bob leaves Earth and heads for a distant Planet X, traveling at nearly the speed of light. He reaches Planet X, makes a U-turn and heads home. When he returns to Earth, Alice has aged considerably more than Bob. Minkowski space has been used to explain the apparent paradox. Having traveled a long distance, Bob had to “cash in” some time in order to “buy” space for his world line, whereas Alice's world line used up all her time by staying put. The fallacy of that explanation is that in order for Bob to do that, he had to accelerate at least twice; first when he blasted off toward Planet X, and again when he reversed direction back toward Earth. Special relativity applies only to uniform motion, and Bob's accelerations violate the “special” part of special relativity.
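To put assumed numbers on the twins (a Planet X 4 light-years away in Alice's frame, a cruise speed of 0.8c; both values are mine, chosen for round arithmetic): in Alice's frame the round trip takes 10 years, while the traveling clocks record only 6, because in the traveler's frame the distance is length-contracted to 2.4 light-years each way.

```python
import math

beta = 0.8                               # cruise speed, as a fraction of c
distance_ly = 4.0                        # Earth to Planet X, Alice's frame
g = 1.0 / math.sqrt(1.0 - beta ** 2)     # Lorentz factor = 5/3

alice_years = 2 * (distance_ly / beta)       # 10 yr: 4 ly each way at 0.8c
bob_years = 2 * (distance_ly / g / beta)     # 6 yr: each leg is only 2.4 ly

print(round(alice_years, 3))  # 10.0
print(round(bob_years, 3))    # 6.0
```

Note that the traveler's 6 years fall out of ordinary length contraction alone, with no appeal to "cashing in" time for space.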
So using Minkowski world lines within the framework of special relativity isn't the correct explanation, even if the math happens to give the correct results.12 If any acceleration occurs, you are 12 There's a very easy way to explain the twin paradox without cashing in time for space or using any hand-waving arguments about Bob shifting frames of reference. Suppose there are three individuals, Alice, Bob, and Charlene, born at different times on different planets. Alice was born on Earth, and Bob is an astronaut who just happens to be passing Earth on his way toward Planet X. As Bob zooms by Alice, they exchange biometric data and notice they're exactly the same age! What a coincidence. As Bob zooms past Planet X, Charlene just happens to be zooming by in the opposite direction toward Earth. Bob and Charlene exchange biometric data, and guess what? They're also exactly the same age! Another amazing coincidence. (Thought experiments are very convenient that way.) When Charlene arrives at Earth, would you expect her to be the same age as Alice? Of course not; Alice will be older. The whole “paradox” is easily explained by the fact that the distances between Earth and Planet X in Bob's and Charlene's frames of reference are shorter compared to Alice's. End of story. Nobody shifted frames of reference or accelerated at all during their travels. There is no need for Bob or Charlene to cash in time to buy space in order for Charlene to be younger than Alice.
compelled to abandon special relativity, but then the math really starts to get hairy. The mathematics of Minkowski space worked so well for special relativity that Einstein decided to keep using it for the general case, which includes acceleration and gravity. One of Einstein's insights was that inertial mass and gravitational mass are the same, which he thought was no mere coincidence. He had another brilliant insight that unlocked the whole thing: experiments performed in a gravitational field are indistinguishable from experiments performed in accelerating frames of reference without gravity. This equivalence principle allowed Einstein to learn what gravity does to Minkowski space by examining what happens in an accelerating frame of reference. Hanging on to the idea that every object travels through space-time at the speed of light, he was able to show that two objects falling toward each other from their mutual gravitational attraction is equivalent to their world lines intersecting as they travel through space-time. This means that space-time is curved in their vicinity. In special relativity, Einstein had already proved that mass and energy are equivalent. Therefore, he said gravity is mathematically equivalent to curvature of Minkowski space in the presence of mass-energy. The math that goes along with this is extremely difficult, and only a few people in the world could handle it13 when Einstein published his theory in 1915. But those who could handle it produced some very powerful results, including accurate predictions of Mercury's orbital precession, which Newtonian mechanics can't explain, and the bending of light, which Newtonian mechanics does predict, but not accurately. But is 4-dimensional space-time real, or just a mathematical gadget? There are cases where Newton's laws agree almost exactly with general relativity.
There are other cases where general relativity does a better job of predicting how objects move and how light behaves than Newton's laws. There are other things that general relativity predicts that Newton's laws can't explain at all, such as a clock slowing down when it's at the bottom of a “gravity well.” I have a hunch, however, that there could be instances in nature where even general relativity breaks down completely. Note that Minkowski space started out as Euclidean (flat) in special relativity and it ended up being curved in general relativity. The engineer within me sees this as a model perturbation. Engineers apply model perturbations all the time, like when we use linear equations to model non-linear systems. The results will be fine as long as the changes (perturbations) from the rest state are small. When perturbations become too large, the models break down. Minkowski space is a model with flatness as its rest state and curvature as a perturbation. In some cases, like within or around black holes, gravity curves space-time so much that infinities start cropping up in the math. Engineers view mathematical infinities with a great deal of suspicion – it's hard to find infinities anywhere in nature – but physicists seem to take the math as literal truth, which I find puzzling. Another way to make the model break down from too much curvature is by trying to model the entire universe. Considering the universe as a “something” you can model seems fishy to me. If special relativity teaches us anything, it is that there is no such thing as events occurring simultaneously over cosmological distances. Even talking about what is going on in the Andromeda galaxy “right now” is absurd because “right now” only applies to “right here”; so how can anyone model the entire universe all at once and say it has a certain size and structure at a given point in time like a grapefruit or a basketball?
Another discrepancy is that relativistic field equations permit backward time travel, as Kurt Gödel proved. Physicists should see a paradox like that as a huge red flag, but apparently they don't. Cosmologists readily slip into fallacy mode when they talk about the “size” of the universe or traveling “through” space-time as if it's some kind of fixed coordinate system. Nothing travels “through” space-time, and here's why: every “thing” is at the exact center of the universe all the time. Even if I “travel” ten billion light-years from where I am right now, I won't get any closer to the edge of the universe than I was originally. I'll still be exactly at the center. How do you measure the size of something? Well, 13 Unfortunately I am not one of them, but Arthur Eddington was. The physicist Ludwik Silberstein approached Eddington once and said, “Professor Eddington, you must be one of three persons in the world who understands general relativity.” Eddington paused, unable to answer. Silberstein continued, “Don't be modest, Eddington!” Finally, Eddington replied, “On the contrary, I'm trying to think who the third person is.”
you pick two points that define the extent of the thing you're trying to measure, then you place a measuring stick between them. Alternatively, you can do what modern surveyors do: stand at one point, shine a beam of light at a corner reflector placed at the other point, measure the time it takes for the light to bounce back to you, and convert that time into distance. Okay, so which two points do you pick when measuring the size of the universe? That's a trick question, because you won't find any. Every point is always at the center, so no two points can be found that define the extent of the universe. How big is the universe? All anyone can say is that it's really big. When you look out into space on a clear moonless night, what do you see? Well, you see nearby planets and stars, and maybe even the Milky Way if you are far away from city lights. Beyond the Milky Way, you might see the Andromeda galaxy, which is actually quite close to us in cosmological terms, appearing as a fuzzy patch of light at 40° northern latitude. Everything looks flat, Euclidean, and 3-dimensional; in other words, things look pretty “normal” in our little corner of the cosmos. If your eyes were as good as the Hubble telescope, you might see some really distant objects, and here's where things get strange. The farther out you look, the farther apart those objects seem to be from each other, but that's an illusion. You see, when you look at objects that are really, really, really far away, you're looking at a universe that's much younger and much smaller than it is today (to the extent that terms such as age, smallness, or today make any sense at all when talking about the universe).14 So very distant objects were actually much closer to each other than they appear to be from our vantage point.15 In fact, looking at the distant universe from any vantage point is like looking at it through a fun-house mirror.
It is literally impossible for our puny little 3-dimensional brains to properly visualize a universe in its entirety, given that every point is always at the exact center; therefore, we substitute a fictional version of reality that we can wrap our heads around: a giant 3-dimensional coordinate system with a clock ticking away in the background (or alternatively, a 4-dimensional space-time coordinate system, which isn't much better). Unfortunately, cosmologists aren't any better than the rest of us at comprehending the universe in its entirety.16 Here's the bottom line: space-time is not a “thing” you can travel through, like a ship traveling through the ocean. Traveling is only meaningful when you do it relative to something else. Minkowski space-time simply provides a way to take measurements between objects and events so that all frames of reference can agree on those measurements. The problem is, those measurements lose meaning entirely (in our 3-D version of reality) when the objects or events are too far apart, too far away, or too long ago. I consider general relativity to be a big improvement over Newtonian physics, but I can't shake the feeling that there is something fundamentally wrong with using 4-dimensional space-time as a model of reality. I think physicists will eventually come up with a more complete theory of space, time, and reality; I believe a theory based on quantum entropy and data processing will emerge as the right one. There's an old saying about space and time: Time is what keeps everything from happening all at once, and space is what keeps everything from happening to me. I believe there's a grain of truth in that. Space and time are there as a form of censorship. Think of the uncertainty principle. There is a fundamental limit to the amount of information about the universe that is knowable. The purpose of space and time is to draw a curtain in front of the information we're not permitted to know.
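The claim that Minkowski space-time lets all frames of reference agree on measurements can be checked numerically: the interval c²Δt² − Δx² between two events is unchanged by a Lorentz boost. A sketch in units where c = 1 (the sample event and boost velocity are arbitrary):

```python
import math

def boost(t, x, v):
    """Lorentz-boost an event (t, x) into a frame moving at velocity v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(t, x):
    """Squared spacetime interval s^2 = t^2 - x^2 (c = 1)."""
    return t * t - x * x

t, x = 5.0, 3.0
tb, xb = boost(t, x, 0.8)  # view the same event from a frame moving at 0.8c
# Coordinates change, but the interval is the same in both frames (up to rounding).
print(interval(t, x), interval(tb, xb))
```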
Think of the entangled photons in Alice's and Bob's labs during the EPR experiment. Space and time can't isolate two observers that already “know” everything there is to know about each other, so as far as those two photons were concerned, there was no space or time between them no matter how far they traveled.

14 This is according to the big bang theory. I don't doubt that the universe had a beginning where everything was crunched together, because from today's vantage point there is every indication that the universe really did start out that way. However, I'm not sure at all about the details of how this happened or why the universe decided to have a beginning. Appendix I offers a somewhat farcical account of the current scientific belief system in that regard.
15 If you could see all the way back to the big bang, which was an infinitesimal point, it would seem to fill the entire sky. Now that's the ultimate optical illusion.
16 … which is why cosmologists keep referring to silly, nonsensical things like traveling through space and time, or measuring the size of the universe.
Appendix D – Space and Time with Quantum Weirdness

Many “quantum” phenomena can also be described in classical terms. Albert Einstein wasted much of his career vainly attempting to debunk quantum mechanics by explaining away the results of experiments that violated his cherished classical principles. Take, for example, Young's famous double-slit experiment, named after Thomas Young (1773-1829). Richard Feynman is quoted as saying, “All of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment.” The gist of the experiment is that particles aimed at a double slit produce wave interference patterns on a detector screen placed behind the slits. But if an observer tries to measure which of the slits the particles pass through, the wave interference pattern is destroyed and the particles behave like bullets instead of waves. Einstein says, “Okay, I can explain that. If a particle detector is placed at one of the slits, the measurement disrupts the flight of particles through the slit, destroying the wave interference. It's just a problem of the measuring device imparting extra momentum to the particles.” That seems like a plausible explanation, but it's wrong, as we shall see. A variation of Young's experiment is the so-called Quantum Eraser Experiment. This experiment is truly weird and it is impossible to explain using classical arguments. A beam of photons (light) is directed at a double slit, one photon at a time. Each slit contains a beta barium borate crystal that converts one photon into a pair of entangled photons, which go in two different directions: One entangled photon goes toward a photon detector D0 that is set up to look for interference patterns. The other entangled photon goes into an “idler circuit” where quantum weirdness takes place.
The idler circuit has partially-silvered mirrors that give idler photons from Slit A or Slit B a 50/50 chance of being reflected or passing through the mirrors. If an idler photon comes from Slit A, it has a 50% chance of being reflected toward photon detector D1. Otherwise, it passes into a quantum eraser, which combines its path with the path of idler photons from Slit B that pass through their mirror. These combined paths go to detector D2. If an idler photon comes from Slit B, it has a 50% chance of being reflected toward photon detector D3. Otherwise, it passes into a quantum eraser, which combines its path with the path of idler photons from Slit A that pass through their mirror. These combined paths go to detector D4. For each primary photon aimed at the double slit, one entangled photon arrives at the vicinity of D0, with apparently no way of telling which slit this photon came from. Over in the idler circuit, there are four possibilities: 1. A photon is detected at D1, which means it definitely came from Slit A. 2. A photon is detected at D3, which means it definitely came from Slit B. 3. A photon is detected at D2, which means it could have come from either Slit A or Slit B. 4. A photon is detected at D4, which means it could have come from either Slit A or Slit B. Now, let us turn our attention back to the D0 detector. This detector is placed on a track so it can be moved back and forth parallel to the positions of the slits. Charting millions of photon “hits” versus distances along the track should produce a clear wave interference pattern, since the photons originated in the beta barium borate crystals with no a priori way of telling which slit they came from. However, the raw results show only an ill-defined smudge pattern of hits, containing no information at all. It seems that this experiment is a complete failure. 
But suppose we correlate the photon hits at the D0 detector with hits from their entangled twins in the idler circuits using a coincidence recorder. When we do this, four distinct patterns emerge at D0:
1. D0 hits correlated with D1 hits show bullet-like particles coming from Slit A.
2. D0 hits correlated with D3 hits show bullet-like particles coming from Slit B.
3. D0 hits correlated with D2 hits show a “positive” wave interference pattern.17
4. D0 hits correlated with D4 hits show a “negative” wave interference pattern.18

17 A positive light pattern would be bright, dark, bright, dark …
18 A negative light pattern would be dark, bright, dark, bright …
The sum of these four patterns produces that awful smudge pattern at D0, but breaking up the smudge into correlated groups of photons reveals its true nature: Both particle and wave patterns emerge, based on the “decisions” the entangled photons made at the half-silvered mirrors in the idler circuits. Now here's the truly weird part: If you make the paths of the idler circuits much longer than the D0 path, photons will be detected at D0 before any correlated photons could arrive at D1, D2, D3 or D4. It's as if the D0 photons knew ahead of time where their entangled twins would be detected. Observations in the “present” seem to depend on events in the “future.” These results would leave Einstein scratching his head because there is simply no classical explanation for them – there is nothing happening in the idler circuit that could “disrupt” photons arriving at D0 or impart any momentum to them. This experiment is similar to John Wheeler's delayed choice thought experiment. At first blush, it seems to violate causation by sending signals backwards in time. It also seems to violate the speed of light limitation from special relativity because there is no limit, in principle, to how far away D0 can be from the idler detectors and still be affected by events taking place there. However, there is no violation of either causation or relativity because no “information” is really being sent backwards through time or instantaneously across space. (No, you can't use this kind of apparatus to instantaneously send Morse code signals across the galaxy. Darn.) The correct interpretation is this: Until and unless an observation is made, there is literally no information regarding the past. Thus, even if the idler photons are detected millions of years after the photons are detected at D0, no distinct patterns will emerge at D0 until the photons are properly correlated.
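The correlation bookkeeping described above can be illustrated with a toy Monte Carlo. This models only the statistics, not the optics: the cos²/sin² fringe rule, the bin counts, and the detector probabilities are all invented for illustration, but they show how the D2- and D4-correlated subsets carry complementary fringes whose uncorrelated sum is a featureless smudge:

```python
import math
import random

random.seed(0)
BINS = 40       # positions along the movable D0 track
RUNS = 200_000  # photon pairs

# Histograms of D0 hits, keyed by which "erased" idler detector fired.
counts = {"D2": [0] * BINS, "D4": [0] * BINS}

for _ in range(RUNS):
    pos = random.randrange(BINS)       # where the D0 photon lands
    phase = 2 * math.pi * pos / BINS
    # Toy rule: the erased idler twin fires D2 or D4 with complementary
    # fringe probabilities cos^2 and sin^2 of the phase at that position.
    p_d2 = math.cos(phase / 2) ** 2
    detector = "D2" if random.random() < p_d2 else "D4"
    counts[detector][pos] += 1

# D2 alone shows bright-dark fringes; D4 shows the complementary pattern;
# the sum of the two (the uncorrelated raw data) is a roughly flat smudge.
total = [a + b for a, b in zip(counts["D2"], counts["D4"])]
print("D2 fringe extremes:", counts["D2"][0], counts["D2"][BINS // 2])
print("summed smudge range:", min(total), max(total))
```

Note that sorting the same raw hits by idler outcome is what reveals the patterns; nothing in this bookkeeping sends a signal anywhere, which is why causation survives.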
Now here's a very important corollary to this: Once an observation is made, there is no way to “undo” the information and change history. This is because of the deep connection between information and entropy. Remember what the second law of thermodynamics says: Entropy cannot be destroyed. There is a corresponding law of information: Information cannot be destroyed. Once something happens, it happens for good. The reason backward time travel is impossible is that it violates the second law of thermodynamics.19 Another corollary is that you can't see into (or remember) the future because information about the future simply doesn't exist. Information is created in the present, permanently becoming the past. (Note that observation doesn't necessarily require intelligence. Two particles colliding is the same as an observation as far as those particles are concerned.) In the final analysis, the Quantum Eraser Experiment redefines space and our concept of past, present, and future. In our puny little 3-dimensional brains, it looks like the D0 photons and their idler twins head in different directions and exist in different places and are detected at different times. From the perspective of the photons themselves, however, there are no such spatial or temporal distinctions. What we humans see going on in space and time (at least in this experiment) is clearly an illusion. There is a much more fundamental reality concerning space and time than is visible in our workaday world. It's important to remember that quantum mechanics never violates causation or special relativity. In most instances it's invisible, hiding just below our threshold of perception. But its effects are real and they cannot be ignored or glossed over. If you ask Nature sensible questions, She will always provide sensible answers. Nonsensical answers are always the result of asking nonsensical questions. A deterministic universe is a dead universe, devoid of information and meaning.
Information can only arise by shuffling the deck and introducing uncertainty and entropy. Entropy and uncertainty are not the enemy of reality; they are the very things that give information, life, and meaning to it. Whatever physicists decide the “theory of everything” turns out to be, it will have to account for the Quantum Eraser Experiment and it will have to conform to the second law of thermodynamics. Otherwise, it will collapse in deepest humiliation (using Eddington's words). I'm convinced that any theory that is based on classical space-time or determinism in any way will fail because of this.

19 Arthur Eddington (one of the few people who understood general relativity in 1915) is famously quoted as saying, “If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations – then so much the worse for Maxwell's equations. If it is found to be contradicted by observation – well these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.”
Appendix E – The Problem of Spin

There is a problem with angular momentum – at least it's a problem for me. Let's start off with uniform motion. According to both special and general relativity, there is an exact symmetry between the uniform motions of two objects. If I say Object A moves at velocity +v with respect to Object B, then you can say Object B moves at velocity −v with respect to Object A, and we would both be right. A similar equivalence exists in general relativity between gravity and objects accelerating in a straight line. An accelerating frame of reference without a gravitational field is indistinguishable from a non-accelerating frame of reference in a gravitational field, and a freely-falling frame of reference in a gravitational field is indistinguishable from a non-accelerating frame of reference without a gravitational field.20 Here's another way the gravitational equivalence works: When a rocket ship accelerates, its occupants feel the acceleration just as if they're immersed in a uniform gravitational field pointing in the opposite direction of the rocket motor's thrust. As far as the rocket ship's occupants are concerned, the entire universe is accelerating backwards due to a gravitational field extending to infinity – except the rocket ship itself, which is held perfectly still by the thrust of the rocket motor. Distant clocks ahead of the rocket ship are seen by the occupants as being at the top of a gravitational well, and those clocks seem to run fast. Distant clocks behind the rocket ship are seen by the occupants as being at the bottom of a gravitational well, and those clocks seem to run slow.21 Although this scenario seems unlikely, it could be physically realized if there were an actual uniform gravitational field extending forever. In other words, the equivalence between gravity and acceleration makes sense logically, even if it isn't a very plausible explanation of what's really happening. So far, so good.
It's the rotational equivalence described by relativity that doesn't make sense to me. When an object spins, it knows it's spinning because every atom accelerates toward the center of rotation and there is a centrifugal force felt in the opposite direction. But unlike linear acceleration, it doesn't take a rocket motor to keep a spinning object accelerating toward the center of rotation – it just keeps spinning and accelerating forever all by itself. Now here's the part I don't get. All the literature I've read concerning general relativity states that the sensation of spinning is somehow linked to gravity and the bending of space-time. Some of these books refer to spinning as motion relative to “the fixed stars,” which is shorthand for all the mass in the universe.22 So according to this account, the spinning sensation is caused entirely by a mysterious gravitational influence on the spinning object by all the mass in the universe. This is saying that the sensation of spinning is equivalent to being surrounded by a universe that's spinning. If there were no “fixed stars” there would be no sensation of spinning. I'm having some serious problems with that equivalence. First of all, since mass is more or less uniformly spread throughout the universe, the gravitational influence of the universe (i.e., the “fixed stars”) on a spinning object would be equivalent to being surrounded by a hollow sphere having a uniform mass density. In the Newtonian world, the gravitational field inside such a hollow sphere is zero because the masses in all directions cancel out. The same thing is true in general relativity if the hollow sphere is stationary. Okay, so let's set the sphere into rotation around an object and see what happens. According to Newtonian physics, nothing happens.
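The Newtonian shell result invoked here — zero field anywhere inside a stationary, uniform hollow sphere — is easy to check numerically by summing the pull of many point masses spread evenly over a shell. A sketch with G = 1 and total shell mass 1 (the sample size and test positions are arbitrary):

```python
import math
import random

random.seed(1)

def shell_points(n, radius=1.0):
    """n points distributed uniformly over a sphere of the given radius."""
    pts = []
    for _ in range(n):
        # Uniform on a sphere: z uniform in [-1, 1], azimuth uniform.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        pts.append((radius * s * math.cos(phi), radius * s * math.sin(phi), radius * z))
    return pts

def field_at(p, sources):
    """Newtonian gravitational field at p from sources of total mass 1 (G = 1)."""
    gx = gy = gz = 0.0
    for (x, y, z) in sources:
        dx, dy, dz = x - p[0], y - p[1], z - p[2]
        r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
        gx += dx / r3; gy += dy / r3; gz += dz / r3
    n = len(sources)
    return (gx / n, gy / n, gz / n)

shell = shell_points(100_000)
inside = field_at((0.3, 0.1, -0.2), shell)   # well inside the shell
outside = field_at((0.0, 0.0, 3.0), shell)   # outside, on the z-axis
print("inside: ", inside)    # every component near zero: the pulls cancel
print("outside:", outside)   # z-component near -1/9, pointing back at the center
```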
However, according to the equations of general relativity, rotating matter-energy “out there” has peculiar effects on the local space-time “right here.” So when I spin around, it's exactly equivalent to the entire universe rotating around me and causing bending and twisting of space-time near me. I'm just an engineer who doesn't know how to solve Einstein's equations, except for the most simple

20 This can be expressed mathematically as bending space-time, although I don't think that's what reality is.
21 This is not to be confused with the “red” and “blue” Doppler shifts due to relative motions. The effect of acceleration/gravity on clocks is in addition to those Doppler shifts.
22 I dislike the term “fixed stars” intensely because nothing in the universe is fixed. I use that term here only because many other authors use it when talking about rotating bodies.
cases, so I guess I'll just have to go along with those results.23 But here's the thing that bothers me: If I'm spinning around at one revolution per second, then according to the general relativity equivalence principle, the same spinning sensation could be produced by the universe rotating around me at one revolution per second, and there would be no way in principle for me to know which scenario is causing my motion sickness. In the case of linear acceleration, I can be tricked into believing the fantasy that the entire universe except me is accelerating due to gravity. Although such a thing is implausible, it doesn't violate the principles of general relativity. But in the case of rotation, how can the sun, which is eight light-minutes away, let alone the “fixed stars” much farther away, make one revolution around me every second without going faster than light? I know that a universe rotating around me once per second is not only implausible, it's impossible according to the very principles the theory is based on; therefore, I cannot be tricked into believing that such an equivalence is true. Let's consider the hypothesis that without “the fixed stars” exerting their mysterious gravitational influence over me, there would be no sensation of spinning. This sounds more like astrology than physics, and something tells me this picture just isn't right. If it were true and I were the only object in an otherwise empty universe, I could attach rocket motors to myself and spin around as fast as I want and not feel a thing. It seems much more reasonable to suppose that angular momentum – unlike linear momentum and linear acceleration – is an intrinsic property of an object that doesn't depend on relationships with other objects or other frames of reference. In other words, a spinning object accelerates toward its axis of rotation regardless of what the rest of the universe is doing.
There is a quantum-mechanical property known as “spin.” The basic unit of spin is given by Planck's constant, h, which is equal to 6.626068 × 10⁻³⁴ m²·kg/s. For technical reasons, physicists use the so-called “reduced” Planck constant, ħ or “h-bar”, which is equal to h divided by 2π. Due to an historical accident, elementary particles turn out to have quantum spin values that are multiples of ½ ħ.24 Now notice the physical dimensions of Planck's constant: m²·kg/s. They are exactly the same dimensions as the angular momentum, L, that a pitcher imparts to a baseball when he throws a curve. Quantum physicists go to great pains to stress that their spin property ħ has absolutely nothing to do with the L of spinning baseballs, as if they're embarrassed that there's a connection between their field of study and everyday reality. In fact, quantum physicists leave out ħ altogether when talking about spin, saying that the electron has ½ spin, a photon has a spin of 1, a graviton a spin of 2, etc., as if it's a pure number. However, the units of Planck's constant describe a spinning physical object, and that fact is undeniable. Planck's constant is one of the most fundamental constants in nature – it's the most fundamental constant in quantum physics – and isn't it odd that this most fundamental constant happens to have the exact dimensions of angular momentum? I think I know why physicists are so reluctant to admit there's any connection between ħ and L: the definition of quantum-mechanical spin says that it is an intrinsic form of angular momentum carried by elementary particles; however, general relativity stresses that there are no intrinsic motions – all motions are relative, including spin. But what if that isn't true; i.e., what if spin really is an intrinsic property of a rotating object, not only for electrons but for baseballs too? I may be wrong, but this would conflict with general relativity, and it could bring down the entire edifice.
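The dimensional point can be made concrete by comparing ħ with the L of an actual baseball. The figures below are rough assumptions (mass and radius of a regulation ball, a solid-sphere moment of inertia, an invented spin rate), but the conclusion survives any reasonable numbers: a curveball carries an enormous multiple of the quantum unit, which is why quantized spin is invisible at everyday scales.

```python
import math

H = 6.62607015e-34        # Planck's constant, m^2·kg/s
HBAR = H / (2 * math.pi)  # reduced Planck constant, same units

# Rough angular momentum of a spinning baseball: L = I * omega,
# modeling the ball as a solid sphere, I = (2/5) m r^2.
m = 0.145           # kg (approximate baseball mass)
r = 0.037           # m (approximate baseball radius)
spin_rate = 30.0    # revolutions per second (a hard curveball, assumed)
I = 0.4 * m * r * r
L = I * (2 * math.pi * spin_rate)

print(f"hbar = {HBAR:.3e} m^2·kg/s")
print(f"L    = {L:.3e} m^2·kg/s")
print(f"L / hbar = {L / HBAR:.1e}")  # on the order of 10^32 quantum units
```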
That is something physicists don't like to contemplate, but it doesn't bother me at all. Engineers don't have strong emotional attachments to theories. If one theory doesn't work, we just find another one that does. As you may have guessed, I have some serious reservations about general relativity. As I've said over and over, it works pretty well when describing local phenomena and it makes better predictions than Newton's laws. I just think that general relativity gives us a distorted picture of the universe as a whole, and the reality riddle won't be solved until science abandons that theory or modifies it.

23 I'm only repeating what I've read in the literature; I lack the mathematical skills to solve Einstein's equations for a rotating universe on my own. If I've misinterpreted what I've read, I'm sorry.
24 It took quantum physicists several tries until they finally got the value of spin right.
Appendix F – A New Theory of Gravity

Newton's theory of gravity and Einstein's general relativity both try to explain the obvious fact that massive objects tend to pull toward each other. Newton's theory postulates a gravitational potential that is inversely proportional to distance, which produces a force that is inversely proportional to distance squared. Einstein's theory treats gravitational attraction as geometric properties of curved space-time. In my opinion both theories are flawed because they are classical-deterministic and background-dependent theories. Erik Verlinde has approached the question of gravity and inertia from an entirely new direction. According to his paper On the Origin of Gravity and the Laws of Newton, gravity is an emergent force that originates from entropy. He derived both Newton's equations and Einstein's equations of general relativity from entropy; there is certainly nothing new about those equations, and he will have to go much further than that if he expects his theory to gain any traction. However, I truly believe that when and if the theory of entropic gravity is fully developed, it will lead science in a new and better direction than any of the other prevailing theories being explored at the present time. If I understand the concept of entropic gravity correctly, it starts out from the holographic principle, which says that all information contained within any volume of space is actually encoded on a hypothetical surface surrounding that volume. Even mentioning the concept of a holographic universe puts most physicists' teeth on edge because it sounds all squishy and new-agey, like talking about energy vortexes and crystal therapies.
Yet even some old-school physicists, like Leonard Susskind, take the holographic principle seriously because it happens to explain the properties of black holes rather nicely.25 Verlinde begins his paper with the idea of entropic force, using forces in a polymer strand immersed in a temperature bath as his model. When you pull on the strand, it exerts a force that tends to resist straightening it. Verlinde says that this force has nothing to do with energy – it's caused simply by the arrangement of the atoms of the strand in space. Pulling on the strand and stretching it reduces the entropy of the strand's hologram, and the natural tendency of all things is to maximize entropy; hence, a force resists the change. Likewise, the reason that two masses fall toward each other is because the configuration of “togetherness” maximizes the entropy of their hologram. The paper then goes on to derive Newton's laws of inertia in the same bottom-up manner. I would urge anyone who has an interest in this subject to read Verlinde's paper for a much better and more complete treatment than I can possibly give it here. The salient point is that gravity and inertia are both emergent properties that arise naturally from thermodynamics. You don't need a hammer to pound nature into submission, forcing it to give us a theory of gravity. There is no need to use exotic strings vibrating in 10-dimensional space-time continua or incomprehensible mathematics to explain these things – they should explain themselves using rather basic and simple first principles. Pressure and temperature are also emergent properties, and just as neither pressure nor temperature exist on the submicroscopic level, neither do gravity and inertia. All four of these phenomena can be shown to be closely related to entropy, but entropy only applies to aggregate collections of particles and not to the individual states of the particles themselves. 
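Verlinde's polymer analogy has a standard textbook counterpart: an ideal freely-jointed chain resists stretching with a purely entropic restoring force, F = 3k_BTx/(Nb²) for small extensions — note that temperature, not any bond energy, sets the scale. A sketch (the chain parameters below are invented for illustration):

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def entropic_spring_force(x, n_links, b, temperature):
    """Restoring force of an ideal freely-jointed chain stretched by x.

    Small-extension (Gaussian-chain) result: F = 3 kB T x / (N b^2).
    The force is entropic: stretching reduces the number of chain
    configurations, and it vanishes as the temperature goes to zero.
    """
    return 3.0 * KB * temperature * x / (n_links * b * b)

# A chain of 1000 links of 1 nm, stretched by 50 nm at room temperature:
f = entropic_spring_force(50e-9, 1000, 1e-9, 300.0)
print(f"{f:.2e} N")  # sub-piconewton: tiny, but real and proportional to T
```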
There are many critics of Verlinde's work who view any deviation from the orthodoxy of relativity and quantum field theory as heresy, punishable by excommunication from the scientific community. They argue that gravity is a reversible process, whereas entropy isn't. By their logic, a reversible process like gravity simply cannot emerge from an irreversible process like entropy. Well, pressure is also a reversible process and it is clearly related to entropy. We certainly don't need curved 4-dimensional space-time to explain pressure or temperature, and we shouldn't need it to explain gravity either.

25 Susskind (the Father of String Theory) gave some very convincing lectures on the holographic principle as it relates to black holes. Lately, he's been sidetracked as an advocate of the anthropic principle. In 2004 he engaged in a heated email exchange with Lee Smolin, who argued that the anthropic principle is not science.
Appendix G – Trying to Erase Relativity

I recently watched a video of a lecture on string theory by Leonard Susskind at Stanford University. He introduced the concept of strings with a thought experiment involving a box of elementary particles that are moving relativistically.26 He then boosts the box to nearly the speed of light in the z-direction of space. What happens next is very interesting. According to Susskind, as seen from our frame of reference, the particles start behaving “classically” in the other two dimensions, the x-dimension and y-dimension. (By classically, he means according to Newtonian physics.) It's as if he magically erased relativity from the picture with a wave of his marker pen. He then went on to lecture about strings as both relativistic and classical objects; but strings really aren't the point of what I have to say. After I thought about this lecture for a while, I had a deep insight. Instead of a box of particles, let's imagine a room full of ordinary objects (which could include people) that are all moving relativistically with respect to each other.27 We can make the room as big as it needs to be. If we boost the room to just under the speed of light in the z-direction, almost all of the “available” motion is “used up” in the z-direction, so there is very little available motion left for the other two directions. From our perspective, the Lorentz transformation slows down time and everything shrinks in the z-direction, making the room slower, flatter, and more 2-dimensional. What happened was that we (almost) eliminated time in that room by replacing its dimension with the z-dimension.28 But have we eliminated relativity and created a classical world? Well, everything in the room does move a lot slower in the x- and y-directions. But so does light, which makes even these slow motions seem “relativistic” in the room.
According to special relativity, light cones project from all objects into time (the future and the past) in Minkowski space-time; events not inside an object's light cones are “unknowable” to that object. When everything is boosted in the z-direction, those light cones are in the z-direction, and they become very narrow. Also, information sent from one person takes a very long time to reach another person. Now, you'd say that's just a matter of time scaling, so doing the reverse Lorentz transformation would speed everything back up in our reference frame. That's true, but all of the special relativistic effects are still there in the boosted room because we can still see them from our reference frame; also, when we reverse the Lorentz transformation, those effects return in all their glory. We didn't eliminate any special relativistic effects in the room by boosting it in the z-direction. What about gravity? Well, the Lorentz transformation increases all the masses in the room. The increased masses also show up as increased inertias in the x-y plane, which makes it harder to change the motions of objects in those directions. Now, gravity isn't really too important in a room full of ordinary objects, but we can increase the size of the room to include the solar system, where gravity plays a bigger role. Let's boost the room to nearly the speed of light in the direction perpendicular to the plane of the planets' orbits. We'll stick to good ol' Newton's theory of gravity and inertia, because it works pretty well in the solar system. Let's see what happens (I promise to keep the math very simple). First, let's make the planets' orbits circular because that's easier. The centrifugal force of a planet revolving in a circle is equal to mv²/r, where m is the planet's mass, v is the planet's velocity, and r is the radius of the circle. The gravitational attraction between the sun and the planet is equal to GMm/r².
Here, M is the mass of the sun and G is the universal gravitational constant. By setting the centrifugal force equal to the force of gravity and solving for velocity, you get v = √(GM/r). Notice that the mass of the planet (small m) disappeared from the formula. Now let's boost the solar system in the z-direction to around 0.994 times the speed of light. The Lorentz transformation increases the masses of the sun and

26 Susskind is a particle physicist. Like most particle physicists, he looks at the universe as collections of elementary particles.
27 Actually, everything always moves relativistically. We just don't notice relativistic effects because ordinarily they're too small to observe.
28 Remember that according to special relativity, all objects “fall through” Minkowski space-time at the speed of light. Ordinarily, our “fall” is mainly in the time dimension, but the boosted room is moving very fast in the z-direction instead. Time consumes the z-dimension and we wind up with one less dimension to worry about.
the planets by a nice, round factor of 9. The sun and planets turn into flat pancake-shaped objects, but the radii of the orbits in the x-y plane don't change. Let's assume that Newton's laws in “Pancakeland” are the same as everywhere else. When you plug the new mass of the sun, 9M, into the formula for v, you see that the orbital velocities increase by a factor of √9 = 3. Adding the effect of time dilation will divide the new v by 9, reducing it to 1/3 of its original value. On the other hand, the other velocities in “Pancakeland” are reduced to 1/9 of their original values. My point is that in the boosted solar system, gravity-induced velocities actually increase relative to other velocities; so the effects of gravity certainly don't go away. In summary, to the extent that any special or general relativistic effects exist in the non-boosted room, boosting the room close to light speed won't make those effects disappear or even diminish them. Those effects must still be considered whether the room is boosted or not. But we've only gone to 99.4% of the speed of light, so why not “go all the way” and see what happens when we boost the room to 100% of the speed of light? This should erase relativity because all motions would cease and time would end. There would be a timeless, static room where nothing ever changes. Well, as every high school physics student knows, you can't do that. The standard reason is that the masses of all objects in the room would become infinite at the speed of light, requiring the addition of infinite energy to make that happen. But there's a twist to this story that's much more interesting. Transforming a system of objects from one reference frame into another is the same as data mapping, which I discussed a little bit at the end of the main essay. You can transform data about those objects concerning their positions, velocities, etc., into a new space.
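The arithmetic of this boost is easy to check: the Lorentz factor at 0.994c, the √9 rise in orbital speed from the ninefold solar mass, and the overall ninefold time dilation. A sketch using Newtonian circular orbits, as in the text, with Earth's orbit as the example:

```python
import math

G = 6.674e-11        # gravitational constant, m^3/(kg·s^2)
M_SUN = 1.989e30     # solar mass, kg
R_EARTH = 1.496e11   # Earth's orbital radius, m

def gamma(beta):
    """Lorentz factor for speed beta (as a fraction of c)."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def orbital_v(mass, radius):
    """Circular-orbit speed from m v^2 / r = G M m / r^2, i.e. v = sqrt(GM/r)."""
    return math.sqrt(G * mass / radius)

g = gamma(0.994)
print(f"gamma(0.994c) = {g:.2f}")  # ~9.1, the 'nice, round factor of 9'

v0 = orbital_v(M_SUN, R_EARTH)                 # ~29.8 km/s for Earth
v_boosted = orbital_v(g * M_SUN, R_EARTH) / g  # sqrt(gamma) up, then gamma down
print(f"orbital v: {v0:.0f} -> {v_boosted:.0f} m/s, ratio {v_boosted / v0:.3f}")
# The orbital ratio is 1/sqrt(gamma) ~ 1/3, while non-gravitational velocities
# scale by the full 1/gamma ~ 1/9: gravity-induced motion gains, relatively.
```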
The new data may look different, with altered positions, velocities, and so on, in the new space; the original data are still present, just encoded in a different way. Now here's the kicker: if you reverse the transformation, you should get the original data back and recover exactly the same configuration of objects you started with. If you don't, your transformations aren't right. That's what led me to a very deep insight. Remember Claude Shannon's statement: information equals entropy. Since the second law of thermodynamics says that entropy cannot be destroyed, there's an immediate corollary: information cannot be destroyed. Remember, even at a tiny ε below the speed of light, objects in the boosted room still move very slowly in the x-y directions. But right at the speed of light, time stops completely. Boosting a room to the speed of light loses all information concerning x-y motion, and reversing the data mapping won't retrieve any of it. Entropy would be destroyed. If we tried to return "Pancakeland" to normal by reversing the Lorentz transformation, the planets would just hang motionless around the sun, which makes no sense. It seems that whenever we try to defeat entropy, the universe somehow conspires to stop us. Nature increases the masses of moving objects in order to stop us from destroying entropy, implying that mass, inertia, and gravity are fundamentally connected with information through entropy.29 Entropy and information keep poking their noses into reality in unexpected ways. If scientists adopt a different model of the universe, they will have to account for all the information presented to us in this universe. If someone creates a timeless model of the universe, information concerning time and motion here must be encoded into that universe, even if it takes extra dimensions to do it.
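The round-trip argument above can be illustrated with a toy Lorentz boost along z. This is my own illustrative sketch, not the essay's code: for any β < 1 the mapping is invertible and the original data come back, but at β = 1 the Lorentz factor diverges and the transformation becomes singular, which is where the motion information would be lost.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def boost(t, z, beta):
    """Lorentz boost of an event (t, z) to a frame moving at v = beta*c along z."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (t - beta * z / C), gamma * (z - beta * C * t)

def inverse_boost(t, z, beta):
    """Reversing the data mapping is just boosting with -beta."""
    return boost(t, z, -beta)

event = (1.0, 7.0e8)                    # an event: (seconds, meters)
mapped = boost(*event, 0.994)           # the data look different in the new frame
recovered = inverse_boost(*mapped, 0.994)  # but the round trip restores them

# Right at beta = 1, gamma = 1/0 and the mapping blows up: there is no
# inverse, so the information it carried could never be recovered.
singular = False
try:
    boost(*event, 1.0)
except ZeroDivisionError:
    singular = True
```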
If someone creates a theory of the universe without gravity or relativity, information concerning the effects of gravity and relativity in our universe must still be encoded into that model somehow. There are no shortcuts: omit some data from your theory, and the universe will conspire against you and render the theory invalid. Entropy is a stern master. Getting back to Susskind's video lecture that started this whole thing off, I can't comment on the validity of boosting a box full of particles and treating them classically within the context of string theory. It's probably perfectly okay to do that in string theory.30 I only mentioned his lecture because it triggered some insights about how information seems to be inextricably linked to the reality riddle.

29 Refer to Erik Verlinde's entropic theory of gravity and inertia, discussed in Appendix F.
30 I must admit that string theory is still way over my head, even after I watched the lecture.
