Order, Chaos, and the End of Reductionism
The author presents a case against reductionism based on the emergence of chaos and order from underlying non-linear processes. Since all theories are mathematical, and based on an underlying premise of linearity, the author contends that there is no hope that science will succeed in creating a theory of everything that is complete. The controversial subjects of life and evolution are explored, exposing the fallacy of a reductionist explanation, and offering a theory of order emerging from chaos as being the creative process of the universe, leading all the way up to consciousness. The essay concludes with the possibility that the three-dimensional universe is a fractal boundary that separates order and chaos in a higher dimension. The author discusses the work of Claude Shannon, Benoit Mandelbrot, Stephen Hawking, Carl Sagan, Albert Einstein, Erwin Schrödinger, Erik Verlinde, John Wheeler, Richard Maurice Bucke, Pierre Teilhard de Chardin, and others. This is a companion piece to the essay "Is Science Solving the Reality Riddle?"

Published in: Technology
  • 1. Order, Chaos and the End of Reductionism (Further Ruminations of an Amateur Scientist) By John Winders z′ = zⁿ + c
  • 2. Note to my readers: You can access this essay and my other essays directly instead of through this website, by visiting the Amateur Scientist Essays website at the following URL: You are free to download and share all of my essays without any restrictions, although it would be very nice to credit my work when making direct quotes.
  • 3. The image below was generated by cellular automata. The pattern evolves downward from an Alpha Point at the top of the image. Each pixel in a row is defined by the neighboring pixels in the preceding row by following simple rules of modulo-2 arithmetic. Modulo-2 arithmetic is highly non-linear, and non-linear processes produce order and chaos. Projecting the top-to-bottom evolution as a 2-dimensional image, complicated large-scale order seems to emerge from simple localized processes. The image below was generated by the Mandelbulb Generator computer program. The surface surrounding this strange object is the boundary that separates order from chaos. Points inside the surface represent order (included in the Mandelbrot set) and points outside the surface represent chaos (excluded from the Mandelbrot set). Order and chaos thus mirror each other.
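A rule of this kind is easy to sketch in a few lines of Python. The following is a minimal illustration of one modulo-2 rule (each new cell is the sum mod 2 of its two neighbors, wrapping at the edges); it is not necessarily the exact rule used to generate the image:

```python
def step(row):
    """One generation of a modulo-2 cellular automaton: each cell
    becomes the sum mod 2 (i.e., XOR) of its two neighbors."""
    n = len(row)
    return [(row[(i - 1) % n] + row[(i + 1) % n]) % 2 for i in range(n)]

# Start from a single 'on' cell (the Alpha Point) and evolve downward.
width, generations = 31, 15
row = [0] * width
row[width // 2] = 1
for _ in range(generations):
    print(''.join('#' if cell else '.' for cell in row))
    row = step(row)
```

Run it and a Sierpinski-triangle-like pattern grows down the page: large-scale order from a purely local rule.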
  • 4. The image below is the barred spiral galaxy NGC1300 taken by the Hubble space telescope. The color rendering was inverted to produce the color-on-white image. The large-scale order is largely a result of interactions involving gravity and inertia. According to reductionist thinking, entropy can only produce randomness and disorder. Erik Verlinde has discovered that gravity and inertia both emerge from entropy. Thus, a post-reductionist interpretation of this image is the balance between the tendency of gas molecules to fly apart and the tendency for them to collapse; both of these tendencies are driven by a single entropic force. The image below is an actual photograph of a DNA strand. DNA has the most highly-organized naturally-occurring structure known; current scientific theories based on reductionism cannot fully explain it.
  • 5. The image below is the famous painting “The Great Wave off Kanagawa” by the Japanese artist Katsushika Hokusai. It captures the essence of order from chaos. Notice the self-similarity and scale-invariant features of the breaking wave, which are the fundamental properties of fractals. Also notice the similarity between the rising wave in the foreground and the snow-covered mountain in the background. Hokusai was keenly aware of the fractal-like patterns found throughout nature. This raises the prospect that these patterns are reflections of fractal properties of space itself. It is possible to mathematically construct a Mandelbrot set using quaternions; the set would be a finite 4-dimensional solid enclosed by a fractal boundary having three dimensions with an infinite volume. Could our 3-dimensional space be a fractal-like boundary that separates order from chaos in a higher dimension? The image below is the strange stationary hexagonal feature at the north pole of Saturn taken by the Cassini orbiter in 2012. It was first seen in the 1980s by the Voyager flyby missions. An unknown self-organizing mechanism is responsible for sustaining the formation. (Credit: NASA)
  • 6. This image captures natural order and chaos that spring within the fractal boundary we live in. The chaotic water jet splashing over the urn gives way to the orderly laminar flow down along the sides. The same fundamental law, which maximizes the total degrees of freedom of the universe, governs the laws of fluid dynamics and the self-organizing principle expressed in the plants and flowers that surround the urn.
  • 7. Note: The drawing on the cover is an example of Penrose tiling, generated by a computer program provided by Craig S. Kaplan of the University of Waterloo in Ontario, Canada. This particular example was generated by varying the program's parameters until they were almost at the borderline of order and chaos. This essay is a companion piece to Is Science Solving the Reality Riddle? (Cogitations of an Amateur Scientist). I considered adding yet another appendix to Reality Riddle, but repeatedly fooling around with it was starting to get ridiculous. So I decided to encapsulate some ideas in a separate piece instead (this one). In case you're interested in knowing the genesis of these ideas, I suggest reading over Reality Riddle first. I'll start off with a dictionary definition of reductionism:
re·duc·tion·ism
1: explanation of complex life-science processes and phenomena in terms of the laws of physics and chemistry; also: a theory or doctrine that complete reductionism is possible
2: a procedure or theory that reduces complex data and phenomena to simple terms
— re·duc·tion·ist noun or adjective
— re·duc·tion·is·tic adjective
I'd like to concentrate on the second definition first. The basic idea is that the whole is equal to the sum of its parts. I'm what you might call an anti-reductionist, because I think the whole is greater than the sum of its parts, and usually it's a lot greater. Unfortunately, the “hard” sciences, such as physics and chemistry, and almost all of engineering fall into the reductionist camp. This started back before Isaac Newton, but he was the one who really gave it legs. Scientists knew that planets revolved around the sun before Newton, and they even had a pretty good idea of how they moved. They just didn't have a clue as to why they moved the way they did. Johannes Kepler accurately described planetary motions in a set of three laws, but he was a little fuzzy about why these laws are true.
Oh, he did have a theory, described in a document called the Mysterium Cosmographicum, which seems to be a weird mixture of Platonism, astrology, Biblical doctrine, and maybe even alchemy. But that doesn't resemble anything like a sound theory according to modern physics. Then in 1687, Newton came up with his laws of motion and gravity, which he published in his Philosophiæ Naturalis Principia Mathematica, or just Principia for short. He even invented the calculus to help scientists and engineers work with his theories.1 Way to go, Sir Isaac! The big breakthrough came when Newton realized that the same laws that govern apples falling on the Earth also apply to the motions of the Moon and the planets. This also reinforced the idea that natural processes can be described by mathematics, specifically linear equations, and more specifically differential equations. This idea became an obsession among scientists, and reductionism hinges on the notion that nature obeys mathematics; however, I think it's more accurate to state that mathematics sometimes mimics nature. Since Newton's time, science and mathematics have been inextricably linked. Every breakthrough in mankind's understanding of nature has been accompanied by a scientific theory couched in the language of mathematics. Today, it's the other way around: mathematics is leading science by the nose, and it's virtually impossible to express scientific thought in any language other than
1 Actually, he co-invented the calculus along with Gottfried Wilhelm Leibniz, whose notation was adopted by mathematicians and is the standard way calculus is taught in high schools and colleges. Newton accused Leibniz of plagiarism, even though Leibniz published his version first.
  • 8. mathematics. I feel that this is becoming a stumbling block of science.2 There was great scientific progress in the early part of the 20th century, beginning with Albert Einstein's theory of special relativity and the quantum theory of light in 1905, followed up by his theory of gravity expressed by general relativity in 1915. Einstein's breakthrough with the quantum theory of light was further developed by a notable cast of characters beginning roughly in the 1920s.3 I'm not going to repeat the well-documented history of these events, other than to point out that relativity and quantum mechanics came at reality from completely different directions, and are in many ways incompatible with each other. This led Einstein and others to try to merge or “unify” quantum physics with general relativity. So far, these attempts have been completely unsuccessful. In my opinion, the problem of unification lies mainly with general relativity, because it is still a classical-deterministic theory.4 Experiments have shown time and again that reality does not obey classical-deterministic rules. As I stated often in Reality Riddle, general relativity is a good conceptual tool that describes many phenomena very accurately on fairly small scales, as long as the “curvature” of space-time isn't carried to extremes. The mathematics begins to fall apart – as indicated by infinities and time anomalies that pop up – when it is (mis)applied to extreme gravitational conditions or when trying to “solve” the state of the entire universe. Physicists believe that the unification of general relativity with quantum field theory will ultimately result in a Quantum Theory of Gravity. That theory requires a hypothetical elementary particle known as the graviton – the force carrier of gravity. So far, this particle has not been seen in the wild, but its quantum-mechanical properties are pretty well established.
Its range is infinite and it must travel at the speed of light, so it can't have any mass, and in order to fit into the standard model, it must be a spin-2 boson.5 One of the strange things about gravity is that it cannot be shielded or blocked. If you stand behind a wall of solid lead – or a solid wall of anything, for that matter – the force of gravity will go right through it. So the graviton must also have infinite penetrating power, which is a nearly unique property among elementary particles. Unfortunately, coming up with a quantum theory of gravity involves a lot more than just plugging a graviton into quantum field theory, or turning gravitons loose to zoom around in 4-dimensional space-time. As I stated in Reality Riddle, there seems to be a problem with properly incorporating rotation into general relativity, which might actually point to a bigger problem. Einstein apparently believed that there are no inherent, qualitative differences between rotating objects, which have centripetal acceleration, and objects that accelerate in straight lines. But I suspect there really are qualitative differences between them. For one thing, an object that accelerates in a straight line needs to be pushed by something else; otherwise it just stops accelerating.6 A rotating object, on the other hand, accelerates centripetally without any help from the outside. That's one qualitative difference. Another qualitative difference is that linear acceleration is equivalent to a gravitational field; however, there doesn't seem to be any plausible gravitational equivalence to centripetal acceleration. My suspicion is that the failure to recognize these inherent, qualitative differences resulted in an incomplete theory. This causes anomalies like backward time travel when the general
2 This was one of the basic themes in Is Science Solving the Reality Riddle? (Cogitations of an Amateur Scientist).
3 These included Einstein himself, along with Niels Bohr, Max Born, Satyendra Nath Bose, Louis de Broglie, Arthur Compton, Paul Dirac, Werner Heisenberg, David Hilbert, Enrico Fermi, Max von Laue, John von Neumann, Wolfgang Pauli, Max Planck, and of course Erwin Schrödinger.
4 It is also very much a reductionist theory, which is another fallacy.
5 Mass particles, such as electrons, protons, and neutrons, are fermions. They have spins that are odd multiples of ½ and they obey Pauli's exclusion principle. Force carrier particles, such as photons and gluons, are bosons. They have spins that are zero or even multiples of ½ and they don't obey Pauli's exclusion principle.
6 An accelerating rocket pushes on the gas escaping the rocket nozzle. The gas pushes back on the rocket according to Newton's third law of motion, causing it to accelerate.
  • 9. relativity field equations are solved for cases where there are spinning motions. Here's another clue: the fundamental constant in quantum mechanics is the reduced Planck constant, ħ. This constant has units of angular momentum, or spin. The energy of a body in periodic motion is quantized, as given by the formula ΔE = ħω, where ω is the angular frequency of oscillation. Planck's constant also shows up in Schrödinger's wave function, which is a periodic function. Periodic motions and spin are closely related. Therefore, it seems that spin is the one ingredient that automatically provides quantization, and I have a hunch that a quantum theory of gravity might emerge naturally if spin could be properly incorporated into general relativity and baked into it from the very beginning. At the end of the 19th century, the Industrial Revolution had transformed the western world, science and mathematics had triumphed, and it appeared that nothing further could be invented or discovered. This was the prevailing reductionist fantasy, expressed earlier in the century by the physicist Pierre-Simon Laplace: “Consider an intelligence which, at any instant, could have a knowledge of all forces controlling nature together with the momentary conditions of all the entities of which nature consists. If this intelligence were powerful enough to submit all this data to analysis it would be able to embrace in a single formula the movements of the largest bodies in the universe and those of the lightest atoms; for it nothing would be uncertain; the future and the past would be equally present to its eyes.” By Laplace's time, science had pretty much worked out the movements of the largest bodies in the universe and those of the lightest atoms, thanks to Newton's laws. So all that needed to be done was to collect the momentary conditions of all the entities (plugging in the boundary conditions) and turn the crank. Past, present, and future would be revealed in all their glory.
Of course, the remarkable progress in the early 20th century laid waste to the naive notion that there was nothing left to discover or invent. But in the early 21st century, it's déjà vu all over again. Some scientists actually think that unifying quantum theory with relativity – possibly through string theory – is the only piece of the puzzle that's missing. Like the intelligence in Laplace's fantasy, finding the missing puzzle piece will reveal the entire past, present, and future: how the universe began in minute detail, its entire evolution, and how it will end. It might even reveal the origin of life itself. Well, here's what I think: when and if a unified theory is unveiled, it won't be the end of science, but it very well might be the end of reductionism. I'll now try to explain the reasoning behind that statement. First, it will be helpful to give a very broad overview of the two physical theories that scientists are attempting to merge. Einstein's theory of relativity can be expressed by the following mathematical equation, which links the curvature of space-time to the concentration of mass-energy as follows:
Rμν – ½R gμν + Λgμν = (8πG/c⁴) Tμν
This is called the Einstein field equation. I'm not going to attempt to explain exactly what each of the terms means, other than the fact that Rμν, gμν, and Tμν are what are known as tensors. Tensors are geometric objects that express linear relationships among objects, in this case in four dimensions. Thus, Einstein's field equation is somewhat similar to an ordinary linear differential equation. But despite its apparent economy, it is devilishly difficult to solve except for the simplest cases. Quantum mechanics can be similarly summarized by a single equation, known as the
  • 10. time-dependent Schrödinger equation, shown below:
iħ ∂Ψ/∂t = –(ħ²/2m) ∇²Ψ + VΨ
Again, I'm not going to explain all of the terms, other than to say that it is a second-order differential equation of the variable Ψ, which varies over time and distance; i.e., it's a wave. The wave itself has no physical meaning – it simply “exists” in space and time.7 Yet this immaterial wave mysteriously orchestrates the movements of all material objects, from electrons to planets. The Schrödinger equation, like Einstein's field equation, is linear. What is meant by “linear”? Well, the equation z = x + y is linear because the value of z is simply the sum of its parts, x and y. Both the Einstein equation and the Schrödinger equation are linear because the components simply add. If two wave functions, Ψ1 and Ψ2, were to overlap in space, the resulting wave function would be the sum of the two because space is presumed to be linear. You would get an interference pattern, but you could still decompose the pattern into its constituent parts. This also makes it possible to apply mathematical tools, such as Fourier analysis, which are used to break down complicated functions into sums of much simpler functions such as sine waves. The equation z = x² + 2xy + y² is nonlinear because the whole, z, is not the sum of its parts, x and y. If space were nonlinear, two overlapping wave functions would combine in ways that would make it impossible to decompose the resulting wave into its parts. This would render most situations unanalyzable. In order to have any chance of analyzing a physical process mathematically, the process must be linear. Therefore, all physical processes that scientists analyze are assumed to be linear. String theorists call the ultimate theory of reality M-Theory, although nobody really knows what the M stands for. For lack of a better name, I'll stick to the term M-Theory as well.
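The distinction between z = x + y and z = x² + 2xy + y² can be checked numerically. Here is a minimal Python sketch of the additivity test:

```python
# A formula is linear (in this additive sense) if evaluating it on the
# sum of two inputs equals the sum of evaluating it on each input alone.
def f_linear(x, y):
    return x + y

def f_nonlinear(x, y):
    return x**2 + 2*x*y + y**2   # equals (x + y)**2

x1, y1, x2, y2 = 1.0, 2.0, 3.0, 4.0

# The linear formula superposes: the whole is the sum of its parts.
print(f_linear(x1 + x2, y1 + y2) == f_linear(x1, y1) + f_linear(x2, y2))   # True

# The nonlinear formula does not: 10 + 58 != 100.
print(f_nonlinear(x1 + x2, y1 + y2) ==
      f_nonlinear(x1, y1) + f_nonlinear(x2, y2))                           # False
```

The nonlinear formula fails the test because the cross term 2xy couples the parts together – exactly the kind of coupling that defeats Fourier-style decomposition.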
It's almost certain that M-Theory must be expressed mathematically, since pure mathematics is the only driving force behind it at the moment. This means that no matter what form M-Theory takes, the underlying assumption is that reality is linear. But what if it isn't? In that case, although M-Theory might successfully describe many things, it won't describe everything, which was the original purpose for developing it in the first place. But if that's true, then physicists will discover to their horror that M-Theory was actually a dead end. They will have no choice but to scrap the notion that reality is linear or that it can be expressed through mathematics, or at least using the kinds of mathematics we presently use. In other words, scientific principles will change in significant ways, forcing us to abandon reductionism and look for other kinds of answers. Now, saying that reality is nonlinear is a pretty sweeping statement, but I'm convinced it's true. The simple reason is that there is order in the universe, and order can only arise naturally through nonlinearity. We kind of take order for granted, but it's really a very deep mystery, because according to the second law of thermodynamics, order shouldn't exist at all. First, we need to explore the concept of entropy. When James Watt invented the steam engine around 1765, he didn't have a clue about thermodynamics. He just knew that steam makes pressure and by condensing steam you make a vacuum; and if you put pressure on one side of a piston or a vacuum on the other side, you can make the piston move back and forth; and you can make a moving piston turn a wheel by using rods. Scientists started to study heat analytically, and they conjured up a bunch of laws they called thermodynamics. The second law of thermodynamics
7 The wave function Ψ is expressed as a complex variable, having a real and an imaginary part. Its conjugate, Ψ*, changes the sign of the imaginary part from a plus to a minus or vice versa.
The product ΨΨ* is a real number, and that does have a physical meaning: it's the probability density function of a particle, or the likelihood of finding the particle within a given region of space and time.
  • 11. states that heat always flows from hot objects to cold objects. Well, duh. That sort of seems obvious to most people, but it has some very significant ramifications. Scientists in the late 18th and early 19th centuries became obsessed by steam, for good reason, because steam had completely transformed their civilization by ushering in the Industrial Revolution. They studied steam from every possible angle and calculated all of its properties, including temperature, pressure, enthalpy, and a mysterious property known as entropy. In 1803, Lazare Carnot came up with the notion that all physical systems have a tendency to lose useful energy. The concept was further developed by his son, Sadi, who viewed the production of work by a heat engine as coming from the flow of a substance called caloric, like the flow of water through a waterwheel. In the ideal Carnot cycle, the system is returned to its original state, so the cycle is theoretically reversible. When a process is reversible, the entropy of the system remains constant, but if a process is irreversible, some of its ability to do work is lost and the entropy increases. Increasing entropy → decreasing ability to do work. When heat flows from a hot object to a cold object, it is an irreversible process and entropy increases. In a reversible process like the ideal Carnot cycle, entropy stays constant. But in neither case does entropy decrease. Thus, the second law of thermodynamics can be stated as follows: “In an isolated system, entropy never decreases.” In 1877, Ludwig Boltzmann came up with a way to express entropy as a statistical property, which became the modern way of working with entropy. He defined entropy as the logarithm of the number of states a system can have, times a constant known as the Boltzmann constant.
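Written out, Boltzmann's definition is:

```latex
S = k_B \ln W
```

where W is the number of microscopic states available to the system and k_B is the Boltzmann constant, about 1.38 × 10⁻²³ joules per kelvin.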
The second law of thermodynamics is just another way of saying that all physical systems tend to move toward their most probable states, which shows up as increased entropy. Viewed in that context, entropy can be thought of as measuring disorder or randomness. This led to a very depressing state of affairs, however. Physicists soon realized that the entropy of the entire universe is increasing, which means that the universe is constantly winding down. This ultimately will lead to a condition known as “heat death.” This doesn't mean that heat will vanish; it only means that the universe will reach a state of thermodynamic equilibrium where heat can no longer produce useful work. But this doesn't just apply to heat; it applies to everything. Stars will burn up all their nuclear fuel, all radioactive materials will decay, and everything will be in a perfect state of equilibrium and maximum entropy where nothing ever changes. The prospect of heat death as the ultimate fate of the universe is a direct result of reductionism. Based on the underlying assumption of linearity, where the whole is equal to the sum of its parts, there can be no other outcome. The second law of thermodynamics is relentless, driving the universe to a bland, featureless, and dead state. In fact, a reductionist universe is dead already. But a reductionist universe is also contrary to the obvious fact that order does, in fact, exist in the universe. So where does order come from? Surprisingly, it comes from the very same processes that produce chaos. Order and chaos are actually twins, although they're fraternal and not identical. I'll explain all that a little further ahead. But how do order and chaos relate to entropy? More specifically, how can order arise when the second law of thermodynamics states that entropy, or disorder, always increases? Well, actually, viewing entropy as simply disorder is somewhat of a misconception.
In the 1940s, Claude Shannon developed the modern theory of information.8 After studying information in detail, he came up with the astounding conclusion that information and entropy are really the same thing!
8 Shannon's work at Bell Labs followed his work on code decryption during WWII. The people at Bell Labs were interested in sending signals through noisy channels, which tend to corrupt signals. Through clever encoding, Shannon proved it was possible to send signals error-free as long as the information rate is kept below a certain threshold. This led to error-correcting codes, making modern communication systems and computers possible.
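Shannon's information measure has the same logarithmic shape as Boltzmann's entropy formula. A minimal Python sketch of the entropy of a probability distribution, in bits:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit of information per toss...
print(shannon_entropy([0.5, 0.5]))   # 1.0
# ...while a heavily biased coin carries less (about 0.469 bits),
# because its outcome is more predictable.
print(shannon_entropy([0.9, 0.1]))
```

The more improbable (and therefore surprising) an outcome is, the more information its occurrence conveys – which is exactly the sense in which information and entropy mirror each other.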
  • 12. This leads to an interesting corollary to the second law of thermodynamics, namely that information cannot be destroyed. Physicists, led by Stephen Hawking and Leonard Susskind, have concluded that entropy is “hidden” information. I'm not sure I agree with the “hidden” part, but I guess they have their reasons for saying that.9 I have a slightly different interpretation. Information is constantly being created in the Now, which becomes permanently stored as the Past. We sense the passage of time as information being added to the universe. You could think of the Past as a filing cabinet being filled with information, but that information can only be perceived in the Now. The Future is nothing but an empty filing cabinet with no information in it at all, so our sense of the Future is merely a mental extrapolation based on what has already taken place and what is taking place. So only the Now truly exists, which represents the totality of all changes taking place, influenced by the Past. Shannon showed that information is fairly easy to quantify, drawing similarities with Boltzmann's formula for entropy. The hard part is assigning a qualitative value to information. Information is neither “good” nor “bad,” but certain kinds of information seem more meaningful than other kinds. I think that is where order and chaos come into play. Creationists argue that evolution isn't possible because it would violate the second law of thermodynamics. In the face of entropy, how could life forms have arisen, becoming more and more complex over time, unless they were created and fashioned by a conscious and willful divine Entity? Reductionists like Carl Sagan argued that life arose through a random process; if atoms keep banging into each other over a sufficiently long time,10 they will eventually form DNA molecules. If you keep randomly shuffling a deck of cards, it will eventually arrange itself in perfect ascending order.
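For a sense of just how improbable that shuffling outcome is, the arithmetic is easy to do:

```python
import math

# Number of distinct orderings of a 52-card deck: 52 factorial,
# about 8.07e67 -- comparable to the number of atoms in our galaxy.
orderings = math.factorial(52)
print(orderings)

# Probability that any single random shuffle lands in perfect
# ascending order: one chance in 52!.
print(1 / orderings)
```

So “eventually” is doing a great deal of work in the reductionist argument, which is part of what motivates looking for self-ordering processes instead of pure chance.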
Could random natural processes possibly account for the incredible complexity of life? Reductionism says yes. To me, the creationism argument is a false dichotomy. It's not a choice between increasing order and increasing entropy; both can increase together, and in fact they actually do just that. Think of a river that flows downhill due to the force of gravity alone. Imagine that the river bed is filled with rocks, logs, and other debris and that the river banks are very uneven. The general flow of the river is always downhill, but you will see eddies and whirlpools here and there. For the most part, those eddies and whirlpools don't move downstream. In fact, some of them may even move upstream momentarily. Now would you say those eddies and whirlpools defy the law of gravity? Of course not. The water molecules always move downhill, but the features of the river don't necessarily have to. The gravitational force is actually what causes those features to form in the first place, along with the highly nonlinear process known as fluid turbulence. Turbulence produces unpredictable, chaotic motions that somehow arrange themselves into stable, ordered features. Very mysterious, no? Entropy, order, and chaos work in much the same way. Entropy is the “engine” that keeps the whole process moving. Yes, the system as a whole (the universe) will tend to move toward the most probable state, thereby increasing its entropy. But although the universe began in a very improbable, low-entropy state and is currently winding down, nonlinear processes abound in nature. These processes create chaos, which is completely unpredictable. And it is chaos that nudges the universe into creating the beautiful order and structure seen everywhere. The universe isn't “dead.” It's very much alive and it's engaged in an incredibly rich and diverse creative process. Well, how does this process actually work? What's the mathematical equation that governs it?
Well, I'm afraid I can't describe the process through a single equation – maybe nobody can. But I
9 This came about by studying what happens when objects are dropped into black holes. If all information about them is erased, then this would violate the second law of thermodynamics. Hawking and Susskind concluded that the information isn't lost; it becomes encoded or “hidden” as entropy on the black hole's event horizon.
10 Or as Sagan would say, “After billions and billions of years.”
  • 13. can describe some examples of how this process can work on paper. Benoit Mandelbrot was a brilliant engineer/mathematician who spent much of his career studying how order comes from chaos, although he didn't describe it quite that way. He published his results in Fractals: Form, Chance and Dimension. His ideas were not widely understood by the scientific community, at least initially. But his was a case of someone who was very much ahead of his time; thanks to Mandelbrot, fractals have become a vibrant field of study. Here's one of the ways the process works. Take the formula z′ = z² + c. The first thing we note is that the expression on the right side is nonlinear, owing to the z² term. The z′ (z prime) stands for a new value of z based on the old value of z on the right side of the equation. Thus, the formula also contains feedback. The value of c is a number that we want to test using the formula. Next, we let z′, z, and c be complex numbers. Now don't get scared or flustered by that. It just means that each of them has a real and an imaginary part. Using the rules of complex algebra, we can rewrite the formula as two separate formulas:
z′ (real) = z (real) × z (real) – z (imaginary) × z (imaginary) + c (real)
z′ (imaginary) = 2 × z (real) × z (imaginary) + c (imaginary)
Now, we can plot complex numbers as points on an x-y graph: the real parts correspond to x values and the imaginary parts correspond to y values. We pick a real and an imaginary value for c, say (0,0), and plug it into the formula to calculate z′. We feed z′ back into the equation as z, and repeat the calculation over and over. Now one of two things will happen: a) the value of z′ will become chaotic and zoom out of the x-y plane, or b) the value of z′ will settle down to a nice, predictable set of values that keep repeating. If a) happens, then c is thrown out. If b) happens, then c becomes part of the Mandelbrot set, and we plot the real and imaginary parts of c on our x-y graph.
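The iteration just described can be sketched directly in Python; complex arithmetic is built into the language, so the two real-valued formulas collapse back into a single line. This is a bare membership test, without the plotting that produces the familiar pictures:

```python
def in_mandelbrot_set(c, max_iter=100, escape_radius=2.0):
    """Iterate z' = z**2 + c starting from z = 0. The point c is treated
    as stable (in the set) if |z| never escapes the radius within
    max_iter steps, and chaotic (excluded) if it does."""
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c              # the nonlinear feedback step
        if abs(z) > escape_radius:
            return False           # chaotic: c zooms off, thrown out
    return True                    # stable: c belongs to the set

print(in_mandelbrot_set(0 + 0j))   # True  -- (0,0) settles down
print(in_mandelbrot_set(1 + 0j))   # False -- iterates escape to infinity
```

Coloring each excluded point by how quickly it escapes is what produces the elaborate halos around the set in typical renderings.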
Over many trials involving different values of c, a distinct and very beautiful 2-dimensional pattern emerges. The pattern is a fractal that has very unusual properties. I'm not going into those properties here,11 but the point is this: the formula that is used to generate the fractal creates both chaos and order at the same time. The “chaos” consists of unstable numbers that are not part of the set; the “order” consists of stable numbers that are part of the set. Chaos and order, Yin and Yang.

11 I discussed them in more detail in Reality Riddle.

The process of making a fractal is a type of self-ordering process. People who have studied self-ordering processes have identified three necessary conditions: 1) the system cannot be in a state of equilibrium, 2) there must be at least one degree of freedom, and 3) nonlinearity must be present. It is almost certain that all three of these conditions exist in the universe. The first two are obvious: the universe is certainly not in a state of thermodynamic equilibrium because entropy is still increasing, and there are at least three degrees of freedom present in the very space that things occupy. The only necessary ingredient we're not quite sure about is the nonlinearity. But the very fact that self-ordering processes seem to be taking place is a very good indication that nonlinearity is an underlying feature of our universe. This feature simply cannot be described using linear equations, so the self-ordering process is not amenable to ordinary mathematical expression or analysis.

In case you are inclined to think that fractals have no relationship to reality, you may want to observe nature more closely. Fractal-like objects are ubiquitous, from the veins in a leaf, to a head of broccoli, to mountain landscapes, to ocean waves breaking on rocks. Even the rhythm of your heart is a fractal pattern as a function of time.

How do these things arise? Well, many self-organizing processes are very local in nature but lead to highly organized structures on very large scales. This kind of process is called a cellular automaton. Here's an example of how this works: suppose there is a row of boxes, each of which can be either full or empty. Now we add a simple rule for each box: if the two neighbors on either side are either both full or both empty, then the box becomes empty; otherwise the box becomes full. Now fill some of the boxes and watch the row “evolve” one step forward using that rule. The process is repeated over and over, and as the rows evolve, complex large-scale patterns emerge from one very simple rule applied on a very local scale. You could try this yourself using the cells of a spreadsheet.

This brings up the very controversial subject of the evolution of life. Biochemists have now successfully “sequenced” the entire human genome. Every gene consists of a sequence of so-called letters imprinted on a strand of DNA. These letters form a code, which instructs the cell what to do, but more importantly determines whether the cell is part of a plant or animal, and what kind of plant or animal it is. There is no question that a person's genes determine many of his or her physical attributes, from eye color to hair texture, height, bone structure, etc. This is obvious simply by looking at family resemblances, especially between identical twins. However, the big question is how the letters imprinted on the DNA strands shape the individual. The biochemists say the genes simply tell the cells which proteins to produce and that some genes are turned on while others are turned off. Well, that's not much of an explanation.
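The row-of-boxes game described above can be tried in a few lines. A minimal sketch, assuming the boxes beyond both ends of the row are always empty (the rule as stated – empty if the neighbors match, full otherwise – is the XOR of the two neighbors, known as elementary cellular automaton rule 90):

```python
# Evolve a row of full (1) / empty (0) boxes by one local rule:
# a box becomes empty if its two neighbors match, full otherwise.

def step(row):
    """One evolution step; cells beyond the edges count as empty."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(row) + 1)]

row = [0] * 15 + [1] + [0] * 15   # one full box in the middle
for _ in range(8):
    print(''.join('#' if b else '.' for b in row))
    row = step(row)
```

Starting from a single full box, the printout traces out a Sierpinski-triangle pattern – a fractal, grown from one purely local rule.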
How does a liver cell know it resides in the liver, where it's supposed to be making liver enzymes, instead of in the big toe, where it wouldn't have to do much of anything? An embryo starts out as a single cell, which divides many times before the individual cells begin to branch out as nerve cells, bone cells, skin cells, etc. Where is the “template” that tells each cell where it is in relationship to all the others? Well, the creationists have a ready answer for that: God tells the cells what to do and when to do it. That sounds very unscientific, but I'm afraid the reductionists don't have much of an answer either, based on the model of a dead, reductionist universe.

However, the principle of cellular automata might explain how a complicated structure like a human being could arise from each cell knowing who its neighbors are and following simple rules written in the code letters of its DNA. I'm not saying that's exactly how it happens, but I'm saying that it could be close to the truth. Could cellular automata explain how life originated in the first place? Well, I don't know, but it's certainly more plausible than atoms banging into each other and forming life by sheer luck. It also avoids having to invoke a special one-time creation event as the cause. The boundary between life and non-life seems to be rather sharp. However, the study of chaos shows that boundaries between linear and chaotic behavior are often very sharp too. So it certainly seems possible, and even plausible, that life was initiated in a sudden, chaotic manner from non-life.

Science has been pushing God into the gaps ever since Newton, and maybe even before him. Each time some phenomenon was explained by a natural law or process, there was less and less room for a supernatural explanation. Now I realize that this theory of chaos and entropy may push God still further into the gaps. Is there any room left for Her at all?
Of course there is, and I think there's a lot more room for Her compared to a reductionist philosophy based on random chance alone. Think of the ramifications of all this: God could have simply willed creation into existence ready-made, complete with stars, planets, plants, animals, and people, just like it says in Genesis. Or She could have designed a universe that started out in a completely formless, uniform, and highly improbable state: a complete void with zero entropy, but with a strong propensity for creating chaos and order out of nothing, and absolutely no way for Her to predict exactly how the whole thing would end up. Then She could just sit back, let the whole thing unfold in front of Her, and really enjoy the show. Now I ask: if you were God, which kind of universe would you choose to create?
Appendix A – Order is in the Eye of the Beholder

One of the books that inspired this essay was The Cosmic Connection by Paul Davies. There is one paragraph on Page 109 that's worth quoting: “Information theorists have demonstrated that 'noise', i.e. random disturbances, has the effect of reducing information. (Just think of having a telephone conversation over a noisy line.) This is, in fact, another example of the second law of thermodynamics; information is a form of 'negative entropy', and as entropy goes up in accordance with the second law, so information goes down. Again, one is led to the conclusion that randomness cannot be a consistent source of order.”

Well, this doesn't quite jibe with the information theory I learned in graduate school, or with what I know of Claude Shannon's work. As far as I know, there is no such thing as “negative entropy,” and I think Stephen Hawking would agree with me that information doesn't “go down” – ever. He and Leonard Susskind refer to entropy as just “hidden information,” and I guess I could sort of go along with that. But the point is that entropy and information are essentially the same. I think there's a common misconception that entropy lacks any information just because it's random. Randomness contains the same quantity of information as non-randomness, because a random state is just as unique as a non-random state. However, randomness does seem to lack a quality we call order, which we need to define. I'll try to clarify these distinctions through a simple example.

Suppose you're sitting across the table from an alien from the Alpha Centauri system and you each have a deck of cards. Your deck is the standard 52-card variety: four suits, each with deuces through tens, three face cards, and an ace. Now you shuffle the deck about a dozen times and start drawing cards one at a time, and notice that they're all in order! You keep drawing and they keep coming out in order.
So your heart's pounding and you're getting all excited, and you start to sweat. And then there are only two cards left: the king and ace of spades. Could the next card be the king, making all 52 cards come out in perfect order? That would be one chance in 52!, or about one chance in 8 × 10^67. You draw the next two cards and they're the king followed by the ace! The alien just stares at your cards and shrugs its shoulders. To it, those cards are just showing random symbols.

Now the alien gets out its deck of 53 cards, which have all sorts of weird hieroglyphs printed on them. In fact, each card has a completely unique symbol on it because its species uses a base-53 number system. The alien shuffles its deck a number of times and starts drawing cards. To you, the cards appear to be in random order with no discernible pattern whatsoever because each card has a unique symbol printed on it. But you notice the alien is getting very nervous and excited as it draws down the deck. Near the end of the deck, the alien is so excited it can't even hold itself together. It draws the last card and faints dead away. You look at the cards on the table, and they just look like a pile of random hieroglyphs. But to aliens from the Alpha Centauri system, the arrangement of those cards has meaning: all 53 cards came out in perfect order in their base-53 number system.

You see, strictly from information theory, there is nothing really special about any arrangement of cards versus any other. They're all equally probable. No matter what arrangement you dream up, you would have to shuffle the deck about 10^68 times for there to be a decent probability of that arrangement coming up by chance. This is how I started to change my thinking about entropy, information, order, and chaos. Entropy and information are quantitative measurements, whereas order and chaos are qualitative measurements. It's actually very hard to define what order is. It's like beauty – you know it when you see it.
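The arithmetic behind the two decks can be checked in a couple of lines (the 8 × 10^67 figure is just 52 factorial):

```python
# Quick check of the odds quoted above: the number of possible
# orderings of each deck is the factorial of its size.
import math

orderings_52 = math.factorial(52)   # your deck
orderings_53 = math.factorial(53)   # the alien's deck

print(f"52! is about {orderings_52:.1e}")   # ~8.1e+67
print(f"53! is about {orderings_53:.1e}")
```

Either way, every one of those orderings is exactly as probable as the “perfect” one that made you sweat.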
You might define order as information with chaos removed, but then you would have to define what chaos is. Yin and yang.

Here's another analogy.12 Suppose you're building a giant wall of bricks, say 1,000 bricks wide by 1,000 bricks high. There's a huge pile of bricks lying at the construction site. There are two kinds of bricks: some have white 0s painted on them and others have black 1s painted on them, so you could think of those bricks as information. You call over your assistant, whose name happens to be Claude Shannon, and ask him, “Hey Claude, how much information is over there in that pile of bricks?” Claude counts the bricks and informs you there's one million bits of information. Now before you start building the wall, you decide it would be nicer to create a pixelated copy of the “Mona Lisa” using the 0s and 1s instead of just randomly laying the bricks next to and on top of one another. So that's what you do; and after you finish, you stand back and admire your version of the “Mona Lisa,” and ask, “Hey Claude, how much information is in those bricks now?” You'd think he'd be so impressed by your work that he'd say there are a couple of billion bits up there. But Claude simply counts the bricks and tells you there are one million bits of information in the wall. You see, Claude doesn't appreciate art, so to him, every arrangement of 1s and 0s is just like any other. What you should be asking him is how much order (or lack of chaos) is in those bricks.

12 I just love analogies, don't you? But some people, like my daughters, don't seem to like my analogies very much.

Generating pseudorandom number sequences is similar to generating chaos. There are methods that measure the “statistical complexity” of pseudorandom numbers generated by algorithms, as described in a paper entitled Intensive Statistical Complexity Measure of Pseudorandom Number Generators, by H.A. Larrondo, C.M. González, M.T. Martin, A. Plastino, and O.A. Rosso. According to my “new” way of thinking about order and chaos, Larrondo et al. may have stumbled on a way to measure order indirectly by measuring chaos. Maybe it's an equation like this:

Order = Information – Chaos

I think the only truly random processes are “natural” ones, especially quantum processes such as radioactive decay.
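Claude's stubborn count can be reproduced with Shannon's formula. A sketch, assuming each brick is one symbol and the information is computed from symbol frequencies alone – which is exactly why any rearrangement gives the same total:

```python
# Shannon information of a string of 0s and 1s depends only on the
# symbol frequencies, so rearranging the bricks into a "Mona Lisa"
# leaves the total untouched.
import math
import random

def shannon_bits(bits):
    """Total information in bits: -sum over symbols of n_i * log2(p_i)."""
    n = len(bits)
    total = 0.0
    for symbol in set(bits):
        count = bits.count(symbol)
        total -= count * math.log2(count / n)
    return total

wall = [random.randint(0, 1) for _ in range(10_000)]
mona_lisa = sorted(wall)            # any rearrangement you like

print(shannon_bits(wall) == shannon_bits(mona_lisa))  # True
```

The counts of 0s and 1s are identical before and after the rearranging, so Claude's answer never changes.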
In the famous “Schrödinger's cat”13 thought experiment, the process that triggers the release of cyanide and kills the cat is the decay of a radioactive material placed near a Geiger counter. Apparently, Schrödinger realized that a pseudorandom number generator just wouldn't cut it in that experiment because it wouldn't be random enough. Now you might say that there's no real difference between an algorithm that generates random numbers and a radioactive decay process that generates 0s and 1s. But there is. Albert Einstein thought that quantum processes, like radioactive decay, were like little machines that are programmed to spit out beta or alpha particles every so often. He called the programming “hidden variables.” He challenged his nemesis, Niels Bohr, on this by publishing a paper in 1935. He said that Bohr's version of reality – quantum uncertainty – was bogus.14 Well, it turns out that experiments performed in the 1980s proved Einstein was wrong and Bohr was right, so Bohr got the last laugh; or he would have if he and Einstein had still been alive by then.15

When I was in the army, I saw some super-secret radio transceivers that scrambled (encrypted) human voices. The encrypted transmissions received by an ordinary radio sounded like noise, as if you were listening to Niagara Falls. But it wasn't random noise at all; it was really chaos. The information in the message wasn't diminished – it can't be – but the circuitry changed ordered {silence + human voices} into chaos. Those secret transceivers must have used pseudorandom number generators, because the process was completely reversible, so the receivers could change the chaos back into ordered {silence + human voices} again. The whole science of breaking secret codes – Shannon's area of expertise in WWII – depends on the reversibility of the encryption process. In principle, every code can be broken – with a sufficient amount of brute force – because they all use reversible algorithms.
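The reversibility those transceivers relied on is easy to demonstrate with a toy scrambler. This is only a sketch of the principle – a seeded pseudorandom keystream XORed into the message, with a made-up seed – not a description of the actual army hardware, and certainly not a cipher to use for anything real:

```python
# A seeded pseudorandom generator produces a keystream; XOR-ing the
# message with the same keystream twice restores it exactly.
import random

def scramble(message: bytes, seed: int) -> bytes:
    rng = random.Random(seed)        # same seed -> same keystream every time
    return bytes(b ^ rng.randrange(256) for b in message)

noise = scramble(b"silence + human voices", seed=1944)   # sounds like chaos
voice = scramble(noise, seed=1944)                       # same seed undoes it

print(voice)  # b'silence + human voices'
```

Because the keystream comes from an algorithm, anyone who recovers the seed can rerun it and undo the scrambling – which is exactly why such codes are breakable in principle.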
I think a completely unbreakable code would have to scramble messages using random numbers from an irreversible process like radioactive decay. But then nobody would be able to unscramble the messages, including the people who are supposed to receive them. So there are even qualitative differences between chaos generated by reversible processes and chaos generated by irreversible processes, although it's pretty hard to tell the two apart.

13 It's also known as the “Fluffy experiment,” named after Schrödinger's pet cat, Fluffy. Just kidding.
14 Actually, he wasn't quite that rude. He just politely asked whether or not Bohr's theory was “complete.”
15 I covered Bell's inequality experiments in excruciating detail in my essay Reality Riddle.
Appendix B – The Ice Box Conundrum

Whenever I think about entropy, I always come back to the same ice box problem that sticks in my head. Say you have a perfectly insulated box with food items at room temperature, and you want to cool the food down in a hurry so it won't spoil. You go to a store where they sell dry ice (frozen CO2), bring a chunk of it home, stick it in the box, and close the lid. Now an expert in thermodynamics will say that you disrupted the thermal equilibrium of the box at room temperature by putting a cold chunk of dry ice in it. In other words, you opened the system to the outside and lowered its entropy by forcing it into an unnaturally ordered state: {warm food + cold ice}. Now over time, heat will flow from the food into the dry ice, which makes some of the CO2 evaporate. This confirms the second law of thermodynamics as it was originally stated: heat flows from bodies at higher temperatures to bodies at lower temperatures. If the box is perfectly insulated, the amount of heat energy inside stays the same, but the entropy increases because a gas has more entropy than a solid. What this means is that the number of “microstates” of the system has increased while that elusive property we call “order” has decreased. Eventually, the food and the dry ice will reach thermal equilibrium, where everything is at the same temperature. This maximizes the number of microstates the system can occupy, which maxes out its entropy.

Suppose the box is not only perfectly insulated but also perfectly sealed. If not all the CO2 evaporates, there's still some dry ice in the box, and all the original CO2 molecules are still in there. Now here's the part that bothers me: most textbooks that discuss entropy say there's always some probability that a system in thermal equilibrium could spontaneously go from a disordered state into some highly ordered state. They say the probability might be vanishingly small, but it could happen.
In other words, there's some minuscule probability that all the CO2 gas molecules could suddenly decide to refreeze and dump heat back into the food, returning the system to its original state. Since dry ice that spontaneously refreezes is exactly the same as the dry ice you put there originally, the entropy of the entire system would have to go back to its original low value. The authors of the textbooks wave their hands around and say, “Don't worry, this won't happen because the number of microstates is unbelievably large, so the probability of going all the way back to square one is vanishingly small.” But this just won't cut it, because vanishingly small is still greater than zero, so it still could happen; yet the second law of thermodynamics says it simply can't happen. Period. This is what I call the ice box conundrum.

I thought about this for a long time, and I think I came up with a solution. When rolling dice, it doesn't matter whether you roll one die a million times or roll a million dice all at once. Either way, the probabilities of the dice coming up certain ways are the same, because all rolls are statistically independent. In other words, previous rolls don't change the probabilities of future rolls. This is different from the changes happening inside the ice box. As each CO2 molecule vaporizes, the number of possible microstates increases, so entropy increases gradually; here, the probabilities do change depending on what state the system is in. The system can't get from the initial low-entropy state to any of the high-entropy equilibrium states in one giant leap, because those states aren't reachable from the possible low-entropy microstates. Pathways to those states have to open up first.

Here's why going in the reverse direction wouldn't work. In the textbook version of a system in equilibrium, the system jumps around from one state to another; all states are equally probable and each jump is statistically independent from all the others.
So in theory, the system could jump all the way back to its original low-entropy state in one jump like rolling all the dice at once. But a real physical system like the ice box can only move into the states that are available to it. Unlike dice rolls, the moves are not statistically independent. If a tiny pathway to a lower-entropy state opens up, it soon closes again before it can be filled. A few CO2 gas molecules might refreeze from time to time, but no permanent pathway is open for the system to get back to its original state.
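The distinction between the two kinds of moves can be sketched in code: re-rolling everything at once can land anywhere in one step, while a state-constrained system can only wander through neighboring states. A toy sketch, assuming “neighboring” means at most one die changes per step (my own simplification, not a real molecular model):

```python
# Independent re-rolls versus state-constrained moves.
import random

def independent_jump(n_dice):
    """Re-roll all dice at once: any state is reachable in one step."""
    return [random.randint(1, 6) for _ in range(n_dice)]

def constrained_step(state):
    """Only one die may change per step: neighboring states only."""
    new = list(state)
    new[random.randrange(len(new))] = random.randint(1, 6)
    return new

state = [1, 1, 1, 1]
jumped = independent_jump(4)        # could be anything, even [1, 1, 1, 1]
stepped = constrained_step(state)   # differs in at most one position

print(sum(a != b for a, b in zip(state, stepped)) <= 1)  # True
```

A return to the starting configuration in the constrained system requires a whole chain of open intermediate states, not one lucky jump.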
Appendix C – The Post-Reductionist Universal Law

Newton's laws, special and general relativity, and quantum theory all have something in common: they all hinge on fields. Newton saw nothing wrong with action at a distance, so he didn't bother to postulate a field in his theory of gravitational attraction between two masses; his equations spoke for themselves. But others who followed him made sure to add a gravitational field. Einstein explained gravitation as space-time curvature, which can also be interpreted as a disturbance of the space-time field. Quantum mechanics is based on the Schrödinger wave function, Ψ, which is a kind of field, although nobody is sure what Ψ really is. Modern quantum field theory, which produced the standard model of elementary particles, proposes many different kinds of fields. The elementary particles are knots in those fields: individual electrons are knots in the electron field, individual quarks are knots in the quark field, etc. The vacuum isn't empty; it's filled with fields of every type and description, including the all-pervasive Higgs field, with virtual particles popping in and out of existence as a result of quantum fluctuations in those fields. Nobody yet knows what string theory, or M-theory, will come up with, but I'm sure new fields will be in it. The one thing that's lacking in all of this is a unifying law or principle that makes everything hang together.

Some scientists, past and present, have proposed a different way of thinking. I'll call this the “post-reductionist” view. Whereas reductionism views the whole (the universe) as being the sum of its parts (a linear superposition of all fields throughout space), post-reductionism is a holistic view that proposes there is a unifying law or principle that expresses itself through the action of the parts. Pierre de Fermat and Joseph-Louis Lagrange were two pioneers of this philosophy.
Fermat proposed that the path taken by a light ray is the path that minimizes the transit time. Physicists generally reject that notion, favoring the wave theory of light to explain refraction, although they have to admit that Fermat's conjecture does work. Reductionist thinking doesn't allow for light rays to seek out paths that minimize transit times. Instead, light waves are influenced locally by the optical properties of the media through which the waves propagate, and the waves themselves are electromagnetic fields governed by Maxwell's equations.

One of Lagrange's ideas was the principle of least action, where moving objects follow paths that minimize the total “action” summed over time. Lagrange defined the action using the quantity

Action = Kinetic Energy – Potential Energy.

Suppose you're on the ground and throw a ball to your friend standing on a flat roof. You want to know what path the ball follows, knowing only its initial velocity and the location of your friend. Applying Lagrange's method, you would express the incremental action, dS, in terms of the ball's mass, m, its horizontal and vertical distances from you, x and y, and the gravitational acceleration, g, over a time interval, dt:

dS = { ½ m [(dx/dt)^2 + (dy/dt)^2] – mgy } dt

The path of the ball, expressed as the function y(x), is found by minimizing the integral of dS over the total time it takes the ball to go from you to your friend, which of course you don't know ahead of time. Actually doing the Lagrange computation is fiendishly hard, taking up several pages of very difficult calculations. What you end up with is a parabola: y(x) = Ax – Bx^2, where A and B depend on the ball's initial velocity. Now you might ask why any person with a sane mind would go to all that trouble when you could just use Newton's laws of motion and come up with the same result with a few lines of relatively simple calculus?
Well, you wouldn't use Lagrange's method for this particular problem, but the fact that it actually works provides some deep insights about the universe. Richard Feynman's high school physics teacher showed him this, and it made a deep and lasting impression on him. In fact, his quantum field theory uses a methodology that is closely related to Lagrange's least-action principle.

Instead of going through all the excruciating pain of calculating the Lagrange integral, you could approach the ball-tossing problem another way. Start out by drawing a straight line between you and your friend and calculate the total Lagrange integral by summing the actions at all the points times the increments of time it takes the ball to go between the points. Then move the points one at a time (except the points where you and your friend are standing) up and down just a little and see whether those movements increase or decrease the total action. If a movement decreases the action, keep moving the point in that direction; otherwise go the other way. If you keep doing this over and over, you eventually reach a point where no little movements will reduce the action any further. That's the path.

The part that impressed Feynman so much was the fact that the ball seems to “know” the “best” overall path to follow. This is a very holistic approach to the problem of ball throwing. Instead of gravity tugging on the ball and changing its velocity ever so slightly, the ball just “knew” where to go. Now this sounds absurd, but Feynman used this approach to explain the famous double-slit experiment in quantum mechanics. In his interpretation, a particle doesn't blindly follow a single path through the slits. Instead, it first explores every possible path through the slits at the same time, and it then “chooses” the one path with the highest probability based on some fundamental principle.

Using a Lagrangian approach, let me propose my post-reductionist universal law: “Every change maximizes the total degrees of freedom of the universe.” The first element of this law involves change. Without change, the law wouldn't make any sense. The second element is holistic.
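The point-jiggling recipe above can be carried out numerically. A sketch, assuming made-up numbers (a 1 kg ball, a 2-second flight, both endpoints flattened to the same height for simplicity) and the discretized action S = Σ [ ½m(Δy/Δt)² − mgy ] Δt, with the uniform horizontal motion left out:

```python
# Pin both endpoints, then nudge each interior point up or down,
# keeping any nudge that lowers the total discretized action.

G, M, T = 9.8, 1.0, 2.0        # gravity (m/s^2), mass (kg), flight time (s)
N = 21                         # points along the path
DT = T / (N - 1)               # time per segment

def action(y):
    """Discretized Lagrange integral over the whole path."""
    s = 0.0
    for i in range(N - 1):
        v = (y[i + 1] - y[i]) / DT             # vertical velocity on segment i
        s += (0.5 * M * v * v - M * G * y[i]) * DT
    return s

y = [0.0] * N                  # initial guess: a flat path, endpoints pinned
step = 1.0
while step > 1e-6:
    improved = False
    for i in range(1, N - 1):                  # never move the endpoints
        for dy in (step, -step):
            trial = y[:]
            trial[i] += dy
            if action(trial) < action(y):      # keep only improving nudges
                y = trial
                improved = True
    if not improved:
        step /= 2                              # refine the nudge and retry

# Least action should reproduce the Newtonian parabola y(t) = (g/2) t (T - t),
# whose peak is g*T^2/8 = 4.9 m at mid-flight.
print(abs(max(y) - G * T * T / 8) < 0.1)       # True
```

No force law appears anywhere in this code, yet the relaxed path is the same parabola Newton's F = ma produces – which is the point that so impressed Feynman.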
It implies that everything, from elementary particles, to baseballs, to planets, knows its place in the entire scheme of things and how to maximize the total degrees of freedom of the universe.16 Not only that, everything will act accordingly. Remember the example of the ice box in Appendix B? Well, as soon as the dry ice was placed in the box, heat energy began flowing from the warm food to the cold dry ice. As this occurred, the food got colder and lost some degrees of freedom; however, as a result, the total degrees of freedom increased. As CO2 molecules absorbed heat from the food, they evaporated, creating many more degrees of freedom for the CO2 molecules than were lost by the food molecules. In other words, the food molecules slowed down and gave up some of their degrees of freedom for the greater good of the universe. How did they know how to do that? That's the great mystery.

Before going further, let's find out how many degrees of freedom typical things have. Entropy has been a well-known quantity for well over 100 years, and it has been measured accurately. The entropy of one kilogram of steam at a pressure of one atmosphere and a temperature of 100ºC has been measured at 7.35 kJ/K. Boltzmann's entropy formula17 is

S = k log W

where W is the number of degrees of freedom and k is Boltzmann's constant, which is a very small number: 1.38 × 10^-26 kJ/K. (Boltzmann's formula properly uses the natural logarithm; taking the logarithm base 10 instead, as I do here, only changes the count of digits by a factor of ln 10 ≈ 2.3, which makes no difference at this scale.) Rearranging the formula,

W = 10^(S/k)

Plugging in the values for S and k, we see that W for a kilogram of steam at 100ºC is equal to a 1 followed by over 10^26 zeros – not just 10^26, mind you, but a 1 followed by 10^26 zeros!! This is just an insanely large number. Entropy isn't just a byproduct of time; it's really the driving force behind creation. It's easy to see how creating more degrees of freedom makes gas expand, but most people don't think of that as much of a “creative” process.
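The steam number can be checked directly: with W = 10^(S/k), the exponent S/k is itself the count of zeros. A quick sketch using the values quoted above:

```python
# How many zeros follow the 1 in W for a kilogram of steam at 100 C?
S = 7.35        # measured entropy, kJ/K (value quoted in the text)
k = 1.38e-26    # Boltzmann's constant, kJ/K

zeros = S / k   # since W = 10**(S/k), this exponent counts the zeros
print(f"W is a 1 followed by about {zeros:.2e} zeros")   # ~5.3e+26 zeros
```

Computing W itself as a floating-point number would overflow instantly; reporting only the exponent is the practical option, which is itself a hint of how large the number is.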
If that were all entropy did, it would turn everything into random nothingness – and entropy does have a very bad rap sheet in that regard. But there's much more to it than that: entropy may actually be pulling everything together too.

16 “Degrees of freedom” sounds less sinister than “entropy.” However, maximizing one maximizes the other.
17 This formula is carved on Boltzmann's tombstone.

Erik Verlinde has come up with an amazing theory that says that gravity is caused by entropy. I can't really do justice to this theory, so I strongly recommend reading On the Origin of Gravity and the Laws of Newton on his web site. Verlinde's theory is based on the holographic principle that Leonard Susskind and Stephen Hawking discovered through studying black holes. Every finite volume of space containing mass-energy has a finite number of degrees of freedom (microstates). This number is determined by the Bekenstein bound.18 Verlinde says that when mass-energy is distributed over the finite microstates, it produces a temperature, a macroscopic property. Multiplying that temperature by the increase in entropy that occurs as two bodies come together equals work, and it's the same quantity of work gravity does on those bodies according to Newton's law. Verlinde believes this is no mere coincidence. Instead, some fundamental law of maximizing entropy is forcing the bodies to come together. The force is manifested as Newton's gravitational force. He says, “The holographic principle has not been easy to extract from the laws of Newton and Einstein, and is deeply hidden within them. Conversely, starting from holography, we find that these well known laws come out directly and unavoidably. By reversing the logic that led people from the laws of gravity to holography, we will obtain a much sharper and even simpler picture of what gravity is. For instance, it clarifies why gravity allows an action at a distance even when there is no mediating force field.” So we've come full circle, from Newton's action at a distance, to field theories, and finally back again to action at a distance. I don't think this is the entire story, however.
The law stating, “Every change maximizes the total degrees of freedom of the universe,” may explain a lot of things, including the forces found in current field theories.19 But even if entropy turns out to be the driving force behind it all, I still don't think it's the only creative mechanism in the universe; alone, it doesn't account for all the order and structure found everywhere. We need another ingredient for order (and chaos), and I believe that ingredient is a strong local nonlinearity that permeates everything. One source of local nonlinearity could be quantum interactions. The quantum properties of things are binary for the most part: spin up, spin down, positive charge, negative charge, etc. When there are quantum interactions, information is exchanged – a quantum computation of sorts. Modulo-2 arithmetic is highly nonlinear, and so are feedback processes. We saw earlier how cellular automata can create structure and order, and this phenomenon may be occurring at the sub-atomic level through quantum interactions. Maybe modulo-2 arithmetic and feedback take place during quantum interactions. But this is getting very speculative, so I'll stop right here.

This is an entirely new way of thinking about reality, and it needs a lot more work to flesh it out as a good scientific theory. Unfortunately, there aren't enough minds working on it right now. Breaking the prevailing deterministic-reductionist paradigm will be almost as tough as it was for 16th-century astronomers to overturn Ptolemaic gobbledygook. But at least 21st-century scientists only have to worry about losing their research grants, not about being burned at the stake for heresy. One thing is abundantly clear, at least to me: reductionism is dead, or at least its days are numbered. If and when the Theory of Everything is found, I think scientists will be astounded by the utter simplicity of the universal law that governs it, and by the amazing complexity that emerges from such a simple law.
18 The Bekenstein bound gives the maximum number of degrees of freedom, expressed as entropy: S ≤ 2π k R E / ħc, where R is the radius of a sphere enclosing the volume and E is the mass-energy (expressed as energy) inside the volume. The constants k, ħ, and c are Boltzmann's constant, the reduced Planck constant, and the speed of light.
19 Obviously, it should produce results that are consistent with current theories; otherwise, it wouldn't be a very good law. But it should also explain those results in a more fundamental way than the current theories do.
Appendix D – Introduction to Radical Post-Reductionism: Wheelerism Quantum mechanics clashed with Newtonian physics and relativity right from the beginning. Even some of the scientists who ushered in quantum theory, such as Erwin Schrödinger and especially Albert Einstein, began to have misgivings when they realized the full ramifications of what they had wrought. On the opposite side, Niels Bohr and his Copenhagen crew weren't particularly bothered by the fact that something could be in multiple places or in multiple states at the same time. In 1935 Erwin Schrödinger proposed his famous cat experiment, where a live cat is placed in a sealed box along with a Geiger counter that triggers a release of deadly cyanide gas.20 A sample of radioactive material emitting beta particles is placed near the Geiger counter. The radioactive atoms have a known half-life, and based on their proximity to the Geiger counter, there is exactly a 50% probability that the Geiger counter will be activated within ten minutes. Everything is sealed up nice and tight so nothing, not even the sounds of a dying cat in agony, can escape the box, and a 10-minute timer outside the box is started as soon as the box is sealed. After ten minutes the box is opened to see whether the cat is alive or dead. The question is: during those fateful ten minutes, what was the state of the cat? Was it dead, alive, both, or neither? At first, this sounds like a really dumb question, because how could a cat be both dead and alive or neither? Most people would say that if the cat is alive after opening the box, it was alive the whole time, and if it is dead, then it started out alive and became dead at some point before the box was opened. But that's not how quantum physics works.
You see, the strict Copenhagen interpretation is that the Geiger counter, the cyanide, and the cat are all sealed in the box where no information can get out, so they're all included in the same quantum wave function that keeps the radioactive atoms in a superposed state of decay and non-decay. A measurement must be made to see whether any of them decayed or not. In this interpretation, the cat is both alive and dead until the box is opened and an observation (measurement) is taken. Then the entire wave function “collapses” and the cat is either still alive or it becomes dead at that moment.21 The real question is whether the cat itself counts as an observer. Now cats are pretty smart animals, but presumably they're not as smart as humans.22 If you accept a cat as being a valid observer, the wave function would collapse before the box is opened. But in that case, which kind of animal wouldn't count as an observer? A rabbit? A snake? A fish? A slug? A bacterium? This highlights the problem known in quantum-mechanical circles as “the measurement problem.” John Wheeler was among a new breed of thinkers who solved the measurement problem in a pretty radical way: he stated that history doesn't exist until we create it. We make the whole thing up when we observe things. It's as if dinosaurs didn't really exist until someone dug up their fossils.23 I call this philosophy “Wheelerism.” To prove his point, Wheeler came up with all sorts of interesting thought experiments, including one based on the famous double-slit experiment, called “Wheeler's delayed-choice experiment.” A form of the delayed-choice experiment was actually carried out in a lab, and it did seem to validate the notion that the present influences the past.24 Of course, Wheeler has many critics in reductionist circles who argue that you can't really show that history doesn't exist by doing a lab experiment – the time delays are too short.
In response to that criticism, he imagined a much bigger experiment called “Wheeler's astronomical experiment.” In 20 There is absolutely no evidence that Schrödinger actually did this experiment on a live cat. If he had, these questions might have been answered by now. 21 An old-fashioned reductionist wouldn't buy any of this, but that way of thinking is passé, as we have seen. 22 This is debatable. Most of the cat owners I know are very well trained, and cats must have a superior intelligence in order to train humans so successfully. 23 Some creationists deny the existence of dinosaurs even after dinosaur fossils are dug up. However, they are not to be mistaken as Wheelerites. 24 This apparent paradox is explained fully in Reality Riddle.
this experiment, a very distant star emits light that travels toward the Earth. A very massive object, such as a black hole, sits between the star and the Earth. This object forms a gravitational lens, allowing light from the star to take two completely different paths to the Earth – it's like the double-slit experiment on steroids. Now depending on how you decide to detect the light from that star, you can either get an interference pattern from light going in both paths around the lens at the same time (making it a wave), or you can use a telescope to see which path the light took (making it a particle). Either way, you're creating your own version of history because the light passed the lens billions of years earlier. Although Wheeler's astronomical setup does give you plenty of time for making delayed choices (billions of years), the main problem with it is that you need to work with one photon at a time for the experiment to prove anything. I think it would be pretty hard to get a star to emit one photon at a time, and that would also make the star awfully dim, so I don't see much chance of anyone actually carrying out this experiment. There's one aspect of Wheelerism I really do like, although I wouldn't quite carry it to the extremes some do. I'm referring to the “it from bit” conjecture. As I stated often in Reality Riddle, it seems plausible, and even likely, that everything we observe in the universe is essentially just information – a dataverse. But that's not all. According to Wheeler, reality has two distinct parts that are modeled after computer technology: hardware (the “it” part) and software (the “bit” part). The software component consists of observers like us, and that is the “real” part of real-ity. The hardware component (the “-ity” part) is just the nuts and bolts, like electrons, quarks, gluons, gravity, etc. The software controls the hardware; without software, the computer is just a dead machine; not real.
Now here's the truly weird part of Wheelerism: the software is constantly creating history by modifying and improving the hardware. It's like your computer suddenly decides to upgrade its memory from 8 gigabytes to 16 gigabytes, so it goes online, orders a couple of sticks of RAM, and installs them all by itself. Reality is like HAL in 2001: A Space Odyssey. The most extreme form of Wheelerism says that quarks didn't always exist, despite what the cosmologists say about the early universe and its evolution. It says quarks were invented by Murray Gell-Mann and George Zweig in 1964, and they were “discovered” right on cue in 1968. The same thing is true about the neutron, the electron, the atom, and so forth. Those particles didn't exist either until the “software” (the physicists) decided to upgrade the “hardware” (the universe) by inventing them and creating history. So it should have surprised no one when the Higgs particle was discovered in 2012, because Peter Higgs had already created it in 1964, although a machine big enough to make Higgs particles had to be built first. Now this is getting way too metaphysical for a “scientific” essay, so I'm going to have to dial it back a little. However, there is a grain of truth that points back to Schrödinger's cat experiment and the measurement problem. The quandary was how to separate the observer from the observed in experiments involving quantum particles. Borrowing some of Wheeler's ideas, history doesn't exist until some kind of “record” of it is made. But I don't think it takes an intelligent observer, such as a human or even a cat, to record it. A record could consist of a track of a positron in a cloud chamber, or anything else that leaves a physical impression of some sort. By defining things that way, quantum objects like electrons, photons, etc., have no histories because they carry no records of any kind.
All electrons are exactly alike, and there's no way of telling where they've been or what they've done in the past. They are defined only by their wave functions. This could mean that all quantum particles are connected through a common wave function, and the universe is holistic and very interconnected at its core. I think it would have to be holistic in order to carry out a universal law requiring that all changes must maximize the total entropy of the universe. Cats, on the other hand, are unique, non-quantum creatures. They have memories and personalities. They have kittens, get old, and sometimes they die from cyanide poisoning. In short, they do have histories and wave functions don't apply to them. Parts of Wheelerism aren't very plausible and are even pretty disturbing, but thinking about it did help me resolve the measurement problem.
Appendix E – Order, Chaos, and the Emergence of Consciousness I'm really going out on a limb with this appendix because as an engineer, I have practically no professional experience whatsoever in the fields of neurology, psychology, or psychiatry. But that won't stop me from talking about those things, because I have opinions on just about everything and I'm not shy about sharing my opinions with anyone who will listen. To recapitulate what I've said so far: science will sooner or later undergo a paradigm shift away from orthodox reductionism. Field theories will be replaced by a more holistic and integrated view of the universe because scientists must come to realize that while reductionism explains many things, it doesn't explain everything. In fact, it may not even explain most things. The new paradigm will be based on a single universal law with all our existing physical laws being seen as special cases. I suggested earlier that such a law might be: “Every change always maximizes the total number of degrees of freedom.” We can see that this is a holistic law because it encompasses everything all at once. The same law that causes gas molecules to expand also causes massive objects to collapse toward each other as a primitive form of organization called gravitation. There are countless other ways the universal law operates that are just waiting to be discovered. Below the surface a powerful organizing principle is at work. It operates when systems having degrees of freedom are not in equilibrium, and when interactions are nonlinear. This organizing principle causes chaos and order to spring out of nowhere from what might be otherwise considered “dead” material. Even as the law of entropy relentlessly drives the universe toward randomness and “heat death,” this organizing principle works to create order through chaos. This inevitably generates increasing structure and complexity, ultimately leading to life and consciousness.
One thing is certain: reductionism and molecular biology have utterly failed to provide a coherent explanation of how life functions after it is created, let alone offer any rational theory of how life emerged from non-living matter in the first place. Without recognizing any universal organizing principle, we would be forced to abandon science altogether and invoke special creation by an intelligent and purposeful Creator as the only plausible explanation for life. But this is a false dichotomy – we don't just have a choice between reductionism and creationism; I believe science will ultimately discover the organizing principle and show that the emergence of life is a natural and inevitable outcome of change. The line dividing life from non-life is sharp. Even unconscious life, at the level of a bacterium, is amazing and purposeful. A bacterium does live a purposeful life, although its “purpose” may be limited to consuming food, eliminating waste, and reproducing copies of itself.25 Simple life is amazing enough, but the emergence of consciousness from living matter is almost beyond belief. The dividing line between consciousness and unconsciousness isn't quite as sharp as the one that divides living from non-living. We'll see there are different levels of consciousness, with somewhat fuzzy lines between them. Take an earthworm for example. The nervous system of an earthworm is rather primitive, but it does actually have one, around 300 neurons in all. There's no brain or eyes, but an earthworm can respond to outside stimuli. It likes to dig tunnels, and it seeks out other earthworms to mate with, so it evidently knows the difference between a potential mate and a twig from a tree.26 However, its primitive consciousness is just barely “aware” of its surroundings. Next up on the ladder of consciousness are the leeches, snails, and slugs. 
These animals have between 10,000 and 20,000 neurons, which is a couple of orders of magnitude more than an earthworm has, but there doesn't seem to be much improvement in overall intelligence. These 25 Actually, some human lives seem to have similarly limited purposes. 26 Earthworms have both male and female reproductive organs, so they could theoretically mate with themselves. I don't know if they do that, however.
animals don't have spinal cords or brains; just ganglia that are spread throughout their bodies. When we get to the fruit fly, there is a quantum jump in brain power – yes, it actually has a brain. With about 100,000 neurons and about 10⁷ synapses (connections between them), a fruit fly registers brainwave activity while in flight. Now that's quite an improvement. Ants are interesting creatures. They only have about 2½ times as many neurons as fruit flies, but they leverage their tiny brains with all other ants in their colonies, which can number as many as 40,000. They act collectively, and can do things together that no ant could do alone; in fact, collectively, they have ten billion neurons, more than most mammals.27 Honeybees act collectively too, but a bee can go off alone without acting stupid. They have almost a million neurons with 10⁹ synapses. We're about to leave the class of insects, but before we do, guess which insect has the most neurons.28 I'm not going to go up the entire animal kingdom, but we eventually end up with mammals at the top of the heap. You need to mostly count neurons in the brains of a mammal instead of the total number of neurons in their bodies, because most of the action takes place inside the brain. The brain has neurons and synapses that form a very highly non-linear network. So not only is there a fundamental non-linear biological process, which creates order and chaos that allows the brain to emerge; but the brain itself is a non-linear process, which creates order and chaos that allows consciousness and intelligence to emerge. Here's what physicist James Crutchfield says, “Innate creativity may have an underlying chaotic process that selectively amplifies small fluctuations and molds them into macroscopic coherent mental states that are experienced as thoughts. In some cases the thoughts may be decisions, or what are perceived to be the exercise of will.
In this light, chaos provides a mechanism that allows for free will within a world governed by deterministic laws.” Wow, that's quite a statement! I think it kind of capsulizes much of what this essay is about. Again, there is a universal theme: entropy is the driving force behind everything, while an undercurrent of order and chaos comes from non-linearity. From order and chaos, complexity emerges in stages. At the bottom level is dead matter organizing into stars, galaxies, planets, etc. through the primitive push/pull balancing act of entropy. Biological activity emerges as a higher level that uses new chaotic processes to organize complex body structures that eventually lead to nervous systems and a brain. The brain has its own chaotic process that organizes consciousness, free will, and intelligence. The same organizing principle operates on different levels, each level involving different chaotic processes that allow the level to emerge, and so on. Once we arrive at consciousness, it also splits into higher levels of complexity. In the mammal class, there is simple consciousness, self consciousness, and cosmic consciousness. The three levels were described by Richard Maurice Bucke, a 19th century psychiatrist from Ontario, Canada. Most mammals experience simple consciousness. These mammals are fully aware of their surroundings, have memories, and may have a full range of emotions like love, fear, anger, joy, sorrow, and even remorse and shame. Mammals with simple consciousness can plan ahead and even use reasoning and logic to solve problems. This isn't conjecture; it's a proven fact. Gable is a border collie who lives at the University of Lincoln in the UK, where behavioral psychologists study him. Gable has managed to associate 54 human words as names for 54 different toys. When his trainer tells Gable to fetch a particular toy from a pile in another room, he will go to that room, pull out the toy from the pile, and return it to his trainer.
That's pretty good, but here's the amazing part. Once, his trainer placed an unfamiliar toy in the pile and gave it a name that Gable was never taught. The trainer told Gable to fetch that toy using that name, but Gable was confused and didn't know what to do. He was instructed to fetch that toy by name several more 27 When an individual ant is separated from her colony, she becomes pretty stupid. At least that seems to be the case. 28 That honor goes to the cockroach with 1,000,000 neurons.
times. Finally, Gable went into the room, found the new toy, and brought it back to his trainer. Gable could reason that the toy his trainer wanted was not one of the 54 toys that he knew by name. So he searched for a toy that was not one of those 54 toys he knew until he found it. Looking only at the number of neurons in the brain gives misleading information about intelligence because the overall size of the animal has to be factored in. Very large animals like whales and elephants need more neurons to just move their huge bodies around. But it's interesting to note that cats have almost twice as many neurons as dogs, 300 million versus 160 million, while both animals are of the same order in size. Chimpanzees (5-6 billion neurons) are considered Number 2 in the intelligence hierarchy, and of course we humans (19-23 billion neurons) are Number 1. As smart as cats and dogs are, they still only possess simple consciousness. Self consciousness is the next level up, and humans (and maybe chimpanzees) have it. A self-conscious being not only thinks, but it knows it's thinking. This brings about a whole new level of complexity. One way to tell if an animal is self conscious is by placing it in front of a mirror. If it recognizes the image in the mirror as itself, then it probably has self consciousness. We humans usually don't reach that stage until we're almost a year old. Put a mark of lipstick on a child's forehead and place her in front of a mirror. If she's attained the level of self consciousness, she will immediately try to rub off the mark on her forehead; a baby doesn't associate the image of the baby's forehead in the mirror with her own forehead until she's reached that level. Adult chimpanzees seem to recognize themselves in mirrors.
Dogs don't; but what dogs lack in self consciousness, they more than make up for by learning to adapt so well to the peculiar ways of human beings.29 Bucke classified self consciousness as an emergent phenomenon, and he said it only emerged in the human race quite recently. Looking at this from a post-reductionist perspective, we see that it would be the inevitable result of the self organizing principle; it happens when simple consciousness becomes sufficiently complex and chaotic. Today, virtually every adult human being is in a state of self consciousness. This enables us to think abstractly on several levels at once, as in the statement, “I know that I know that I know.” We can also think symbolically at a very high level, and we can manipulate abstract mathematical symbols to solve problems. Self consciousness is also a nonlinear and chaotic process, and when it becomes sufficiently complex and chaotic, it will inevitably organize into what Bucke called cosmic consciousness. Bucke's description of cosmic consciousness seems identical to what Buddhists call satori, a calm state of pure knowing, without any fear, anger, or self-centeredness. In that state, a person is consciously aware of the connectivity and unity that underlies the universe. It seems that people who are in satori directly experience the laws of the universe operating within their own minds. Relatively few humans have attained that level of consciousness, and still fewer have sustained it for any length of time.30 However, Bucke believes that cosmic consciousness will eventually become the normal state of consciousness of the human race as it continues to evolve. Pierre Teilhard de Chardin was a French philosopher, paleontologist, and geologist. He was a 29 While chimps and dogs are of comparable size, chimps have over ten times as many neurons, so they should be way smarter than dogs, right?
But consider this: when a human points at something, dogs instinctively know to look in the direction the human is pointing, whereas chimpanzees don't have a clue about what the human is doing. Long ago, dogs learned how to get along with humans and they almost became like us. They do what we tell them to do (sort of), and they seem to go out of their way to please us. Because of this, dogs get to live in our houses, eat our food, play with our children, go on trips with us, and sometimes they're even allowed to lie on our beds. On the other hand, adult chimps are vicious, hateful creatures that will attack and kill humans if they are given the chance. Because of this, chimps get to live alone in steel cages. Now I ask: which animal is really smarter? 30 Bucke himself purportedly experienced a fleeting moment of cosmic consciousness in 1872. Although the experience was temporary, it had a profound effect on Bucke that permanently changed him.
staunch believer in evolution, both of the human race and of the universe as a whole.31 His ideas were clearly post-reductionist. He also believed that the evolution of the universe is being orchestrated by the conscious creatures who inhabit it, with everything and everyone evolving toward an end state he called the Omega Point. I can see clear parallels between Teilhard's views and John Wheeler's “it from bit” conjecture. Both Teilhard and Wheeler believe in the primary status of consciousness (“bit”) and the secondary status of the physical universe (“it”). Both held the belief that the “bit” controls and determines the “it.” I think the more likely scenario is that both the “it” and the “bit” emerge together from one universal law and its corollary organizing principle through chaos. The law itself has primary status; the universe is secondary. Just don't ask me how or why the universal law and the organizing principle originated, because I have no idea. Created matter organizes into more complex structures that eventually become chaotic. Chaotic structures lead to order at a higher level, which may even add new processes of organization as the universe marches on with increasing degrees of freedom. Those new structures may also open additional pathways that maximize the total degrees of freedom.32 This process goes on and on until chaos produces a whole new level of organization – life. At the level of living things, the role of ordinary physical laws is significantly diminished; life is governed by a different set of laws. It is here that reductionism fails completely because quantum mechanics and Newtonian physics simply cannot account for most processes that occur in living forms; microbiology doesn't provide a complete picture either. Darwin's theory can explain parts of the evolutionary process, but the power of natural selection is somewhat limited, and its effects on living forms are almost trivial.
Eventually, life evolves complicated and highly nonlinear neural networks and brains. Those provide a whole new stage for chaos and order to play out their roles. There is another exponential increase in complexity and chaos, then order produces consciousness. First there is only primitive consciousness, on the level of an insect, but that is followed by higher levels, with each level setting the stage for the next. The physical brain continues to provide the foundation for the edifice of consciousness, like the foundation of a building. There may also be entirely new organizing processes operating on consciousness itself that transcend and bypass the physical brain altogether. Looking at the entire picture, there seems to be a universal hierarchy at work: Elementary particles operate on the lowest level, obeying the laws of quantum physics and nothing else. For them, time is symmetrical, they have no individual identities, and the past does not exist. Quantum particles organize into macroscopic objects that have identities and histories. Here, time is not symmetrical, and the past emerges. Macroscopic objects organize into larger and more complex physical structures that obey Newtonian and relativistic laws (approximately). Structural complexity increases until chaos produces a new order called life, which obeys an emergent set of laws that science presently does not understand. There is a hierarchy among living things, some having evolved into organisms of extreme complexity, where chaos produces ordered nervous systems, increasing in size and complexity until primitive consciousness finally emerges. Chaos organizes primitive consciousness into higher states of order, and so on, ad infinitum. It's hard for me to visualize where this process will lead, but I'm positive it will be a fantastic journey. 31 He also happened to be a Jesuit priest. Needless to say, his unorthodox views on creation and evolution didn't exactly endear him to the Vatican.
32 Proponents of the big bang theory are certain that there was an event called inflation, when the universe expanded exponentially soon after it came into being. Physicists proposed several possible mechanisms for inflation, but there seems to be no rationale for why inflation started and stopped. Here's my suggestion: The universe may have originated in a state of near-zero entropy. The universal law requires that change maximize the total degrees of freedom of the universe. At that time, inflation was the only available mechanism for doing that, so it did. At some point during inflation, the universe entered a different state where inflation could no longer maximize the total degrees of freedom, so it stopped. A different form of expansion accomplished the task of maximizing the total degrees of freedom, which is ongoing today. The universe may attain states where different processes of change will emerge, which will fulfill the universal law more effectively than the present process, and so forth.
Appendix F – We're Living on the Hairy Edge As I wrote the essay Is Science Solving the Reality Riddle, I kept getting this nagging feeling that the answers to some of the questions concerning reality might have something to do with fractals and Mandelbrot sets. At that time, I didn't really understand the full implications of fractals, and I still don't; but I've done a bit of research on fractals since then and came up with some amazing connections between fractals, three dimensions, order, and chaos. First, I looked at a very simple fractal known as the Koch snowflake, named after the Swedish mathematician Helge von Koch. To make a Koch snowflake, you start out with a simple equilateral triangle with each side having a length, s. You add a smaller equilateral triangle to the middle third of each of the three sides, making a Star of David with 12 sides. Then you add 12 more equilateral triangles, one to the middle third of each of those sides, and so on. This drawing shows the evolution of the snowflake: The nth evolution is denoted by the symbol S(n). Starting out with the triangle, T, you get the following sequence of figures: T → S(1) → S(2) → S(3) → S(4) → S(5). Now you can carry this on forever if you want, and the resulting shape will be a fractal. This snowflake has very unusual properties. The length of the perimeter of the snowflake is given by the very simple formula P = 3s(4/3)ⁿ. The funny thing is that as n → ∞, so does P. That's right, the perimeter, P, of the fractal becomes infinite. And I don't just mean that it has an infinite number of points – all lines have an infinite number of points – I mean P has an infinite length! One ramification of this is that you can't really define a Koch snowflake by a formula, like the formula y² = r² – x² for a circle, or any other kind of formula for that matter. You can only define it by describing the process that generates it. Now, although the Koch snowflake has a perimeter of infinite length, it sure looks like it has an inside and an outside.
And in fact it does. So what's the area inside the snowflake? I'm not going into the whole derivation, because you can look that up, but the important thing is that the area is finite: A = 2s²√3/5. So, the perimeter of a fractal encloses a finite area even though the perimeter itself has infinite length. Very strange. Now fractals can be generated in other ways too, the Koch snowflake being a very simple evolution. There's another class of fractals that are generated from a process that creates Mandelbrot sets, named after Benoit Mandelbrot. You can represent any point in 2-dimensional space as a complex number: z = x + iy.33 You can generate a Mandelbrot set in two dimensions as follows. Using the formula z′ = z² + c, pick any point you want, c = x + iy, and compute z′ from the starting point z = 0 33 Up until now I've denoted the imaginary number √-1 by the letter j. Now I'm going to change that to the letter i, for reasons that will become clear shortly.
using the rules of complex algebra. Next, feed z′ back into the formula as z, and compute a new z′, keeping c the same. Keep doing that over and over. Two things might happen: a) the values of z′ settle down to very predictable numbers that repeat, or b) the values of z′ chaotically zoom off into the stratosphere. If a) occurs, then c is part of the Mandelbrot set, and if b) occurs, it is not. What you'll find is this: there's a boundary that separates the numbers in the Mandelbrot set (order) from the numbers not in the Mandelbrot set (chaos). This boundary is a fractal perimeter, having similar properties to the perimeter of a Koch snowflake. The perimeter encloses a finite amount of area inside it, but the perimeter itself has an infinite length. You may ask whether this type of thing can be extended into three dimensions. The answer is yes – sort of. There are no mathematical objects having three dimensions that follow the kinds of algebraic rules that complex numbers follow; so although you can represent points in 3-dimensional space as sets (x, y, z), there are no consistent algebraic rules for these sets. Luckily, through some mathematical trickery, you can still generate a fractal surface in three-dimensional space called a Mandelbulb. An example of one of these strange objects is shown in a figure near the front of this essay. The colored surface of this Mandelbulb is all “fractally” and uneven. Points in space “inside” the surface are part of a Mandelbrot set (order). Points not “inside” the surface are not part of that set (chaos). The surface itself is thus a boundary between order and chaos. Since the Mandelbulb is a surface that encloses a finite volume, it must have two dimensions (at least nominally) and so it must also have an area. What's the area equal to? Infinity. Just like the perimeter of the Koch snowflake, the surface of a Mandelbulb is infinite. Very strange.
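The iterate-and-watch procedure for deciding whether a point c belongs to the Mandelbrot set can be sketched in a few lines of Python (a plain escape-time check; the 200-iteration cap and the escape radius of 2 are conventional choices, not anything from the text):

```python
# Escape-time membership test for the Mandelbrot set: iterate z' = z^2 + c
# from z = 0.  If the orbit ever leaves the disk |z| <= 2 it is guaranteed
# to diverge (chaos); if it survives the iteration cap we treat c as a
# member of the set (order).

def in_mandelbrot(c, max_iter=200):
    """Return True if c appears to belong to the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit must escape
            return False
    return True

if __name__ == "__main__":
    for c in (0j, -1 + 0j, -2 + 0j, 1 + 0j, 0.5 + 0.5j):
        print(c, "order" if in_mandelbrot(c) else "chaos")
```

Points near the fractal boundary are exactly the ones where this test takes the longest to decide, which is one way of seeing how intricate the border between order and chaos really is.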
Now everyone who has studied popular scientific literature probably knows about a place called "Flatland" where hypothetical 2-dimensional creatures live. Flatland is ordinarily thought of as a traditional 2-dimensional surface, like a flat plane or the curved surface of a sphere. Well, what would happen if we were 2-dimensional creatures living on the surface of a Mandelbulb? How would we characterize the area of our home? What kind of features would we see there? Now I think some of you might just see where this is all going, and here's where things start to get a little freaky. It turns out that there is a class of mathematical objects known as quaternions. They were discovered by the mathematician William Rowan Hamilton. These objects extend the idea of complex numbers into four dimensions.34 Quaternions do follow a set of consistent algebraic rules, although they're strange rules. For one, multiplication isn't commutative. In ordinary algebra, and even complex algebra, the multiplication operation is commutative: A × B = B × A (whether A and B are real or complex). This isn't the case in Hamiltonian algebra. Here, the order of things is important, like in matrix algebra. Here is Hamilton's table for the rules of multiplication:

  ×  |  1   i   j   k
  ---+----------------
  1  |  1   i   j   k
  i  |  i  −1   k  −j
  j  |  j  −k  −1   i
  k  |  k   j  −i  −1

Hamilton saw the whole shebang in a flash of insight; he summarized it by: i² = j² = k² = ijk = −1. Let's put all of this into practice. Suppose you have a point, z, in 4-dimensional space. This

34 Note that mathematics jumps from 2-dimensional complex numbers into 4-dimensional quaternions and completely skips over the third dimension. This may be very significant. Or maybe not.
point can be expressed by four numbers: z = a + ib + jc + kd. The numbers a, b, c, and d are simply the values assigned to the four dimensions, and i, j, and k are just markers or labels for the three "non-real" dimensions. The unlabeled value, a, is the "real" part of the quaternion.35 The nice thing is that there are consistent algebraic rules for adding and multiplying 4-dimensional points. So a formula like z´ = z² + c makes perfect sense when z, c, and z´ are all quaternions. The value of z² is found by multiplying z by itself:

z² = (a + ib + jc + kd) × (a + ib + jc + kd) = (a² − b² − c² − d²) + i(2ab) + j(2ac) + k(2ad)

Adding the quaternions z² and c together is just a matter of summing up their "real" parts, along with summing up each of their "non-real" parts, i, j, and k. By testing every possible quaternion c in 4-dimensional space, we'll end up with a Mandelbrot set of all points that are stable and don't cause z´ to explode. There should be some kind of "perimeter" or "surface" that separates the quaternions in the Mandelbrot set (order) from the quaternions not in the set (chaos). How many dimensions will this "perimeter" have? Well, based on the fact that a one-dimensional perimeter encloses a two-dimensional area, and a two-dimensional area encloses a three-dimensional volume, my guess is that it will nominally have one less dimension than the four-dimensional space it encloses. Logically, it should then have three dimensions and it should also have fractal properties, and I'm going out on another limb to say that this 3-dimensional border has a volume that approaches infinity. But does such a 3-dimensional fractal boundary exist? Well, look around, because we may already be living in one. Like the Flatland people living on the surface of a Mandelbulb, we can move our 3-dimensional bodies around our 3-dimensional space and explore it at will.
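Hamilton's table translates directly into code. A minimal sketch (the class is mine, not from the essay) that encodes the multiplication rules and reproduces the z² expansion:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    a: float  # the "real" part
    b: float  # i component
    c: float  # j component
    d: float  # k component

    def __mul__(self, o):
        # Hamilton's rules: i^2 = j^2 = k^2 = ijk = -1, ij = k, jk = i, ki = j
        return Quaternion(
            self.a * o.a - self.b * o.b - self.c * o.c - self.d * o.d,
            self.a * o.b + self.b * o.a + self.c * o.d - self.d * o.c,
            self.a * o.c - self.b * o.d + self.c * o.a + self.d * o.b,
            self.a * o.d + self.b * o.c - self.c * o.b + self.d * o.a,
        )

    def __add__(self, o):
        return Quaternion(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)
# i*j gives k, but j*i gives -k: order matters, just as in matrix algebra.
```

Squaring a general quaternion with this class yields exactly (a² − b² − c² − d²) + i(2ab) + j(2ac) + k(2ad), matching the expansion above, so the formula z´ = z² + c is ready to iterate in four dimensions.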
Of course, we can't see the 4-dimensional space that our 3-dimensional surface occupies, but we may be able to detect some "fractalish" properties of the space we live in if we are clever enough and dare to look for them.36 Now I'm going out on yet another limb to say that maybe the reason space has three dimensions, instead of two, four or five, is because of the mathematical properties of quaternions. I know it's dangerous to ascribe properties of reality to mathematics alone. That's what string theorists are doing, and I think it's leading science down a rabbit hole, as I said in Reality Riddle. At best, I believe mathematics just mimics what nature does. But I just can't help this nagging feeling that a fractal universe makes sense in a weird sort of way. After all, the universe does seem infinitely large, even if it was created a finite time ago. And then there are all those fractal objects seen everywhere in nature; these may be projections of the fractal universe itself.37 Finally, it seems natural to expect creation to be happening in a place that's at the hairy edge between order and chaos, which is another reason I find this conjecture so intriguing and even plausible. If space has three dimensions only out of mathematical necessity, then you may ask how time fits into the model I just contrived. My answer is – and always was – that time isn't really part of a space-time continuum (although you can sometimes make calculations more convenient by "spatializing" time). Instead, space and time are fundamentally different things. Time only measures changes and evolution – we really can't navigate through time at will like it's space. At the very basic level of elementary particles, time is bi-directional and symmetrical, and the quantum realm is changeless and eternal. Time doesn't emerge as a measurable or meaningful property beneath the macroscopic level; time emerges when things have unique identities and histories.
35 If you're into Minkowski space-time, you might use a system like this for tracking "world lines" in 4-dimensional space-time, but that's not the point here.
36 Don't ask me how to look for them, because I'm simply not clever enough. My job is only to plant the seed.
37 Fractals have the properties of self-similarity and scale invariance, where patterns repeat over and over on smaller and smaller scales forever.
Appendix G – Why Reductionism Cannot Fully Explain Biology

By definition, reductionism is an explanation of complex life-science processes and phenomena in terms of the laws of physics and chemistry. The most basic unit of life is the cell, which is a wonderfully complex chemical factory. In very simple terms, cells are tiny protein assembly plants. Proteins are made from the 20 standard amino acids under the direction of RNA, with the help of molecular machines called ribosomes. RNA is copied from the genes stored in the double helix of DNA. Proteins make up many of the cellular structures, such as the cell membrane, and can even act as tiny machines fed by complex chemical reactions. These reactions are becoming well understood, so it can be argued that reductionism is very successful in explaining all of the inner workings of the cell in terms of complex chemistry. However, we really can't go much further than describing how RNA is copied from DNA, and how RNA translates into proteins. Here's a much more difficult question: How does this [a strand of DNA] produce this [a complete human being]? The standard answer (according to reductionism) is that a complete blueprint of a human organism is contained within the 23 pairs of chromosomes that are found in each and every cell. The DNA code consists of a base-4 number system represented by the letters A, T, G, and C. Each of those letters stands for a molecule that links up with another molecule to form one rung of the DNA strand. An A (adenine) links up with a T (thymine), and a G (guanine) links up with a C (cytosine), but A-T and G-C rungs can also be reversed as T-A and C-G rungs, so each rung can have one of four possible configurations. Hence, DNA uses a base-4 number system to encode information. If a strand of DNA has n rungs, then there are 4ⁿ possible configurations, which is quite a lot.
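The base-4 arithmetic is easy to make concrete. A minimal sketch (the mapping table and helpers are mine): since each rung takes one of four configurations, it carries exactly two bits, and n rungs can take on 4ⁿ values.

```python
# Each rung is one of four configurations, i.e. exactly 2 bits of raw capacity.
RUNG_BITS = {"AT": 0b00, "TA": 0b01, "GC": 0b10, "CG": 0b11}

def encode(strand):
    """Pack a sequence of rungs into a single integer, 2 bits per rung."""
    value = 0
    for rung in strand:
        value = (value << 2) | RUNG_BITS[rung]
    return value

def configurations(n_rungs):
    """Number of distinct strands with n rungs: 4**n."""
    return 4 ** n_rungs
```

A strand of just 100 rungs already has 4¹⁰⁰ ≈ 1.6 × 10⁶⁰ possible configurations, which is why raw combinatorial capacity is never the bottleneck in the argument that follows.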
But the real question is how much information is actually stored in our genes?38 The human genome has recently been "sequenced" (decoded) completely, so the answer to that question is now available and it's quite surprising: The total information stored in the human genome is only about 1.5 × 10⁹ bits! Although the structure of our bodies surely must be defined by our DNA,39 the number of bits contained in DNA is simply not enough to specify the digital template of a human body.40 Clearly, DNA does much more than encode data for making proteins out of amino acids, but what and how?

38 Remember that every cell contains the same 46 chromosomes, so the total information in the 100 trillion cells in a human being is equal to the information in one cell. The same information is duplicated 100 trillion times.
39 This is quite apparent merely by observing the similarities between identical twins, who have the same DNA.
40 On the next page you'll find out how many bits are required to do that. (Hint: it's a huge number.)
Everyone knows that a human being starts out as a fertilized egg, or zygote. The zygote divides over and over until a ball of undifferentiated cells, called a morula, is formed. These cells are attached to each other, but they still operate pretty much as independent one-celled animals. Later, the morula forms a hollow structure, called a blastula. It is at this stage in development that the cells of the embryo begin to differentiate into what will eventually become the tissues and organs that comprise the human body. But how does each individual cell know how to differentiate? What orchestrates the development of an embryo into a fetus, which becomes a child and then an adult? I believe the answer is that those 46 human chromosomes don't define the complete end product at all. Instead, they define a relatively simple process carried out at the cellular level that ends up assembling a complete human being. The end design is defined by the assembly process itself. Let me explain this with a crude analogy. When a master carpenter makes an intricate cabinet using materials from a lumber yard, he has the final design of the cabinet in his mind's eye. Each step in making the cabinet is directed toward fulfilling that design, and that requires quite a lot of information. On the other hand, making a cabinet from a preassembled kit requires very little information. It involves simple steps such as attaching Part B to Part A, Part C to Part B, and so forth. You don't even have to know what the cabinet will look like in order to complete the process. The cabinet's final design emerges during assembly by following simple steps. Remember the self-organizing principle as illustrated by mathematical cellular automata? The resulting order is not a design at all – it just emerges from a process of carrying out very simple rules contained in each cell.
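An elementary cellular automaton makes that point vividly. In the sketch below (the names and the choice of Rule 30 are mine, not from the essay), each cell is updated from nothing but its own state and its two neighbors, yet a single live cell develops into an intricate, aperiodic pattern that was never "designed" anywhere.

```python
def step(cells, rule=30):
    """One generation of an elementary cellular automaton (wrap-around edges).
    Each cell's next state depends only on itself and its two neighbors."""
    n = len(cells)
    next_gen = []
    for x in range(n):
        # Read the 3-cell neighborhood as a number 0-7...
        pattern = (cells[(x - 1) % n] << 2) | (cells[x] << 1) | cells[(x + 1) % n]
        # ...and look up the corresponding bit of the rule number.
        next_gen.append((rule >> pattern) & 1)
    return next_gen

# Start with one live cell and let the simple local rule do all the "designing".
row = [0] * 63
row[31] = 1
history = [row]
for _ in range(31):
    row = step(row)
    history.append(row)
```

Printed as # and spaces, `history` shows the familiar chaotic Rule 30 triangle; the entire rule table fits in a single byte.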
In a similar way, each cell in the developing embryo carries out logical steps that are directed in part by information communicated among the cells. Although the logic may be relatively simple, the end product is incredibly complex. The DNA does not define a human being in terms of a complete design template. It defines the process of making a human being, and this only requires information that can easily be stored on 46 human chromosomes. I recently read an article about teleportation – as in, "Beam me up, Scotty!" Some reductionist scientists believe that someday it may actually be possible to disassemble a human body, extracting all the information contained within it, transmitting the information to a remote location, and using the information to reassemble the body from raw materials. In the teleportation piece I cited, the amount of information required to reassemble a human body was estimated to be 2.6 × 10⁴² bits.41 This is why reductionism fails to fully explain biology: if the whole is merely equal to the sum of its parts, there is no way to reconcile an incredibly complex human organism – represented by 2.6 × 10⁴² bits – with chromosomes that can only contain 1.5 × 10⁹ bits. By abandoning reductionism, we can see how a genetic mutation actually affects the whole organism – not by changing the "template" of the organism, but by altering the program that makes the organism. For example, a small change to one gene can cause a fruit fly to grow an extra pair of wings. That small genetic change causes a glitch in the assembly program that results in an extra pair of wings. What will the end result of a given genetic change be? There's only one way to find out: by seeing what develops from the programming glitch. A superior end product produces positive feedback that will reinforce the glitch in the next generation. An inferior end product produces negative feedback that will suppress it.
This properly explains evolution and selection. This also allows us to see why "intelligent design" isn't needed in order to explain the complexity of the universe. Complex things don't need to be designed. In fact, it seems that complexity actually disconfirms design. Designed objects (like a sleek Ferrari Spider automobile) generally tend to be much simpler than many objects, such as Mandelbrot sets, that are not designed.

41 One of the main problems with teleportation is the time that it would take to transmit a complete human blueprint through space. Using a 30 GHz bandwidth, it would take almost 5 × 10¹⁵ years to accomplish that feat, which is a lot longer than the 9 months needed to assemble a human baby from scratch using a genetic code.
Appendix H – Chaos, Dice and Einstein

I recently finished reading the book Chaos by James Gleick. That got me thinking about quantum randomness and Albert Einstein's famous remark, "God doesn't play dice with the world."42 Niels Bohr, the father of the Copenhagen School, reportedly responded with, "Stop telling God what to do!" You see, Bohr embraced the idea that quantum processes are truly random, or what mathematicians like to call stochastic, whereas Einstein was convinced until his dying day that the universe was fundamentally deterministic and "knowable." I discovered that chaos falls somewhere in the middle. Whereas a stochastic process is just plain unpredictable, a chaotic process is deterministic and unpredictable. There is a certain class of chaotic processes that feature "strange attractors." To see them, you have to plot the state of the system under study in something called state space, which is multi-dimensional. Every point in state space represents the complete state of the system, which can include hundreds, thousands, or even millions of variables or dimensions. When a system undergoes change, the point that plots the state traces a path through state space. The path the system settles onto is called an attractor. Some attractors form closed loops, which means the system is oscillating or changing periodically. When a system goes into chaos, however, the path never crosses itself, which is why it's called a strange attractor. Consider this set of differential equations:

dx/dt = σ(y − x)
dy/dt = x(τ − z) − y
dz/dt = xy − βz

These are called the Lorenz equations, named after Edward Lorenz, a meteorologist at MIT who used them to model atmospheric convection and showed that weather is unpredictable.43 The variables x, y, and z are the state variables, and σ, τ, and β are parameters that can be adjusted to tune the equations and change the system's behavior.
Note that the equations are linked and they are non-linear.44 The presence of non-linearity gives rise to the strange attractor shown in phase space below, sometimes referred to as the Lorenz butterfly because of its shape.

42 Most likely what he really uttered was something closer to, "Gott würfelt nicht mit der Welt."
43 The same equations have been used to simulate many different sorts of physical systems, which seems to show (at least to me) an underlying unity in nature.
44 Interestingly, Einstein's Field Equations (EFEs) used in general relativity are linked, non-linear partial differential equations, unlike other fundamental equations of physics, such as Newton's equations of motion, Maxwell's equations of electromagnetism, and Schrödinger's wave equation, which are linear. The physical meaning behind EFE non-linearity is that a gravitational field, containing energy, creates its own gravitational field. Could gravity become chaotic under certain extreme conditions? That would be truly amazing.
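The butterfly is easy to reproduce by stepping the equations numerically. A minimal sketch (simple Euler integration, with the classic parameter values σ = 10, ρ = 28, β = 8/3 standing in for the essay's σ, τ, β) that also demonstrates sensitive dependence on initial conditions:

```python
def lorenz_step(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.001):
    """Advance the Lorenz equations by one small Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two trajectories that start one part in a billion apart...
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)
max_separation = 0.0
for _ in range(50_000):                      # 50 units of simulated time
    a, b = lorenz_step(*a), lorenz_step(*b)
    max_separation = max(max_separation, abs(a[0] - b[0]))
# ...end up on completely different parts of the butterfly: deterministic,
# yet unpredictable in practice.
```

Every step is computed by the same fixed rule, yet the initial billionth-part difference is amplified until the two trajectories bear no resemblance to each other.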
On the surface, the behavior of a chaotic system defined by the Lorenz equations is qualitatively very different from a stochastic process. The state trajectory of a chaotic system is a continuous line, so the position s of each point depends on the position s − ds of a point on the line at a prior time t − dt. Thus, a chaotic system is fundamentally deterministic. However, the system is unpredictable because there is no function f(t) that defines the state trajectory. You can't calculate x analytically for t + Δt in the future when Δt is large. Therefore, a chaotic system is both deterministic and uncertain. In contrast, a stochastic process is uncertain, but it is not deterministic. Its state trajectory is just a random series of points, with no apparent functional relationships among them. When I say "no apparent," it's because I think there might be hidden determinism that underlies stochastic events. Hidden determinism sounds an awful lot like something Einstein said in his famous EPR paper, where he claimed hidden variables underlie quantum uncertainty. Experiments in the 1980s based on Bell's theorem essentially destroyed the concept of local hidden variables forever; however, the hidden determinism I'm referring to is at a deeper level than Einstein's hidden variables. In Erwin Schrödinger's famous cat experiment, he used radioactive decay to trigger the release of cyanide gas to kill Fluffy. It had a 50/50 probability of triggering the cyanide release within a 10-minute time period. He chose that particular method of execution because he wanted to suspend Fluffy in a state of quantum superposition, 50% alive and 50% dead, for the entire 10 minutes. There wasn't much point in running the experiment using a 5-minute timer instead of radioactive decay to trigger the cyanide, because everyone knew that Fluffy would be 100% alive for 5 minutes and 100% dead for 5 minutes. But what if there's a chaotic process that deeply underlies radioactive decay?
Would Fluffy still be in a 50/50 quantum state? Or would Einstein be right? If an atomic nucleus is unstable, it has a tendency to eject a particle to lower its internal energy and make it stable. We call an unstable atom radioactive because it gives off radiation when it decays. When isotopes have too many protons and not enough neutrons to keep the nucleus stuck together, they might eject a positron, which adds one neutron and subtracts one proton. This changes the isotope into another element that's one position back in the Periodic Table. Sodium-22 is like that. It has a half-life of around 2.6 years and changes into the stable isotope Neon-22 by emitting a positron. Although a collection of billions of unstable atoms has a well-defined half-life, the decay of an individual atom is completely random. The atom literally has no memory. It doesn't care if it was created 13.8 billion years ago or last Tuesday. It has the same chance of decaying either way. Some physicists compare an atomic nucleus to quark soup, or maybe it's more accurate to call it quark Jell-O®. In this model, the nucleus is in constant turmoil with all kinds of internal jiggles we can't actually see, but because protons and neutrons have internal structure, it seems reasonable that they do jiggle around inside the nucleus. Now suppose the protons and neutrons are all jiggling around chaotically, similar to a Lorenz process. Every so often, those jiggles might combine in a way that forms a big bulge in the Jell-O® that spits out a positron. The thing we don't see – the jiggling around – could be a chaotic process that's deterministic, but the thing we do see – the spitting out of the positron – may still look like a stochastic process. This is just one possible example. So maybe Einstein was right in a weird kind of way. Maybe deterministic chaotic processes really are behind the scenes of quantum uncertainty.
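The "no memory" property is worth pinning down, because it is exactly what any hidden chaotic process would have to reproduce. A minimal simulation sketch (the per-tick decay probability is an arbitrary stand-in, not Sodium-22's actual rate): each atom decays with the same fixed probability on every tick, regardless of its age, and a sharp population half-life emerges anyway.

```python
import math
import random

def survivors(n_atoms, p_decay, ticks):
    """How many atoms remain after some ticks, when each atom decays with
    probability p_decay on every tick -- no matter how old it already is."""
    alive = n_atoms
    for _ in range(ticks):
        alive -= sum(1 for _ in range(alive) if random.random() < p_decay)
    return alive

random.seed(42)
p = 0.01
half_life = round(math.log(2) / p)           # about 69 ticks
first = survivors(20_000, p, half_life)      # roughly half remain...
second = survivors(first, p, half_life)      # ...and half of those after another
```

The survivors of the first half-life are not "due" to decay: the same fraction of them survives the next interval, which is the statistical signature of a memoryless process.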
Mathematicians will insist that they have all sorts of tests that show when a number sequence is generated by an algorithm. After reading the literature on this topic, it seems to me that these tests use circular logic. One of the tests is to compare two number sequences when the system is started in two different "nearby" states. If the two sequences diverge deterministically, then it means the numbers are "computable" and the process is non-stochastic. But if the strings diverge randomly, then it means the numbers are not "computable" and the process is stochastic. This raises two objections: First, you can't know which system states are "nearby" unless you already know the
system is a machine, which already tells you that the numbers are computable and hence non-stochastic by definition. Second, how do you know whether the two number sequences diverge randomly or deterministically? That simply defines random as something that produces random results. Here's where I stand on this issue: It is possible, at least in principle, to design a machine that generates a finite series of bits that is mathematically indistinguishable from a finite series of bits generated by a stochastic process. In other words, it is possible to fool even the best mathematicians into believing that a deterministic process is stochastic, as long as they are only able to make external observations. A chaotic process can do that by keeping its complete states hidden and revealing only portions of them. I will now show a proof-of-concept design that does that. Using the Lorenz equations, suppose we program a machine that computes the three state variables to 128-bit precision, which represents (2¹²⁸)³ = 2³⁸⁴ unique machine states. If the complete floating-point values of x, y and z were shown, any mathematician could see they evolve into a Lorenz butterfly and declare the numbers computable. But suppose we take the least significant bit of each floating-point value of x, y, and z, add those three bits together, and use the result as the output bit. Could our mathematician friends detect any patterns in that sequence of zeros and ones? Could they guess what the machine states are? Could they declare with certainty that the bits are computable and therefore non-stochastic? I really doubt they could do any of those things. Now it's true that all algorithms must eventually repeat because computer memory is finite and all machines have only a finite number of states. If the machine enters any state for the second time, the algorithm must repeat. This is a fundamental law of computing.
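The proof-of-concept is straightforward to prototype. A minimal sketch, with two liberties taken: ordinary 64-bit doubles stand in for the 128-bit precision described above, and "add those three bits together" is read as addition mod 2 (XOR).

```python
import struct

def lsb(value):
    """Least-significant mantissa bit of a double -- a sliver of hidden state."""
    return struct.unpack("<Q", struct.pack("<d", value))[0] & 1

def lorenz_bits(n, x=1.0, y=1.0, z=1.0,
                sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.001):
    """Run the Lorenz machine, revealing only one combined bit per step
    while the full (x, y, z) state stays hidden inside."""
    out = []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        out.append(lsb(x) ^ lsb(y) ^ lsb(z))
    return out

bits = lorenz_bits(10_000)
# The stream is perfectly deterministic (rerunning it reproduces the same
# bits), yet it shows no obvious pattern and comes up 1 about half the time.
```

An outside observer sees only the bit stream; the butterfly lives entirely in the hidden floating-point state.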
The nice thing about the Lorenz attractor is that it's strange – it never intersects itself – so our hypothetical Lorenz machine could visit every one of the 2³⁸⁴ states without visiting any of them twice. Now let's put our hypothetical machine to work and generate some quantum mechanical numbers, like the spin of an electron. No matter in which direction we measure an electron's spin, it can only point up or down – a zero or a one. Let's assume that an electron can change its spin state unpredictably45 once every Planck-time interval, which is roughly 2 × 10⁴³ times per second. Let's assume the electron has been doing that since the dawn of time, roughly 13.8 billion years ago. So over the entire history of the universe, our little stochastic electron may have undergone 8.7 × 10⁶⁰ spin changes – a very long sequence of 0s and 1s – without repeating the sequence. Could our Lorenz machine match that? Let's check. The number of Lorenz states divided by the maximum number of spin changes an electron can have each second equals the length of time our Lorenz machine could keep up with an electron changing spin states before the machine starts repeating the sequence. That number is 2³⁸⁴ / (2 × 10⁴³ per second) = 6.2 × 10⁶⁴ years. That's close enough to eternity to suit me. Do electrons change into fresh spin states once every Planck time, or do they only change when somebody decides to measure them? Who knows? Either way, this proof-of-concept example shows that a simple algorithm can duplicate whatever an electron decides to do. The key is to use an algorithm of a chaotic process having a strange attractor, revealing only part of the system state while keeping the rest hidden. So here's another conjecture to consider: There is no such thing as a random event in nature and deterministic processes underlie everything. Randomness is what we perceive when we are only presented with a thin sliver of reality. I'm sure this would have made Einstein very happy.
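Those back-of-the-envelope figures are easy to verify, taking the Planck rate and the age of the universe as the round numbers used above:

```python
# The Lorenz machine's state budget versus an electron flipping spin
# once per Planck time (~2e43 ticks per second).
states = 2 ** 384
ticks_per_second = 2e43
seconds_per_year = 3.156e7

# Spin changes available to one electron over 13.8 billion years:
spin_changes = 13.8e9 * seconds_per_year * ticks_per_second

# Years before the machine is forced to revisit a state and repeat:
years_before_repeat = states / ticks_per_second / seconds_per_year
```

The state budget wins by four orders of magnitude of orders of magnitude: about 10⁶¹ spin changes have ever been available to the electron, while the machine can run for about 10⁶⁵ years before repeating.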
I don't know whether it's possible to prove or disprove this conjecture at this point. Maybe some genius like John Bell will come along with a theorem that will show how to do that. But why go to all the trouble of inventing some unproven (and unprovable) conjecture about

45 Notice I said "unpredictably," and not "randomly."
computers and algorithms? I only presented this hypothesis to show that it's possible to simulate something like quantum randomness with a chaotic algorithm that's deterministic. There are still some riddles that determinism doesn't have answers to, like what quantum entanglement is and how it works. Maybe we should just accept quantum mechanical weirdness the way it presents itself to us, and stop looking behind the scenes for hidden meanings and secret algorithms. But I just can't help thinking that true randomness (whatever that is) simply does not fit into the digital picture of reality that Nature is giving us. In another one of my essays, Is Science Solving the Reality Riddle?, I proposed that there may be no physical reality at all – the "it" from "bit" conjecture. Nature seems to reveal herself as information. Classical thermodynamics and information theory are turning out to be two sides of the same coin. The more you look, the more the universe seems to be part of some kind of digital algorithm, and by definition all digital algorithms are deterministic. I grew up in the analog age when TV was still black and white. Music was recorded on tiny grooves that wiggled back and forth on the surfaces of vinyl disks. Telephone conversations traveled as waves that propagated along wires. Radio waves carried information by continuously modulating their amplitudes or frequencies instead of simply turning them on and off. But in the final analysis, it turns out that there is no such thing as "analog." In an analog world, arbitrary amounts of precision are possible, which could contain infinite amounts of information. Someone suggested how it would be possible to condense the entire Encyclopedia Britannica into two straight lines drawn on a piece of paper. The ratio of the lengths of those two lines can be expressed as a decimal fraction, which is a string of numbers.
If you draw the lines with enough precision, you could theoretically encode the encyclopedia into those digits. After all, the number π is just the ratio of a circle's circumference to its diameter, and the digits of π go on and on forever. Of course, it's impossible to achieve arbitrary levels of precision because we live in a noisy universe, and noise would swamp out most of the information contained in the ratio of the lengths of two lines. Claude Shannon unlocked the secrets of information by representing information as discrete units of binary digits instead of wavy lines, thereby turning the somewhat vague notions about information in the analog age into the science of information theory in the digital age. Some philosophers, and especially theologians, find the idea of a deterministic universe quite disturbing, eliciting scenes from The Matrix movie. Determinism implies a lack of free will and human lives without any purpose. Thoughts and emotions need to be spontaneous and unpredictable to be genuine. If determinism underlies everything, are humans nothing more than digitally-programmed automatons without souls? I think chaos theory obviates those fears. We have shown that chaotic systems are both deterministic and unpredictable. But you might ask: if we're in a dataverse, then where's the computer? The correct answer is that you don't need a chunk of hardware to have a logically consistent mathematical structure. A logically consistent mathematical structure simply exists because it's true. Here in the information age we sort of got stuck in the paradigm of having someone design computer hardware, and then install the software and run it on the computer. But that's just our paradigm. The formula 1 + 1 = 2 is true with or without hardware. The "it" literally comes from the "bit" and not the other way around. Previously, I discussed a machine with a finite number of states and how that number limits the output of the machine.
However, if the "machine" only consists of space instead of wires and transistors, then you can make the machine arbitrarily large by expanding space. Bear in mind also that a dataverse won't need a central processing unit. The processing would be carried out everywhere, with information (entropy) expanding everywhere. Is it just a coincidence that increasing entropy and expanding space are both fundamental features of our universe? Or are they both driven by the same process? Might not the purpose be to accommodate an increasing number of machine states in order to preserve the illusion of randomness?
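As an aside, the Encyclopedia Britannica thought experiment from a couple of pages back can be made concrete with exact rational arithmetic standing in for a perfectly noiseless analog universe (the helper functions are mine; they assume plain ASCII text):

```python
from fractions import Fraction

def encode_as_ratio(message):
    """Hide a message in the decimal digits of a ratio of two 'line lengths':
    three digits per character, packed into an exact fraction in [0, 1)."""
    digits = "".join(f"{ord(ch):03d}" for ch in message)
    return Fraction(int(digits), 10 ** len(digits))

def decode_ratio(ratio, n_chars):
    """Read the digits back out of the exact ratio, three per character."""
    digits = f"{int(ratio * 10 ** (3 * n_chars)):0{3 * n_chars}d}"
    return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

message = "It from bit"
ratio = encode_as_ratio(message)
# With exact fractions the round trip is perfect; round the ratio to any
# finite precision (i.e., add "noise") and the tail of the message is lost.
```

The decoding only works because `Fraction` carries unlimited precision; the moment the ratio is stored as a physical length in a noisy universe, the deep digits are gone, which is exactly Shannon's point.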
Appendix I – Reductionism and Bell's Cat

It's fun sometimes to mix metaphors, which is why I'm introducing the concept of Bell's Cat. By now we're familiar with Schrödinger's cat-in-the-box experiment and Bell's inequality. It turns out that both paradoxes are closely interrelated and stem from the fallacy of reductionism. Erwin Schrödinger believed the Copenhagen School had taken his own wave equation way too literally, so he proposed the cat-in-the-box as a way of ridiculing their interpretation of quantum mechanics. Well, that backfired because the Copenhagen School just took his cat-in-the-box experiment and doubled down on their bet. The issue was: Exactly what is an observation, and where should science draw the line between the observer and the observed? The Copenhageners concluded there is no line between the two, and the entire universe is actually one giant wave function. The radioactive source, the Geiger counter, the cyanide, the cat, the box, and the observer are all inextricably blended together as a superposition of wave functions. Not only is there no objective reality on the microscopic quantum level, there is no objective reality at any level. Observation literally creates reality, and intelligence – whatever that is – is the necessary agent that brings about reality. Of course, carrying this idea to its logical conclusion results in Wheelerism, which I discussed in Appendix D, and eventually leads to the many worlds theory. In addition to the obvious paradoxes raised by Wheelerism, the other thing I really don't like about this idea is that it inevitably leads to solipsism. In case you don't know what solipsism is, it's the notion that you are the only real thing that exists, and that every other object in the universe, living or non-living, is nothing more than a construct of your own mind. When you look away from the moon it ceases to exist, and when you look back it pops into existence.
This is exactly the attitude of psychopaths, who view other people as mere objects to be manipulated for their own selfish purposes. In other words, you run the risk of turning into a Ted Bundy if you take Schrödinger's wave function too literally. (Just kidding.) I covered Bell's inequality in detail in Is Science Solving the Reality Riddle? I just re-read The Cosmic Code46 by the late Heinz Pagels, who presented an excellent interpretation of Bell's experiment. Pagels said the conventional interpretation of Bell's inequality forces us to make an unpleasant choice: If we insist on objective reality, we must give up the idea of local causality and vice versa. We can either have objective reality or local causality, but not both. Since most physicists prefer to keep local causality, they must conclude from Bell's inequality that there is no objective reality. I think you can see where this leads and how it relates to Schrödinger's cat. So is there a way out of this dilemma? Pagels said yes, and I agree. Both of us came up with basically the same reasoning: While experiments may show there is no objective reality on the microscopic quantum scale, there definitely is objective reality on the macroscopic scale. Why? A tiny amount of information exists at the quantum level, such as charge, spin, etc., but there is no history and no memory. History and memory are irreversible and entropic. Therefore, objective reality (everything recorded in our universe that is familiar to us) emerges from entropy. We can draw a clear line separating the observer from the observed very close to the quantum level, enabling us to dispense with the silly notion that we're all just part of one giant wave function. The logic of reductionism says we're all made of atoms and atoms are wave functions; therefore, we exist only as a superposition of wave functions. That's ridiculous.
Dispensing with reductionism immediately solves Schrödinger's cat paradox and also allows us to avoid having to make the unpleasant choice demanded by Bell's inequality. Bell's Cat clearly illustrates that science can begin solving the reality riddle by first rejecting reductionism.

46 Unfortunately this book is no longer in print, although the Kindle version is still available. Heinz Pagels was one of the few writers of popular science books I've come across who really "got it."