# Direct representation second draft

A Grand Unified Field Theory and new foundation for the complete, consistent, and closed mathematical representation of the universe.

• Background independent means the theory does not depend on the preexistence of anything except the singularity. It derives all of physical existence from the infinite singularity. In particular, DR does not depend on the preexistence of quantity, dimensions, geometry, time, or space. It does not treat time or space as an abstract mathematical stage or coordinate system that events occur in. DR defines the cause, structure, composition, and geometry of time and space themselves in terms of how they relate to the infinite singularity.

In DR, time is not just treated as the fourth scalar dimension of spacetime geometry. While that representation is correct as far as it goes, it is incomplete; it is only a partial representation of time in nature. Consequently, mathematical physics theories based on the geometric scalar representation of time can never be more than an incomplete, partial representation of existence.

Time is represented as a scalar in physics. That means its only properties are its magnitude and its sign. We can compare two scalars, so if a and b represent the times of two events A and B, and a < b, we can say event A comes before B or event B comes after A, and using |b - a| we can represent the amount of time between events A and B. If b - a > 0 then event B occurred after A, while if b - a < 0, event B occurred before A. That is all the scalar geometric representation of time can represent.

Something very important is missing in the geometric representation of time: it has no distinguished representation of the present moment. It makes no distinction between the past, present, and future. That is a big hole in General Relativity. It means GR is fundamentally incomplete. The missing aspect of the scalar geometric representation of time is that time is a process as much as it is a state and a relation. The fundamental problem is that GR is based on geometry. Geometry represents states and their relations, but it does not represent process.
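The scalar bookkeeping described above can be sketched in a few lines. A minimal illustration (the event times are invented for the example); note that nothing in this representation singles out a present moment:

```python
# Scalar representation of time: each event's time is just a number.
a = 3.0   # time of event A (arbitrary units, invented for illustration)
b = 5.5   # time of event B

# Ordering: a < b means event A occurred before event B.
assert a < b

# Duration: |b - a| is the amount of time between the two events.
interval = abs(b - a)
print(interval)   # 2.5

# What is missing: no value on the axis is marked as "now" -- the
# past/present/future distinction is simply absent from the scalar picture.
```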
Time is a process, just as much as it is states and their relations. In fact, it is the process aspect of time that distinguishes the existence of the present moment from the past and future. It is the process aspect that represents the current moment of time and that causes the arrow of time. The mathematical representation of GR treats all of spacetime as a single 4D block of spacetime geometry that instantaneously blinks into existence at the start of the big bang. This 4D geometric manifold is then supposed to represent all possible pasts and futures in the universe. Physicists then use a set of non-mathematical meta-rules, guidelines, and measurement protocols to define local spacetime reference frames and infer a relative representation of time from the spacetime geometry.

The problem is that the real world is more than sets of states and relations. The real world is more than geometry. Things in the real world are a superposition of state, relation, and process. To represent time completely, we have to be able to represent its process-related aspects, while simultaneously representing its syntactic or geometric aspects. The problem with geometry is that it is process independent. The real world is process dependent. Existence does not make a distinction between process and geometry. Existence is process and geometry.

Everything finite exists in time. That means time has to be a component in the representation of everything that exists. Furthermore, to represent that component completely, its process-related aspects must be included in its representation. Since time is a component of everything finite that exists, we must also include the process-related aspects in the representation of everything finite that exists.
Doing so gives rise to the existence of symmetry, energy quantization, the conservation of energy, cause and effect, the arrow of time, entropy, and the distinguished representation of the present moment of existence.

At the level of individual energy and dark energy quanta, nature makes no distinction between the representation of state, relation, and process. It uses a single representational primitive that is a superposition of state, relation, and process. That primitive is the energy quantum. In particular, energy quanta are a superposition of a set of quantum states, a set of relations, and a set of processes. Every energy quantum is a single entity that consists of the superposition of its states, relations, and processes. That is the fundamental cause of particle-wave duality. Every energy quantum represents the superposition of a particle and a wave. What has been missing in this picture is that it also represents a superposition of processes.

States, relations, and processes are context dependent. Their superposition defines their context of existence. Their superposition composes and is their context of existence. It is their existence. Every energy quantum exists in some context. The states, relations, and processes of energy quanta depend on the context they exist in because they are composed of the energy quanta that compose that context. They are dependent on the energy quanta that compose them. In turn, their quantum field interactions determine the states, relations, and processes of the energy quantum they compose.

By contrast, mathematical structures are defined as a set of objects (i.e., a collection of states) and the set of relations or operators that relate those objects. In other words, mathematics makes a distinction between states and relations, and it does not include a primitive representation for process at all. Mathematics uses separate primitives (i.e., distinct symbols) to represent states and relations.
It makes a distinction between numbers and operators. We use meta-rules in mathematics and physics to represent processes indirectly, as the sequence of sets of results we obtain from applying a sequence of relations or operators to a sequence of states, but process is not a first-class primitive representation in mathematics. Mathematics is also exponentially more complex than existence because it distinguishes between states and relations, while nature treats the superposition of states, relations, and processes as its only representational primitive. In other words, nature has only one representational primitive: an energy quantum.

Everything that exists is composed of energy and/or dark energy quanta. Nature represents everything that exists for a Planck time or longer in terms of the quantum field composition of energy quanta. As a result, it represents everything that exists directly, in context. Specifically, nature represents everything that exists directly by value, using value semantics. By contrast, mathematics represents things by reference, using reference semantics. Mathematics relies on an observer to define the context (domain and range) of representation, and it relies on an observer to define specific sets of states and relations within each of those limited domains and ranges.

Obviously, this creates a lot of complexity, it is observer dependent, and it has great difficulty representing systems with multiple context-dependent states, relations, and processes. Combining representations defined over different domains and ranges is also difficult, error prone, labor intensive, and sometimes impossible. A lot of things in the real world are simply too complex to represent using current mathematics. At best, current mathematics can only give us an incomplete, partial, indirect representation of parts of existence. By contrast, physical existence is mathematically complete and consistent over the universal domain.
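The reference-semantics versus value-semantics contrast drawn here has a familiar programming analogue, which may make the distinction concrete. A small sketch (Python lists are my own illustration, not part of DR):

```python
import copy

state = [1, 2, 3]

# Reference semantics: 'alias' does not hold its own value; it refers
# to the same underlying object, so it changes whenever 'state' changes.
alias = state

# Value semantics: 'snapshot' is composed by value; it owns an
# independent copy and is unaffected by later changes to 'state'.
snapshot = copy.deepcopy(state)

state.append(4)
print(alias)     # [1, 2, 3, 4] -- tracks the referent
print(snapshot)  # [1, 2, 3]    -- a self-contained value
```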
It is not observer dependent, and it does not depend on an observer to define domain- and range-limited relations between separate states, nor does it rely on an observer to define mathematical functions or processes. Instead, nature defines everything that exists in context in the universal domain. Since nature's mathematical representation is based on value semantics, it is impossible for it to be incomplete or inconsistent. Everything in nature represents its own existence directly. Nothing that exists, exists indirectly. Every single thing in existence only represents its own existence. Everything that exists is itself. Because of that, it is literally impossible for physical existence to ever become incomplete or inconsistent.

No ontological consistency rules are required to enforce the consistency of existence. No supervisory process or observer needs to watch over existence and make sure it follows the rules of physics and remains consistent. Because of direct representation, everything that exists, exists zero distance from itself, so light-speed limitations on the speed of information transfer play no role in limiting the consistency or completeness of existence.

In fact, since existence is a direct representation, it isn't even represented in terms of information. All forms of information are indirect representations. Information cannot represent anything directly, let alone everything that exists. By contrast, direct representation cannot represent anything indirectly. It can only represent things directly. In turn, that means at the topmost ontological level, the field of representation splits into two mutually exclusive components: direct representation and indirect representation. Direct representation represents, composes, and is existence, while indirect representation represents information about existence.
Direct representation is complete, consistent, and observer independent in the universal domain, while indirect representation and information are incomplete, inconsistent, and observer dependent. Existence requires no designer and no omniscient programmer. Existence is a self-bootstrapping direct mathematical process / state / system. It is a self-organizing, self-bootstrapping, self-modifying, transfinite recursively composed quantum computer. Existence creates its own 'quantum computer' from the infinite singularity. It creates its own 'program' and its own evolving quantum state as it expands from the singularity. The self-bootstrapping process that creates existence from the singularity, and the mathematical basis for the direct representation of physical existence, are explained later in this presentation.

The consistency and completeness of existence arise as a necessary consequence of its mathematical representation. It is mathematically, logically, and physically impossible for direct representation to ever be inconsistent or incomplete. By borrowing a page from nature's construction manual, we can create mathematics that is complete and consistent in the universal domain. We can create mathematics that has no observer dependencies. We can create mathematics that is not based on the preexistence of dimensionality, geometry, time, space, or anything except the infinite singularity.

In turn, using the direct representation of mathematics, we will, for the first time in history, be able to compute a complete and consistent representation of the quantum field configuration of whatever part of physical existence we desire and have the computational resources to represent. That covers a lot of ground. The implications are staggering. It means we will be able to understand, and do, anything that is possible in the universe, given the availability of enough energy.
That means we should be able to directly predict, manipulate, and control the quantum field composition of existence. We will possess the knowledge required to create anything that can exist in the universe. We will be able to generate, control, and manipulate vast quantities of energy and use it for mankind's benefit and profit. We will be able to develop nano-scale engineering and manufacturing. We will be able to make great advances in medicine, chemistry, and materials science. We should be able to create artificial gravity and light-speed, or even faster-than-light, propulsion systems for spacecraft. The galaxy, and eventually the universe, will be our backyard.

Direct representation (DR) describes the cause, structure, composition, geometry, and relations between time, space, energy, dark energy, matter, and dark matter. Time and space are both quantized and are composed of energy and dark energy, just like everything else that exists. The only difference is that they are composed of zero-point quantum field virtual energy and virtual dark energy. That means the energy and dark energy they are composed of is part of the background energy field of spacetime that we measure all other energy relative to.

Observer independent means nature is not dependent on observers, observation, or measurement for creating or enforcing the laws of physics that govern the actual operation and ongoing construction of existence. Only the human discovery and mathematical representation of the underlying laws of physics is observer, observation, and measurement dependent. In other words, it is important to distinguish between the true laws of Physics as they exist in Nature herself and man's indirect, information-based mathematical representation of those laws.
The current mathematical descriptions of the laws of Physics are only indirect, partial, incomplete descriptions of the representation Nature uses to construct and control the ongoing evolution of the universe.

Physical existence itself is a natural, complete, and consistent mathematical system. Nothing that exists is inconsistent with itself. A complete and consistent system cannot be represented completely and consistently using mathematics that is incomplete and/or inconsistent. If we want to be able to represent nature completely and consistently, it can only be done using mathematics that is complete and consistent over the universal domain. The only possible way to achieve that is with direct mathematics. In other words, mathematics must be redefined to make its structure and semantics isomorphic to those of nature. Currently, the structure of mathematics is partially isomorphic to that of existence, but the current semantics of mathematics is the logical converse of that of existence. The universe represents existence in terms of value semantics, whereas mathematics represents things in terms of reference semantics.

This problem exists deep in the foundation of mathematics. The current mathematical definition of 'relation', and the definition of the set membership operator, are both based on reference semantics. Current mathematics is observer dependent, whereas the existence of existence is observer independent. We will have to correct these errors if we want to be able to represent physical existence completely and consistently using mathematics. I suspect these issues were not previously discovered because the problem isn't caused by the logic of mathematics itself. Mathematical logic is self-consistent; it is just incomplete. The problem is caused by the fact that we use the indirect representation of information to represent mathematics. The use of indirect representation forces us to use reference semantics.
The fundamental problem is that, because we are observers, we represent existence and mathematics relative to ourselves. But existence cannot represent itself relative to us, or relative to any observer. No observer could exist in the singularity. Hence existence must be able to represent itself in a way that is not observer centric. The only way for it to do that is for existence to represent itself relative to itself. Instead of existence representing itself relative to something else by reference, using reference semantics, existence represents itself directly in terms of its relation to itself, using value semantics. In other words, existence is based on composition by value, instead of composition by reference. Composition by value requires value semantics. Everything that exists is composed of the composition of energy and dark energy quanta by value. This leads directly to direct representation.

This problem is the mathematical analog of the old Ptolemaic cosmological view that the universe revolves around the earth. We simply replaced a universe that revolves around the earth with a universe that is described relative to an observer. To fix this, we have to represent existence in a way that is observer independent. There is a lot more to this than removing the dependence on an observer's chosen coordinate system by using a generally covariant representation. We have to create a version of mathematics with a representation that is not observer relative, or observer centric. The only way to do that is to avoid indirect representation, because indirect representation requires the use of reference semantics. Indirect representation requires reference semantics because it cannot represent anything directly. That is the very thing that makes it indirect. The only way around this is to use direct representation. Direct representation is the logical converse of indirect representation. It represents everything directly, using value semantics.
Because of that, it is completely observer independent. We can't physically avoid indirect representation in current computer systems, but we can avoid it logically and semantically. To do so, we can create a software virtual machine that implements an indirect representation of direct representation. We then represent everything else in terms of that simulated direct representation. The abstraction layer provided by the direct representation virtual machine then ensures the resulting computational system can only represent things using direct representation by-value semantics.

Direct representation provides a mathematically complete and consistent representation of existence that is not dependent on observers, observation, or measurement. This is impossible using current mathematics because, as Kurt Gödel proved, current mathematics is incomplete and/or inconsistent. I discovered the causes of mathematical incompleteness and inconsistency and intentionally formulated DR in a way that avoids them. DR is based on a reformulation of mathematics that is both complete and consistent. DR explains the cause of nature's own 'laws of physics', and it explains how nature ensures the consistency of the true laws of physics throughout the entire universe. It also explains the cause of mathematics, life, observers, and observation. In DR, they are all a natural outcome of the direct mathematical process that causes and composes all of physical existence.

Information independent means DR uses an alternative to the representation of information. That alternative is vastly superior to information when it comes to the representation of physical existence. It can represent everything in the universe completely and consistently using a single ontology and a single self-organizing, self-modifying, recursive computational process with zero domain limitations. It is also not subject to Heisenberg Uncertainty, undecidability, or halting problems.
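The presentation does not specify how such a direct-representation virtual machine would be built, but the by-value semantics it says the abstraction layer must enforce can be approximated by a boundary that deep-copies every value on the way in and on the way out. The class below is a hypothetical sketch of that one idea (the name `ValueCell` and its API are mine, not the author's design):

```python
import copy

class ValueCell:
    """Hypothetical sketch of a by-value container: every value that
    crosses the boundary is deep-copied, so clients only ever hold
    independent values, never shared references into its state."""

    def __init__(self, value):
        self._value = copy.deepcopy(value)   # copy in

    def get(self):
        return copy.deepcopy(self._value)    # copy out

    def set(self, value):
        self._value = copy.deepcopy(value)

cell = ValueCell({"quanta": [1, 2]})
outside = cell.get()
outside["quanta"].append(99)   # mutating the returned copy...
print(cell.get())              # ...leaves the cell's own state intact: {'quanta': [1, 2]}
```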
It is not based on the 'It from Bit' quantum mechanics hypothesis that physical existence is composed of information or bits. The quantum physics hypothesis that physical existence is composed of information at its deepest levels is false. Its derivation contains a logical fallacy. That fallacy will be exposed and explained later in this presentation.

In DR, I had to redefine the fundamental mathematical concepts of number and relation. Unfortunately, these redefinitions are essential, because they are the only possible way to create a mathematical representation of physical existence that can be isomorphic to, and exist in one-to-one correspondence with, all of physical existence. The current definition of 'number' is inconsistent with physical existence for three reasons. First, it is based on reference semantics, while existence is based on value semantics. Second, physical existence expanded from the infinite singularity, so its origin is infinity, not zero. Third, numbers are based on a well-founded cumulative hierarchy composed from the transfinite recursive composition of empty sets.

The problem is the empty set. The empty set represents nothing. It represents the complete absence of existence. Nothing does not, and cannot, exist anywhere in the universe. It has never existed, and it is impossible for it to exist. It is logically and physically inconsistent with existence. That means numbers are based on a contradiction with existence. Since mathematics is based on numbers, it means current mathematics is a deductive logic system based on a false premise. It means mathematics is logically unsound relative to the representation of existence. This doesn't mean we should abandon mathematics. Mathematics is still a very useful and very valuable tool. We just have to be careful when using it for physics. We need to understand that mathematics provides an incomplete and occasionally inconsistent representation of physical existence.
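For reference, the "cumulative hierarchy composed from the transfinite recursive composition of empty sets" criticized here is the standard von Neumann construction of the natural numbers, in which 0 is the empty set and each successor is n ∪ {n}. A short sketch of that standard construction (finite cases only):

```python
def von_neumann(n):
    """Build the von Neumann natural number n: 0 = {}, n + 1 = n U {n}."""
    number = frozenset()              # 0 is the empty set
    for _ in range(n):
        number = number | {number}    # successor step: n U {n}
    return number

three = von_neumann(3)
print(len(three))                 # 3 -- the ordinal n contains exactly n elements
assert von_neumann(0) == frozenset()
assert von_neumann(2) in three    # 2 is an element of 3, mirroring 2 < 3
```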
Put another way, current mathematics provides a partial representation of physical existence that occasionally computes results that are inconsistent with physical existence. However, even a partial representation is much better than no representation.

Fortunately, the definition of direct numbers is an extension of the existing definition, with the exception of a replacement for the empty set, so the mathematics is structurally isomorphic to that of current numbers; only the semantics are different. That means we can reuse existing mathematical structures. We just need to extend them to accommodate the change in origin, and carefully consider the change in semantics from composition by reference to composition by value. Direct representation redefines the fundamental mathematical concepts of 'number' and 'relation' to eliminate all anthropocentric dependencies on observers, observation, measurement, and information. The redefinitions are absolutely essential if we are to eliminate all anthropocentric observer biases from the mathematical description and representation of physical existence. They are essential if we are to formulate a mathematics that is both complete and consistent over the universal domain.

Direct representation is extremely powerful from a representational perspective. It provides a mathematically complete and consistent description of existence. That means, in theory, it can represent everything that can exist both consistently and completely. It exponentially simplifies the mathematical representation of physical existence. With direct representation, the counterintuitive nature of parts of quantum physics is easily explained. Simply put, with direct representation, the entire structure of existence falls logically into place, is consistent, and, for the first time in history, is consistent with common sense.
The cause of particle-wave duality, the cause of quantum superposition, the cause of Heisenberg Uncertainty, the cause of quantum wave collapse, and the cause of quantum non-locality are all easily explained in direct representation. Counterintuitive “observer effects”, where the act of observation or decisions made by an observer appear to affect the outcome of quantum experiments, are also explained. Generally speaking, most of these counterintuitive phenomena arise because we are currently attempting to represent existence using an incomplete, observer-centric representation. Thus, our current representation of existence is incomplete and not completely objective. The remainder are caused by an incomplete understanding of time, space, and energy. Specific explanations for the cause of each of these phenomena will be presented after the necessary prerequisites are covered.

The truly beautiful part of direct representation is its stunning explanatory depth, breadth, and, above all, simplicity and elegance. Direct representation can explain the cause of everything in existence, starting from the singularity. It reduces the explanation of all of existence to the ongoing operation of a single self-bootstrapping, self-organizing, self-modifying representation and computational process. The same process and representation can explain everything from the origin and composition of time and space, the relations between energy, dark energy, and spacetime, and the origin and cause of all the relations between all the fundamental forces and quantum field interactions, to, eventually, all of physics. The same representational and computational process can also explain the origin of life and biological evolution. It can explain the neural connection patterns and synaptic organization of the brain, the neural representation of thought, and the solution to the hard problem of consciousness. It can even explain the primary factors that govern the success and failure of businesses, economies, and governments. As far as I know, it is the first theory of everything with a mathematical foundation strong enough to truly be a full theory of everything in the universe.
• For example, in DR, Heisenberg Uncertainty is non-existent. Heisenberg Uncertainty is not a fundamental property of existence. It is a consequence of observation and measurement at quantum scales. Note that I am not saying Heisenberg Uncertainty is false, or that it is not a real, bona fide, experimentally verifiable physical phenomenon. It is very real. What I am saying is that the only reason it exists is because we cause it when we measure phenomena at quantum scales.

In order to represent information about phenomena, we must observe and measure those phenomena. It is impossible to measure complementary observables at the same time. For example, to measure position accurately, the measurement must occur at a single instant in time. But to measure frequency, it must be measured over as long a period of time as possible. Clearly, we cannot measure both phenomena precisely at once with one measurement. With one measurement, the more precisely we measure time, the less precisely we can measure frequency, and vice versa.

If we try to use two different measurements, then we run into another problem. At quantum scales, measurements measure a change in the configuration of the measured phenomena relative to the background quantum field. That change is caused by the interaction of the quantum field that composes the measurement apparatus with the quantum field that composes the measured phenomena, relative to the change caused by the interaction between the measurement apparatus quantum field and the background quantum field of spacetime (i.e., the ground state). When we measure things at quantum scales, the act of measurement changes the subsequent quantum field configuration and quantum state of the phenomena we measure. Thus, subsequent measurements of those same phenomena are likely to be different than they would have been if the first measurement had not been performed.
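The time/frequency tradeoff invoked in this note is, independently of DR, a standard property of Fourier pairs: the RMS widths of a pulse's intensity profile in time and in frequency satisfy σ_t·σ_f ≥ 1/(4π), with equality for Gaussian pulses. A pure-Python numerical check of that standard result (the sampling parameters are my own choices for the illustration):

```python
import cmath
import math

def widths_product(sigma, n=256, dt=0.1):
    """RMS time width times RMS frequency width for a Gaussian pulse."""
    # Sample g(t) = exp(-t^2 / (2 sigma^2)), centered in the window.
    t = [(k - n // 2) * dt for k in range(n)]
    g = [math.exp(-(tk ** 2) / (2 * sigma ** 2)) for tk in t]

    # Naive discrete Fourier transform (O(n^2), fine at this size).
    G = [sum(g[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
         for k in range(n)]

    # Frequency axis in Hz, with the upper half wrapped to negative values.
    f = [k / (n * dt) if k < n // 2 else (k - n) / (n * dt) for k in range(n)]

    def rms_width(axis, amplitude):
        # Standard deviation of the intensity |amplitude|^2 along the axis.
        power = [abs(a) ** 2 for a in amplitude]
        total = sum(power)
        mean = sum(x * p for x, p in zip(axis, power)) / total
        var = sum((x - mean) ** 2 * p for x, p in zip(axis, power)) / total
        return math.sqrt(var)

    return rms_width(t, g) * rms_width(f, G)

for sigma in (0.5, 1.0):
    print(round(widths_product(sigma), 4))   # 0.0796 both times, i.e. 1/(4*pi)
```

Narrowing the pulse in time (smaller `sigma`) broadens its spectrum by exactly the reciprocal factor, so the product stays pinned at the Gaussian minimum.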
As a result of these two effects, the more certainty we have in the measurement of one observable, the less certainty we can obtain in the measurement of its complement.

The prevailing QM hypothesis of “It from Bit”, that existence is composed of information at the smallest scales, is false. That false belief stems from a failure to unambiguously define the term “information”, combined with a logical error in the derivation of “It from Bit”. Later in this presentation, I will analyze the definition of 'information' in excruciating detail, point out its internal contradictions, and conduct a thorough post-mortem that specifically identifies the logical fallacies that led to the erroneous belief in “It from Bit”.

Existence is not composed of information. It is composed of energy and dark energy quanta. Energy quanta and information are not the same thing. At quantum scales, nature does not even represent itself indirectly. Observation and measurement are only needed to represent things in terms of information. Since existence does not represent itself in terms of information, it has no need to observe or measure itself, so it does not experience any Heisenberg Uncertainty.

Nature already exists. It does not need to observe itself or measure itself to represent itself in terms of information. Physical existence is not composed of information. It is composed of energy and dark energy quanta. Energy and dark energy quanta exist directly, not indirectly. Their existence is based on value semantics, not reference semantics. In other words, physical existence represents itself directly in terms of the existence of energy and dark energy quanta, not indirectly in terms of information about the existence of energy quanta. Only intelligent observers represent existence indirectly in terms of information. Nature has no need to do that. At the lowest levels of existence, nature can't represent anything indirectly. It represents everything directly.
Only highly evolved observers with brains, and artificial information systems created by those observers, have the capacity to represent things indirectly. Attributing the human brain's ability to represent things indirectly to all of nature is a classic example of anthropocentric bias. We mistakenly attribute our brain's ability to represent things indirectly to all of nature. Since we represent existence indirectly, relative to ourselves as observers, we assume the only possible type of representation is indirect representation.

This error is another variation of the old Ptolemaic earth-centered cosmos, except we've replaced the earth-centered cosmos with the observer-centric indirect representation of information about the cosmos. Instead of representing the universe as revolving around the earth, we represent the universe indirectly in terms of how it relates to our observations and measurements. Apparently, we failed to consider that the existence of existence cannot depend on our observation, measurement, or information about it. Existence must exist independent of all observers, observation, and measurement.

Existence expanded from the singularity in the Big Bang. There can be no observers in the singularity. There is nothing that can compose an observer in the singularity. There isn't even any space or time in a singularity for an observer to exist in. If existence could not represent itself indirectly from the perspective of an observer in the singularity, then why should it do so now? Think about it.
• Most of the missing antimatter in the universe still exists in the hidden mirror sector. We just can't observe it because its arrow of time is backwards relative to ours. That means it exists outside our light cone.

The singularity partitions the universe into matter and energy in the observable universe, and dark energy and dark matter in the hidden mirror sector. Spacetime is a combination of the singularity, virtual energy, and virtual dark energy.

From our perspective, the 'missing antimatter' exists on the other side of the singularity, in the hidden mirror sector. Just as the observable part of the universe has a 'missing antimatter' asymmetry, the unobservable hidden mirror sector has a 'missing matter' asymmetry. Just as matter dominates in the observable part of the universe, antimatter dominates in the unobservable hidden mirror sector. Dark matter is antimatter in the hidden mirror sector. It cannot annihilate with matter because of its reversed arrow of time. In other words, its future proceeds backwards in time relative to ours. As time progresses, most matter and antimatter get further and further away in spacetime. The only antimatter we detect is the small fraction that happens to be created in our present.

To understand this, it is first necessary to understand the present moment in time. There are actually two present moments in existence. There is the observable present, which moves from the now toward the future, and there is a mirror sector present, which moves from the mirror sector's now into the past relative to us. The two present moments each move away from the singularity in opposite temporal directions, except for inside black holes in the observable universe and inside white holes in the hidden mirror sector. There, the present moments move back towards the singularity. They continue doing so until they reach the singularity, at which point time restarts in the next big bang, at the next quantum state transition.
• Mathematics is represented in terms of information. Information is a kind of indirect representation. Nature represents existence directly, in terms of energy quanta and their relationships, not indirectly, in terms of information about existence. In mathematics, a mathematical structure is 'abstract entities with relations between them'. The problem is that physical existence is not abstract; it is concrete. In addition, the structure, relations, and processes in physical existence are dynamic, not static. The only constant in nature is change. The quantum energy field that composes existence is always flowing and changing, yet we try to represent nature using static geometry and invariant sets of equations with fixed relations. At best, only a few percent of existence can be represented that way. Physical existence is a direct mathematical structure that represents itself in terms of a context-dependent, dynamic encoding based on value semantics, not a fixed, context-free, static encoding based on reference semantics. In addition, at quantum scales, nature does not distinguish between entities and relations. Nature's mathematical primitives combine the representational behavior and state of entities and relations in a single unit of physical existence: an energy quantum. This can be considered a further generalization of infinite order topos theory [Lurie, 2009] in mathematics. The generalization removes the distinction between arrows and objects in category theory and replaces them with a new representational primitive that represents the superposition of state, relation, and process. Instead of only allowing higher order relations, it also allows higher order objects. It creates a mathematical system based on a well-founded cumulative hierarchy created via transfinite recursion of symmetric differences in infinity, instead of a well-founded cumulative hierarchy of numbers created via the transfinite recursion of empty sets.
Physically, those symmetric differences are energy and dark energy quanta. Energy and dark energy quanta are nature's version of positive and negative numbers, except nature's numbers are based on value semantics instead of reference semantics. First order symmetric differences in the infinite singularity compose temporal and anti-temporal field energy quanta. They compose white and black hole quantum microsingularities. The event horizons of those quantum scale microsingularities represent energy and dark energy temporal field quanta, respectively. Their transfinite recursive composition composes all other forms of energy, dark energy, space, time, matter, dark matter, and antimatter. Energy quanta compose by value, not by reference. Therefore, all of physical existence is composed of the interaction of energy and dark energy quantum fields by value. In turn, this means nature represents all of existence in context. Thus it can represent context-dependent state and behavior. Furthermore, it can do so without any a priori dependence on the existence of observers, observation, measurement, or information. The formal derivation of energy quanta, quantum states, their semantics, and the quantum field structure of time and space will be presented later in this paper. We will show how to consistently derive all of existence from the singularity. Before that is possible, we need to dispel some common misconceptions and develop a consistent set of definitions for terms like 'relation', 'representation', 'infinity', 'nonexistence', 'existence', 'abstraction', 'information', 'observer', and 'observation'. We need to understand how those terms relate to each other before we can use them to derive a consistent and complete mathematics capable of representing all of physical existence. Direct representation derives the very existence of time, space, and all 'fundamental forces' and their relations directly from the singularity.
It appears to lead to a universe mostly like that predicted by Special Relativity, General Relativity, and the standard model of particle physics, with a few differences. First, it does not create all of spacetime as a static block of geometry at the instant of the big bang. Instead, the quantum field only represents the current quantum state of existence: the present moment in time. The quantum field is generated by a self-bootstrapping, self-organizing, self-modifying transfinite recursive process that operates on its own current state to create its next state, thereby creating the arrow of time and the present moment. This is a feature that GR and current mathematics cannot model because of their dependence on statically defined symbols, relations, and fixed formulas. Self-referential symbolic mathematics systems contain fundamental information theoretic limits (incompleteness) they cannot overcome. These limitations show up in the form of observer-centric representation, domain limitations, incompleteness, inconsistency, uncertainty, undecidability, and an exponential growth in representational complexity relative to increasing domain size. Direct representation eliminates all of those problems. It looks like it will be able to greatly simplify and unify all of physics and explain many of the unsolved problems in physics. It also explains how the brain represents thought and consciousness. In theory, it should be capable of representing all of existence completely and consistently. In other words, given unlimited time and memory, it could represent the entire quantum state of existence. DR predicts that time and space are both kinds of quantized quantum energy fields. In particular, it generates the geometry of time and space and the zero point virtual quantum energy field directly from the singularity. In other words, it doesn't assume the a priori existence of geometry or any laws of physics.
It generates geometry and all the laws of physics directly from the singularity. It turns out spacetime is (mostly) composed of a cubic lattice. That lattice generates Euclidean geometry as an approximation of Minkowski spacetime at classical scales. It breaks down below the scale of spacetime structure (pi Planck lengths). DR predicts that time is a quantized scalar energy field. It is the only truly fundamental force. All other forces are composed in terms of the temporal energy field. In DR, all forms of energy contain a temporal field component. Temporal quantum field potentials dominate all other forces and are the most important component in determining cause and effect. In absolute terms, the temporal field is about 15570 times stronger than the strong force, but its energy composes the zero point virtual quantum field of spacetime, so we cannot measure it directly. DR predicts that gravity is not a fundamental force. Instead, it arises from the quantum field dynamics caused by the interaction of the temporal, electromagnetic, color, and weak fields. Perhaps most surprisingly, DR predicts that light is stationary relative to the spacetime quantum foam, and that the speed of light is due to the rate at which the zero point quantum field that composes spacetime expands. In part, spacetime is composed of the electromagnetic field. The expansion of spacetime carries photons and their electromagnetic field with it. The expanding spacetime field is slowed down by gravitational fields and by the presence of matter. Mass acts like a sink for the spacetime field. The spacetime field is absorbed by opaque matter. It can pass through transparent matter, but it is slowed by it. That is the cause of the index of refraction. Experiments designed to detect a moving spacetime field, such as vacuum mode Michelson interferometer experiments, fail to detect it because the matter in the vacuum tube walls blocks the very motion the experiment is designed to detect.
Null results from these experiments should not be relied on to rule out the presence of absolute motion in the quantum foam. In this model, information is only an indirect, observer-centric representation of physical existence; it is not physical existence itself. Physical existence itself is a mathematical structure, but that structure is based on direct representation instead of indirect representation. In direct representation, energy quanta are the functional equivalent of nature's 'numbers'. However, nature's numbers represent a far richer mathematical structure than scalars, vectors, or tensors do in mathematics. In particular, energy quanta combine the mathematical concepts of state, relation, and process in a single indivisible mathematical entity. That's what causes particle-wave duality. What's more, that entity can compose and be composed of other entities. Thus an energy quantum can compose hierarchical networks of higher-order states, relations, and processes. Energy (and dark energy) quanta interact and compose an evolving mathematical structure that is the entire quantum state of existence. That quantum state doesn't represent existence indirectly or abstractly; it is existence.

-------------------------- Observer centric properties of mathematics --------------------------

Because mathematics is represented in terms of information, it is necessarily observer centric. It requires an observer to formulate mathematical equations and interpret their meaning. That process also requires observation. The phenomena to be modeled have to be observed, measured, and encoded symbolically. The equations themselves have to be observed, encoded, and their meaning interpreted. That also requires the observer to make decisions. The representation, encoding, decoding, and interpretation of information, observation, measurement, and decisions are all observer-dependent, observer-centric activities. Nature doesn't represent itself in terms of information.
Nature doesn't make decisions. Existence doesn't need to represent information about itself. It already is itself. When we represent existence in terms of information, we represent the world from a human perspective, and we end up conflating human abilities, such as our ability to measure things, represent them indirectly in terms of information, and make decisions, with nature as a whole. Humans are only a minuscule part of nature. We must not conflate human abilities with those of nature as a whole. With the exception of life, nature cannot and does not observe itself, measure itself, represent itself indirectly, interpret meaning, or make decisions. All of those things are anthropocentric. Most of nature can't even decide whether to represent something as a zero or a one, let alone true or false. Consequently, it cannot represent itself in terms of a bivalent code. Physical existence is not any kind of indirect representation. Nature is far more efficient than that. Nature doesn't represent itself using a bivalent code. It represents itself using a univalent code. An energy quantum exists. Period. Nature doesn't sit around asking itself whether or not an energy quantum can exist, and it doesn't represent energy quanta that don't exist. There are no parallel universes or multiverses that represent alternative quantum mechanical possibilities or quantum states. Such concepts are mathematically and logically inconsistent, and they violate the first law of thermodynamics. It would take an infinite amount of energy to represent all logically or mathematically possible quantum states. The total amount of energy in the universe is finite. Besides, only one infinity can exist. The universe is everything. There can only be one everything. We must remember that the existence of multiple types of mathematical infinities is based on the existence of different types of numbers, which themselves are based on the existence of the empty set.
The empty set is inconsistent with physical existence, so any mathematical conclusions deduced from its existence are unsound. That includes conclusions about the nature of infinity.

-------------------------- Inconsistencies in current Mathematics relative to Physical Existence --------------------------

Current mathematics is based on reference semantics. In particular, the set membership operator in axiomatic set theory is based on reference semantics. In other words, it is anthropocentric. Physical existence is based on value semantics. The mathematics required to represent it consistently and completely has to be based on value semantics. The origin of physical existence is the infinite singularity. The origin of the number system is zero. That means the origin of numbers and the origin of physical existence are mathematically inconsistent. Numbers are based on empty sets. The empty set is inconsistent with physical existence. Nature's numbers are based on the infinite singularity. All of existence is constructed in terms of its direct relation to the infinite singularity. The infinite and the finite are defined and exist relative to each other. They cannot exist without each other. Infinity is well-defined in direct representation. In fact, all numbers are defined relative to infinity in DR. All numbers contain an infinite component in DR. Consequently, DR can represent what happens inside black holes without contradiction. It can even represent the singularity without contradiction. That is not the case in indirect representation or current mathematics.
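The distinction between reference semantics and value semantics that this section leans on is a standard one in programming, and can be made concrete with a short, ordinary code example. The variable names below are purely illustrative; only the semantic contrast itself is the point:

```python
# Illustration of reference semantics vs. value semantics, the
# distinction the text borrows from programming.
import copy

# Reference semantics: two names bound to the same underlying object.
a = [1, 2, 3]
b = a                  # b refers to the very same list as a
b.append(4)
print(a)               # [1, 2, 3, 4] -- mutating through b changed a

# Value semantics: assignment copies the value itself.
c = [1, 2, 3]
d = copy.deepcopy(c)   # d is an independent copy of the value
d.append(4)
print(c)               # [1, 2, 3] -- c is unaffected by changes to d
```

Under reference semantics, identity and context are external to the value (two references can silently share one object); under value semantics, each value is self-contained, which is the property the text attributes to nature's "numbers".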
• Process physics is based on process philosophy. Instead of viewing existence in terms of physicalism, i.e., as a universe primarily composed of solid 'matter', or particles and the relations between particles, it views the universe as a dynamic process. Instead of asking "what makes particles move, change, and transform?", it asks "what causes stability; i.e., why do some things not move, change, and transform? Why are some particles stable?". See http://www.mountainman.com.au/process_physics/HPS13.pdf for a summary of process physics. In reality, existence is not based just on materialism, nor is it based just on interacting dynamic processes. It is based on energy quanta. Energy quanta are the superposition of states and relations (which can in some cases represent particles of 'matter') and processes. Some energy quanta configurations form 'matter' (i.e., states and relations), and they all represent processes. In other words, eastern and western philosophical traditions are both partly correct. They are both incomplete, partial representations of reality. Physical existence is actually a combination of both, with the added twist that it is based on value semantics instead of reference semantics.
• Charles Darwin's concept of the evolution of life is just one instance of a far more general process. At its core, evolution is a process of repeated cycles of variation and selection. The core process of evolution operates all the way down to the level of quantum energy field interactions. All of existence is evolving. Existing quantum energy and dark energy fields interact with each other locally and create quantum field energy and dark energy patterns. Some of those patterns are stable. The patterns that are stable persist and interact with other local persistent patterns. Thus, stability is self-selecting and self-organizing. Over time, variation and selection cause higher and higher order stable quantum field energy and dark energy patterns to build up. Those patterns compose all of existence. The same self-organizing, self-modifying evolutionary process creates all of the time, space, fundamental forces, subatomic particles, atoms, molecules, stars, planets, life, and consciousness in the universe. The original input for the quantum evolution process is the singularity. Subsequent inputs are the result of its previous outputs. Thus quantum evolution operates on itself, modifying its state, relations, and processes over time. Over time, that process and its ongoing variations produce all of existence. Competition is nature's way of optimizing the allocation of scarce resources. Since everything is composed of energy and dark energy, everything that exists competes for energy and/or dark energy. At the smallest quantum scales, that competition takes the form of competition for stable energy and dark energy patterns. Stability is caused by the chance occurrence of symmetrical quantum energy fields. The more symmetries that exist, the more stable degrees of freedom exist, and the more stable the energy and/or dark energy pattern is. Only stable energy patterns persist.
Only those energy patterns that persist can interact with other energy patterns to create higher order, larger, more complex structures and forms. The result is the creation of the temporal field, followed by the creation of the electromagnetic, color, and weak fields. The quantum evolution of the fundamental force fields is followed by the evolution of spacetime, and the subsequent evolution of the quarks, mesons, leptons, fermions, baryons, atoms, stars, supernovas, heavier elements, planets, gravitational black holes, galaxies, galactic clusters, and galactic superclusters they compose. As time progresses, more and more complex structures are formed. In turn, the existence of more and more complex stable structures allows nature to represent more and more abstract variation and selection processes. The more complex a structure is, the more complex a variation and selection process it tends to require to create and maintain its existence. As the variation and selection processes become more abstract, they go from simple selection based on stability, to selection based on the existence of dynamic attractors, to stability based on self-organizing criticality, to stability based on self-reproduction, and onward to stability based on the ability to adapt to a changing fitness landscape. Eventually, after molecular population densities increase and molecules become complex enough to support the evolution of self-reproducing systems, competition for stability results in the production of self-reproducing molecules. Competition among those self-reproducing molecules eventually leads to the evolution of self-reproducing organic molecules and the first primitive forms of life. The same process continues, producing higher and higher order life forms. Life forms become more and more adept at finding and utilizing energy sources. They become more and more efficient. The life forms that succeed are those that are best adapted to their environment.
They are those that use energy most efficiently. Efficiency matters at the level of individual cells, organs, subsystems, individuals, packs, flocks, tribes, groups, corporations, governments, and species. More efficient use of energy allows larger populations to live and thrive in environments with limited resources. It also allows individuals and civilizations to do more with less energy, with a smaller environmental footprint. Nature rewards that which works. It rewards efficiency and productivity. This is as true in the natural world as it is in human activities and human groups. Those groups that are the most productive and can do the most with the least energy tend to grow at the expense of their less efficient rivals. The environment is constantly changing, so evolutionary fitness demands not just the ability to adapt to a particular environment, but also the ability to adapt to whatever changes occur in that environment. Species that become too well adapted to a particular environment may perish if they lack the ability to adapt to environmental changes as efficiently as the species they compete with. Different species compete with each other for scarce resources, including energy, food, habitat, and mates. In more advanced species, the same process eventually leads to the creation of money and market economies. Money and the market economy are simply a more advanced way to optimize the allocation of scarce resources among competing populations. It all results from the same fundamental process of quantum evolution. The increase in the level of abstraction of the selection and variation procedures does not stop with the evolution of life. It continues on to create consciousness and higher and higher order abstract intelligence. As species grew larger, specialized excitable cells evolved for long distance communication between cell populations. Those cells evolved into nerves.
Eventually, the level of abstraction of the selection and variation procedures increased until they reached the level of the abstraction of abstraction itself. The result was the evolution of the neuron. Neurons are the direct representation of the abstraction of abstraction. They are the direct representation of the first order abstraction of abstraction itself. The brain uses neurons to represent existence directly in terms of abstractions. It also uses neurons to represent relations in terms of abstractions. The quantum evolution process continues inside our brains to create higher-order abstractions; i.e., it creates neurons that represent abstractions of abstractions. By doing this, the brain can represent existence and thoughts at multiple levels of abstraction in context. It allows us to think, understand our environment, and be conscious. The quantum evolution process is ongoing. Within the brain, neurons compete for representation within the cortical minicolumns in the cerebral cortex. The brain selects the best existing abstract representations of whatever we perceive or think about at each level of abstraction within each cortical minicolumn. Those representations dominate their competitors. They are rewarded with increased levels of neurotrophic growth factor, synaptic plasticity, and increased synaptic activation. They grow at the expense of their weaker rivals. In this way, nature selects the best, most efficient abstract representations currently available to interact with others in the chain of abstractions that composes the current context of thought, at each level of abstraction relevant within that context. The hippocampus then composes the representation of higher order abstractions from the collection of lower order abstractions that represent them. We end up with the ability to represent thought and existence at higher and higher levels of abstraction.
We end up with the ability to perceive and experience existence, think about it consciously, and understand it in terms of the abstract relations between existence, abstractions, abstractions of abstractions, and so on, in an ever ascending hierarchy of higher and higher levels of abstraction. What's more, the whole system is self-organizing. Competition for neural representation automatically optimizes the allocation of neural real estate and metabolic resources based on what we experience, think about, and learn. The same mechanisms allow us to recover from neural dysfunctions, neuron death, and even strokes, provided the damage isn't too extensive. Direct representation is a new theory of mathematics that explains how the quantum evolution process works. It redefines and extends parts of the current foundation of mathematics to make it complete, consistent, and observer independent. It creates a new foundation for mathematics that exists in one-to-one correspondence with all of physical existence. It creates a mathematics that can consistently and completely represent the existence of everything in the universe, including a consistent mathematical representation of infinity, the singularity, and the interaction of all energy and dark energy quanta. Direct representation and quantum evolution explain the origin and creation of all of existence, starting from the infinite singularity. Along the way, they explain many of the most fundamental unsolved problems in physics, including energy quantization, quantum entanglement, dark energy, dark matter, time, space, gravity, the unification of QM and General Relativity, the missing antimatter problem, and the QM vacuum catastrophe. Energy quantization is caused by spontaneous symmetry breaking in the singularity. Spontaneous symmetry breaking creates symmetric differences in the singularity.
Those symmetric differences are virtual energy and virtual dark energy strings. Virtual energy strings that are integer multiples of a Planck length compose symmetric units of energy called energy quanta. Virtual dark energy strings compose dark energy quanta. The composition of energy and dark energy quanta composes time, space, and everything else that exists in the universe. The quantum evolution process is recursive because it operates on its own outputs. In other words, it uses the output state of processing at time t to represent the input state for processing at time t + 1. The process has to be self-modifying because the process itself is part of existence. As quantum evolution modifies the quantum state of existence, it also modifies the part of existence that represents the quantum evolution process.
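The recursion described above — repeated variation and selection, where the output state at time t becomes the input state at time t + 1 and only stable patterns are retained — can be sketched as a toy loop. This is only an illustrative analogy: the stability rule, threshold, and noise parameters below are our own inventions, not part of DR:

```python
import random

def step(patterns, threshold=0.5, rng=random.Random(42)):
    """One toy variation-and-selection cycle: perturb every pattern
    slightly, then retain only those that meet a 'stability' rule."""
    varied = [p + rng.gauss(0, 0.1) for p in patterns]    # variation
    return [p for p in varied if abs(p) >= threshold]     # selection

# The process is recursive: each output state becomes the next input state.
rng0 = random.Random(1)
state = [rng0.uniform(-1, 1) for _ in range(100)]
for t in range(50):
    state = step(state)

# Every pattern that persists satisfies the toy stability rule by construction.
print(len(state), "patterns persist")
```

The point of the sketch is structural, not physical: persistence is decided by the selection rule alone, so "stability is self-selecting" here in the same sense the text describes, and the loop never needs an external observer to decide what survives.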
• Energy Quantization:
Energy quantization is caused by the chance formation of stable quantum loops in the seething sea of changing virtual energy strings that compose the quantum vacuum. The formation of stable quantum loops causes symmetry breaking. The infinite symmetry of infinity is broken to create a new unitary symmetry in the finite. At the lowest level of existence, those quantum loops are unitary differences in infinity. The quantum loops are the event horizons of quantum microsingularities. The lowest level unitary differences are temporal and anti-temporal field energy quanta. At higher levels, those differences represent differences in the composition of higher order networks recursively composed from hierarchies of quantum field energy differences. Each quantum field network represents the intension of a type of system. Mathematically, each quantum field network forms the orthogonal cross-section of a fiber. In some cases, the quantum field relations in the intensional network compose attractors. In turn, the energy quantum and its attractor can be hierarchically composed of lower level energy quanta, their relations, and their attractors. Conversely, the network that represents the intension can act as a node in the intension of a higher level network. Thus, nodes inside a quantum field network can themselves be lower-level networks. That allows fibers to compose fiber bundles. The relations between the networks that compose each node then compose the relations between the fibers in the fiber bundle. The top level attractor represents the extension of a type of system. Its component attractors represent its intension. Quantization can cause composition or decomposition of intensional quantum states. Composition occurs when an extension absorbs an energy quantum. Decomposition occurs when an extension emits an energy quantum.
Composition:
Composition occurs when a member of an extension's intension absorbs an energy quantum.
Quantum absorption occurs when a finite difference in the quantum vacuum becomes large enough to represent a complete wavelength of energy. That closes a string, thereby causing quantization and creating a new temporal or anti-temporal field quantum. The temporal and anti-temporal field quanta are stable because of the invariance caused by their rotational and reflective symmetries. Those symmetries make them invariant under phase change and spin. Of course, spin invariance is lost as soon as higher level compositions of energy quanta are formed. Composition can cause variation or selection. Variation is a change in the energy level of an instance of an existing type. Composition can increase the energy level of an existing system extension, thereby changing an existing quantum state and reducing its stability by increasing its potential or kinetic energy. In turn, that change can cause the system to change the behavior of its attractor. Alternatively, composition can cause selection by causing an intensional quantum state change that adds a new quantum state, thereby increasing the dimension of the extension, adding a new order parameter, and creating a new emergent type of extension. That increases diversity; i.e., it increases the number of discrete types of things. The latter case occurs in spontaneous symmetry breaking that causes a phase transition.
Decomposition:
Decomposition occurs when a member of an extension's intension emits an energy quantum. This occurs when the system represented by the extension settles into a lower quantum energy state. Decomposition can also cause variation or selection. Variation is a change in the energy level of an instance of an existing type. Decomposition can reduce the energy level of an existing system extension, thereby changing an existing state and increasing its stability by reducing its potential or kinetic energy.
The closer the energy gets to the ground state of the intensional context it exists in, the more stable it becomes. Alternatively, decomposition can cause selection by causing an intensional quantum state change that removes an existing quantum state, thereby reducing the dimension of the extension's quantum state, removing an existing order parameter, and changing to a lower dimensional type of extension. This can cause emergent forces, states, and behaviors. It may represent a new lower ground state in the intensional context. The latter case occurs in spontaneous symmetry breaking that causes a phase transition. From the perspective of attractors, selection results in the composition of a different attractor. System behavior can change radically when its attractor is changed. This can result in emergent behavior. An example would be the difference in behavior between two hydrogen atoms and an oxygen atom, versus the behavior of a water molecule.
Variation:
Variation is a change in the energy level of an existing type. That change can increase or reduce the energy level of an existing system extension. Variation increases diversity.
Diversity:
Diversity represents a change in the intensional state of a system. It represents nature's direct representational synthesis of emergent relations, states, processes, objects, and events.
Selection:
Selection follows variation. Selection is typically caused by one or more variations, but that cause may be indirect. Selection is the addition or removal of a quantum state of an extension or its intension. Selection causes the selective retention of stable forms. Only those forms that have attractors persist. If too much energy is fed into a system, its stability decreases, until at some point its attractor becomes disorganized, and the system it represented decomposes and ceases to exist. Stable forms are symmetric. Symmetry is an invariance under transformation. It is an invariance under energy variation.
Stable forms have a stable minimal energy state, with a deep enough valley that the state is unlikely to change due to likely variations in the current context of existence. If a new extension is retained, it is because its intension is internally consistent. The quantum energy field relations that represent its intension compose an attractor. That attractor is consistent within its environment. At any one time, those things that are causally related via composition of energy quanta fit together in some sense. As time goes on and variation continues, there is an ever greater variety of extensions that still fit together. That is how nature creatively produces a diversity of harmonious, stable systems. At any given time, there is an abundance of harmony and consistency in nature. Variation produces ever increasing diversity, and selection produces unity; i.e., it tends to produce multiple instances of the same type of entity, process, or system.
Unity:
Systems with unity are unitary, they are internally consistent, and they have a boundary. Unity results from the selection of stable, consistent collections of energy quanta. It represents the creation of an extension, and thus the creation of an instance of a type of system. Multiple instances of the same type of system tend to reoccur, because the same sets of stable relations tend to be generated from the same sets of stable precursors. This tends to create repeating types of energy patterns and particles with many instances. In turn, if they are stable enough, those types combine via intensional composition to create higher dimensional emergent types. The end result is the transfinite recursive emergence of hierarchies of types. Unity represents nature's direct representational analysis of emergent relations, states, processes, objects, and events. Unity occurs because systems are composed bottom up from sets of persistently stable energy patterns.
Only those patterns that survive long enough to compose an intension can be part of that intension. That means the lower level components of existence tend to be older than the higher level components. Lower level components generally have longer half-lives. They are more stable. There are some exceptions to this. Sometimes a higher level component can exist in a deeper energy well than its lower level constituents. For example, free neutrons are less stable than neutrons bound in a nucleus. A free neutron undergoes beta decay into a proton and a virtual W⁻ boson in just under 15 minutes (a mean lifetime of about 880 seconds). The W⁻ boson then decays into an electron and an electron antineutrino in about 3 × 10⁻²⁵ seconds. However, a neutron that exists with protons in a nucleus is stable. Apparently, the combination of neutrons and protons in an atomic nucleus exists in a more stable energy state (a deeper energy well) than the quantum energy field that composes the neutron alone.

Speaking of instability, the W⁻ boson has a very high mass. It is about 86 times as massive as a proton, and more massive than an entire atom of iron. Apparently its composition includes a high energy gradient with substantial local spacetime curvature. The energy sink that represents it must be very unstable. Thus the weak force it carries is limited to a short range.

Dimension: Dimension refers to the number of quantum states in the quantum state tensor that represents the intension of a system. As the dimension of a system increases, its complexity tends to increase combinatorially. Its degrees of freedom tend to be reduced combinatorially. It becomes more specialized, with more interesting behavior, but the increased complexity tends to make it less stable. More moving parts means more things can go wrong.
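The mass and lifetime figures above can be checked against published particle-data values. The numbers below are standard approximate reference values (in GeV/c² and seconds), not quantities derived from the theory:

```python
# Approximate published masses, in GeV/c^2.
m_W  = 80.38               # W boson
m_p  = 0.9383              # proton
m_Fe = 55.935 * 0.9315     # iron-56 atom: mass in u times ~0.9315 GeV/u

ratio = m_W / m_p          # ~86: the W is about 86 proton masses
heavier_than_iron = m_W > m_Fe   # True: the W outweighs an iron atom

# Lifetimes: a free neutron lives ~880 s; the W decays in ~3e-25 s.
tau_n = 880.0
tau_W = 3e-25
```

The lifetime ratio spans roughly 27 orders of magnitude, which is the contrast the text is drawing between the free neutron and the W boson.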
To counteract that tendency, as systems evolved towards higher orders of complexity, they increased stability and reliability by evolving active feedback systems for dynamic stabilization and control. Networks contain many interacting feedback loops. Stability is further increased via self-replication of existing energy patterns and existing networks. Some carbon-based molecules evolved the ability to self-reproduce via polymerization. Eventually, they evolved mechanisms that allowed them to exert top-down control over their own replication. An example of this process is Dr. Jack Szostak's proposed model of abiogenesis, i.e., the origin of primitive life from the interaction of inorganic components.
• Incremental improvements in molecular reproduction are conserved and amplified in the molecular population, while changes that do not improve reproduction are lost. Molecules that can reproduce themselves and produce a physical barrier (a cell wall) to protect themselves from the environment have an additional stability advantage. Eventually, self-reproducing molecules evolved into cells. Initially, cells reproduced the same way molecules did, by simple fission. Eventually, cells evolved sexual reproduction. Instead of reproducing all of their genes, they only reproduced half of them. They also reproduced half of the genes from their sexual partner. This created a combinatorial explosion in genetic diversity, which allowed sexually reproducing organisms to adapt to changes in their environment much faster than cells that reproduced by simple fission could. The increased complexity of sexual reproduction was more than offset by the increase in diversity and the enhanced ability to survive in a changing environment. Another consequence of sexual reproduction was mortality. Species that reproduced sexually further extended their ability to adapt to changing environments by evolving death. By dying, members of a species that had already reproduced made way for the next generation. Limiting longevity increased a species' ability to adapt to a changing environment. At least early in the history of life, the adaptation rate offered by a shorter lifespan and a more rapid turnover of generations more than offset any advantage a longer lifespan produced via an individual organism's ability to adapt to its environment. Back then, adaptation was largely molecular based. Multicellular organisms and intercellular communication mechanisms didn't evolve until later. Cumulative adaptive advantage could only be secured through reproduction and the combinatorial diversity created by sexual reproduction. RNA also provided another big evolutionary advantage.
It was the evolution of top-down control over a system's composition. Instead of being limited to bottom-up composition, life could take advantage of a mix of top-down control and bottom-up composition. DNA then evolved as a replacement for RNA. It evolved because of its ability to reduce errors in genetic replication. Once life was sufficiently well adapted to its environment, the value of consistent replication became higher than the value of random genetic mutation. This also increased the value of top-down control by better preserving the genetic lessons learned by previous generations.
• In effect, all of existence is the ongoing output of a very large quantum computer. That quantum computer is composed from the interaction of every energy and dark energy quantum in the universe. That quantum computer incrementally computes and composes the quantum states, quantum fields, and quantum field interactions that constitute all of existence. That includes the existence of the infinite singularity, virtual energy, virtual dark energy, energy quanta, dark energy quanta, time, space, black holes, all quantum field interactions, gravity, matter, dark matter, antimatter, subatomic particles, atoms, molecules, planets, stars, solar systems, galaxies, galactic clusters, galactic super-clusters, life, thought, meaning, and consciousness.
• The entire universe is a giant quantum computer, composed of the singularity and all relations between it and every energy and dark energy quantum in existence. That quantum computer has no programmer. The quantum computer is self-organizing, self-programming and self-modifying. It creates its own ‘hardware’ and its own ‘program’ from the singularity as it executes. Its hardware and its program are the evolving quantum state of existence.
• All quantum state transitions are causally related throughout the universe. The quantum state transitions occur in quantum leaps, ‘beneath’ time and space. In other words, quantum state transitions compose the quantum field structures that compose time and space themselves; time and space exist above the level of quantum state transitions. That means quantum state transitions exist outside of time and space. They do not occur in time and space. That explains why quantum non-locality is possible. From our perspective, quantum state transitions each occur instantaneously, within a single Planck time.
• The very definition of infinity makes it impossible for nonexistence to exist. A thing that is infinite has no bounds. That means it has no bounds in any dimension. It has no dimensions, period. That means it is unbounded in time. In turn, that means it can have no first cause, no beginning, and no end. Infinity is eternal. It cannot be created or destroyed. Since infinity exists eternally, everywhere, there can be no nonexistence. The concept of nonexistence is a logical contradiction and a physical impossibility. The law of conservation of energy also makes it impossible for nothing to exist. Energy cannot be created or destroyed. Therefore, energy always exists in some form. Since energy always exists, nonexistence can never exist.
• The empty set is an abstract fantasy. It is a misconception in the sense that it is a physical impossibility. It is an abstract mathematical fallacy that is inconsistent with the very existence of the universe and everything that exists in it. The empty set does not describe existence, the finite, the infinite, or the singularity. It does not describe any part of reality. If truth is defined as correspondence with that which exists, then the empty set is false. The empty set is fundamentally inconsistent with all complete, consistent, and true descriptions of physical existence. Since the concept of ‘number’ is based on the transfinite recursive composition of empty sets, that means numbers do not exist in bijection with physical existence. In particular, the origin of numbers does not match the origin of physical existence. The origin of physical existence is infinity. It is the grand unified field in the infinite singularity. All of spacetime and existence expanded from the infinite singularity in the big bang. The finite is composed of symmetric differences in infinity. In non-unitary form, those symmetric differences are virtual energy and virtual dark energy strings. In unitary form, they are energy and dark energy quanta. To represent existence faithfully, our representation of numbers must exist in one-to-one correspondence with physical existence. In part, that means the origin of the numbers must be the same as that of physical existence. In other words, instead of basing numbers on the transfinite recursive composition of empty sets, they must be based on the transfinite recursive composition of symmetric differences in infinity. Some would argue the empty set doesn't represent nonexistence because it actually exists as a non-physical abstract concept; it is just an empty bag or an empty collection. It is just a starting reference point to base the numbers on. That argument also fails.
There is no such thing as a non-physical abstract concept. Every concept is represented by the physical existence of neurons and energy in a living brain. Even mathematics does not exist without energy to represent it. Empty bags do not exist in nature. In nature, everything that exists is composed of something. At a minimum, it is composed of energy or dark energy. Even an empty bag must contain and be composed of energy. Even the spacetime inside an empty bag is composed of zero point virtual quantum field energy. Others would argue that the empty set is only a logical contradiction if the total amount of energy in the universe is not zero. Their argument is that negative energy may cancel out energy in such a way that the total amount of energy in the universe is zero. This is the old idea that the universe can be created from nothing. It too is false. Nothing comes from nothing. Creatio ex nihilo is false. The total amount of energy in the universe is a very large but finite constant. This fact is stated by the law of conservation of energy. Some people think the total amount of energy in the universe is zero because ‘negative energy’ in the Dirac Sea, or quantum vacuum, cancels out normal energy, but it does not work that way. When one considers the word ‘negative’, it is important to understand which reference frame negative is relative to. In the case of the Dirac Sea, negative is relative to the zero point virtual quantum field energy. It is relative to the background energy of spacetime we measure all other energy relative to. In other words, it assumes the zero point virtual quantum field energy is zero, because all other energy is measured relative to that zero. From that reference frame, the energy in the Dirac Sea is less than zero. However, relative to the absolute reference frame of the grand unified field energy that composes the singularity, the zero point virtual quantum field energy is far above zero. According to QM, it is 10^121 GeV/m³.
Others have argued that this QM calculation must be an error, because that much energy would have a tremendous amount of mass, and so it would be necessary to have a ridiculously large cosmological constant to cancel out its gravity and keep the universe from collapsing. The problem with that line of reasoning is that it ignores the fact that the zero point virtual quantum field energy is composed of virtual photons and virtual gluons, both of which are massless bosons. Massless bosons have no mass. They compose the basic background energy that composes spacetime itself. That means their energy all exists ‘beneath’ spacetime. In other words, the zero point virtual quantum energy field has no mass and no gravity, and its value cannot affect the cosmological constant. This will be explained in further detail later in this presentation. A related issue is the common belief that the singularity inside a black hole has an immense gravitational field. The problem there is that the composition of singularities is poorly understood. It is commonly held that all the mass that gets consumed by a black hole ends up in the singularity, and that the singularity's mass causes the black hole's gravitational field. What isn't commonly understood is that all the mass and spacetime inside the event horizon of a black hole is converted into massless dark energy bosons before it becomes part of the grand unified field that composes the singularity. The singularity does not destroy energy. Energy simply exists in a different form in the singularity. In that form, energy has no mass. The singularity itself is massless. A black hole's gravitational field is composed from the curvature of the spacetime that surrounds the black hole's event horizon. The spacetime curvature is not caused by the mass of the singularity.
It is caused by the difference between the real component of the energy density of the zero point virtual quantum field that composes spacetime (10^121 GeV/m³), and the absolute zero real component of the energy density of the singularity. A black hole functions as a sink for the zero point quantum field virtual energy that composes the spacetime surrounding its event horizon. That energy difference is its primary power source. The consumption of the spacetime field around the black hole's event horizon creates curvature in the spacetime field. That spacetime curvature is the black hole's gravity. In other words, the mass of the singularity does not create the black hole's gravitational field. The singularity is massless. The singularity only creates the black hole's gravitational field indirectly. The gravitational field is part of the spacetime surrounding the black hole's event horizon, not part of the singularity. Conclusion: the empty set is a physical impossibility. It is an existential contradiction.
• According to axiomatic set theory, the natural numbers are based on a well-founded cumulative hierarchy composed from the transfinite recursive composition of empty sets. The false premise is the existence of the empty set. All of existence expanded from the infinite singularity in the big bang. Therefore, the origin of existence is the singularity. It is infinity, not zero. To put numbers in one-to-one (bijective) correspondence with physical existence, we must move the origin of the number system to infinity. We also need to fix an error in the definition of the set membership operator. In current mathematics, the set membership operator is based on reference semantics. That makes numbers observer dependent. In physical existence, set membership is based on value semantics. Physical existence is observer independent. Existence exists whether we are around to observe it or think about it or not. Therefore, things physically exist by value, not by reference. Eliminating these inconsistencies removes the empty set and the universal set from set theory. It makes both concepts unnecessary. It makes mathematics direct instead of indirect. It makes mathematics complete and consistent in the universal domain. In other words, it allows direct mathematics to represent all of physical existence consistently. Finally, to put numbers in one-to-one correspondence with physical existence, we need to remove the distinction between state and relation. States and relations represent different aspects of existence. Each one alone is an incomplete representation of existence. In nature, energy and dark energy quanta exist as a superposition of state and relation. That is what causes wave–particle duality. To measure a particle's state, we need to measure it at a particular point in time. To measure a relation between two or more particle states (for example, to measure a particle's velocity), we need to measure it over a period of time.
That means we cannot use the same measurement to measure a state and a relation. Even worse, when we take a measurement at quantum scales, the act of taking the measurement changes the future state of the particle being measured. Furthermore, the more precisely we measure a particle’s state, the more it perturbs its future state, so the more certainty we have about one state, the less certain we can be about subsequent measurements of the state of the same particle. The end result is the Heisenberg Uncertainty principle. Nature does not suffer from Heisenberg Uncertainty because it has no need to observe or measure itself. It already is itself. Nature represents itself directly, not indirectly. Because it represents itself directly, nature has no need to distinguish between states and relations. Nature represents itself directly in terms of energy and dark energy quanta. Energy and dark energy quanta represent a superposition of state and relation. With no need to measure itself, and no need to distinguish between state and relation, there is no Heisenberg Uncertainty. In turn, that eliminates a major cause of incompleteness and inconsistency in the representation of the totality of existence. Direct representation eliminates the other potential cause of incompleteness and inconsistency (light speed limitations on the speed of information transmission), because in direct representation, everything only represents itself as itself, and everything is always zero distance in time and space from itself. Direct representation doesn’t represent anything in terms of information. Since everything in direct representation is itself, direct representation is complete and consistent in the universal domain.
• See: The Unreasonable Effectiveness of Mathematics in the Natural Sciences by Eugene Wigner. Reference: http://www.physik.uni-wuerzburg.de/fileadmin/tp3/QM/wigner.pdf

“The first point is that mathematical concepts turn up in entirely unexpected connections. Moreover, they often permit an unexpectedly close and accurate description of the phenomena in these connections. Secondly, just because of this circumstance, and because we do not understand the reasons of their usefulness, we cannot know whether a theory formulated in terms of mathematical concepts is uniquely appropriate. We are in a position similar to that of a man who was provided with a bunch of keys and who, having to open several doors in succession, always hit on the right key on the first or second trial. He became skeptical concerning the uniqueness of the coordination between keys and doors.”

The fact is we make a rather narrow selection when choosing the data on which we test our theories. “How do we know that, if we made a theory which focuses its attention on phenomena we disregard and disregards some of the phenomena now commanding our attention, that we could not build another theory which has little in common with the present one but which, nevertheless, explains just as many phenomena as the present theory?” It has to be admitted that we have no definite evidence that there is no such theory. The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious, and there is no rational explanation for it.

Why do the laws of nature seem to be written in the language of mathematics? In other words, why are the laws of nature frequently consistent with the language of mathematics? The reason mathematics is unreasonably effective at representing nature is that its transfinite recursive structure happens to be isomorphic to that of the transfinite recursive composition of energy and dark energy quantum fields.
In other words, both nature and mathematics are based on transfinite recursive composition. Instead of basing numbers on the transfinite recursive composition of the empty set, direct numbers are founded on a cumulative hierarchy constructed from the transfinite recursive composition of symmetric differences in infinity. This is the same way nature works. At the lowest level of existence, strings are finite symmetric differences in the infinite singularity. Differences in those differences create string vibration and rotation modes. Those vibration and rotation modes represent virtual bosons. They represent virtual energy and virtual dark energy. Closed string loops represent energy quanta and dark energy quanta. Virtual energy and virtual dark energy strings represent first order relations between energy and dark energy quanta.
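For reference, the empty-set construction being criticized here is the standard von Neumann one: 0 = ∅ and n + 1 = n ∪ {n}. A minimal sketch of that textbook construction:

```python
# Von Neumann ordinals: each natural number n is the set of all
# smaller naturals, built up recursively from the empty set.
def von_neumann(n):
    current = frozenset()               # 0 = {} (the empty set)
    for _ in range(n):
        current = current | {current}   # n + 1 = n ∪ {n}
    return current

# Each ordinal n has exactly n elements, so cardinality recovers n.
```

Under this construction every number bottoms out in ∅, which is exactly the "origin at zero" the text proposes to replace with an origin at infinity.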
• Differential equations are an effective representation of physical existence because they are based on derivatives. Derivatives are based on a quotient of infinitesimal finite differences. First order virtual energy and dark energy strings are infinitesimal finite differences between the infinite singularity and itself. Integrals are an effective representation of physical existence because they represent the inverse of the relations represented by derivatives.
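The point about derivatives and integrals can be illustrated with ordinary finite differences; this is a generic numerical sketch, not anything specific to direct representation:

```python
# A derivative is the limit of a quotient of finite differences.
def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

# An integral inverts that relation; here, a midpoint Riemann sum.
def riemann(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

d = central_diff(lambda x: x**3, 2.0)      # ~12, since d/dx x^3 = 3x^2
area = riemann(lambda x: 3 * x * x, 0, 2)  # ~8, recovering x^3 at x = 2
```

Differentiating x³ and then integrating its derivative returns the original endpoint values, which is the inverse relation the slide refers to.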
• Information is notoriously difficult to define because it means different things to different people in different contexts. What is especially important for Physics is to carefully define the relations between physical existence, information, energy, the observer, measurement and observation. It is particularly important to clearly understand where to draw the line that separates the mental representation of physical existence, information about physical existence, and the energy quanta that compose physical existence itself. In particular it is important to understand that existence is composed of energy. It is not composed of information. Information and energy are not the same thing. Before we can understand this in detail, we need to clearly understand what information is. A detailed understanding of information and its relation to direct representation will allow us to leverage that understanding and use it as a basis for the derivation of direct representation.
• “It from Bit” is from an influential position paper by physicist John Archibald Wheeler titled ‘Information, Physics, Quantum: The Search for Links’, from ‘Complexity, Entropy, and the Physics of Information’, The Proceedings of the Workshop on Complexity, Entropy, and the Physics of Information held May–June 1989 in Santa Fe, New Mexico; Santa Fe Institute, Vol. VIII, pg. 5.

There are at least four different reasons the It from Bit quantum mechanics hypothesis is physically and logically impossible:

1) Information is observer centric. There could not have been any observers in the infinite singularity at the beginning of the big bang.

2) No physical existence composed of information could be complete or consistent, because the finite speed of light makes it impossible for any observer to have any information about the current state of existence any distance from themselves. As soon as any quantum state change occurred any distance from an observer, the current state of existence would become temporally and structurally inconsistent. Since different parts of existence are located different distances from each other and different distances from every observer, every observer's information about the current state of existence would be inconsistent in time. Even worse, some parts of existence are so far apart they are outside each other's light cone, so it is impossible for them to have any information about each other. Yet, as far as we know, the laws of physics are consistent across the entire universe. In fact, all of existence is complete and consistent. Nothing that exists is inconsistent with its own existence. Nothing that exists is incomplete; i.e., at the quantum scale, every energy quantum is indivisible. Every quantum exists completely or not at all. Since everything is composed of energy quanta, everything is complete.
3) Heisenberg Uncertainty would cause any existence composed of information to be incomplete, because it makes it impossible for any observer to obtain complete information about the current state of existence. Yet we know existence is complete, because all of existence is composed of energy and dark energy quanta, and all energy quanta are complete and indivisible. There is no such thing as a partial energy quantum. There is no such thing as an incomplete energy quantum. That contradiction makes it physically and logically impossible for existence to be composed of information.

4) The universe contains the infinite singularity. Infinity is boundless. It is mathematically impossible for it to be inconsistent or incomplete. Yet any existence composed of information would have to be incomplete and inconsistent. The completeness and consistency of physical existence proves existence cannot be composed of information.

In addition, It from Bit was based on incomplete information. Due to that incomplete information, its logical derivation contains a formal fallacy of composition. In other words, the logical argument used to derive It from Bit is unsound. In fact, “It from Bit” is backwards. What we really have is “Bit from It”. Observers use a measurement apparatus with some kind of sensor and/or signal transducer to amplify a signal from one or more energy quanta, condition the signal if necessary, sample it, and convert it into bits of information, typically via an analog-to-digital converter. The analog-to-digital converter produces an energy pattern which represents bits of data, which the observer interprets and uses to represent existence indirectly in terms of information. The electrical signals that represent the bits of information exist physically, but they are not composed from the same energy quanta that they represent. Energy quanta aren't composed of bits of information.
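The "Bit from It" measurement chain described above (sensor → conditioning → sampling → analog-to-digital conversion) can be sketched as an ideal n-bit ADC. The voltage range and resolution below are illustrative assumptions, not values from the text:

```python
# Ideal n-bit ADC: maps an analog voltage onto one of 2^n integer codes.
# The resulting bits describe the measurement; they are not the energy
# quanta that produced the signal.
def adc_sample(voltage, v_min=0.0, v_max=5.0, bits=8):
    levels = 2 ** bits
    clamped = min(max(voltage, v_min), v_max)            # clip to range
    return int((clamped - v_min) / (v_max - v_min) * (levels - 1))

code = adc_sample(2.5)   # mid-scale input -> code 127 of 0..255
```

Everything downstream of `adc_sample` is an indirect, observer-relative description, which is the distinction the argument above turns on.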
Observers use measurement equipment which produces bits of information, which observers then interpret and infer meaning from. The source of information is the same in living observers. Our bodies just use biological sensors and transducers to detect changes in energy instead of man-made devices. Information is an indirect representation of existence. Existence is not composed of information. Conflating the indirect abstract representation of existence with physical existence itself is a major source of confusion and error in quantum physics. It from Bit is a case in point.

At least in part, the belief that existence is composed of information derives from the fact that a reduction in the uncertainty about the microstate of existence and a reduction in the uncertainty of the state of an information system can both be quantified in terms of entropy. In particular, the thermodynamic entropy of an isolated system near thermodynamic equilibrium and Shannon's information entropy can both be represented by the same equation, up to a constant factor. Alas, this fact is nowhere near sufficient to soundly deduce that existence is composed of information. In essence, all that Shannon's mathematical theory of communication (MTC) tells us is that information and statistical thermodynamics can both be modeled by a finite state space with known joint probability distributions between the occurrence of components that compose the states in their respective state spaces. In both systems, if we know the joint probability distributions between the occurrence of the set of components that compose the set of states in a finite system, and we can observe part of the state space configuration, we can use that knowledge to reduce our uncertainty about the configuration of the unobserved portion of the state space.
In the case of a thermodynamic system, uncertainty about the configuration of the system's future microstate is reduced when the system approaches static thermodynamic equilibrium, because as a system approaches thermodynamic equilibrium, the amount of potential difference in the system is reduced. Less potential difference means fewer future state configurations are possible. We call that reduction in uncertainty ‘entropy’. In the case of an information system with a fixed set of messages composed of symbols drawn from a fixed alphabet, with known joint probability distributions between the occurrence of particular symbol sequences in the messages, uncertainty is reduced after each symbol in a message is observed, because we can use our knowledge of the joint probability distribution between symbol occurrences to increase the probability that we can predict the remainder of the symbols in the message. Similarly, we can use the same knowledge to help us predict the remaining messages in a finite set of messages. Information entropy and thermodynamic entropy are both represented by the same form of mathematical equation because both systems are modeled in terms of finite state spaces with known joint probability distributions. In the case of a particular thermodynamic system, its quantum state probability distributions are provided by calculating its quantum mechanical wave functions. In the case of an information source with a fixed set of messages composed from a fixed alphabet, the joint probability distribution between character sequences in the messages in its message set can be calculated in advance directly. In both systems, entropy represents a reduction in uncertainty regarding the unobserved distribution of states in each system's state space configuration after observation of part of its state space configuration. Just because one or more aspects of two different things can be represented by the same equation, it doesn't mean both things are the same thing.
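The "same equation up to a constant factor" point is easy to verify: Shannon entropy H = −Σ p log₂ p and Gibbs entropy S = −k_B Σ p ln p differ only in the logarithm base and the Boltzmann constant. A minimal check:

```python
import math

def shannon_entropy(probs):
    """H in bits: -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gibbs_entropy(probs, k_B=1.380649e-23):
    """S in J/K: -k_B * sum(p * ln p); same form, different constant."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.25]
H = shannon_entropy(probs)   # 1.5 bits
S = gibbs_entropy(probs)     # k_B * ln(2) * 1.5 joules per kelvin
```

Dividing S by k_B·ln 2 recovers H exactly; as the surrounding text argues, that shared functional form does not by itself make thermodynamic microstates and messages the same kind of thing.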
In logic, this common mistake in reasoning is known as a formal fallacy of composition. A fallacy of composition occurs when we invalidly impute characteristics of one or more parts of a thing to the whole of which they are parts. In this case, physics has invalidly inferred that existence is composed of information because the entropy of an isolated thermodynamic system and the entropy of a particular information system can both be represented using similar equations. Entropy is only one aspect of the behavior of existence. It only represents one aspect of energy, and even then, it only represents it in the context of an isolated thermodynamic system that is near thermodynamic equilibrium. Entropy is nowhere close to a complete description of energy, let alone existence. The same is true of information entropy as it relates to a particular information system and a particular set of messages composed from a specific set of symbols drawn from a fixed alphabet. Information entropy is nowhere close to a complete description of information. Another problem is that It from Bit was derived from a logical contradiction based on the nonexistence of a continuum in mathematics or physical existence. Since a true continuum does not exist mathematically or physically, the conclusion was that existence must be based on the opposite of a continuous representation; in other words, existence must be based on a discrete representation. That logic is okay as far as it goes, but there was an assumption that the discrete representation must be the discrete representation of information. That assumption is incorrect. There are other types of discrete representation that are not based on the representation of information. Direct representation is also a discrete representation, and in fact, it is far simpler than the binary representation of information. For one thing, direct representation uses a univalent encoding instead of a bivalent encoding.
Because of that, direct representation doesn’t need to ask any yes or no questions, or measure anything. Direct representation doesn’t need to decide whether something should be represented as a 0 or a 1. It doesn’t need to decide whether or not an energy quantum exists. It doesn’t need to make any decisions. Since no decisions are required, the whole problem of undecidability never occurs. Direct representation simply represents what exists directly. It represents things directly in terms of the energy quanta that compose them. The whole concept of ‘making decisions’ is observer centric. It is anthropocentric. Requiring that nature base its existence on a representation that requires it to make decisions is obviously anthropocentric. In essence, it makes the physical existence of existence observer dependent. That is subjective, not objective. In principle, it is no better than the ancient belief that the universe revolved around the earth. It has simply replaced a universe that revolves around the earth with an existence described in observer relative terms. An observer’s representation of physical existence in terms of information is observer dependent, but the existence of physical existence itself cannot be observer dependent. Information is observer dependent, decision dependent, and indirect. It also requires energy be used to represent both the existent and the non-existent because energy would have to be used to represent 1’s and 0’s in any bivalent code. There are an infinite number of non-existent things, so that would require an infinite amount of energy. Energy is a finite conserved quantity. Energy is also quantized. Since energy is quantized, it is quantifiable, and that means it must be finite. Since existence is observer independent, most of it does not make decisions. Since existence is a direct representation, it cannot be composed of information, because information is an indirect representation. 
There are several different sources of confusion here.

First, as explained above, just because part of the behavior of two different systems can be described using the same equation, it does not mean they are the same thing. One must remember that equality in mathematics is only equality up to isomorphism.

Second, an amount or quantity of information in the MTC does not describe the meaning or semantic content of information. It has nothing to do with what information means or is.

Third, there is an additional source of confusion because we infer a quantum wave function collapses when we observe an observable. Due to temporal contiguity, we infer that quantum mechanical wave functions collapse because we observed the information represented by the observable. In fact, quantum waves collapse not simply because we observe them, but because the physical act of detecting an energy quantum necessarily requires a quantum field interaction with the measurement apparatus, and that interaction causes a change in the quantum's quantum field structure. Quantum wave collapse occurs at the time of observation, so we infer the cause to be observation, when in fact the cause is the interaction of the quantum energy field with the quantum energy field that composes the measurement apparatus. We must remember temporal contiguity is not sufficient to establish cause and effect.

Fourth, in the previous line of reasoning, physics conflates the quantum mechanical mathematical representation of a probability density wave with the composition of physical existence itself. While a probability density wave is one way to represent a related set of changes in a quantum energy field, it can't be the only possible description. Nature does not write down mathematical equations and represent its existence indirectly or symbolically.
We must remember that our equations are only an indirect, incomplete, partial representation of part of physical existence.

Confusion also arises because energy quanta can be used to represent information, and information can be used to represent energy quanta. However, those facts do not mean energy quanta are composed of information. Just because energy quanta can represent information, and information can be used to represent energy quanta, it does not mean energy quanta are information. To understand why this is true, we will need a far more complete understanding of information than a simple probabilistic model of informational entropy can provide. In particular, we need to understand the relation between information and its observer, the relation between information and semantic meaning, the relation between information and energy, the relation between information and measurement, and the relation between information and existence. We also need to understand the detailed process used to create information about physical existence. We need to understand information from its creation all the way through its interpretation and the representation of its meaning in an observer's brain.

It is also important to understand that 'It from qubit' is also false, in the sense that an energy quantum is not a binary representation. Nature does not ask yes or no questions, nor does it decide whether or not an energy quantum should, or will, exist. There is no 'energy quantum exists (yes/no)' decision or encoding. An energy quantum does not represent a 'bit' of existence. It is not the response to a yes/no question. It is simply a unit of existence. In particular, it is important to understand: Most of nature makes no decisions. Only observers make decisions. The very concept of decisions is anthropocentric. Nature has no representation for the nonexistent.
Nature does not waste energy representing things that do not exist. If something does not exist, it simply has no representation. Nature represents everything that exists directly. It does not use indirect representation to represent any part of physical existence.

If nature used a code to represent existence, it would be univalent, not bivalent. In fact, there is no code and no information in existence. There is only energy. Time, space, matter, antimatter, and dark matter are all composed of energy. All fundamental forces are caused by the quantum field interactions between the composition of energy and dark energy quanta.

The It from Bit hypothesis that energy quanta are bits of information is internally inconsistent and false. Information is an indirect, observer dependent representation based on reference semantics, whereas energy quanta are a direct, observer independent representation based on value semantics. Direct and indirect representation are logical converses. Reference semantics and value semantics are logical converses. Observer dependent and observer independent are logical converses. I could go on. I have identified no fewer than eight major information theoretic properties of direct and indirect representation that are logical converses. A thing cannot be itself and its logical converse at the same time.
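The contrast between a univalent and a bivalent encoding can be illustrated with a toy sketch (my own illustration; the encoding choices here are hypothetical, not drawn from the source). A unary (univalent) code spends representation only on what is present, and zero of a thing simply has no representation at all, whereas a binary (bivalent) code must spend symbols on both 1s and 0s:

```python
def unary_encode(n):
    """Univalent code: n is represented by n identical marks.
    Absence (n = 0) has no representation at all."""
    return "|" * n

def binary_encode(n):
    """Bivalent code: both 1s and 0s must be explicitly represented."""
    return format(n, "b")

print(unary_encode(5))   # |||||
print(unary_encode(0))   # (empty string: nothing is represented)
print(binary_encode(5))  # 101
```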
• The GDI provides a useful starting point and philosophical framework for a definition of information, even though some of its points are incorrect.
• According to the GDI, σ is an instance of information, understood as semantic content, if and only if:

GDI.1: σ consists of one or more data. A datum is a single data item. Data are the stuff of which information is made.
GDI.2: The data in σ are well formed.
GDI.3: The well-formed data in σ are meaningful.

The GDI describes information as consisting of data. The meaning of data is almost as ambiguous as that of information, so before we can fully define information, we need to understand what data is. We'll cover the definition of data shortly. According to the GDI, data consist of environmental data, semantic data, and/or syntactic data. Data is well formed if it has correct syntax. In (GDI.2), "well-formed" means that the data are clustered together correctly, according to the rules (syntax) that govern the chosen system, code, or language being analyzed. Syntax here must be understood broadly (not just linguistically), as what determines the form, construction, composition, or structuring of something (engineers, film directors, painters, chess players, and gardeners speak of syntax in this broad sense). For example, the manual of your car may show a two-dimensional picture of two cars placed one near the other, not one on top of the other. This pictorial syntax (including the perspective projection that represents space by converging parallel lines) makes the illustrations potentially meaningful to the user. Using the same example, the actual battery needs to be connected to the engine in a correct way to function: this is still syntax, in terms of the correct physical architecture of the system (thus a disconnected battery is a syntactic problem). And of course the conversation you carry on with your neighbor follows the grammatical rules of English: this is syntax in the ordinary linguistic sense. Syntax is often represented by specifying rules that determine the valid sequences of symbols from some alphabet.
Regarding (GDI.3), this is where semantics finally occurs. "Meaningful" means that the data must comply with the meanings (semantics) of the chosen system, code, or language in question. However, let us not forget that semantic information is not necessarily linguistic. For example, in the case of the manual of the car, the illustrations are such as to be visually meaningful to the reader.
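The idea that syntax is "rules that determine the valid sequences of symbols from some alphabet" can be sketched directly in code. This is only an illustrative toy (the alphabet and pairing rule are invented for the example, not taken from the GDI): a GDI.2-style well-formedness check decides whether data are clustered correctly, and says nothing about what the data mean.

```python
import re

# Toy syntax: well-formed "data" are strings over the alphabet {A, B},
# grouped into space-separated pairs, e.g. "AB BA AB".
SYNTAX = re.compile(r'([AB]{2})( [AB]{2})*')

def well_formed(data):
    """GDI.2-style check: are the symbols clustered correctly per the rules?"""
    return SYNTAX.fullmatch(data) is not None

print(well_formed("AB BA"))  # True  (correctly clustered)
print(well_formed("A BBA"))  # False ("A" is not a valid pair)
```

Note that `well_formed` never touches meaning: it is a purely syntactic filter, which is exactly the separation GDI.2 and GDI.3 draw.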
• The Diaphoric Definition of Data (DDD): A datum is a putative fact regarding some difference or lack of uniformity within some context.
• From the perspective of an observer, diaphora de re are raw input from the environment. Data as diaphora de re are a lack of uniformity in the real world out there. Diaphora de re are pure data or protoepistemic data (data before the abstraction of meaning), that is, data before they are epistemically interpreted. As "fractures in the fabric of being" they can only be posited as an external anchor of our information, for diaphora de re are never accessed or elaborated independently of a level of abstraction. The last point is far more important than a casual reading implies. Notice that the last point creates a fundamental difference between diaphora de re and the other types of diaphora with respect to their location, and with respect to their representation. In particular, diaphora de re exist outside an observer's mind, or outside an information system, whereas diaphora de signo and diaphora de dicto only exist inside an observer's mind or inside an information system. Diaphora de re are not abstracted, whereas diaphora de signo and diaphora de dicto are. This distinction is far more important than it may seem on the surface. An abstraction is a partial (and thus incomplete) indirect representation of part of something in some abstract context. Conversely, diaphora de re are a complete direct representation of part of existence. In the former case, we have an indirect partial representation of part of existence in some abstract context, whereas in the latter case we have a direct complete representation of part of existence. Those are two fundamentally different representations. The former is based on reference semantics, while the latter is based on value semantics. The former is incomplete, while the latter is complete. The context of the former is abstract and indirect, while that of the latter is concrete and direct.
The former is an indirect representation about part of existence, whereas the latter is part of existence. The representation of diaphora de re is fundamentally different than that of diaphora de signo and diaphora de dicto. Hiding those differences by calling all kinds of diaphora information is a major ambiguity in the definition of information.

This may seem like a fine distinction, but its ramifications are enormous. It changes how we conceive of our entire relation to physical existence. It means nature's representation of physical existence is not based on information. It means information only exists inside information systems and inside intelligent observers. It means physical existence is not composed of information. It means information is an abstract representation of existence; it is not existence itself. In short, Information ≠ Existence.
• Diaphora de signo are the lowest level form of information inside an information system. They represent the raw bits that information is composed from. Of course, from a physical perspective, diaphora de signo are themselves composed of energy quanta at the level of physical existence.
• Diaphora de signo are the indirect representation of the physical sign or phenomenon that signifies the presence of diaphora de re (internally). Examples of diaphora de signo would be the perceptual detection, in our retina, of the photons emitted by a star, before those signals are further abstracted or interpreted by the brain, or the detection and measurement of the photons reflected by an object in the real world. In other words, the diaphora de signo are the output of some process that detects and transduces a difference in diaphora de re (before that output is encoded). This process involves signal acquisition, signal sampling, measurement, and conversion to an internal data representation. An example is conversion from an analog signal that represents the temperature of some substance via the analog electrical resistance of a thermocouple, followed by periodic sampling of that resistance and conversion by an analog to digital converter into a digital bit stream that provides a digital representation of the temperature of each sample.
• While diaphora de signo are represented as individual bits, or as a raw data stream, diaphora de dicto result from the classification and encoding of those bits or that data stream. Data as diaphora de dicto encode diaphora de signo, or encode a change in diaphora de signo; for example, the encoding of the letters A and B in the Latin alphabet as a series of bits in the ASCII code. Production of diaphora de dicto involves data sampling, signal classification, and signal encoding. The result is the representation of a primitive data value in a computer's indirect representation; e.g., an ASCII character, an integer, or a floating point number. In universal representation, the result is the representation of a first level abstraction of some percept or qualia.
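The pipeline described above — analog signal, periodic sampling, analog-to-digital conversion (diaphora de signo), then classification and encoding (diaphora de dicto) — can be sketched as a toy example. All names, thresholds, and values here are hypothetical, chosen only to make the two stages visible:

```python
def adc_sample(voltage, v_ref=5.0, bits=8):
    """Quantize an analog voltage into an n-bit integer level.
    This stage produces the raw transduced data (diaphora de signo)."""
    level = round(voltage / v_ref * (2**bits - 1))
    return max(0, min(2**bits - 1, level))  # clamp to the ADC's range

def encode_reading(level, threshold=128):
    """Classify the raw level and encode the result as an ASCII symbol.
    This stage produces classified, encoded data (diaphora de dicto)."""
    symbol = 'H' if level >= threshold else 'L'   # classification
    return symbol, format(ord(symbol), '08b')     # ASCII bit encoding

raw = adc_sample(3.3)              # e.g. a thermocouple-derived voltage
symbol, bit_string = encode_reading(raw)
print(raw, symbol, bit_string)     # 168 H 01001000
```

The point of the split is that `adc_sample` only transduces a difference in the environment, while `encode_reading` imposes a classification scheme and a code on top of it.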
• If we extend the definition of information to include diaphora de re, it hides the distinction between direct and indirect representation, and it makes diaphora de re indistinguishable from energy quanta. Yet energy quanta are a direct representation with value semantics, whereas diaphora de signo and all higher levels of information processing are indirect representations. Classifying diaphora de re as data and making it part of information conflates direct and indirect representation in information. It also conflates reference and value semantics. Thus, including diaphora de re within the scope of the definition of information makes the semantics of information ambiguous.

In addition, as we will see later, the ontology of the entire concept of 'representation' splits into direct and indirect representation right at its base. In other words, there are two major classes of representation in the universe: direct representation and indirect representation. These two fundamental types of representation are logically disjoint. They have different semantics, and at least eight of their major information theoretic properties are logical converses of each other. These converses include observer dependency (observer dependent vs. observer independent), type of encoding (static/fixed vs. dynamic), encapsulation (unencapsulated vs. encapsulated), context dependence (context free vs. context dependent), uncertainty (uncertain vs. certain), decidability (undecidable vs. decidable), completeness (incomplete vs. complete), and consistency (inconsistent vs. consistent). Conflating direct and indirect representation and calling them both information creates a mixed up ontological mess throughout the ontology of information. That misclassification is one of the major reasons it is difficult to define information.
Once that misclassification is corrected, the properties of direct and indirect representations separate cleanly, and they can both be represented and defined consistently.
• Now that we know what data is, a little bit about how it is acquired, and how it relates to existence, we need to explore the meaning of GDI.2 and GDI.3.
• Data is well formed if it has correct syntax. In (GDI.2), "well-formed" means that the data are clustered together correctly, according to the rules (syntax) that govern the chosen system, code, or language being analyzed. Syntax here must be understood broadly (not just linguistically), as what determines the form, construction, composition, or structuring of something (engineers, film directors, painters, chess players, and gardeners speak of syntax in this broad sense). For example, the manual of your car may show (see figure 3) a two-dimensional picture of two cars placed near one another to connect jumper cables, not one on top of the other. This pictorial syntax (including the perspective projection that represents space by converging parallel lines) makes the illustrations potentially meaningful to the user. Using the same example, the actual battery needs to be connected to the engine in a correct way to function: this is still syntax, in terms of the correct physical architecture of the system (thus a disconnected battery is a syntactic problem). And of course the conversation you carry on with your neighbor follows the grammatical rules of English: this is syntax in the ordinary linguistic sense. Syntax is often represented by specifying rules that determine the valid sequences of symbols from some alphabet.
• The issue here is what does ‘meaningful’ mean? We will also discover that the representation of information has no meaning, in and of itself. Meaning only exists in the mind of the observer that interprets the meaning of information.
• An 'observer' is a person or device or thing (an agent) that observes an event or an instance of something that physically exists. Observers can be informers or informees. Note that the semantic model, and thus the representation of semantics, is internal to the informer and the informee. Also note that there are two different semantic models: the informer's semantic model and the informee's semantic model. In other words, semantic meaning is a property of an informer's semantic model and a property of an informee's semantic model. It is not a property of information.

The informer uses their knowledge of their semantic model in conjunction with their knowledge of the syntax of a language to encode a message that represents part of their semantic model indirectly. The informee uses their knowledge of the syntax of the same language to decode the message and relate it to their own semantic model. The informee can then interpret the meaning of the message relative to their own understanding of the meaning of the words in the message and relative to the structure, relations, and process represented by their own semantic model. In other words, the informee interprets the semantic meaning of the message in the context of their own semantic model. Thus the meaning inferred from the message is a function of the part of the informee's preexisting semantic model that is relevant within the context of interpreting the message, as well as a function of the informee's knowledge of the meaning of the words in the message and knowledge of the syntax of the language used to encode the message.
• In my opinion, the Rosetta Stone example is invalid. The writing on the Rosetta Stone was meaningful to the informer that produced it, but it was meaningless information before it could be interpreted by an informee. The Rosetta Stone was only considered to contain information because it was assumed to have been a written message that was meaningful to its creator.

Information represents a message that may be stored, recalled, and/or transmitted, and that message may communicate or represent semantic meaning indirectly, but the semantic meaning only exists in the mind of the informer and the mind of the informee. In other words, information is a message that may communicate semantic meaning between an informer and an informee, but the message has no semantic meaning in and of itself. The message is just syntax. The semantic meaning of information is only understood by an informee relative to, and in the context of, the informee's pre-existing knowledge. For example, a book does not understand the meaning of the words written in it. A computer does not understand the meaning of the information it stores. The book and the computer just store symbol sequences encoded using some syntax. It is the act of interpretation by an informee that converts a message from a sequence of symbols encoded using some syntax into semantically meaningful information inside the mind of the informee. In other words, an informer encodes part of the meaning that exists in their mind as an informational message. The semantics recorded syntactically in the message are an indirect representation of part of the informer's knowledge about some referent. The informee then decodes and interprets the syntactic information in the message, and reconstructs the semantic meaning of the information relative to the informee's own state of knowledge at the time the message was received.
The original semantic meaning existed in the mind of the informer, and the reconstructed semantic meaning exists in the mind of the informee. The message represented by the information is just syntax. It has no semantic meaning outside the mind of the informer and the informee.

Semantic information must be created by an informer (aka observer). The informer must determine what to represent as information; they must ask the relevant yes or no questions, interpret the meaning of the answers, and encode and record the results on some physical medium in order to create an indirect syntactic representation of semantic information. The syntactic information is a message to an informee. The message may carry meaningful semantic information, but the information itself is not meaningful until it is interpreted and understood by an informee.
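The informer/informee process described above can be sketched as a minimal toy model (all names, the shared code, and the two semantic models are invented for illustration). The same syntactic message yields different meanings depending on which agent's semantic model interprets it, which is the point: meaning lives in the models, not in the message.

```python
# A shared code (syntax) known to both agents: word -> symbol.
CODE = {"rain": "R", "sun": "S"}
DECODE = {symbol: word for word, symbol in CODE.items()}

# Each agent holds its own (different) semantic model: word -> meaning.
informer_model = {"rain": "cancel the picnic"}
informee_model = {"rain": "water the garden less"}

def encode(word):
    """Informer: produce a purely syntactic message from a word."""
    return CODE[word]

def interpret(message, model):
    """Informee: decode the syntax, then reconstruct meaning
    relative to the interpreting agent's own semantic model."""
    word = DECODE[message]
    return model.get(word, "no meaning")

msg = encode("rain")                       # the message itself is just "R"
print(interpret(msg, informer_model))      # cancel the picnic
print(interpret(msg, informee_model))      # water the garden less
```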
• One of the most often cited examples of environmental data is the series of concentric rings visible in the wood of a cut tree trunk, which may be used by an observer to estimate its age. Yet "environmental" data does not need to be natural. For example, when you turn the ignition key in a car, the red light of the low battery indicator flashes. That signal can be interpreted as an instance of environmental data. The correlation above is usually nomic (it follows some law). It may be engineered — as in the case of the low battery indicator (A) whose flashing (F) is triggered by, and hence is informative about, the battery (B) being low (G). Or it may be natural, as when litmus — a natural coloring matter from lichens — is used as an acid-alkali indicator because it turns red in acid solutions and blue in alkaline solutions. Other typical examples include the correlation between fingerprints and personal identification, or the correlation between temperature and the reading on a thermometer. One may be so used to seeing the low battery indicator flashing as carrying the information that the battery is low as to find it hard to distinguish, with sufficient clarity, between environmental and semantic information. However, it is important to stress that environmental information may require or involve no semantics at all. It may consist of (networks or patterns of) correlated data understood as mere differences or constraining affordances. Plants (e.g., a sunflower), animals (e.g., an amoeba), and mechanisms (e.g., a photocell) are certainly capable of making practical use of environmental data even in the absence of any (semantic processing of) meaningful data.
• The so-called 'direct' access to 'pure data' is misleading because, strictly speaking, all of an observer's access to data is indirect. When we observe anything, our sensory receptor neurons are activated when energy quanta emitted by the phenomenon being observed strike the sensory receptor's receptive field with enough energy, of the type the sensory receptor detects, for the energy to register and cause sensory transduction. In other words, strictly speaking, all observation is indirect. Adding an additional level of indirection makes no significant difference relative to the existence or nonexistence of semantics in the observer's mind. It also makes no difference relative to the direct or indirect nature of the information.
• The concept of environmental data is inconsistent with the indirect representation of information when it is used by non-intelligent observers. Non-intelligent observers like plants, animals, and mechanisms that change their state or behavior based on environmental data do so based directly on the energy that represents the data, not indirectly based on knowledge of any semantic meaning that energy conveys to the observer that uses it. In other words, changes in state or behavior that are directly caused by the energy that carries environmental data are not a form of indirect representation. Such data carries no meaning for the observer that acts on it. Such an observer does not create a model of existence to represent anything indirectly. Such an observer does not take action based on the information represented by environmental data; such an observer simply uses the energy that represents the environmental data to cause a change in state or behavior directly. Thus, in those cases, there is no difference between environmental data and energy. Calling such environmental data 'information' causes the definition of information itself to be inconsistent. It conflates direct and indirect representation and hides the fundamental ontological and semantic distinctions between them.
• Reference: http://en.wikipedia.org/wiki/Ontology_(information_science)

An ontology provides a model of what exists. Ontologies typically represent that model in terms of a set of instances, a set of classes, and a set of logical and mathematical relations between those instances and their classes. An example of a modern ontology language is OWL, the Web Ontology Language. OWL is a W3C recommendation as of 10 Feb 2004. OWL 2 is a W3C recommendation as of 27 Oct 2009. See http://www.w3.org/TR/owl-guide/ and http://www.w3.org/TR/owl2-overview/.

"The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available are based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web accessible resources. These descriptions must be in addition to the human-readable versions of that information."

The OWL Web Ontology Language is intended to provide a machine interpretable, standardized formal language that can be used to describe the concepts, and conceptual relations, between WWW documents and content. OWL is intended to provide a standardized machine interpretable language that can be used to create ontologies that can model the content of the WWW. It is part of the semantic web. By creating ontologies to describe the relations between information on the WWW, it is hoped that we can develop applications that can use machine reasoning to navigate those relations and infer information more intelligently than simple keyword searches can.

The problem is ontologies are domain specific, and good ontologies are fairly time consuming to design and create. We will never be able to create ontologies as fast as the content on the WWW grows.
We can’t even keep up with the rate at which content grows in specific fields of interest. Plus, it is essentially impossible to get everyone that uses the web to follow the conventions established by each domain specific ontology. A more fundamental problem with this approach is that classes are used to model concepts in terms of information. Classes are a semantically poor, and very incomplete indirect representation of concepts. The problem is we don’t think about concepts indirectly. We think about and experience them directly. A third person indirect representation of concepts via classes is not the same thing as the first person direct representation of concepts via neurons. In addition, the semantics of classes are wrong. Classes are based on reference semantics. Mental concepts are based on value semantics. To represent concepts completely and consistently, they have to be represented in terms of value semantics using direct representation. Representing concepts in terms of direct representation using value semantics also solves the ontological domain limitation problem. Instead of having an army of knowledge engineers trying to create huge numbers of semantically incompatible, domain specific ontologies for WWW content, we can have computers automatically, incrementally create and extend a single consistent WWW ontology. The distributed system that creates that ontology can automatically understand, describe, and integrate the meaning of the semantic relations between and within the content of the documents on the WWW as that content is added to the web. This would increase the value of the web far more than a simple indexing or keyword search engine firm like Google can. In addition it eliminates the problem of users adding content that is incompatible with existing ontologies. 
Users can add whatever they want whenever they want, and the system creates its own neural net representation of its meaning soon after it is entered.

In addition, the concepts automatically created by such a system would be based on the same representation of concepts used by neurons in the human brain. It would allow the system to understand the meaning of all the content on the WWW, think about it, and answer natural language questions about it. In essence, it could turn the entire internet into a distributed planetary scale brain. A company far more valuable than Google could be created using such an approach. I have already designed and coded many of the core system software and database components that such a system could be based on. On the other hand, a planetary scale artificially conscious system could ultimately become too powerful. It might be safer to limit it so that artificial brains can only exist on a single computer, computer cluster, or local area network. Such a system would then become an individual's personal agent and interface to the web.
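The instance/class/relation structure of an ontology described above can be sketched in a few lines. This is a deliberately minimal toy (the class names, instance names, and relation triples are invented for illustration; real systems such as OWL add far richer axioms and reasoning):

```python
# Minimal ontology: classes with superclasses, typed instances,
# and subject-predicate-object relation triples.
classes = {"Vehicle": None, "Car": "Vehicle"}       # class -> superclass
instances = {"my_car": "Car"}                       # instance -> class
relations = {("my_car", "hasPart", "battery")}      # relation triples

def is_a(instance, cls):
    """Check class membership by following the superclass chain,
    a tiny analogue of subsumption reasoning in ontology languages."""
    current = instances.get(instance)
    while current is not None:
        if current == cls:
            return True
        current = classes.get(current)
    return False

print(is_a("my_car", "Vehicle"))  # True: Car is a subclass of Vehicle
print(is_a("my_car", "Boat"))     # False
```

Even this toy shows why ontology building is labor intensive: every class, subclass link, and relation must be declared by hand for the domain being modeled.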
• An example upper ontology is SUMO (Suggested Upper Merged Ontology).Show in KSMA ontology editor if the audience is interested.
• Everything that exists in the entire universe is represented using the same direct representational ontology. Instead of representing existence indirectly in terms of information about object instances, properties, relations, and classes, direct representation represents existence directly in terms of the composition of energy and dark energy quanta.

Energy quanta are a superposition of state, relation, and process. Each energy quantum has a quantum state. Some of the components of that state represent relations. The component states interact with each other via their relations, thereby creating the representation of a process. All of physical existence is reducible to the interaction and composition of energy and dark energy quanta. That means energy and dark energy quanta are the fundamental finite primitives that compose existence. Energy and dark energy quanta can be considered as specializations of a single higher level representational primitive, generically called a quantum. The energy and dark energy variants of that primitive have opposite signs for their temporal fields, and opposite parity, but are otherwise identical. Energy and dark energy quanta are composed hierarchically by value. The composition of energy quanta creates higher-order energy quanta, with higher-order quantum states, higher-order quantum relations, and higher-order processes. Thus, the universe ends up with energy quanta composed from higher-order composite quantum states, higher-order composite quantum relations, and higher-order processes. As composition proceeds, more complex states, relations, and behaviors emerge.

The representation of physical existence must be complete and consistent throughout space and time. Without a complete and consistent representation of existence, there could be no consistent laws of physics. All the laws of physics would vary at different places and different times. Clearly that is not the case. It turns out there is only one way to achieve this.
There can only be one possible complete and consistent representation of everything. If there could be more than one, they would all have to be identical, and thus they would all have to be the same representation representing all the same things. If not, they wouldn't be complete. As Kurt Gödel proved, all fixed formal systems at or above the level of complexity required to represent Peano arithmetic are incomplete and/or inconsistent. It is possible to create complete and consistent fixed formal systems within a limited domain if and only if they are simple enough. For example, propositional logic is complete and consistent. The problem is that propositional logic is not even powerful enough to represent arithmetic, let alone all of physical existence. The fundamental cause of inconsistency and incompleteness is the indirect representation of a self-referential fixed formal system. In part, this limitation can be overcome by avoiding indirect representation. The problem is that indirect representation cannot represent anything directly. That means nothing can represent its own existence using indirect representation. In turn, that means no self-referencing system can represent all of itself completely. There is always some part of itself it cannot represent. Just as problematic, indirect representation is dependent on the existence of an observer. In addition to the existence of a particular state, an observer must exist to represent that state indirectly. Of course, the problem is that there can't be any observers (or even any states) in the singularity. Hence, indirect representation cannot represent the singularity consistently. Right from the beginning of the big bang, that makes it impossible to use indirect representation to represent existence consistently or completely. The only logical alternative to indirect representation is its logical converse, direct representation. Together, direct and indirect representation cover all possible forms of representation.
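The completeness and decidability of propositional logic claimed above can be made concrete: the validity of any propositional formula can be settled mechanically by enumerating its finite truth table. A minimal sketch in Python (the helper names and example formulas are illustrative, not from the source):

```python
from itertools import product

def is_tautology(formula, variables):
    # Brute-force truth-table check: propositional logic is decidable,
    # so validity can always be settled by finite enumeration.
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# (p AND q) -> p, written as (not (p and q)) or p -- a tautology
always_true = lambda v: (not (v["p"] and v["q"])) or v["p"]
# p OR q -- contingent: false when both inputs are false
sometimes_false = lambda v: v["p"] or v["q"]

print(is_tautology(always_true, ["p", "q"]))      # True
print(is_tautology(sometimes_false, ["p", "q"]))  # False
```

No such finite decision procedure exists once a system is rich enough to encode Peano arithmetic, which is where Gödel's incompleteness results apply.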
In direct representation, every particular that exists represents its own existence directly, via its own existence. Thus, in direct representation, things represent themselves. In fact, in direct representation, things can only represent themselves. Direct representations cannot represent anything indirectly. Direct representation is based on value semantics instead of reference semantics. (DR has to represent things by value because it can't represent anything indirectly. Reference semantics requires indirect representation because it can only represent things by reference.) In direct representation, the singularity can represent itself. No observer is required. That avoids the inconsistency, incompleteness, and observer dependency that plague indirect representation. Since existence is complete, there can only be one domain. If there were more than one existential domain, each domain would have to have a boundary, and so neither one could represent everything. That means neither one could be complete. Put another way, there can only be one infinity. There can only be one everything. To be complete, existence must use a representation that has an unlimited domain. Physically, the infinite singularity is that unlimited domain. Everything that exists must then be defined relative to the infinite singularity. Another key to avoiding incompleteness, inconsistency, and domain limitations is to minimize representational complexity. Since existence is a kind of representation, it must have an ontology. The ontology describes what can be represented. In the case of existence, it defines what can exist. It defines how physical existence represents itself. To minimize representational complexity, the complexity of the direct representational ontology must be minimal. In order to be domain independent, the minimal ontology must be an upper ontology. It must describe existence in a way that is complete and domain independent.
That means the upper ontology must be complete, consistent, and invariant over the universal domain. Over the course of all time and space, it must be able to represent the intension and extension of everything that ever existed and everything that could ever exist. A minimal-complexity ontology can contain only one representational primitive, one computational algorithm, one ontological operator, and one upper ontology. The single representational primitive must be complete; i.e., it must be able to represent anything in the universe. Since anything includes the singularity, since the singularity is complete, and since at one point the singularity is the only thing that exists, that primitive must be able to represent the singularity directly. Under the right conditions, the single primitive that represents existence must be the singularity. Under other conditions, it must be able to represent existence in terms of its relation to the singularity. Otherwise, there would be a discontinuity between the singularity and existence, and neither one would be complete. The single computational algorithm must also be complete. It must be the universal algorithm of computation, capable of computing anything in the universe. To be complete, only one universal computational algorithm can exist. If there were more than one, neither one could be complete unless they were the same and could compute all the same things, including themselves. The single ontological operator must be complete, and it must be capable of operating over the entire universal upper ontology.
Again, if there were more than one ontological operator, neither one could be complete unless they were the same and could operate over the complete ontology and compute all the same ontological consistency rules or constraints. Now here is the key to existence: while a single representational primitive, a single computational operator or algorithm, a single ontological operator, and a single upper ontology would reduce complexity, they would not minimize it fully. To truly minimize complexity, the single representational primitive, the single computational primitive, the single ontological operator, and the upper ontology itself must all be one and the same thing. In other words, the existential representation of the universe must be based on a single dimensionally independent "thing" that simultaneously functions as the only upper ontology, the only representational primitive, the only computational primitive, and the only ontological operator. I call this the representational identity principle. That single representational primitive, computational operator, ontological operator, and unlimited-domain upper ontology can only be a quantum. It is the only thing that can represent everything that exists consistently, including the singularity. Each quantum relation can be thought of as a pair of higher-order symmetric linear unitary functors. Each functor is like a function that takes functions as inputs and returns a function as its output, except it is more general. Each quantum functor can take a set of higher-order functors as input and return a higher-order functor as output. In essence, these functors represent the discrete equivalent of partial covariant derivatives and multiple path integrals. Thus each one can represent a quotient of partial covariant finite differences or a higher-order summation of finite differences along a path.
However, these finite differences are all represented by value, instead of by reference, and the representation of each quantum is completely covariant and completely encapsulated. It is covariant because it is defined by value, so it varies along with its composition. It is encapsulated because it is defined by value, by its intension. Thus its extension completely encapsulates, and is an identity for, its intension. Topologically, energy quanta compose simplicial networks. The simplicial networks start with a zero simplex (the singularity) and sequentially compose higher-order dimensions, up to the fourth dimension (spacetime). The composition of each additional dimension creates a new emergent fundamental force:
Dimension 0 = the infinite singularity
Dimension 1 = the temporal and mirror temporal fields
Dimension 2 = the electromagnetic and mirror electromagnetic fields
Dimension 3 = the color and mirror color fields (aka the strong force and mirror strong force fields)
Dimension 4 = the weak and mirror weak fields
The energy component of spacetime is a 4-simplex lattice; i.e., it is a pentachoron lattice, composed from the composition of the temporal, EM, color, and weak force fields. Energy and dark energy pentachorons pair up to compose a cubic spacetime lattice. The relations between the fundamental forces and the composition of spacetime will be presented later. The increase in dimension stops with the creation of the fourth dimension because the 4-simplex (the pentachoron) is topologically self-dual. In other words, its permutation cycle starts repeating after four dimensions. Thus the higher-order composition of additional vertices (temporal field energy quanta) simply subdivides the existing pentachoron, and creates an expanding spacetime pentachoron lattice.
It causes the ongoing expansion of spacetime. The quantum field creates a simplicial network because a simplex represents the shortest-distance convex hull between a set of vertices in an n-dimensional space. In other words, it creates the minimal-energy geodesic projection of an n-dimensional quantum state onto the 4 fundamental forces that compose 4-dimensional spacetime. That implies we can solve the quantum state of existence by finding the covariant partial derivatives of each quantum state along each simplex edge, and then solving the resulting system of five linear equations for zero. This procedure resolves the action into components along each fundamental force vector, and finds the stationary point(s) that minimize the action of the quantum energy field at the location of each energy quantum. In essence, it is a direct representational version of the Hamilton-Jacobi equation. The quantum state of the mirror verse can be solved the same way, except the equations of state are based on dark energy quanta instead of energy quanta, and you solve for minus zero instead of zero.
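The solution procedure described in this note, resolving the action into components and solving a system of five linear equations for a stationary point, can be sketched generically with a standard linear solver. All numerical values below are illustrative placeholders, not quantities derived from the theory:

```python
import numpy as np

# Hypothetical quadratic action S(x) = 1/2 x^T A x - b^T x over the five
# vertices of a 4-simplex (pentachoron). Its stationarity condition,
# grad S = A x - b = 0, is a system of five linear equations.
A = 5.0 * np.eye(5) + np.ones((5, 5))  # placeholder symmetric coefficients
b = np.ones(5)                         # placeholder source terms

x = np.linalg.solve(A, b)              # stationary point of the action

print(np.allclose(A @ x, b))  # True: the gradient vanishes at x
```

Setting the gradient of a quadratic action to zero and solving the resulting linear system is the standard discrete form of a stationary-action calculation; how the theory's own coefficients would be obtained is not specified here.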
• The representational identity principle is one of the great secrets of the universe. If the upper ontology, the single primitive of representation, the single primitive of computation, and the single ontological operator are all one and the same thing, one obtains a universal logic and mathematics that is infinitely extensible, consistent, complete, and domain unlimited.
• Merriam-Webster's Collegiate Dictionary defines representation as "something that serves as a specimen, example, or instance of something". On the surface, this implies that all representations are indirect, but if you really think about it, indirect representation cannot exist unless something represents it directly. Something that exists physically exists, whether or not it is represented by something else indirectly. For example, the far side of Earth's moon exists, even though nobody is there to observe it. The same is true of an unobserved grain of sand, or an unobserved molecule or atom. Because things have to be able to exist even if they are not represented indirectly, representation must necessarily include direct representation. There can be no indirect representation without the physical existence of the representation. There can be no physical existence without direct representation. Our conception of representation is based on information, and information is indirect and observer-centric, so the conception of representation as only being indirect is an observer-centric bias. To advance beyond that bias, we must expand the definition of representation to include direct as well as indirect representation. Up to this point in history, our species has been developing and using indirect representations almost exclusively. Indirect representation includes written and spoken natural languages, information, logic, mathematics, music, art, video, and all kinds of symbolic representation. All information is a kind of indirect representation. As a kind of indirect representation, information inherits all of the properties of indirect representation. Mathematics and information technology arose naturally as a formalization, refinement, and extension of human natural language communication. However, that doesn't mean indirect representation and information are the only possible types of representation.
Nor does it mean they are the best possible, most capable, or most efficient type of representation. I discovered a second fundamental kind of representation I call Direct Representation. Direct representation is the logical converse of indirect representation. As shown in Figure 1, the representation of physical existence is a kind of direct representation. Everything that physically exists in the universe is a direct representation of its own existence. Direct representation represents all of physical existence in terms of the direct representation of energy quanta and their direct relations.  From the upper ontology of representation, we can see that information and existence are mutually exclusive representations. The sets of things they can represent are logically disjoint because they are derived from indirect representation and direct representation respectively, and indirect representation and direct representation are logical converses. Logical converses have logically converse properties. An observer can use information to represent existence indirectly, but existence can only represent itself directly. Indirect representation can only represent things indirectly. It cannot represent anything directly. Conversely, direct representation can only represent things directly. It cannot represent anything indirectly.  The existence of direct representation and indirect representation implies the existence of their powerset. I call the powerset of direct representation and indirect representation Universal Representation. The representation of thought is a kind of universal representation. Biological neurons represent thought in terms of universal representation. Universal representation allows us to think about things both directly and indirectly. Because universal representation is the powerset of indirect representation and direct representation it contains the powerset of the properties of direct representation and indirect representation. 
Its representational power is the Cartesian product of direct representation and indirect representation.
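Since this note defines Universal Representation as the powerset of DR and IR, it may help to spell out what that powerset contains: a two-element set has exactly four subsets. A small sketch, assuming nothing beyond standard set operations:

```python
from itertools import combinations

def powerset(items):
    # All subsets of items, from the empty set up to the full set.
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

subsets = powerset(["DR", "IR"])
print(len(subsets))  # 4: {}, {DR}, {IR}, {DR, IR}
```

On this reading, UR spans representations using neither mode, either mode alone, or both modes combined.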
• In this figure, inside the brain, we have UR. Universal representation is the powerset of IR and DR. UR is a superposition of direct representation and indirect representation. That superposition is represented in terms of abstractions. The first-order abstraction of abstraction is the only representational primitive of UR. From the perspective of representation, a neuron is the direct representation of the first-order abstraction of a related set of abstractions. A neuron is the direct representation of a contextually related set of abstractions. It is also the direct representation of a concept at a particular level of abstraction in a particular conceptual field. An abstraction is a representation of something in some context. A concept is a representation of something in all contexts that thing occurs in within a particular conceptual field. An abstraction is a partial representation of a concept. It is the representation of a concept in one particular context of abstraction. The intension of a concept is composed of a related set of abstractions. Abstractions form a cover over the concept they define, but that cover is not disjoint. In other words, the abstractions that represent the intension of a concept tend to have similar representations because they represent the same thing in different contexts. The only difference between them is their contextual dependencies. That means they typically have a lot of shared representation. By allowing a single neuron to represent a whole set of related abstractions, the neural representation of concepts is compressed combinatorially. In other words, many different abstractions can be represented by the same set of synapses and the same dendritic membranes. The individual contexts of abstraction are distinguished by the relative timing and sequencing of related sets of synaptic activations. Everything in existence is composed of Direct Representation (DR).
In other words, the energy patterns that compose physical existence are represented by nature in terms of direct representation. Quantum energy fields represent their existence in terms of direct representation. At the physical level, information is represented by quantum energy field patterns. Information in and of itself has no physical existence as a particular thing in existence. In other words, there is no such thing as an 'atom of information' or a 'quantum of information' in physical existence. An energy quantum is not information. Information always represents things indirectly, but energy quanta exist directly. Energy quanta are a direct representation of existence, not an indirect representation. 'Information' is just a name for a quantum energy field configuration that is meaningful to an observer. That observer can be an artificial information system, or it can be an organic intelligent observer. From the figure, we can see that information in and of itself has no physical existence. Only its direct representation as a recognizable pattern of energy quanta exists physically. We communicate using information, which is a kind of indirect representation (IR). In order to transmit information between its source and destination, it must first be encoded on an information carrier in terms of DR. For example, when we talk, we encode speech in terms of patterns of pressure waves in the atmosphere. Those pressure waves are a direct representation. When we hear speech, our brain decodes the pressure waves and converts the result into universal representation (UR) inside the listener's brain. The conversion process activates the patterns of neurons in the listener's brain that represent the meaning of the words the listener heard. Those neurons represent UR, but their physical existence is encoded in terms of DR. The illusion is that we heard 'information'. The illusion is that we think in terms of information.
In fact, UR is a much richer, much more powerful representation than information. Information is only a partial and incomplete representation of UR. When we receive information, sensory detection of the patterns of energy quanta that represent the information activates the sets of neurons that represent the information inside the brain. Those neurons represent more than just the information that activated them. They represent the entire network of concepts and abstractions that define what that information means relative to the context of thought that was active at the time it was heard. Thus the brain can use a partial representation (information) as a key to recall all related meaning relevant within the current context of thought or perception. Even inside a computer, information has to be converted to and from DR before it can be processed by the computer hardware. For example, inside a computer, information must first be converted into electrical signals by the computer hardware before it can be manipulated by the microprocessor. Those electrical signals change the state of the computer's logic gates and hardware. The computer CPU, its memory, its disk drives, and all its circuitry are composed in terms of direct representation. Information is encoded in binary inside a computer, whereas inside our brain it is encoded in UR. The brain is a large biological neural network. Neurons are physical representations of universal representation. They compute in terms of universal representation. However, even in the brain, the low-level computation (i.e., dendritic integration) is done in terms of a direct representation implementation of universal representation. Thus, at the 'hardware level' both computers and brains ultimately compute everything in terms of direct representation.
The difference between them is that a computer uses the indirect representation of information as the logical basis for its computations, whereas the brain uses universal representation as the logical basis for its computations. The brain is a kind of direct abstraction and concept processor, whereas a computer is an indirect information processor. The brain represents information in terms of the relations between concepts and abstractions. Those concepts and abstractions are represented neurally. It is natural to assume that everything is composed of information, but that is simply an illusion due to the fact that we must use information for all external communication and computation. In fact, it is information that is the illusion. At the physical level, information is simply a name for an energy pattern embedded in some information carrier. We interpret that energy pattern as bearing information, but physically, it is just an energy pattern.
• Fixed static encodings represent each symbol with a constant fixed code, fixed numeric value, or fixed pattern. Information uses fixed encodings. For example, in a computer, the ASCII code for the letter 'A' is always decimal 65, or binary 01000001. Every computer that uses the ASCII encoding represents an uppercase A as the decimal number 65. Fixed encodings are typically based on standards, conventions, or agreements. Fixed encodings are context free. The value of the code used to represent each symbol is fixed. It does not change as a function of the context it is used in. Fixed encodings are well suited for communication. Their weakness is that they are not very compact, and they do not scale well when representing complex, context-dependent information. In contrast to fixed static encodings, dynamic encodings have no fixed 'symbols', fixed codes, fixed values, or fixed alphabet. Dynamic encodings are private. They are unique to each individual that uses them. They are also content and context dependent. Dynamic encodings encode the representation of particulars in terms of how they 'relate' to 'other particulars', where the 'relations' and 'other particulars' are previously defined dynamic encodings. Both static and dynamic encodings can represent how things relate to each other, but they do so differently. Static encodings represent relationships external to the encoding; that is, the relationships are not a part of the code itself; they are represented by the encoding, but they are external to it. Dynamic encodings dynamically embed the relationships as part of the encoding itself; the relationships are internal to the encoding. They form part of the encoding's identity. More specifically, in dynamic encoding, the existence of an object in its intensional hierarchy of composition forms part of its identity.
Thus, the hierarchical context in which an object is represented and used by value is an encapsulated invariant part of its type specification and an encapsulated invariant part of its identity. This is a critical distinction. With dynamic encoding, either all the parts of the representation of a particular are encapsulated and embedded inside that particular, or the particular does not exist. Dynamic encodings encapsulate the representation, context, and identity of their component parts. Static encodings do not. With a fixed static encoding, each part of the representation is separable and context independent. The behavior of a relation or mathematical operator is defined once, and it is represented by a fixed symbol. That symbol can then be used to represent that same relation everywhere that relation occurs. The behavior of the relation is defined as part of the definition of the relation, not as part of the context it is used in. In a static encoding, the same relation symbol can represent the use of that relation in any number of definitions. In dynamic encoding, a relation is defined and used within the context of its use by value, so it can only be used in the context within which it is defined. This distinction has physical consequences. In direct representation, energy and dark energy quanta represent a superposition of relations, states, and processes. A particular energy quantum can only exist at one point in time because, at a minimum, it is composed of a temporal field quantum. Energy only exists at the time it exists. It only exists at the time it represents. If it exists below space, its position in space is undefined, so in effect it can exist everywhere at once. If it exists at or above space, then it is composed of spacetime, so it can only exist at the location in spacetime it represents. Massless bosons can exist beneath space. In other words, some exist beneath space, and others exist at or above it. All particles with mass exist above space.
Thus massive particles are composed of the spacetime field they exist in, so they are always localized in spacetime. This explains the cause and existence of quantum non-locality, and so-called 'spooky action at a distance'. There really isn't anything spooky about it. Non-local quantum effects occur because some massless bosonic quanta exist at a level of existence beneath the existence of space. Therefore, they do not exist in space, so their interactions take place independent of their location. In effect, they have no location. When they interact with quantum fields that do have a location, they can cause detectable changes in those fields at their locations, and we associate the location of the quantum field interaction with the location of the massless boson, making it appear that the massless boson can be in more than one place at the same time. Fixed static encodings allow partial representations of particulars. Dynamic encodings do not. The use of dynamic encoding to encode the direct representation of existence is the cause of the univalence, unitarity, and quantization of energy and existence. Energy and dark energy quanta exist or they do not. Dynamic encoding requires no representation for the nonexistent. Nature does not waste energy representing those things that do not exist. Things that do not exist simply have no representation. Energy quanta either exist completely or they do not exist at all. Nothing physical partially exists at the fundamental level of energy or dark energy quanta. The direct dynamic encoding of existence is also the cause of the Pauli Exclusion Principle in physics. It is the reason matter cannot pass through matter, even though according to the standard model of particle physics, matter is known to be 99.999999999999% empty space. (Empty space has physical existence because it has dimension. Distances can be measured in space.
If space were only an abstract mathematical framework with no physical existence, there would be no measurable distance between objects in space.) If spacetime were only an abstract mathematical framework, then nothing could make it curve. Curves cannot exist in things that do not exist. Matter is composed of curved spacetime. The zero-point quantum virtual energy field that composes spacetime is encapsulated in matter and is part of the representation of matter. It is not possible to separate matter from the spacetime that composes it because the direct dynamic encoding of matter encapsulates the representation of the spacetime from which the matter is composed. Removing the spacetime from the representation of matter, or changing the spacetime used in the representation of matter, would be the same as removing part of the representation of matter. At the very least, it would change its identity and change its composition. In turn, that would change some of its properties, or change it into another type of entity. At the other extreme, if all the spacetime was removed from matter, the matter would cease to exist in the form of matter. It would decompose into massless bosons or decompose back into the singularity, because only they can exist naturally beneath spacetime. It is not possible to break the encapsulation of the direct representation of existence. The encapsulation of the direct representation of existence is a fundamental property of the ontology of the direct representation of existence. The encapsulation of the direct representation of existence is responsible for the quantization and unitarity of existence. If the existential representation were not encapsulated, existence would not be quantized, quantum states would not exist, existence would not be unitary, and the universe would not exhibit quantum behavior or operate according to the laws of Quantum Mechanics.
The encapsulation of the direct representation of existence is the fundamental cause of all quantum states in Physics. Each quantum state is an encapsulated finite difference in a quantum energy or quantum dark energy field. If the fundamental building blocks of existence did not have quantum states, then energy would not be a conserved quantity. It would literally be impossible to quantify it. Without energy conservation, it would be possible to create and destroy energy and the singularity, and the laws of Physics would be inconsistent. Obviously, it is impossible to destroy the singularity. Infinity has no bounds. That means it can’t have a beginning or an end in time. That means it cannot be created or destroyed. That also means all singularities are part of the same singularity. When gravitational black holes form, they don’t create a new singularity. They simply decompose the state of the matter and energy inside the black hole’s event horizon back to the singularity. Since the singularity is infinite, only one can exist. The encapsulation of direct representation causes energy quantization, it causes the existence of quantum states, and it causes the conservation of energy. Without quantum phenomena, the fundamental building blocks of existence could partially exist. This does not occur. Nothing that exists can exist half inside and half outside the universe. An individual energy quantum always exists in one quantum state or another at the current instant in time. It never exists at a level partly between two different quantum states. This may appear to violate the quantum state superposition principle, but it does not. As observers, we cannot know which particular quantum state an energy quantum exists in at the current time. Due to light speed limitations on the rate at which energy propagates, we can only determine the quantum state of energy quanta in the past. 
Consequently, from the perspective of indirect representation, an energy quantum exists in a quantum state superposition until its state is observed. In other words, the concept of a quantum state superposition is an artifact of indirect representation. Quantum behavior is a fundamental property of existence precisely because it is dependent on the direct representation of existence. It is a necessary fundamental property of existence because of the duality between finite existence and the infinite singularity and the encapsulation of the direct representation of existence.
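The fixed static encoding that opens this note can be checked directly: in ASCII (and its Unicode superset), 'A' is code 65, binary 01000001, regardless of the context it appears in:

```python
# ASCII is a fixed, context-free encoding: 'A' maps to 65 everywhere.
code = ord("A")
print(code)                 # 65
print(format(code, "08b"))  # 01000001
print("A".encode("ascii"))  # b'A' -- the single byte 0x41
```

This context independence is exactly the property the note contrasts with dynamic encodings, whose codes have no fixed, standalone values.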
• This diagram shows a general model of direct representation. It shows that Thing1 is related to Thing2 by relation R1. It also shows that Thing1 is related to Thing3 and Thing4 by relation R2. In this diagram, Thing1, Thing2, Thing3, and Thing4 could be anything. Each 'Thing' could represent anything that can exist in the universe. Each relation could represent any relation that can exist in the universe. In other words, the specific things and relations represented in this diagram are irrelevant. The thing we need to focus on here is how direct representation represents things and their relations. The first thing to note is that direct representation is not observer dependent. In turn, that means it is not dependent on observation, measurement, or the ability to make decisions. That means it is not anthropocentric. Second, in direct representation it is important to understand that there is no fundamental representational distinction between the representation of 'things' and 'relations'. A relation is a kind of thing, and a thing may function as a kind of relation. In direct representation, both things and relations are represented by the same representational primitive. Whether something functions as a thing or as a relation is purely a function of the context it exists in. In direct representation, both things and relations are represented by composition by value. This is extremely important because it allows Nature to reduce all of representation to a single representational primitive. That primitive can then represent a superposition of state, relation, and process. Avoiding a distinction between the representation of states, relations, and processes reduces representational complexity exponentially. That is necessary in order to create a representation that is complete and consistent in the universal domain. It also avoids the need to make any decisions, because no decision has to be made as to what kind of primitive must be used to represent anything.
Instead, everything is represented in terms of the composition of a single type of representational primitive. Things are composed of other things, and the very fact of that composition creates a relation between a thing and the things it is composed of. Direct representation results in the composition of higher order states, higher order relations, and higher order processes; composition simultaneously composes states, relations, and processes. Structural composition then becomes equivalent to the composition of higher order functionals. However, because those functionals also represent structure, they have state as well as behavior. A thing’s direct composition is the existence of its logical and physical intension, and the intension of each thing represents the computation and existence of its state and its behavior. Looked at from another perspective, the use of a single representational primitive means all of physical existence can be generated and computed via transfinite recursive composition and decomposition. In turn, that provides a way to unify the representation of all computation and representation: both can be represented via transfinite recursive composition and decomposition. Another thing to note is that everything represented by direct representation is represented, and exists, in context. This reflects the fact that everything that exists, exists in some context. At a minimum, everything finite exists in time or mirror time. (Mirror time is the dark energy equivalent of time; it is like time, except its arrow of time is reversed.) (The singularity is infinite, so it exists outside time and space; it is eternal.) The singularity decomposes into the zero point virtual temporal quantum energy field and the virtual anti-temporal quantum dark energy field.
Transfinite recursive composition of the temporal field then composes the electromagnetic field, the color field, the weak field, and half the quantum field structure of spacetime. Similarly, transfinite recursive composition of the anti-temporal quantum dark energy field composes the anti-electromagnetic field, the anti-color field, the anti-weak field, and the dark energy component of the quantum field structure of spacetime. Quantum energy fields compose matter, and quantum dark energy fields compose dark matter. Dark matter, mirror matter, and antimatter are just different names for the same thing. The antimatter in the universe isn’t missing; it is just unobservable, because its arrow of time is reversed. It is also important to note that direct representation is fully encapsulated. That accounts for the unitarity and quantization of energy quanta. It accounts for the fact that big things are composed of smaller things, and that small things evolve before larger things: spacetime evolved before massive subatomic particles, subatomic particles before atoms, atoms before molecules and stars, and stars before galaxies. Since things in direct representation are composed by value, the arrow of time and causality are also automatically ensured. Things can’t be composed of things that don’t already exist, and they can’t react to force fields that don’t yet exist. Direct representation also accounts for the present moment of time. In DR, nature only represents the current quantum state of existence. In DR, spacetime is not just geometry; it is a quantum energy and dark energy field. Spacetime expands as the singularity decomposes. Its curvature results from its composition of most of the current quantum state of existence. Mass exists in spacetime because it is composed of the zero point quantum energy and dark energy fields that compose spacetime. In effect, matter is like frozen spacetime.
Matter is composed from stable configurations of energy and zero point quantum field virtual energy. In effect, matter functions as a sink for the zero point virtual quantum energy and dark energy fields. We experience that sink as the force of gravity and as gravitational time dilation. In effect, gravity is a change in the velocity of the temporal field. We can’t distinguish between it and acceleration because it is an acceleration. It is just the acceleration of an energy field we cannot measure, because it exists below the zero point.
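The "composition by value" idea above has a rough software analogue, sketched below. This is only an illustrative analogy (all names are hypothetical, not part of the theory): an immutable value is nothing but the parts it is composed of, so the part-whole relation exists automatically, and a value can only be built from parts that already exist, echoing the causality argument.

```python
from dataclasses import dataclass

# Illustrative sketch only: modeling "composition by value" with immutable
# values. A Thing IS the tuple of parts it is composed of, so the
# part-whole relation needs no separate representation, and a Thing can
# only be built from parts that already exist.

@dataclass(frozen=True)
class Thing:
    name: str
    parts: tuple = ()   # composed by value: the parts ARE the composition

    def relations(self):
        # The fact of composition itself constitutes the relation;
        # nothing extra is stored anywhere.
        return [(self.name, "composes", p.name) for p in self.parts]

quantum = Thing("quantum")
atom = Thing("atom", (quantum,))        # atoms can only follow quanta
molecule = Thing("molecule", (atom,))   # larger things compose smaller ones

print(molecule.relations())   # [('molecule', 'composes', 'atom')]
```

Because `Thing` is frozen, a composition can never be retroactively changed to include something that did not exist when it was built, which is the point of the analogy.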
• Indirect representation cannot be completely encapsulated, because it is observer dependent and the observer is not part of the representation. In addition, indirect representation represents things by reference, and the referent may be outside the bounds of the representation. Even when we attempt to create fully encapsulated indirect representations, we never achieve full encapsulation, because we can’t remove the observer dependence. Even in the case where we explicitly try to create an encapsulated representation, for example when we create a software class that represents an encapsulation of an abstract data type, the definition of the class depends on the preexistence of an alphabet of symbols, language operators, and primitive data types. Of course, it also depends on the software designer or programmer who creates it, the computer it executes on, and the user who provides its input and interprets the meaning of its output.
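To make the class example concrete, here is a minimal sketch (the `Stack` class is my own hypothetical example, not from the slides). Even this deliberately encapsulated abstract data type is not fully encapsulated in the sense described above: it presupposes an alphabet of symbols, the language's operators, the built-in `list` type, and an external programmer and user.

```python
# Minimal encapsulated abstract data type. Its internal state is reachable
# only through push/pop, yet its very definition depends on preexisting
# primitives (`list`, integers, the language syntax) and on an external
# observer to write it, run it, and interpret its output.

class Stack:
    """Encapsulated ADT: state is accessible only via the interface."""

    def __init__(self):
        self._items = []          # depends on the preexisting primitive `list`

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # depends on predefined list semantics

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2 -- but only an observer interprets what "2" means
```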
• This diagram shows a model of indirect representation. It represents the same set of things and relations that were shown in the model of direct representation. In this diagram, the observer observes energy from Thing1, Thing2, Thing3, and Thing4. For example, the observer’s eye could detect photons reflected from each of them. The observer then creates indirect representations IR(Thing1), IR(Thing2), IR(Thing3), and IR(Thing4) to represent Thing1 through Thing4 indirectly. Those indirect representations could be information encoded as a sequence of bits. For example, the observer could describe each thing using a natural language noun, and then encode each noun as the sequence of bits of the ASCII number for each of its letters. Thus, each indirect representation of a thing is a proxy or substitute that allows us to perform computations on the proxy instead of on the thing directly. Conceptually, each indirect representation contains a pointer or referent back to the thing it represents. In this figure, those referents are shown by the arrows that go from the indirect representation of each thing to the thing it represents in the real world. Note that both things and relations have referents; I omitted the relation referents to reduce clutter in this illustration. The observer also observes some relation R1 between Thing1 and Thing2, so the observer creates an indirect representation IR(R1) to represent that relation. The observer also creates an indirect representation IR(a) of the association ‘a’ between Thing1 and R1, and an indirect representation IR(b) of the association ‘b’ between R1 and Thing2. In software, relations are typically represented by functions, so IR(R1) would typically be applied as IR(R1)(IR(Thing1), IR(Thing2)), where IR(a) and IR(b) are implicitly represented by the presence of IR(Thing1) and IR(Thing2) in IR(R1)’s argument list.
The referents are not explicitly represented in terms of information; they are inferred by the observer. Similarly, the observer observes some relation R2 that relates Thing1 to Thing3 and Thing4 in the real world. To represent that relation, the observer creates an indirect representation IR(R2). The observer also creates an indirect representation IR(c) of the association ‘c’ between Thing1 and R2, an indirect representation IR(d) of association ‘d’ between R2 and Thing3, and an indirect representation IR(e) of association ‘e’ between R2 and Thing4. In software, the relation represented by IR(R2) would typically be represented by a function applied as IR(R2)(IR(Thing1), IR(Thing3), IR(Thing4)). IR(c), IR(d), and IR(e) are implicitly represented by the presence of IR(Thing1), IR(Thing3), and IR(Thing4) in IR(R2)’s argument list. Once again, the referents are not explicitly represented in terms of information; they are inferred by the observer. Also, remember that I have omitted the relation referents from this diagram to reduce clutter. It is important to remember several points about the indirect representation of information. All indirect representation is indirect: information can’t represent anything directly. All indirect representation is ultimately based on reference semantics, due to the indirection inherent in indirect representation. It is not possible to avoid indirection and reference semantics in computer programs, because ultimately all programs are reduced to the execution of a sequence of predefined CPU instructions. Each of those CPU instructions is a predefined function coded into the CPU firmware. Even if the programmer designs a program that uses only inline function calls, passes all parameters by value, and creates all data structures by value, the program still gets translated into a sequence of CPU instructions, and those instructions are executed by reference.
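The IR(…) scheme described above can be sketched in a few lines of code. This is a hedged illustration under my own assumptions (the nouns "tree" and "soil" and the relation name are hypothetical): things are encoded as ASCII bit strings, relations are functions, and the referents back to the real things appear nowhere in the information, exactly as the text argues.

```python
# Sketch of the indirect-representation scheme from the slides: nouns are
# encoded as ASCII bit sequences, and relations are functions whose
# referents are implicit. Nothing in these strings points back to the
# real-world things; only the observer supplies that link.

def IR(noun: str) -> str:
    # Indirect representation of a thing: each letter's ASCII code as bits.
    return " ".join(format(ord(ch), "08b") for ch in noun)

IR_Thing1 = IR("tree")
IR_Thing2 = IR("soil")

def IR_R1(a: str, b: str) -> str:
    # The relation R1 as a function. IR(a) and IR(b) are represented only
    # implicitly, by the presence of the two arguments -- the pattern
    # written as IR(R1)(IR(Thing1), IR(Thing2)) above.
    return f"grows-in({a}, {b})"

print(IR("at"))   # '01100001 01110100'
print(IR_R1(IR_Thing1, IR_Thing2)[:9])   # 'grows-in('
```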
Indirect representation distinguishes between the representation of states and relations. Each state is typically represented as a sequence of bits, whereas each relation is typically represented as a function. Processes have no first order primitive representation; they are represented by a sequence of interactions between states and relations. In a sense, indirect representation is an ‘outside in’ representation. An observer observes a domain of things from the outside, creates a representation of that domain, creates a separate representation of each object, state, and relation observed, and then populates the domain with the object, state, and relation representations required to represent the relations between the states and/or objects in the domain. (An object is an encapsulated set of states and relations with an external interface that provides the only access to the object’s internal states and relations.) Consequently, indirect representation is unencapsulated, domain dependent, and observer dependent. If an observer wants to represent nature indirectly, the observer must identify all the relevant states, relations, and semantic constraints in each domain, and decide how to design a representation that represents each relation and enforces each constraint.
• Beyond incompleteness, inconsistency, and observer dependency, one of the biggest problems with indirect representation is ontological domain limitation. In this figure I show only three different knowledge domains; the actual situation is far more complex than this diagram can show. In reality, knowledge domains are not all distinct. Different domains may overlap different parts of each other, often represent parts of the same things in different ways from different perspectives, and may also represent different aspects of the same things. If the brain represented thought in terms of information, there would be quadrillions of knowledge domains among the brains of the human population alone. Fortunately, the brain is more advanced than that: its native knowledge representation is based on universal representation, not indirect representation. With sufficient training, it is possible to expand our ability to think directly in terms of universal representation. Even so, since we must communicate using indirect representation, our brains are forced to use universal representation to abstract the representation of written and spoken speech. That results in the representation of tens of thousands of different information domains, which artificially limits the intellectual potential of most individuals. On top of that, there are all the different domains represented by computer programs. Integrating the information in all those different domains is intractably complex in indirect representation. The domain integration problem doesn’t exist in universal representation, because universal representation has only one domain. The increased knowledge processing capability that the development and use of universal representation would confer is hard to imagine.
Imagine being able to access and process the integrated totality of human knowledge, without error or delay due to translation between different languages, different ontologies, or different knowledge domains. Imagine all computer programs being compatible with each other. Imagine conscious, artificially intelligent computers that can integrate, understand, and process the totality of human knowledge. Imagine how access to that type of computational capability would expand human intellectual, societal, and economic potential. All of that is possible with universal representation. What’s more, when combined with direct representation, we will gain the ability to compute any desired part of the quantum state of existence completely and consistently. The opportunities that will open up are mind-boggling. As far as I know, this is the first time in human history such a wealth of opportunities has presented itself. In indirect representation, the totality of knowledge is broken up into quadrillions of different ontological domains. If we could represent the knowledge in a single observer’s brain in terms of information, it would require the representation and integration of millions of different knowledge domains. The current world population is estimated at 6.94 billion by the U.S. Census Bureau, so if we were to do that for all people, we would have to integrate quadrillions of different ontological domains from billions of different perspectives. Each of those domains would require its own ontology, its own representation, its own axioms, its own ontological consistency rules, and its own ontological commitments. Obviously, that would create an intractable combinatorial explosion in the complexity of knowledge representation. Even worse, it is typically quite difficult and labor intensive to translate knowledge between different domains. Different ontologies may use different terms and different languages to represent the same things.
They each may have different axioms, different consistency rules, and different, partially overlapping sets of relations. Translating all of those differences from one ontology to another without error is in general intractably complex. In contrast, it is theoretically possible to represent the totality of human knowledge using one knowledge domain in universal representation. Even if we can’t actually get at all the knowledge of individuals, we could create artificial systems that could automatically learn and integrate all knowledge on the internet. Representing that knowledge in terms of universal representation avoids the need to integrate quadrillions of different knowledge domains and knowledge representations. Such a resource could accelerate the rate at which civilization acquires and develops knowledge exponentially. It could create exponential increases in efficiency and productivity, while simultaneously reducing error due to miscommunication and misinterpretation. Domain limitations cause a lot of misunderstandings. They reduce our ability to represent and understand the universe combinatorially. The cost of this inefficiency, and of the resultant decrease in potential productivity and in technological and societal potential, is incalculable. Science and society have gotten so used to the benefits of information and information processing that they have failed to consider the possible existence of superior representations. Information may be a blessing, but blessings are relative. Compared to direct and universal representation, information is a boat anchor that is exponentially retarding the potential intelligence of individuals and exponentially limiting the potential of science, technology, knowledge, economics, and human civilization. Direct representation and universal representation both avoid all of these limitations. They are not subject to inconsistency, incompleteness, observer dependency, or domain limitations.
The use of direct and universal representation will produce a combinatorial reduction in complexity, a combinatorial decrease in error, and a combinatorial increase in intelligence, efficiency, productivity, and capability. Those improvements will occur at the level of the individual and ripple their way all the way up to the level of the economy, society, and civilization as a whole. Even with widespread use of direct and universal representation, there will still be some limits to individual human intellectual capacity, productivity, and efficiency. Even if we all learned to think directly in terms of universal representation, we would still have to resort to indirect representation to communicate with each other. Unfortunately, universal representation can only gain its incredible storage and processing efficiencies by using a private, dynamic encoding and a relative, relational knowledge representation. That means the encoding and knowledge representation within each individual’s brain are unique to that individual. It is possible to overcome that limitation in artificial systems. In other words, it is possible to create distributed networks of artificial intelligences that can operate over a distributed universal representation. Using such techniques, it should be possible to create an integrated, planetary scale artificial intelligence that automatically learns, integrates, and processes the sum total of human knowledge. Such a system could exponentially increase the span, completeness, consistency, and expansion rate of human knowledge. It could give us capabilities in a few years that would take millennia to develop without it. On the other hand, the risks of developing such powerful intelligences need to be carefully considered. We don’t want to put ourselves in a position where our machines grow so powerful that they supersede humanity as the most evolved species on earth.
• Universal representation is the powerset of IR and DR. UR is a superposition of direct representation and indirect representation, and that superposition is represented in terms of abstractions. The first order abstraction of abstraction is the only representational primitive of UR: UR represents everything in terms of abstractions. Each abstraction represents a superposition of state, relation, and process, because it is composed of energy quanta that are themselves a superposition of state, relation, and process. From the perspective of representation, a neuron is the direct representation of the first order abstraction of a related set of abstractions. A neuron represents sets of related abstractions that represent the same concept in related contexts within a conceptual field. (I am using the term field here in the sense of a vector field.) Each concept can be thought of as being represented by a matrix of related vectors, each of which represents an abstraction. In turn, each abstraction vector is composed of a related set of references from other abstractions. The abstraction vector is represented by conduction paths within the dendritic tree, and the references are represented by synaptic connections. In essence, a concept exists within a vector field of related concepts, and each of those concepts is composed of a related field of abstractions. The concept field is represented by the neurons within a cortical column; in turn, cortical columns are arranged within cortical maps. A neuron is the direct representation of a contextually related set of abstractions. It is also the direct representation of a concept at a particular level of abstraction in a particular conceptual field. An abstraction is a representation of something in some context. A concept is a representation of something in all the contexts that thing occurs in within a particular conceptual field. An abstraction is a partial representation of a concept.
It is the representation of a concept in one particular context of abstraction. The intension of a concept is composed of a related set of abstractions. The intension of an abstraction is a collection of associations that defines a partial representation of a concept in terms of how that concept relates to the concepts relevant within the context defined by the abstraction. The static component of the intension of an abstraction is represented by dendritic conduction paths, postsynaptic terminals, and their properties. Its dynamic component is represented by the sequence in which postsynaptic terminals fire, their relative firing times, and the way in which the resulting electrotonic potentials integrate as they are conducted through the neuron’s dendritic tree. The intension of an abstraction can be thought of as an equation composed of a sequence of terms, where each term is represented by a reference from an activated abstraction instance. Each of those activated abstraction instances represents the occurrence of a particular abstraction in a particular context of abstraction, and each also represents a particular superposition of states, relations, and processes. The dendritic tree relates the terms to each other and provides the boundary conditions that constrain the dendritic integration process. The static component of the extension of a concept is the same as the static component of the extension of all the abstractions that compose that concept; it is represented by the neuron’s axon and its presynaptic terminals. The dynamic component of the extension of each abstraction is a function of the order and relative timing of the synaptic activations that compose the intension of each abstraction. Dendritic integration performs a kind of spatiotemporal correlation. It also functions as a spatiotemporal demultiplexer. A neuron can be thought of as a kind of spatiotemporal combination lock.
It fires its axon when particular combinations of synapses fire in particular sequences with particular relative firing times. The occurrence of each particular synaptic firing sequence, with each particular set of relative synaptic firing times, allows the same axon and presynaptic terminals to represent a particular context of abstraction. Abstractions form a cover over the concept they define, but that cover is not disjoint. In other words, the abstractions that represent the intension of a concept tend to have similar representations, because they represent the same thing in different contexts; the only difference between them is their contextual dependencies. That means they typically have a lot of shared representation. By allowing a single neuron to represent a whole set of related abstractions, the neural representation of concepts is compressed combinatorially. In other words, many different abstractions can be represented by the same set of synapses and the same dendritic membranes. The individual contexts of abstraction are distinguished by the relative timing and sequencing of related sets of synaptic activations.
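The "spatiotemporal combination lock" idea can be illustrated with a toy model. To be clear, this is my own sketch, not a biophysical simulation, and every parameter (synapse names, the timing tolerance) is hypothetical: the "neuron" fires only if a required sequence of synapses activates in order, with each activation close enough in time to the previous one.

```python
# Toy "spatiotemporal combination lock". The same synapses firing in a
# different order, or with different relative timing, produce a different
# outcome -- the mechanism the text uses to distinguish contexts of
# abstraction. Not a biophysical model; purely illustrative.

def fires(events, required_sequence, max_gap_ms):
    """events: time-sorted list of (synapse_id, time_ms) activations.
    Returns True only if required_sequence occurs in order, with each
    matching activation within max_gap_ms of the previous match."""
    idx = 0
    last_t = None
    for syn, t in events:
        if idx < len(required_sequence) and syn == required_sequence[idx]:
            if last_t is not None and t - last_t > max_gap_ms:
                return False          # right order, wrong relative timing
            last_t = t
            idx += 1
    return idx == len(required_sequence)

# Same synapses, different timing -> a different "context of abstraction".
print(fires([("s1", 0), ("s2", 3), ("s3", 5)], ["s1", "s2", "s3"], 4))   # True
print(fires([("s1", 0), ("s2", 9), ("s3", 11)], ["s1", "s2", "s3"], 4))  # False
```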
• In indirect representation, the referent is a substitute for that which it represents. The referent can be thought of as a pointer from an indirect representation to that which it represents. Alternatively, it can be thought of as a label or identifier for that which it represents; thus the referent functions as one possible identifier for that which it represents. In universal representation, the referent is also a substitute for that which it represents, but it can be thought of as a pointer from that which it represents back to the context in which its indirect representation is being used. In other words, the referent in universal representation provides a way to reuse the extension of an existing abstraction within many different new abstraction intensions. However, within the intensions in which it is used, it functions as if it were part of each intension by value.
• So, where do we begin? How can we derive a correct, fully generalized definition of the representation of abstraction? We will start by surveying the published literature and trying to find a common consensus among the existing definitions. If all the existing definitions are correct, yet each is different, it must mean that each definition describes a different aspect of the same thing, or describes the same thing from a different perspective or context. It is like several blind men trying to describe an elephant based only on the sense of touch. One touches the leg and thinks it is a tree. Another touches the tail and thinks it is a snake. Another touches the trunk and thinks it is a hose. Each blind man is correct from his perspective, but there is something more there. We need to find the elephant that is abstraction and describe the whole thing, not just its parts. In other words, a more general definition must exist that captures the essence of all the existing definitions. By generalizing the existing definitions and describing them all from a single perspective, we should be able to develop a general definition that captures the common essence of abstraction. We will then use our general definition and the known characteristics of thought as a basis for further deductive refinement and expansion. There are multiple published definitions for the term ‘abstraction’. Some define abstraction as a process, while others define it as a thing, i.e., as the representation that results from an observer or reader performing an abstraction process. In this section, we will derive the definition of the first order abstraction of abstraction itself. We will do so by starting with the published definitions of ‘abstraction’. We will then show that each fundamental type of representation has its own kind of abstraction: direct abstraction, indirect abstraction, and universal abstraction.
• The term ‘something’ was chosen to avoid conflict with the established meanings of ‘entity’, ‘process’, ‘state’, ‘relation’ and other commonly used terms denoting classes of representational primitives. We will consider “something” to mean a completely generalized representation; i.e., a representation capable of representing anything that exists.
• Notice that in this step, we are also converting the description of a process to a description of the result of that process. We want to know what the representation of abstraction itself is, not the process we need to perform to create an abstraction.
• Here again, we need to convert from a description of a process to a description of what results from performing that process.
• Note that here too, we needed to convert from the definition of a process to the definition of the result produced by that process.
• Note that importance is judged relative to some purpose. The phrase “important for some purpose” in (5) is equivalent to the phrase “relevant to a particular purpose” in (1), and both could be generalized to “relevant in some context”. Hence, from the perspective of representation, this aspect of definition (5) shares some commonality with definition (1).
• In this case, we again need to convert from the definition of a process to a definition of the representation that results from that process.
• First, we should note that this is the first definition that refers to abstraction as a noun or thing, as opposed to a process.
• Note that this definition goes a little further than the others, in that it implies that the essential characteristics an abstraction should represent are those that can be used to distinguish an object represented by an abstraction from all other kinds of objects. That implies an abstraction should represent things relative to their similarities and essential differences. We will expand on this and clarify its importance later. The phrase “provide crisply defined conceptual boundaries” also implies some kind of relation between an abstraction and a concept. We will clarify this later as well. On a related note, crisply defined conceptual boundaries can be represented by a well defined “interface” with some kind of unique identity. The interface then provides the identity of an abstraction and the only way to access or use that abstraction. In software, that interface is defined in terms of states and relations (i.e., data and functions). This definition then provides the motivation for the definition of software classes with well defined class interfaces. It also provides the motivation for separating the definition of class interfaces from their implementation in object-oriented programming languages like C++, Java, and C#. However, using direct representation, we can further increase generalization and improve encapsulation by combining the representation of states, relations, and processes into a single representational entity and representing the whole thing via the activation of a unique reference from the abstraction. That reference then functions as the extension and unique identity of the abstraction.
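The interface/implementation separation mentioned above can be sketched briefly. The slides name C++, Java, and C#; here is the equivalent pattern in Python using abstract base classes (the `Shape`/`Square` names are my own hypothetical example). The interface provides the "crisply defined conceptual boundary": clients depend only on it, never on any particular implementation.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """The interface: the only identity clients ever see."""

    @abstractmethod
    def area(self) -> float:
        ...

class Square(Shape):
    """One hidden implementation behind the interface."""

    def __init__(self, side: float):
        self._side = side          # internal state, not part of the interface

    def area(self) -> float:
        return self._side * self._side

def report(shape: Shape) -> float:
    # Depends only on the interface, so any implementation substitutes.
    return shape.area()

print(report(Square(3.0)))   # 9.0
```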
• Basically, the idea here is to come up with a name for anything a neuron can represent. We will call whatever a neuron can represent a concept. Note that this definition is a lot broader than the common definitions of a concept:
a. the conjunction of all the characteristic features of something
b. a theoretical construct within some theory
c. a directly intuited object of thought
d. the meaning of a predicate
• In the tradition which spans Aristotle, Porphyry, the Port Royal Logic, Frege, and Russell, a concept has two distinguishable but inseparable aspects: intension and extension. The intension of a concept can be stated equivalently as the condition which must be satisfied by any object, within the given universe of discourse, for it to exemplify the concept. Conversely, if an object does satisfy the condition (the concept intension), then it is an exemplar of the concept. The representation of information is indirect: an indirect representation of a thing is a substitute for the thing it represents, and information is represented from the third person, indirect perspective of an observer. We cannot represent thought or concepts using only an indirect representation, at least not if we want to create a representation of thought that can be used to build a machine that can think and understand the meaning of information from a first person, direct perspective. We think from the first person direct perspective. We understand meaning from the first person direct perspective. Cogito ergo sum: I think, therefore I exist. An indirect representation can represent things in context free form, but it cannot represent them in a context dependent way. It is also incapable of directly encoding or representing meaning: we can program it to interpret syntax, but not to understand meaning from the first person perspective. A direct representation can represent things in context, and it can represent meaning directly, but it cannot represent things in context free form. Hence, it cannot represent concepts. The only remaining possibility is a universal representation. A universal representation is the powerset of direct and indirect representation. Only a universal representation can represent the meaning of abstractions in context and represent concepts in context free form.
A universal representation is also geometrically more compact than a direct representation. A concept’s intension and extension come from the semantic notions of intension and extension, respectively. A concept has only one intension, but that intension may be composed of many different abstract equations, and only one of those equations must be satisfied to denote the existence of an instance of the concept. An abstract equation is an equation composed of terms that are themselves represented by abstractions. Each abstraction term represents something in some context. That something can be anything that can be represented as a superposition of state, relation, and process. Thus the terms in the equation are arbitrarily abstract; each term is of arbitrary order and arbitrary dimension. The terms are related to each other by the spatiotemporal context they exist in. In the case of neurons, each term is represented by the activation of a postsynaptic terminal. It is also represented by the postsynaptic electrotonic potential generated in the part of the dendritic tree that contains the synapse. The spatiotemporal relations between the flows of electrotonic potentials within the dendritic tree then represent first order spatiotemporal relations between each abstract term that composes the intension of each abstraction, which in turn composes the intension of the concept represented by each neuron.
• 1) If the concept intension is composed of more than one abstraction and each abstraction represents an instance of the concept in some context, then each abstraction must be a partial representation of the concept. In other words, each abstraction represents that part of the representation of the concept that is relevant in the context represented by the abstraction. (In the exceptional case of a concept that is only ever used in one context, it would only have one abstraction in its intension. In this case, the abstraction would be a full representation of the concept.)
2) The definition of concept intension implies concepts are context free. The representation of the concept intension is composed of a collection of abstraction representations. If the representation of each abstraction in a concept's intension is a partial representation of an instance of that concept in a particular context, then the full representation of the intension of the concept consists of the union of all abstractions (and thus the union of all contexts) in which the concept occurs. If there is an intensional representation for every context the concept is represented in, then the representation of the concept is context independent, and hence context free. Therefore, the representation of concepts is context free. Concepts are nothing more than their representation. Therefore, concepts are context free. Note that this result is consistent with the well-known fact that concepts are context free; it explains why they are.
3) Each abstraction that represents the intension of a concept represents that concept in a different context. Since each abstraction represents the same concept, at least some of the representation of each abstraction should be the same as that of the other abstractions representing the same concept intension.
The only representational differences between the abstractions within the intension of a concept should be those due to the part of each abstraction's representation required to represent its unique contextual dependencies. To maximize the reuse of shared representation, and thus minimize storage space, we should factor out the shared parts of each abstraction's representation and represent the shared parts only once. Thus, the representation of the intension of a concept should factor the intensional similarities and differences into shared and unshared representational components respectively. The computational structure best suited for intensional factoring is a set of trees. We can use one tree to factor related (mutually dependent) intensional abstraction similarities and differences, but multiple trees are required to handle the representation of a collection of independent intensional representations. By using multiple trees, we can define the intension of a single concept in terms of its relations to any number of dependent and independent sets of concepts. This is important because it allows us to represent concepts that have mutually independent, mutually dependent, or any combination of mutually dependent and independent intensional representations. This representational generality is required for a complete representation.
4) Neurons typically have multiple dendritic trees. The branching topology of dendritic trees is morphologically identical to the branching topology of the computational data structures referred to in the previous section. In both cases, the branching tree structures are used to factor the representational similarities and differences in the representation of concept intensions.
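The factoring described above can be illustrated with a minimal, hypothetical sketch (the class and function names are my own, not part of the theory): the abstractions of one concept are stored in a tree so that shared terms are represented only once.

```python
# Minimal sketch: factor the shared parts of a concept's intensional
# abstractions into a tree, storing each shared term only once.

class Node:
    def __init__(self):
        self.children = {}      # term -> Node
        self.abstractions = []  # abstractions whose term list ends here

def build_intension_tree(abstractions):
    """abstractions: dict of name -> ordered tuple of terms."""
    root = Node()
    for name, terms in abstractions.items():
        node = root
        for term in terms:
            node = node.children.setdefault(term, Node())
        node.abstractions.append(name)
    return root

def count_term_nodes(node):
    # Count every node below the root (each stores one term).
    return sum(1 + count_term_nodes(c) for c in node.children.values())

# Three abstractions of one concept; the shared prefix (C2, C3) is
# represented once instead of three times.
intension = {
    "A1": ("C2", "C3", "C4"),
    "A2": ("C2", "C3", "C7"),
    "A3": ("C2", "C3", "C8"),
}
tree = build_intension_tree(intension)
# Flat storage would need 9 term slots; the tree needs only 5 term
# nodes, because C2 and C3 are shared across all three abstractions.
print(count_term_nodes(tree))  # → 5
```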
• The isomorphic identity between neural dendritic tree topology and the use of multiple computational trees to represent concept intensions via relative relational encoding of upper ontological concept intensions is only one of many convergent arguments that support and lead to this conclusion. The same conclusion can be reached via arguments based on the evolution of neural and dendritic function. Alas, those arguments are too long to present here.
• 5) An additional benefit of eliminating redundant representation is that we fully normalize the representation (in the sense of relational normalization). If we fully normalize the representation by factoring out all shared representation and representing it only once, not only do we minimize storage space, we also automatically eliminate the possibility of update anomalies, update synchronization errors, and race conditions that could otherwise be caused by the need to update multiple copies of the same representation and keep the updates synchronized. We also eliminate the need for any synchronization mechanism, thus further reducing complexity, storage space, processing time, and energy. We can automatically eliminate all other potential causes of ontological inconsistency by using a single closed computation to perform the thought process over the complete foundational concept ontology. We can use structural induction to achieve this result. We perform the structural induction by extending the foundational conceptual ontology relationally, relative to its state at the time we create the new representations. (This is why we call it a relative relational encoding.) We define new concepts in terms of how they relate to other concepts, where the representation of the 'other concepts' and 'relations' is preexistent in the ontology. In simple terms, all new conceptual representation is composed from, and relative to, the representation of preexisting concepts. Since the representations of the preexisting concepts used to create the representation of the new concept were ontologically consistent and complete, and since the operation used to create the new representation is part of the ontology and is consistent, complete, and closed, the new representation is guaranteed to be consistent, complete, and closed.
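A minimal sketch of the relative relational constraint, under the assumption (mine, for illustration only) that the ontology can be modeled as an append-only dictionary: a new concept may only be defined in terms of concepts that already exist, which is the structural induction step described above.

```python
# Sketch: an append-only ontology in which every new concept must be
# composed from preexisting concepts, so consistency is preserved by
# structural induction. All names here are illustrative.

class Ontology:
    def __init__(self, primitives):
        # Primitive concepts have no dependencies.
        self.concepts = {name: () for name in primitives}

    def define(self, name, depends_on):
        # The new concept may reference only preexisting concepts.
        missing = [d for d in depends_on if d not in self.concepts]
        if missing:
            raise ValueError(f"undefined dependencies: {missing}")
        # Full normalization: each concept is represented exactly once.
        if name in self.concepts:
            raise ValueError(f"{name} is already defined")
        self.concepts[name] = tuple(depends_on)

onto = Ontology({"state", "relation", "process"})
onto.define("abstraction", ["state", "relation", "process"])
onto.define("concept", ["abstraction"])
# onto.define("bad", ["nonexistent"])  # would raise ValueError
```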
• 6) Relative relational intensional encoding does more than fully normalize the intensional representation of thought and remove all representational redundancy to maximize reuse. It results in combinatorial compression of intensional representation and computation within every neuron. Over the neural network as a whole, it results in logarithmic combinatorial compression of representation and computation. The firing of a single synapse can represent an abstraction of any complexity, any dimension, and any logical order represented by the neuron that caused it to fire. For example, the firing of a single synapse could represent the result of computing a million term fuzzy spatiotemporal logic equation. To do so, it would only need to represent the result of a sixth order abstraction. For example, if an abstraction were composed of a ten term equation, and each of those terms were composed of a ten term equation, then a six layer feed forward neural network would suffice to compute a million term fuzzy spatiotemporal logic equation. In effect, this provides logarithmic combinatorial compression of computation and representation. The representation does not need to be decompressed before computation can take place. Computation occurs directly on the compressed representation.
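The compression arithmetic in this slide can be checked in a few lines of Python. The fan-in and depth values are the slide's own example; the function names are mine:

```python
# If each abstraction combines `fan_in` lower-order terms, a hierarchy
# of depth d covers fan_in ** d primitive terms, so the depth needed
# grows only logarithmically with the size of the equation.

def terms_covered(fan_in, depth):
    return fan_in ** depth

def depth_needed(fan_in, n_terms):
    depth, covered = 0, 1
    while covered < n_terms:
        covered *= fan_in
        depth += 1
    return depth

print(terms_covered(10, 6))         # → 1000000 (the million-term equation)
print(depth_needed(10, 1_000_000))  # → 6 (the six-layer network)
```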
Consequently, representational compression also causes computational compression.
7) Concept intensions form our abeyant (i.e., static) representation of thought and knowledge.
7.1) From hypothesis 1, a neuron's dendritic trees represent the concept intension.
7.2) A concept's intension represents and defines the meaning of the concept.
7.3) Therefore, a neuron's dendritic trees represent and define the meaning of a concept.
7.4) Therefore, the meaning of a concept is stored in a neuron's dendritic trees.
7.5) A neuron's dendritic trees exist whether or not they happen to be receiving or processing synaptic inputs.
7.6) Therefore, neurons' dendritic trees (and their synaptic weights) represent and store memory.
7.7) Therefore, neuron dendritic trees and concept intensions form our abeyant (i.e., static) representation of thought and knowledge.
7.8) Neuron dendritic trees represent, define, and store the meaning of concepts.
• Note that even though the extension is a logical reference, physically that reference is implemented by value. In other words, physically the extension is a direct representation, because it is composed of a neuron's axonal tree and presynaptic terminals.
• In neurons, the relational primitives are represented by the dendritic trees. The dendritic trees associate the abstractions represented by each synapse they contain with each other because the electrotonic potential generated by activation of each of those synapses can only flow inside the dendritic segments and branches that contain those synapses. In other words, the dendritic trees relate the abstractions represented by the synapses they contain to each other in terms of the spatiotemporal relations between the flow of electrotonic potentials generated by activation of those synapses within the dendritic trees. Those relations are computed by the dendritic integration process. The mathematical equations that describe dendritic integration will be presented after we discuss neurons in a little more detail.
• Because the intension of abstraction has occurrent and abeyant components, we can represent relations that vary over time and space. The brain is particularly good at representing things in terms of how they relate in time and space in different contexts, because it represents the spatiotemporal relations between the abstractions that represent things directly, and it does so in the context those things occur in.
• Indirect representation represents everything by reference, so its representation of abstraction is based on reference semantics. Indirect representations rely on an observer to define the context of abstraction. The observer defines the context in terms of the domain of the representation. To preserve consistency, the context of abstraction in indirect representation can only represent a limited size domain. In other words, an observer must limit the size of the domain to that which can be represented consistently in terms of indirect representation. Technically, the domain must not include 'proper classes'. In other words, the domain must be consistently representable as a set. It must be possible to include the domain in a set. That precludes the ability to represent the universe indirectly. No set can include the universe, because the universe is everything. In order for everything to include itself by reference, there would have to be something larger than everything that could represent everything. But by the definition of 'everything', there cannot be anything larger. Practical constraints limit the size of domains in indirect representation even further. Indirect representations represent the relations between their members in terms of observer defined relations or functions. In order to represent a domain consistently and completely, the language used to represent those relations or functions must be general enough and flexible enough to represent everything inside the selected domain consistently and completely. In indirect representation, language size and complexity tend to increase with domain size. The number of relations that must be represented also tends to increase exponentially with increasing domain size. Thus, to restrict size and complexity to manageable levels, it is usually necessary to limit the domain size. All of logic is based on reference semantics.
The same real world object can be represented by multiple logical variables and sentence letters by reference. Sets and set based operators are defined in terms of reference semantics. The same object can be included in a set multiple times, and the same object can be a member of multiple sets. Functions are defined in terms of reference semantics. The arguments of a function are references for the objects they represent. Mathematics is an indirect representation of information that uses functional abstraction to abstract indirect representation functionally.
• Direct representation represents everything by value, so its representation of abstraction is based on value semantics. Direct representation represents each abstraction in terms of its relation to the abstractions that compose it by value. The context of abstraction is the intension of abstraction. It is the set of abstractions and abstraction relations that compose the definition of the containing abstraction's extension, or instance. Thus, context is represented by value in direct representation. In addition, the context is not observer defined. It is part of the ontology of abstraction. The ontology of abstraction is part of the topology of existence. That topology is conserved by value semantics. It is conserved because energy and dark energy quanta can only be composed of quanta that already exist. Temporal and mirror-temporal field bosons are created before any higher order forms of energy. That ensures the order of cause and effect, both in energy and dark energy. In other words, it ensures that cause precedes effect in time and mirror-time. Remember that the arrow of time is reversed in dark energy. That means, from the hypothetical perspective of an observer in normal space, if that observer could observe dark energy and dark matter, effect would appear to precede cause. However, that is only an artifact of indirect representation. In direct representation, cause always precedes effect, because all higher order compositions of energy include a temporal field component, and all higher order compositions of dark energy include a mirror-temporal field component. The temporal and mirror-temporal field components have higher energy levels than all higher order types of energy, so they always function as energy or dark energy sources relative to all higher order forms of energy or dark energy. That means the current of change is driven bottom up by change in the temporal and mirror-temporal fields.
That is the cause of the 'arrow of time' and the cause of the second law of thermodynamics. This also means the universe contains two present moments: the observable present, and the unobservable present in the hidden mirror sector. The arrows of time of the observable and unobservable presents move in opposite temporal directions. Both move away from the singularity in time, up until the point when they enter a black hole in the observable universe, or a white hole in the hidden mirror sector. Once the present moment enters a black or white hole, its arrow of time is reversed and the local present moves back towards the singularity. When all present moments reach the singularity, the current instance of existence ends, and the next quantum state transition causes the next big bang and the beginning of the next instance of existence. Direct logic is based on value semantics. Direct sets and direct set based operators are defined in terms of value semantics. Direct abstraction is based on hierarchical composition by value. All things that exist are composed of preexisting things. All of physical existence is ultimately composed of energy quanta. The different kinds of energy are themselves hierarchically composed of lower order, lower dimensional energy forms. The lowest order forms of energy are the temporal energy field quanta. The lowest order forms of dark energy are the anti-temporal dark energy field quanta.
• Universal representation represents things by value and by reference, so its representation of abstraction is based on both value and reference semantics. Its representation of abstraction is the first order abstraction of abstraction itself. Just as everything is ultimately represented as a number or collection of numbers in mathematics, everything is represented as an abstraction or a collection of abstractions in universal representation. The representation of abstraction is far more powerful than that of numbers. Numbers can only represent things that can be quantified. Abstractions can represent anything we can perceive, conceive, think, feel, dream, or imagine. Biological neurons represent abstractions and concepts. Their topology is a bijection of the ontology of concepts and abstractions. Universal abstraction represents the context of abstraction directly, using value semantics. In other words, it inherits the representation of context from the direct representation of direct abstraction. It then extends direct representation by adding the ability to represent things indirectly, using reference semantics. That extension is represented by each neuron's axonal tree and its presynaptic terminal connections to other neurons' dendritic trees. Each synapse functions as a reference from the neuron whose axon contains its presynaptic terminal. When the synapse fires, it signals the detection of an instance of the abstraction represented by the presynaptic terminal's neuron in the current context of abstraction. In other words, the activated synapse represents one of the presynaptic terminal neuron's abstractions. The abstractions represented by a neuron all represent the same concept in different contexts of abstraction. The only difference between their representations is their contextual dependencies. That means the parts of their representation that are not contextually dependent can be shared.
The branching structure of the dendritic tree can then be used to factor out the shared and unshared components of the representation. From a static perspective (i.e., from the perspective of a synapse that is not firing), one presynaptic terminal can represent multiple abstractions in multiple contexts of abstraction. Each abstraction instance is represented by the spatiotemporal correlation and integration of electrotonic potentials from multiple activated synapses. However, from a dynamic perspective (i.e., while the synapse is firing), it only represents one particular instance of abstraction in one context of abstraction. The precise firing time of each synapse relative to the firing times of the other synapses in the set of activated synapses that represent each abstraction instance is a critical part of its knowledge representation. Dendritic integration functions as a kind of spatiotemporal demultiplexer. It uses the branching topology of the dendritic trees, combined with synaptic placement and the relative firing times of related sets of synapses, to demultiplex and identify the specific context of abstraction represented by each set of firing synapses. That allows synapses (and parts of dendritic trees) to be reused in the representation of multiple abstractions. That is another reason neurons can compress the representation of the equivalent of so much information into such a small volume.
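A toy sketch of the demultiplexing idea, assuming (purely for illustration) that a context of abstraction can be modeled as a set of synapses plus a coincidence window; real dendritic integration is continuous and far richer:

```python
# Sketch: the same synapses are reused by several contexts of
# abstraction; the specific context is identified by WHICH synapses
# fire and WHEN they fire relative to each other.

CONTEXTS = {
    # context name -> set of synapses that must fire together
    "ctx_A": {"s1", "s2", "s3"},
    "ctx_B": {"s2", "s3", "s4"},  # s2 and s3 are reused by both contexts
}

def demultiplex(firings, window_ms=2.0):
    """firings: dict of synapse -> firing time (ms). Return matched contexts."""
    matched = []
    for name, synapses in CONTEXTS.items():
        if synapses <= firings.keys():                   # all synapses fired
            times = [firings[s] for s in synapses]
            if max(times) - min(times) <= window_ms:     # temporally correlated
                matched.append(name)
    return matched

print(demultiplex({"s1": 0.0, "s2": 0.5, "s3": 1.0}))  # → ['ctx_A']
print(demultiplex({"s2": 0.0, "s3": 0.5, "s4": 5.0}))  # → [] (too spread out)
```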
• This image is licensed under the Creative Commons Attribution 2.5 Generic license. GFP-expressing pyramidal cell in mouse cortex. From: Wei-Chung Allen Lee, Hayden Huang, Guoping Feng, Joshua R. Sanes, Emery N. Brown, Peter T. So, Elly Nedivi, "Dynamic Remodeling of Dendritic Arbors in GABAergic Interneurons of Adult Visual Cortex", PLoS Biology Vol. 4, No. 2, e29. DOI: 10.1371/journal.pbio.0040029. Please note that this section of this presentation is primarily focused on an explanation of neural knowledge representation. It is not intended to be a full course in neural science. To that end, I am intentionally avoiding getting too deeply mired in low level neurophysiological, neurochemical, and electrochemical implementation details. I am also avoiding enumerating all the different types of neurons, their locations, and their functions. Nor am I going to go into a detailed explanation of ion channels, ion channel kinetics, ion pumps, synaptic vesicles, second messengers, dendritic membrane electrochemistry, specific neurotransmitters and neuromodulators, or the electrochemical basis of action potential generation and propagation. These topics are all fascinating, but for the most part, they function as the biological plumbing required to build a biological abstraction processor. Studying them is like studying resistors, capacitors, and transistors to try to understand how a computer program works. None of that is of much relevance to how neurons represent and process knowledge. While different neural regions and different types of neurons do compute different things, all neurons compute and represent knowledge in the same way.
Different types of neurons represent different functions primarily because of the different synaptic connections they make and because of their unique dendritic tree geometries, not because they compute or represent things using different algorithms, procedures, or mechanisms. If you are interested in the aforementioned neuroscience topics, they are all well covered in standard neural science texts. Recommended texts include: Principles of Neural Science, edited by Kandel, Schwartz, and Jessell; Cognitive Neuroscience by Gazzaniga, Ivry, and Mangun; and The Synaptic Organization of the Brain, by Shepherd. Please refer to those texts or any good graduate level neural science course if further information about any of those topics is desired. More advanced texts include: The Theoretical Foundation of Dendritic Function, by Segev, Rinzel, and Shepherd; The Handbook of Brain Theory and Neural Networks, edited by Arbib; Methods in Neuronal Modeling, edited by Koch and Segev; and The Neuron Book, by Carnevale and Hines. When simulating biological neural systems in software or hardware, most of those low level mechanisms can be abstracted away and replaced with much simpler mathematical equivalents. In other words, a lot of the complexity in the brain is there because it is needed to make a biological processor work, not because it is required by the neural knowledge representation or computational model. It is possible to create working simulations of neural processing with far less complexity. Simulation of all the 'biological plumbing' is only necessary until we know which details are and are not relevant to neural knowledge representation, neural computation, and the representation of memory.
• Neurons are multi-functional devices. They combine the functions of signal conditioning, signal processing, knowledge representation, computation, memory, and communication all in a single unit. Specialized neurons also function as sensory transducers, output controllers (motor neurons), and signal generators. Neurons are not directly comparable to computers because they don't compute, represent, or store information. Neurons are abstraction processors, not information processors. They use an entirely different model of computation, knowledge representation, and memory. Just like energy quanta, neurons combine the representation of state, relation, and process in a single functional unit. In other words, neurons do not have separate representations for the notions of state (data), relation (function), or process (processor and program). In addition to representing states, relations, and processes, neurons also function as memory, communication port, communication channel, signal conditioner, signal processor, and, in some specialized cases, signal transducer. Since neurons don't represent information, they have no need to encode information, decode it, store it, or compute anything in terms of it. In fact, they don't even use any kind of bivalent (or binary) code. They don't represent things in terms of true or false, ones and zeros, or even exists and doesn't exist. Instead, they represent abstractions directly by value, and indirectly by reference from an abstraction by value. Each abstraction represents a superposition of states, relations, and processes in a particular context of abstraction. That explains why it has been so difficult to understand how the brain represents, processes, stores, and encodes information. We've been trying to explain something the brain does not do. The brain is not an information processor, and it does not represent or compute information or store data. The brain is an abstraction processor.
It represents, stores, and computes abstractions directly. Even the most fundamental information processing categories like information, data, bit, function, program, instruction, and memory have no distinct counterparts in the brain. Some of those categories don't exist at all, and others are combined in every neuron. Trying to understand the function of the brain in information processing terms is misleading and often counterproductive. When you think about it, the brain has to operate the way it does. As observers, something inside our brain has to represent nature for us indirectly. Otherwise, we wouldn't be able to represent anything indirectly, nor would we be able to communicate with each other using information. On the other hand, the brain is part of physical existence, and nothing in physical existence can represent itself indirectly at the physical level. Nature only represents physical existence directly. Since the brain is part of nature, the brain has to operate in terms of direct representation. In other words, it has to use direct representation to represent indirect representation. That is the only way indirect representation can exist. In other words, even inside the brain, at the physical level, nature represents and implements the indirect component of its knowledge representation in terms of direct representation. Thus the brain only represents and computes indirect representations indirectly. From the perspective of an observer, the brain appears to represent things indirectly because we communicate indirectly and we can observe the results of its indirect representation. Alas, just because the brain can produce indirect representations does not mean it must use indirect representation internally to produce them. At the physical level, computers also use direct representation to represent information.
The representation of every bit of information in a computer is ultimately represented by the interaction of quantum field energy patterns that compose the matter that composes the computer hardware, the electricity that operates it, and the computer’s storage media. So, when you get right down to it, even computers use direct representation to represent indirect representation. The difference between a computer and the brain is that we design computers to represent information directly, whereas the brain uses direct representation to represent indirect representation indirectly. In other words, the brain’s knowledge representation and computation model are based on direct representation, whereas a computer’s knowledge representation and computation model are based on the indirect representation of information. Since the brain’s knowledge representation and computational model are based on the same representation that composes physical existence itself, the brain is much more efficient at representing physical existence than any computer can be. On the other hand, since the operation of computers is logically based on the representation of information, computers are much more efficient at representing and computing information than the brain can be.
• Dendritic trees have a wide variety of branching structures. Some types have thousands of fine branches, while others have only five or ten. Dendrites typically branch profusely, getting thinner with each branching, and extending their farthest branches a few hundred microns from the soma. This is where the majority of synaptic input to the neuron occurs. Typical neurons have 1,000 to 10,000 synapses, but some have many more. For example, Purkinje cells can have up to 200,000 synapses. Functionally, the dendritic trees process synaptic inputs and represent and compute abstraction intensions via dendritic integration. They also function as the static component of long term memory, and they perform some signal conditioning and signal processing. The dendrites perform almost all the computation and intensional knowledge representation in neurons. Dendritic integration uses spatiotemporal correlation of the electrotonic potentials generated from synaptic activations to determine the context of abstraction directly. Note that the determination is implicit, by value, not explicit, by reference. In other words, a neuron doesn't need to identify the particular context of abstraction by reference, because it already has a direct reference from it via its synaptic connection. The neuron doesn't care what the abstraction is or what it represents. It doesn't need to represent the meaning of what was computed. All it needs is a notification that an instance of whatever was computed currently exists. In other words, there is no information encoded in the signal. It is just a signal. All that matters is that the receiving neuron gets a spatiotemporally correlated set of synaptic activations from the synapses in its receptive field. If synaptic activations in a neuron's receptive field are spatiotemporally correlated, then the intension of one of the abstractions represented by that neuron is detected, and it will fire its axon to represent that detection.
In other words, receipt of a spatiotemporally correlated set of synaptic activation potentials will generate a spatiotemporally correlated set of electrotonic potentials in the dendritic tree, and dendritic integration will integrate those potentials over space and time to determine whether they satisfy one of the intensional abstraction equations that represent one of the contexts of abstraction represented by the neuron that received the synaptic activations. If one of the abstraction equations is satisfied, the electrotonic potentials will sum to the neuron's firing threshold and the neuron will fire its axon to broadcast that fact.
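A minimal leaky-summation sketch of the threshold behavior described above; the EPSP size, decay constant, and threshold are invented for illustration, not measured values:

```python
import math

# Sketch: spatiotemporally correlated EPSPs sum toward the firing
# threshold; the same total input spread out in time decays away
# between spikes and the neuron stays silent.

def integrate(spike_times, epsp=0.45, tau=5.0, threshold=1.0):
    """Return the first time the summed potential crosses threshold, or None."""
    v, t_prev = 0.0, 0.0
    for t in sorted(spike_times):
        v *= math.exp(-(t - t_prev) / tau)  # passive (electrotonic) decay
        v += epsp                           # add one excitatory potential
        t_prev = t
        if v >= threshold:
            return t
    return None

print(integrate([0.0, 1.0, 2.0]))    # → 2.0 (correlated inputs fire the neuron)
print(integrate([0.0, 20.0, 40.0]))  # → None (uncorrelated inputs decay away)
```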
• The logical intension of concept C1 is defined by abstractions A1 thru A5. The logical intension of abstraction A1 is defined by presynaptic references from concepts C2, C3, C4, C5, and C6. Concept references C2 and C3 are shared with the logical intension of abstraction A2. Concept references C3 and C4 are shared with the logical intension of abstraction A3. If concept references C2, C3, C4, C5, and C6 receive activation in the correct timeframes relative to each other, abstraction A1 is selected and causes instantiation of concept C1. Similarly, the logical intension of abstraction A2 is defined by references from concepts C2, C3, C7, and C8. The only concept reference unique to abstraction A2 is C7. Abstraction A2 is instantiated if concepts C2, C3, C7, and C8 are activated in the correct time frames relative to each other. Abstraction A3 shows an even more efficient case of concept reference reuse. A3 contains no unshared concept references. A3 is uniquely determined by the activation of concept references C3, C4, C8, C9, C10, C12, and C13. Thus the representation of abstraction A3 comes for free!
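The worked example above can be sketched in a few lines (the timing constraints are omitted for brevity, so activation here is all-or-none; only A1 through A3 are modeled):

```python
# Sketch of the slide's example: each abstraction of concept C1 is a
# set of presynaptic concept references; an abstraction is selected
# when all of its references are activated, and shared references
# (C2, C3, C4, C8) are reused across abstractions.

ABSTRACTIONS = {
    "A1": {"C2", "C3", "C4", "C5", "C6"},
    "A2": {"C2", "C3", "C7", "C8"},
    "A3": {"C3", "C4", "C8", "C9", "C10", "C12", "C13"},
}

def select(active):
    """Return the abstractions of C1 whose references are all activated."""
    return [a for a, refs in ABSTRACTIONS.items() if refs <= active]

print(select({"C2", "C3", "C4", "C5", "C6"}))  # → ['A1'], so C1 is instantiated
print(select({"C2", "C3"}))                    # → [] (shared refs alone are not enough)
```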
• Postsynaptic potentials are electrotonic potentials. Electrotonic potentials are caused by relatively slow moving passive currents carried by ionic charges. Those ionic currents spread passively through the dendritic tree. The electrotonic potential travels via electrotonic spread. Electrotonic spread is caused by:
- Electrostatic attraction between ions with opposite polarities
- Electrostatic repulsion between ions with the same polarities
- Diffusion caused by the presence of an ionic concentration gradient
Electrotonic potential attenuates exponentially with distance from the source, with a characteristic length constant. The rate of electrotonic conduction varies inversely with the product of dendritic axial resistance and dendritic membrane capacitance. The cable equation describes the passive spread of electrotonic potential in dendritic membranes. Electrotonic potentials can sum spatially and temporally. Spatial summation occurs when a postsynaptic potential from one activated postsynaptic terminal spreads electrotonically to a second postsynaptic terminal such that it is present at the location of the second postsynaptic terminal at the time the second terminal fires, thereby creating an additional postsynaptic potential that sums with the one already present at its location. In other words, the electrotonic potentials only sum if they are spatiotemporally correlated. The sum is linear if both postsynaptic potentials are excitatory, and nonlinear if one or both are inhibitory. Excitatory potentials sum linearly because they are proportional to the local passive membrane resistance, whereas inhibitory potentials sum nonlinearly because they cause a localized increase in membrane conductance. The increase in conductance reduces the local membrane resistance, thereby acting as a shunt that decreases the potential of any EPSPs that happen to be present.
In other words, inhibition is due to the combined effect of the hyperpolarization from the IPSP itself plus the associated reduction in the potential of any EPSPs already present. Thus IPSPs tend to dominate EPSPs when both are present. Temporal summation is a gradual increase in overall charge due to repeated postsynaptic terminal activations at the same location. Because the ionic charge enters at one location and loses intensity as it spreads to others, electrotonic spread is generally a graded response in passive dendrites. However, it is also possible for dendrites to have voltage-gated ion channels in their membranes. In that case, if the electrotonic potential exceeds the threshold required to open the voltage-gated ion channels, the influx or efflux of ions can amplify the electrotonic potential and cause a dendritic spike. This is particularly useful in neurons that don't have axons because it allows part of the dendritic tree to act as an axon. It is also useful in very large dendritic trees, because the sections of the tree with voltage-gated ion channels can act like an inline signal amplifier, boosting the signal from distal dendritic branches. Normally, you would expect distal EPSPs to have less of an effect on the magnitude of EPSP summation than proximal ones, but this is generally not the case. Dendritic trees taper with increasing distance from the soma, and as they get thinner, their axial resistance increases. The increase in axial resistance with increasing distance creates larger EPSPs at distal locations. The end result is that individual EPSPs tend to sum linearly, independent of their distance from the soma.
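The linear-versus-shunting summation described above can be illustrated with a deliberately simple toy model. This is an assumption-laden sketch, not a biophysical simulation: coincident EPSPs add linearly, while each coincident IPSP contributes a shunting conductance that divides down whatever excitatory potential is present, which is what makes the combined sum nonlinear.

```python
# Toy sketch of spatial summation with shunting inhibition.
# Assumptions: EPSPs sum linearly; each IPSP adds an open shunting
# conductance that lowers local membrane resistance and divides down the
# excitatory sum. All numbers and the shunt model are illustrative only.

def summate(epsps_mv, ipsp_shunt_conductances=()):
    """Combine spatiotemporally correlated EPSPs (mV) with shunting IPSPs."""
    excitatory = sum(epsps_mv)                 # linear excitatory sum
    # Each open inhibitory conductance acts as a shunt on the potential.
    shunt = 1.0 + sum(ipsp_shunt_conductances)
    return excitatory / shunt

# Two correlated EPSPs sum linearly ...
assert summate([3.0, 2.0]) == 5.0
# ... but a coincident IPSP shunts the sum nonlinearly.
assert summate([3.0, 2.0], ipsp_shunt_conductances=[1.0]) == 2.5
```

The divisive rather than subtractive effect of the IPSP here is why, as the text notes, IPSPs tend to dominate EPSPs when both are present.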
• A transmitter substance acts as a neuromodulator when it alters the synaptic action of other neural inputs by means other than itself producing direct excitation or inhibition. In other words, it acts by means other than generating new EPSPs or IPSPs. Neuromodulators can act presynaptically or postsynaptically. Presynaptically, they can change the amount of neurotransmitter released from a specific presynaptic terminal. This can be accomplished by way of autoreceptors, which, when bound by a transmitter substance, modulate further release of that substance, or it can occur when one transmitter substance modulates the release of another. Norepinephrine at some synapses in the autonomic nervous system can inhibit its further release. When enkephalin is released into sympathetic ganglia by preganglionic neural input, it can inhibit the release of acetylcholine within that ganglion. In some cases, the postsynaptic potential elicited by a given transmitter substance can be altered by, or contingent upon, the postsynaptic action of a neuromodulator. For example, a brief exposure to dopamine released synaptically into sympathetic ganglia enhances the muscarinic hypopolarizations induced by acetylcholine for hours, even though the dopamine causes no change in the membrane potential or resistance of the postsynaptic cell. Similar effects of dopamine have also been described in the caudate nucleus and hippocampus. A shorter potentiation of both excitatory and inhibitory responses of Purkinje cells in the cerebellar cortex is induced by norepinephrine released by axons originating in the locus ceruleus. It has been suggested that these longer-lasting changes in neural activity produced by neuromodulators may play a role in slowly developing and enduring behavioral changes such as learning and memory.
The effects of muscarinic antagonist drugs on learning, and of monoamines on sleep/waking and learning, may indicate that this suggestion has some credence. Postsynaptic neuromodulators can change the gain of dendritic integration. In effect, they can change the weights of groups of PSPs. Weight changes can be caused if voltage-gated or chemically gated ion channels are opened along a portion of the dendritic membrane while dendritic integration is taking place. Neuromodulators open chemically gated ion channels in the dendritic cell membrane to adjust the local gain of dendritic integration, thereby making it easier or harder for the electrotonic potential to sum to the trigger threshold and cause generation of an action potential. If the gain is decreased, the relative PSP contribution by afferent synapses is reduced. Conversely, if the gain is increased, the relative PSP contribution by afferent synapses is increased. If the gain is reduced, more synapses have to fire, and thus more abstractions must be activated, or existing abstractions must contribute more PSP per activated synapse to generate an action potential. To do this, they can increase their degree of spatiotemporal correlation. Thus reducing the gain makes neural processing more specific, or more finely tuned. Conversely, increasing the gain makes neural processing less picky, or more coarse; i.e., it makes it easier to generate an action potential, so less spatiotemporal correlation is required, or fewer intensional abstractions must be present than would be the case prior to the increase in gain. Additionally, the scope of gain adjustments should be a function of where in the dendritic tree neuromodulation occurs. Neuromodulation can only affect the PSP gain from synapses located along the branches of the dendritic tree more distal than the location of the neuromodulatory synapse. Thus, the closer a neuromodulatory synapse is to the soma, the broader its effect can be.
• The axon transmits and distributes the results of neural processing synaptically in a point-to-point fashion. The result of neural processing is a set of synaptic activations in which each synapse has a particular activation time relative to the others in the set. Each of those synaptic activations represents an event that denotes the occurrence of an instance of one of the abstractions, or the concept, represented by the neuron that fired its axon. The precise timing of each synaptic activation, combined with the precise timing of synaptic activations from related neurons, is used by each neuron in the axon's receptive field to discriminate between contexts of abstraction in the current context of thought.
• The dendritic trees process synaptic inputs and use dendritic integration to compute the intension of abstraction. When each synapse fires, it opens ion channels in the dendritic membrane. In turn, the resulting ion flux creates an electrotonic potential that flows down the dendritic tree to the neuron's axon hillock. Some synapses create excitatory electrotonic potentials and others create inhibitory electrotonic potentials. Excitatory electrotonic potentials increase the likelihood the neuron will fire its axon, while inhibitory electrotonic potentials decrease that likelihood. I like to think of excitatory synapses as providing arguments for the occurrence of an abstraction and inhibitory synapses as providing arguments against it. In effect, each activated synapse functions as a term in a fuzzy spatiotemporal logic equation. Those terms represent the occurrence of abstraction instances. From our perspective, some of them represent abstractions of states or objects and others represent abstractions of relations, but from the neuron's perspective there is no difference. Both states and relations are represented as abstractions. The neuron's branching dendritic tree controls the relation between electrotonic potentials because it determines the dendritic integration path. Hence it controls how the terms in the equation it represents relate to each other in space and time. In particular, it determines how the flow of electrotonic potentials integrates and sums through space and time. At the physical level, knowledge is represented in terms of the spatiotemporal relations between the flow of electrotonic potentials within the dendritic trees. In effect, the brain represents things in terms of how they relate to each other in space and time.
This is as true of the relation between things in the external world as it is of the relation between abstractions inside the cerebral cortex, and of the relation between the flows of electrotonic potentials inside each neuron's dendritic membrane. From that perspective, the neuron is a spacetime contextual relation processor. Everything exists in spacetime. What better representation of existence could there be than to represent it directly in terms of how things relate to each other in spacetime? What could possibly be more general, or more domain independent, than that? The dendritic tree can also function as a signal conditioner and signal modulator. It can also perform limited signal processing by directly computing the sum and difference of electrotonic potentials. That allows it to represent first order spatiotemporal relations between higher order abstractions implicitly, without the need for synaptic representation. The distance between each activated synapse and the relative time each is activated are important factors in dendritic integration. Thus the dendritic trees form part of memory, part of the knowledge representation, and the most important part of neural processing. The dendritic trees compute the intensions of the abstractions represented by their neuron. This is a far cry from how an information processing system works. First, there is no neural code. The firing of a synapse does not represent a binary bit of information. It is not part of an encoded message. It does not represent information at all. It directly represents the occurrence of an abstraction. The brain is an abstraction processor. It processes and computes abstractions directly. The firing of a single synapse can represent an abstraction of any complexity, any dimension, and any logical order represented by the neuron that caused it to fire.
For example, the firing of a single synapse could represent the result of computing a million-term fuzzy spatiotemporal logic equation. To do so it would only need to represent the result of a sixth order abstraction. For example, if an abstraction were composed of a ten-term equation, and each of those terms were composed of a ten-term equation, then a six-layer feed-forward neural network would suffice to compute a million-term fuzzy spatiotemporal logic equation. In effect, this provides logarithmic compression of computation and representation. The representation does not need to be decompressed before computation can take place. Computation occurs directly on the compressed representation. Consequently, representational compression also causes computational compression. The abstraction could represent anything: the perception of an individual quale, an emotion, a command to contract a muscle, or any abstraction of any order, dimension, or level of abstraction, in any context represented by its part of the neural network. All abstractions are represented and processed the same way using the same dendritic integration algorithm. The function represented and computed by a neuron is determined by its location within the network, its dendritic and axonal geometry, its synaptic connection patterns, and the current synaptic and dendritic membrane properties, not by any domain-specific program. In this way, both the representation and the computational algorithm are completely independent of what they compute. The beauty of the first order abstraction of abstraction is that an abstraction can abstract the representation of anything, yet it contains, requires, and needs no a priori domain-specific knowledge of anything. It is the logical converse of functional representation. Whereas functional representation and computation are domain specific, abstraction representation and computation are domain independent.
Therefore the first order abstraction of abstraction is the universal of representation and the universal of computation. Instead of being programmed, the brain is a learning machine. That explains how the brain can represent things so flexibly. Second, synaptic activation timing is just as important as synaptic activation. If synapses fire at the wrong times relative to each other, their electrotonic potentials won't be spatiotemporally correlated and they won't cause activation of the neuron's axon. Neural computations can represent logic, sequencing, and spatial and temporal relations all at the same time in the same computation. Third, neural computation is always context dependent. Abstractions are always computed in context. Fourth, the computation is simultaneously direct and indirect. Intensional processing is direct via dendritic integration, whereas extensional processing is indirect via axon activation. Dendritic integration computes abstractions directly by value, whereas axons and their presynaptic terminals represent the existence of previously computed abstraction instances by reference. Fifth, neural processing solves the hard problem of first person direct conscious experience. A neuron represents the relation between meaning and existence directly. Its intension represents the definition and meaning of an abstraction in some context of abstraction at some level of abstraction, and its extension directly represents the occurrence of an instance of that abstraction. Since the representation is direct, and since the neuron is part of our body, when a neuron fires its axon, we experience the meaning of the abstraction represented by that neuron from the first person direct perspective. When the neuron fires its axon, we experience the activation of the abstraction it represents as the occurrence of that abstraction within our current context of thought.
For the same reason, false perceptions and random memories can be triggered by electrical stimulation of neurons. Sixth, there is no need for a homunculus (a miniature observer) inside the brain to interpret the meaning of anything. Neurons represent meaning directly, not indirectly. With universal representation, both the cognitive science problem of Ryle's regress and the homunculus problem disappear. There is no infinite regress of homunculi or interpretation. Both are artifacts of thinking in terms of information. This is backed up by the fact that there is no central location in the brain that all neurons are connected to.
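The logarithmic-compression arithmetic from the six-layer example earlier can be checked directly. This is only an illustrative sketch using the slide's own numbers (ten-term equations, six layers): the number of first-order terms covered grows as fan_in**layers, so depth grows only logarithmically with the number of terms a single synaptic firing can stand for.

```python
# Checking the fan-in arithmetic of "logarithmic compression".
# Assumption: every abstraction combines a fixed number of terms (fan_in),
# and each term is itself the result of a lower-layer abstraction.

def leaf_terms(fan_in, layers):
    """First-order terms covered by a hierarchy of fixed fan-in equations."""
    return fan_in ** layers

def layers_needed(terms, fan_in):
    """Depth required to cover at least `terms` leaves (integer arithmetic)."""
    layers, covered = 0, 1
    while covered < terms:
        covered *= fan_in
        layers += 1
    return layers

assert leaf_terms(10, 6) == 1_000_000      # six layers of ten-term equations
assert layers_needed(1_000_000, 10) == 6   # depth is logarithmic in terms
```

Each synapse at the top of such a hierarchy transmits only the single result of its subtree, which is the compression claim being made.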
• Dendritic integration can be represented mathematically by representing each dendritic segment in the dendritic trees as a four dimensional direction vector dn in R4. (A dendritic segment models a linear portion of a dendritic branch, represented by a four dimensional direction vector whose spatial components point towards the soma.) Any dendritic branch can be approximated to any desired degree of precision as a series of piecewise connected linear dendritic segments, where the head of one vector is located at the tail of the next. This representation ignores the diameter and cross sectional area of each dendritic segment because their variation along the dendritic tree primarily serves to linearize the summation of EPSPs. Mathematically, we can just assume the EPSPs sum linearly. In other words, we don't need to represent the biological mechanisms that cause linearization. The cross sectional area also affects electrotonic conduction speed by changing membrane capacitance, but we can represent segment electrotonic conduction velocity directly in each dendritic segment if desired, without the need to repeatedly compute it from its biological cause at runtime. The direction vectors represent the spatial position and spatial path traced by the dendritic tree. All dendritic direction vectors should point in the direction of the soma. Each synaptic activation potential is represented as a four dimensional vector sm in R4. The head of sm is located at its synapse's position on a dendritic segment. The magnitude of sm is a function of the synaptic weight. Its orientation represents the neuromodulation of the dendritic segment, and whether the synaptic potential is inhibitory or excitatory. Presynaptic inhibition can be simulated by setting the orientation of sm normal to the orientation of the dendritic tree segment the synapse is located on. In that case, the dot product of sm and dn is zero, so sm makes no contribution to the PSP.
In this representation, since we use the orientation of sm to simulate the effects of neuromodulation, we can simulate a change in dendritic integration gain (either positive or negative) by changing the orientation of sm relative to dn. If the neuromodulation is at full gain and the synaptic potential is excitatory, the orientation of sm is parallel to the dendritic segment it is part of, facing towards the soma. In this case, the cosine of the angle between sm and dn = 1 and the projection of sm along dn = ||sm||. If the neuromodulation is at full gain and the synaptic potential is inhibitory, its orientation is parallel to the dendritic segment it is part of, facing away from the soma. In this case, the cosine of the angle between sm and dn = -1 and the projection of sm along dn = -||sm||. Equation (1) then represents the electrotonic potential contribution of all synaptic activation vectors sm along dendritic segment dn, and equation (2) gives the projection of sm along dn. It is necessary to set the time component of each dn and sm vector to the current integration time while performing the summations. At the end of each dendritic segment dn, we need to take the resultant electrotonic potential vector EP, whose tail is at the end of the dendritic segment, and reorient it so it points along the next dendritic segment dn+1. Therefore, we need to preserve its magnitude and sign, but change its orientation to point along dn+1. The function required to do this is given by equation (3). Equation (4) integrates the electrotonic potential contributions from each activated synapse sm along dendritic segment dn. When the dendritic integration reaches the end of a dendritic segment, its electrotonic potential is reoriented along the direction of the next dendritic segment dn+1, using equation (5) to initialize the electrotonic potential at the start of dn+1.
This cycle is then repeated over all dendritic segments along the integration path until the integration of the electrotonic potential reaches the neuron's axon hillock. If the integrated sum of the electrotonic potential exceeds the neuron's activation threshold, it fires its axon. The firing of the axon represents instantiation of the extension of the abstraction and concept that was just dendritically integrated. In other words, the neuron functions as an abstraction detector. When the axon fires, it signals that the relational conditions that represent the intension and meaning of the abstraction its neuron is representing are satisfied in the current context of abstraction, and therefore, at the time it fires, the abstraction exists. Hence, the neuron relates the meaning of an abstraction to its existence. It relates its intension to its extension. This provides a direct link between the representation of meaning and existence. It explains how the brain grounds semantic meaning. Since the extensions of all the abstractions and the concept represented by each neuron are represented by the firing of the same axon, there must be some way for the neuron to distinguish between the different contexts of abstraction. That function is also performed by dendritic integration. Dendritic integration functions as both a spatiotemporal correlator of synaptic potentials and a spatiotemporal demultiplexer. It functions as a spatiotemporal demultiplexer because only those synaptic potentials that are spatiotemporally correlated will sum to the threshold required to initiate generation of the axon's action potential. In other words, synaptic potentials that fire outside the current context of thought are effectively ignored because their potentials are not spatiotemporally correlated, so they don't sum effectively enough to cause the neuron to fire its axon. Of course, that also explains why we think in context.
It also explains our ability to generalize from the specific to the general, our ability to classify objects based on their similarities and differences, our extraordinary ability to identify patterns despite missing or noisy data, our ability to think in terms of analogies, our ability to think associatively, and our ability to focus on the current context of thought and ignore irrelevant inputs. If the distribution of synapses within neurons and the neocortical and subcortical patterns of neural connectivity are also considered, it explains the rest of human cognitive characteristics and capabilities. For example, the organization of the brain's cortical layers, cortical minicolumns, cortical columns, cortical maps, and cortical subsystems and their neural connectivity are all consistent with what would be expected of a representational and computational system based on universal representation. Unfortunately, space and time do not permit me to go into all those details in this presentation. From a mathematical perspective, this algorithm is equivalent to projecting four dimensional abstraction extensions onto four dimensional abstraction intensions, and all calculations are performed in four dimensions. Yet from a representational perspective, the intensional representation at any point in the integration can represent the integration of concept extensions each of which may have a different number of dimensions; i.e., each of which may represent a concept of a different order. In effect, dendritic integration provides a mathematical transform that automatically translates the abstract representation of an arbitrarily mixed dimensional, arbitrarily mixed higher order abstract state space to a four dimensional representation.
All computations can then be performed in the same dimensionally independent way using the same algorithm, irrespective of the dimensionality of the abstract terms and abstract relations that represent the set of nth order relational conditions that define the abstraction's intension. Hence, everything can be represented in the minimal number of dimensions required to represent it, without the need to consider the dimensionality of the representation in the computation. Therefore, a single algorithm can represent the computation of an abstraction that can represent anything or any combination of things, or the computation of any set of relations among any number of things in the universe (subject to the number of synapses and the abstraction instances they represent in the dendritic trees). The computations are completely abstract. They have no domain limitations. They can represent literally anything at any combination of any number of levels of abstraction. For example, they could represent the meaning of mathematical equations of arbitrary complexity, word sequences, letter sequences, relations between words, the meaning of an entire sentence, spatiotemporal relationships between objects in a visual field at any level of abstraction, relationships between feelings and the words that express them, etc. Think about it. The same algorithm can compute anything in the universe. A universal of computation. It doesn't get any more elegant than that. It is also combinatorially less complex than computations performed using information, while using combinatorially less space and combinatorially less processing time. It is mathematically complete and impossible to make inconsistent, so there are no domain limitations and there is no need to represent, compute, or enforce ontological consistency, because there is only one ontology and it is the same as the representation and the computational model. There is nothing to be inconsistent with.
All possible things can be learned given a large enough network, enough computational time, the relevant prerequisite knowledge and the relevant experience or analysis. No prior knowledge of the sequence or pattern of inputs is required. The representation is also extremely robust. Destroy an abstraction or a neuron and the neuron that represents the next best match for its representation automatically takes over. It is also robust in the sense that there is no dependence on the sequence or pattern of inputs, so there is no need to predict it to program the system. The system perceives whatever inputs it is given in whatever sequence they are presented, interprets them epistemically, and learns to understand the meaning of its inputs automatically.
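The dendritic-integration cycle described above can be sketched in code. Since equations (1) through (5) themselves are not reproduced in these notes, the exact forms here are assumptions reconstructed from the surrounding description: each synaptic vector sm contributes its signed projection onto the local segment direction dn (excitatory parallel gives +||sm||, inhibitory antiparallel gives -||sm||, normal gives 0), contributions sum per segment, and the running potential is carried segment to segment toward the soma with magnitude and sign preserved.

```python
import math

# Hedged reconstruction of the dendritic-integration equations described
# in the text. Eqs (1)-(5) are not shown in these notes, so these forms
# are assumptions: project each synaptic vector onto its segment, sum per
# segment, and carry the signed running potential toward the soma.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(s, d):
    """Signed projection of synaptic vector s along segment direction d."""
    return dot(s, d) / math.sqrt(dot(d, d))

def integrate(segments):
    """segments: list of (d_n, [s_m, ...]) pairs ordered distal-to-soma.
    Returns the signed electrotonic potential arriving at the axon hillock."""
    potential = 0.0
    for d_n, synapses in segments:
        # Eqs (1)/(4): sum the projected synaptic contributions on d_n.
        potential += sum(project(s, d_n) for s in synapses)
        # Eqs (3)/(5): reorientation along the next segment preserves
        # magnitude and sign, so the scalar running sum carries forward.
    return potential

# A two-segment toy tree (4D vectors: x, y, z, t). The first synapse is
# parallel to its segment (full excitatory contribution, +||s|| = sqrt(8));
# the second is normal to its segment (presynaptic inhibition: contributes 0).
tree = [
    ((1.0, 0.0, 0.0, 1.0), [(2.0, 0.0, 0.0, 2.0)]),
    ((0.0, 1.0, 0.0, 1.0), [(3.0, 0.0, 0.0, 0.0)]),
]
assert abs(integrate(tree) - math.sqrt(8.0)) < 1e-9
```

Comparing the final sum against a firing threshold would then model the abstraction-detector behavior: the axon fires only when the correlated contributions along the integration path are large enough.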
• Maximizing Representational Power
Increase dendritic tree span – This allows the neuron to represent a larger volume of spatiotemporal correlations, and thus more spatiotemporal diversity. It also provides room for more postsynaptic terminals, or alternatively allows the neuron to represent larger spatiotemporal distances between them.
Increase the number of dendritic branches – This allows the neuron to increase representational and computational compression, because it can factor postsynaptic terminals and dendritic branch segments into a larger number of reusable sets of shared representational components.
Increase the number of synapses – This allows the neuron to represent more complex abstractions, because it can represent abstraction equations that contain larger numbers of terms.
Reduce electrotonic conduction speed – This increases the range and volume of spatiotemporal correlations the neuron can represent. It also increases the time differential between synaptic events, thereby increasing the potential precision of spatiotemporal correlation.
Maximizing Computational Speed
Increase electrotonic conduction speed – This reduces the amount of time required to compute an abstraction. The cost is reduced scope and precision.
Reduce dendritic tree size – This reduces electrotonic conduction distance, thereby reducing spatiotemporal integration time. The cost is reduced representational and computational power.
Increase axon diameter – This increases axon conduction velocity at the expense of increased size and power.
Reduce axon length – This reduces the time required for action potential propagation. The cost is that the abstraction or concept computed by the neuron can only be used locally.
• Cortical Layers
The human cerebral cortex is organized into cell layers. The number of layers and the details of their functional organization vary throughout the cortex. The most typical form of neocortex contains six layers, numbered from the outer surface (pia mater) of the cortex to the white matter.
Layer I is an acellular layer called the molecular layer. It is occupied by the dendrites of cells located deeper in the cortex and by axons that travel through or form connections in this layer.
Layer II is composed mainly of small spherical cells called granule cells and is therefore called the external granule cell layer. Granule cells are inhibitory.
Layer III contains a variety of cell types, many of which are pyramid shaped; the neurons located deeper in layer III are typically larger than those located more superficially. Layer III is called the external pyramidal cell layer. Pyramidal cells are excitatory.
Layer IV, like layer II, is made up primarily of granule cells and is called the internal granule cell layer. These cells are inhibitory. The cells in this layer are the main target of sensory abstractions from the thalamus.
Layer V, the internal pyramidal cell layer, contains mainly pyramid-shaped cells that are typically larger than those in layer III. These cells are excitatory.
Layer VI is a fairly heterogeneous layer of neurons and is thus called the polymorphic or multiform layer. It blends into the white matter that forms the deep limit of the cortex and carries axons to and from the cortex.
Layers I thru III contain the apical dendrites of neurons that have their cell bodies in layers V and VI, while layers V and VI contain the basal dendrites of neurons with cell bodies in layers III and IV. It is important to note that the cortical layers are not simply stacked one over the other; there exist characteristic connections between different layers and neuron types, which span the entire thickness of the cortex.
These cortical microcircuits are grouped into cortical columns and minicolumns, the latter of which have been proposed to be the basic functional units of cortex.[13] In 1957, Vernon Mountcastle showed that the functional properties of the cortex change abruptly between laterally adjacent points but are continuous in the direction perpendicular to the surface. Later work provided evidence of the presence of functionally distinct cortical columns in the visual cortex (Hubel and Wiesel, 1959),[14] the auditory cortex, and the associative cortex. Note that this regular neural organization is likely to be a common feature throughout most if not all of the neocortex. Once again this implies the use of a single type of representation and computation throughout the neocortex.
Cortical Minicolumns
Cortical columns contain groups of cortical minicolumns. A cortical minicolumn is a vertical column through the cortical layers of the brain, comprising perhaps 80–120 neurons, except in the primate primary visual cortex (V1), where there are typically more than twice that number. There are about 2×10^8 minicolumns in humans.[1] From calculations, the diameter of a minicolumn is about 28–40 µm.
Facts:
- Cells in a 50 µm minicolumn all have the same receptive field; adjacent minicolumns may have very different fields (Jones, 2000).
- Bundles of downward-projecting axons in minicolumns are ≈10 µm in diameter, with periodicity and density similar to those within the cortex, but not necessarily coincident (DeFelipe, 1990).
- Thalamic input (one axon) reaches 100–300 minicolumns.
- The number of fibres in the corpus callosum is 2–5×10^8 (Cook 1984, Houzel 1999), perhaps related to the number of minicolumns.
Interpretation:
The excitatory cells in cortical minicolumns would all have the same receptive field if they compete with each other for representation.
They should represent small spatiotemporal variations of the same types of abstractions at the same level of abstraction in similar contexts. The neurons in a minicolumn should be connected to each other with lateral inhibition circuits that ensure the neuron with the best existing representation of its abstraction type is activated when the neurons in the minicolumn process their input field. It is important to note that these are only 'functional' units by virtue of their shared receptive field. The term functional should not be interpreted to mean that the neurons in a minicolumn use a different type of representation or computational algorithm than those in other minicolumns. Each neuron in a minicolumn actually represents a superposition of state, relation, and process in a particular context of abstraction. In addition, it will represent the preexisting long- and short-term memory of the abstractions it represents. The thalamic axon should control the flow of sensory input to the cortical minicolumns. One of the roles of the thalamus is to enable sleep and consciousness. When we are asleep or unconscious, it can prevent sensory input from activating the neurons in the minicolumns, thereby allowing us to sleep. The thalamic axon can have a large efferent field because pinpoint control of which neurons are enabled for input is not necessary to switch between consciousness and unconsciousness.
Cortical Columns
The columnar functional organization, as originally framed by Vernon Mountcastle, suggests that neurons that are horizontally more than 0.5 mm (500 µm) from each other do not have overlapping sensory receptive fields, and other experiments give similar results: 200–800 µm (Buxhoeveden 2002, Hubel 1977, Leise 1990, etc.).
Various estimates suggest there are 50 to 100 cortical minicolumns in a hypercolumn, each comprising around 80 neurons.

An important distinction is that the columnar organization is functional by definition and reflects the local connectivity of the cerebral cortex. Connections "up" and "down" within the thickness of the cortex are much denser than connections that spread from side to side.

David Hubel and Torsten Wiesel followed up on Mountcastle's discoveries in the somatic sensory cortex with their own studies in vision. Among the discoveries that earned them the 1981 Nobel Prize[3] was the finding that there are cortical columns in vision as well, and that neighboring columns are also related in function in terms of the orientation of lines that evoke the maximal discharge. Hubel and Wiesel followed up on their own studies with work demonstrating the impact of environmental changes on cortical organization.

Interpretation

Cortical columns represent the next higher level of neocortical organization after minicolumns. Cortical columns should represent related collections of minicolumns. Cortical columns should also be found throughout the neocortex. However, the higher the level of abstraction represented by the region involved, the more difficult it will be to identify them functionally, because they will relate things to each other at higher and higher levels of abstraction. In other words, the relations themselves become more and more abstract. They become higher and higher order abstract functionals. The cells in a cortical column should represent the same general type of abstraction, but over a graded range of different parameters. For example, if a minicolumn represented oriented line segments with a particular orientation, the cortical column that contains it might represent oriented line segments with a range of different orientations.
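The counts quoted above can be sanity-checked with quick arithmetic. The figures are the slide's own estimates; taking the midpoints of the quoted ranges is my assumption.

```python
# Sanity check on the slide's figures (midpoints of quoted ranges assumed).
minicolumns = 2e8              # ~2×10^8 minicolumns in the human cortex
neurons_per_minicolumn = 100   # midpoint of the 80-120 range quoted above
total_neurons = minicolumns * neurons_per_minicolumn
print(f"{total_neurons:.0e} cortical neurons")
# ~2×10^10, the same order of magnitude as the ~1.6×10^10 neurons
# usually cited for the human neocortex.

minis_per_column = 75          # midpoint of the 50-100 estimate
columns = minicolumns / minis_per_column
print(f"{columns:.1e} cortical columns")   # roughly 2.7 million
```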
In addition, the neurons in each minicolumn should be connected via axon collaterals to other minicolumns in the same cortical column. For example, if a minicolumn represents an oriented line segment, a column might represent an abstraction of a curve composed of a sequence of oriented line segments. To support this, there should be feedforward axon collaterals from the neurons in each minicolumn to the neurons in the other minicolumns in each column. There should also be about ten times as many feedback connections from the axons in each column to the dendrites in each minicolumn. That would allow us to recognize the outlines and boundaries of figures bottom up, from lower levels of abstraction to higher levels of abstraction, yet also be able to consciously examine and fill in perceptual details of a particular object in short term memory at lower levels of detail.

Cortical Maps

Each part of the brain projects in an orderly fashion onto the next, thereby creating topographical maps. One of the most striking features of the organization of most sensory systems is that the peripheral receptive surface – the retina of the eye, the cochlea of the inner ear, and the surface of the skin – is represented topographically throughout successive stages of neural processing. For example, neighboring groups of cells in the retina project to neighboring groups of cells in the visual portion of the thalamus, which in turn project to neighboring regions of the visual cortex. In this way, an orderly neural map of higher and higher order abstractions of the receptive surface is created at each successive level in the brain. Such neural maps reflect not only the position of receptors but also their density, since density of innervation determines the degree of sensitivity to sensory stimuli. For example, the central region of the retina, the fovea, has the highest density of receptors and thus affords the greatest visual acuity.
Correspondingly, in the visual cortex the areas devoted to abstraction of the abstractions that represent the fovea’s receptive surface are greater than the areas representing the peripheral portion of the retina, where the density of receptors (and visual acuity) is lower.

In the motor system, neurons that regulate particular body parts are clustered together to form a motor map. The most well-defined motor map is in the primary motor cortex. The motor map, like the sensory maps, does not represent every part of the body equally. The extent of the representation of an individual body part reflects the density of innervation of that part, and thus the fineness of control required for movements of that part.

Interpretation

Cortical connections and neurons are organized in topographic maps because that organization allows the brain to implicitly represent the spatiotemporal relations between neighboring regions in the sensory receptor input fields and motor control output fields. In other words, by organizing its representation topographically, the brain needs no additional explicit neural representation of spatial relations. In effect, the spatial relations are represented for free. This allows the propagation rate of electrotonic potentials within each neuron's dendritic trees, combined with action potential conduction rates and the existing topographic neural maps, to automatically represent the spatiotemporal relations between neighboring portions of the receptive and control fields.
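The "spatial relations for free" idea above can be made concrete with a toy sketch: when the wiring preserves the order of the receptive surface, adjacency in the map simply is adjacency on the surface, so no coordinates ever need to be stored. The data and names here are illustrative assumptions.

```python
# Toy topographic projection: map unit i listens to receptor i.
# Because the wiring preserves order, "adjacent in the map" just IS
# "adjacent on the receptive surface" -- no coordinates are stored.
receptors = list("ABCDEFGH")                    # a 1-D receptive surface
map_units = {i: receptors[i] for i in range(len(receptors))}

def neighbors(i):
    """Spatial neighbors of receptor i, recovered purely from map order."""
    return [map_units[j] for j in (i - 1, i + 1) if 0 <= j < len(map_units)]

print(neighbors(3))   # ['C', 'E'] -- D's neighbors, never stored explicitly
```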
When combined with hierarchical organization of axonal projections between related topographic maps, this allows the cortex to represent spatiotemporal relations between neighboring receptive fields and neighboring control fields at progressively higher levels of abstraction, without the need for any additional explicit representation of those spatiotemporal relations.

Functional Systems are Hierarchically Organized

In most brain systems, abstraction processing is organized hierarchically into levels of abstraction. In the visual system, for example, each neuron in the lateral geniculate nucleus (within the thalamus) is responsive to a spot of light in a particular region of the visual field. The axons of several adjacent thalamic neurons converge on cells in the primary visual cortex, where each cell fires only when a particular arrangement of presynaptic cells is active. For example, a cortical cell may fire only when the inputs signal a bar of light with a particular orientation. In turn, cells in the primary visual cortex converge on individual cells in the association cortex. These cells respond even more selectively. For example, they may respond only to a bar of light that is moving in a certain direction. Abstractions are processed both serially and in parallel through as many as 35 or more cortical regions dedicated to the processing of visual information. At very advanced stages of visual abstraction processing in the cortex, individual neurons are responsive to highly complex visual abstractions, such as the shape of a face.

Interpretation

Hierarchical organization into increasing levels of abstraction in the feed-forward direction is precisely what would be required by universal representation. It is also precisely what occurs in the brain. The reverse order occurs in the motor cortices.
In the motor control system, neurons that represent higher order abstractions receive inputs first, and they project to regions that represent lower and lower order abstractions in the feed forward direction until the neurons in the primary motor cortex are reached. Whereas the primary sensory areas of cortex are the initial site of cortical processing of sensory abstractions, the primary motor cortex is the final site in the cortex for processing motor commands. Higher order motor areas, located in front of the primary motor cortex in the frontal lobe, compute more and more specific coordinated sequences of movement signals that are conveyed to the primary motor cortex for implementation. Once again, this is precisely the connectivity pattern and processing order that universal representation requires.

Cortical Subsystems

To be continued…

Cortical Hemispheres

To be continued…

As far as I can tell, universal representation provides both the ontological and computational foundation for the biological representation of all perception, emotion, thought, meaning, and consciousness. It solves both the neural binding problem and the hard problem of consciousness. It explains how the brain performs semantic grounding. It also explains the operation, structure, capabilities, and organization of the brain. It is the basis for the neural representation of thought, meaning, memory, awareness, and consciousness.
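The visual hierarchy described above (spot detectors converging on bar detectors, which converge on moving-bar detectors) can be caricatured in a few lines of code. Every function name, stage, and stimulus here is an illustrative assumption; the point is only that selectivity grows with depth because each stage fires on a particular arrangement of the stage below.

```python
import numpy as np

def spot_cells(image):
    """Stage 1 (LGN-like): one 'spot' detector per pixel."""
    return image > 0

def bar_cell(spots, row):
    """Stage 2 (V1-like): fires if a whole horizontal bar of spot
    cells in `row` is active."""
    return bool(spots[row].all())

def moving_bar_cell(frame_a, frame_b, row):
    """Stage 3 (association-like): fires only if the bar in `row`
    is present in frame A and has shifted down one row by frame B."""
    return bar_cell(spot_cells(frame_a), row) and \
           bar_cell(spot_cells(frame_b), row + 1)

a = np.zeros((3, 4)); a[0] = 1      # horizontal bar in the top row
b = np.zeros((3, 4)); b[1] = 1      # same bar, one row lower
print(moving_bar_cell(a, b, 0))     # True: a downward-moving bar
```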
• More memory details coming in Revision 2
• In the visual system, information about motion, depth, form, and color is processed in many different visual areas and organized into at least two different cortical pathways. How can such distributed processing lead to cohesive perceptions? When we see a red ball, we combine into one perception the sensations of color (red), form (round), and solidity (ball). We can equally well combine red with a square box, a pot, a shirt, or a car. The possible combinations of elements are so great that the existence of an individual feature-detecting cell for each combination is improbable. As far as I know, this problem has not previously been solved satisfactorily. The brain solves the neural binding problem using universal representation.
• Visual object recognition occurs in two sequential neural processing phases. The first phase involves preattentive, bottom up, hierarchical abstraction and parallel composition of the qualia, features, and properties of objects. This process focuses on the parallel abstraction and bottom up hierarchical composition of an object's global texture and features, and on the distinction between figure and ground, by abstracting in parallel the useful elementary properties of the scene: position, color, orientation, size, and direction of movement. At this point, variation in a simple property may be discerned as a portion of an object edge, border, or contour, but complex differences in combinations of properties are not detected. The neuron that represents each feature or property at its level of abstraction fires its axon when it detects the feature or property it represents. The neurons that fire then provide feed forward, topographically mapped synaptic inputs to convergent processing in nearby modal association cortices. The modal association cortices associate multiple low level features at higher order levels of abstraction. After neurons fire an action potential, they are easier to fire again in the short term because it takes time for the electrotonic currents in their dendritic trees to dissipate. This form of short term memory is called neural priming. It lasts about 100 milliseconds. This initial grouping of items continues at higher and higher orders of abstraction until the level of conscious awareness is reached. It is then followed by a top down attentive process of hierarchical abstraction decomposition that takes advantage of the bottom up neural priming to re-fire the neurons that composed the lower level features and properties of the objects that are relevant in the current context of thought at higher levels of abstraction.
The attentive process uses recurrent feedback connections from the objects and relations consciously identified at higher levels of abstraction to sequentially re-activate the lower level object and relation abstractions previously fired in bottom up processing that we are consciously attending to. It thereby consciously fills in the lower level, fine grained perceptual details and qualia relevant to the higher level objects in the current context of thought. The lower level details that are reactivated include the shared representation of the low level image position, motion, and color qualia in layer 4 of the primary visual cortex. The recurrent activation of the low level abstractions can then cause repeated cycles of activation, which refresh our short term conscious memory and extend its duration. In that way, our conscious short term memory continues to be refreshed until we consciously attend to something else. Once our focus of attention moves on, the previous recurrent neural activations cease firing and the perceptual details fade from short term memory. An important point in this solution of the neural binding problem is that there is no master neural map or neural area that synthesizes the convergent information from all relevant neural areas involved in the representation of higher level abstractions. This is important because, anatomically, there is no single cortical area to which all other cortical areas report exclusively, either in the visual system or in any other neural system. There is no part of the brain looking at the visual image provided by a master area. There is no listener in the brain listening to a master auditory area. There is no observer inside the brain that monitors, interprets the meaning of, or understands objects and their relations at higher and higher levels of abstraction.
Instead, lower level abstractions previously activated by pre-attentive, parallel, hierarchical bottom up abstraction composition are reactivated by top down neural abstraction decomposition to fill in lower level, finer grained features, details, and qualia during conscious attentive processing. The neural binding process is very efficient. It allows the brain to recognize objects in near real time. Each neuron represents the relation between the existence and meaning of the abstractions and concepts it represents at its own level of abstraction. The representation of knowledge, meaning, and memory is thereby distributed throughout the neocortex.
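The priming-and-refresh cycle described above can be sketched as a toy timing model. The linear decay, the ~100 ms constant taken from the text, and both function names are illustrative assumptions, not a biophysical model.

```python
# Toy model of the priming/refresh cycle: residual electrotonic
# current left after an action potential decays over roughly 100 ms,
# making the neuron easier to re-fire; top-down attention re-fires it
# before the trace fades, refreshing short term memory.

PRIME_DECAY_MS = 100.0   # priming window from the text; decay shape assumed

def trace(ms_since_fire):
    """Residual priming remaining `ms_since_fire` ms after a spike."""
    return max(0.0, 1.0 - ms_since_fire / PRIME_DECAY_MS)

def refreshed(attention_interval_ms, cycles):
    """True if top-down feedback re-fires the neuron before the
    priming trace hits zero on every cycle."""
    return all(trace(attention_interval_ms) > 0 for _ in range(cycles))

print(refreshed(60, 5))    # attended: percept stays in short term memory
print(refreshed(150, 5))   # attention moved on: the percept fades
```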
• Generally speaking, bottom up feed forward modal perceptual integration stages should go from the representation of very small, very localized percepts that occur close together in space and time in the receptive field, and progress towards the representation of larger and larger spatiotemporal regions that are more and more abstract. Feed forward divergent processing breaks the perceptual stream into different types for type dependent processing; for example, the use of separate processing pathways for shape and motion processing.

I hypothesize that the hippocampus functions as the organ of declarative conceptualization. (Other LTP processes are responsible for reflexive conceptualization. Reflexive conceptualization occurs in the cerebellum and midbrain, outside the neocortex. It is responsible for learning memories we can’t consciously recall declaratively, like how to play a piano, or how to ride a bicycle, walk, and talk.)

Fundamentally, the hippocampus operates like a concept factory. Its afferent inputs are activated abstractions from high order and/or multimodal sensory and association regions of the neocortex. The hippocampus combines afferent projections from these abstractions into new higher-order concepts, which it later stores or “fixes” in long term memory in those same high order multimodal sensory and association cortices. The weak intermediate forms of long term memory are probably fixed in the CA1 region of the hippocampus, the subiculum, presubiculum, parasubiculum, and entorhinal cortex. During subsequent cycles of operation, the hippocampus can recombine the high order concepts created during previous cycles with other high order neocortical concepts to create concepts at ever higher levels of abstraction. Evidence supporting this hypothesis follows:

A unique property of the hippocampus is its strategic location as a convergence region for abstraction projections from nearly all higher-order cortical areas, as well as from brainstem nuclei.
In humans, every sensory modality projects to the hippocampus (via the entorhinal cortex), and all have reciprocal projections back. The entorhinal cortex is the major source of projections to the hippocampus. The major cortical inputs originate in the adjacent parahippocampal gyrus and in the perirhinal cortex. These regions in turn receive projections from several polysensory associational regions in the frontal, temporal, and parietal lobes. The entorhinal cortex also receives cortical inputs directly from other presumed polysensory regions, and one unimodal input from the olfactory bulb. The sources of hippocampal afferents and the reciprocal destinations of its efferents, combined with universal representation, uniquely place the hippocampus in an ideal anatomical and neural circuit position to function as the organ of declarative conceptualization. The afferent connections provide just the abstraction extensions required for combination into higher-order abstractions and concepts. The reciprocal efferent feedback connections allow the hippocampus to store the results of previous conceptualization cycles back in the relevant high-order association cortices, and to form concepts at ever higher levels of abstraction by reusing the newly formed concepts in a sequence of recombinant conceptualization operations. The reciprocal feedback connections also subserve top down short-term memory and conscious recall.

This explains several well known phenomena. Learning progresses from low levels of abstraction to higher levels of abstraction. It also explains why we learn efficiently in terms of similarities and differences. At a smaller scale, it even explains the horn shape of the hippocampus. At the tip of Ammon’s horn, the hippocampus represents abstractions that have a short spatiotemporal span. Those abstractions represent things that are small and that are precisely located in space and time.
As the horn gets wider, the abstractions represent progressively larger regions of space and time. They relate things in larger and larger space-time contexts. The role of the hippocampus as the organ of declarative conceptualization is also consistent with the increase in the size of the subiculum that occurs throughout phylogeny and culminates in humans. We need our greatly enlarged subiculum to be able to represent concepts at high levels of abstraction.
• Keep in mind that each hemisphere of the brain contains its own hippocampus. The hippocampus in each cerebral hemisphere represents and processes abstractions from the contralateral half of the body. Also, it is important to note that the hippocampus only needs to get involved in the formation and storage of new declarative memories. Overall, the brain is a hierarchical abstraction processor. Processing proceeds from the bottom up initially. If an abstract representation of what we perceive or are thinking about already exists, it is used, and processing does not need to continue to higher levels of abstraction. In this sense, the brain is like a hierarchical event processor, except that the events are represented by abstractions. Thus, the brain automatically represents, computes, and processes events at the lowest level of abstraction possible. The hippocampus doesn’t need to determine whether a representation for an abstraction exists or not. If neural activity reaches the hippocampus, it means there is no existing abstract representation of that activity. At that point, if there is sufficient affective saliency, the hippocampus associates the previously unassociated afferent abstractions, thereby creating a new abstraction, a new concept, and a new neural binding. Of course, this means one reason we must sleep is that it is the only time we can store new long term memories back in the cerebral cortex. We can’t do that while we are awake because it would interfere with abstraction processing during consciousness.
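The "lowest level of abstraction possible" rule above can be sketched as a lookup-then-conceptualize loop. This is a deliberate caricature: the dictionary, the saliency threshold, and the concept-naming scheme are all invented for illustration.

```python
# Caricature of the rule above: input is handled by an existing
# abstraction if one matches; only unmatched activity reaches the
# "hippocampus", which binds the parts into a new concept (given
# enough affective saliency) and stores it for next time.

known_concepts = {("red", "round"): "ball"}

def recognize(features, saliency):
    key = tuple(sorted(features))
    if key in known_concepts:            # existing abstraction: stop here
        return known_concepts[key]
    if saliency > 0.5:                   # novel and salient: conceptualize
        concept = "+".join(key)          # stand-in for a new neural binding
        known_concepts[key] = concept
        return concept
    return None                          # novel but not salient: no memory

print(recognize({"round", "red"}, saliency=0.1))   # known: 'ball'
print(recognize({"red", "square"}, saliency=0.9))  # new concept is created
print(recognize({"red", "square"}, saliency=0.1))  # now recalled directly
```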
The thalamus needs to be able to replay spatiotemporal abstraction contexts from intermediate term memory back to the cerebral cortex undisturbed, in order to allow the cerebral cortex to create the new synaptic connections required to integrate new long term memories with the existing memories in the cerebral cortex. Each new neural binding is then transferred from short term working memory in the hippocampus to intermediate term memory in the surrounding limbic structures (i.e., in the presubiculum, parasubiculum, thalamus, and entorhinal cortex). I suspect the thalamus gates perceptual input to and from the cortex. During consciousness, it allows perceptual input to reach the cortex for processing. While we are asleep, it prevents most perceptual inputs (all except olfaction) from reaching the cortex. During REM sleep, it probably plays a role in vectoring feedback axonal projections from intermediate term memory in the subiculum and entorhinal cortex back to their original sources in the cerebral cortex, where they are stored as new long term memories.
• In more detail, neural binding for visual object recognition occurs as follows. There are two primary visual processing pathways in the human brain, called the magnocellular (M) and parvocellular (P) pathways. The M and P pathways originate in the M and P cells of the retina, respectively. The M pathway is specialized for the detection and processing of motion and depth of field. The P pathway is specialized for the detection and processing of position, color, and form. Feed forward connections for the M and P pathways go from the retina to the lateral geniculate nucleus (LGN), and then on to a shared representation in layer 4 of the primary visual cortex. From there, the M pathway feeds forward to the thick stripe region of the extrastriate area in the secondary visual cortex, from there to the middle temporal cortex, and then to the posterior parietal cortex in the dorsal parietal pathway. From layer 4 of the primary visual cortex, the P pathway feeds forward to layers 2 and 3 of the primary visual cortex, which abstract low level areas of color, form, and oriented line segments; then to the interstripe and thin stripe regions of the extrastriate area in the secondary visual cortex, which abstract depth of field, larger color areas, and textures; then to area V4, which abstracts higher level color regions and objects; and then to the inferior temporal cortex, which associates multiple higher order visual features, thereby performing the neural binding that identifies specific visual objects. Consequently, bottom up feed-forward abstraction processing starts in the retina and LGN, then proceeds to layer 4 of the primary visual cortex. Bottom up processing then splits into the dorsal parietal pathway for the processing of motion and depth of field, and the ventral inferior temporal pathway for the conscious identification of objects.
From both pathways, attentive processing then proceeds top down in serial fashion until it reaches the shared low level abstractions of visual percepts, and the qualia associated with percept position, color, form, motion, and depth of field, in layer 4 of the primary visual cortex. It is interesting to note that due to the shared representation of position and motion information at low levels of abstraction, our species is unable to consciously perceive or represent points or objects in the visual field that are simultaneously stationary and moving. Obviously, that makes understanding particle-wave duality and the superposition of position and momentum in quantum mechanics counterintuitive and difficult. Those phenomena cannot be experienced or observed directly. They can only be understood abstractly.
• From: Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335–346. Available online at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.7929&rep=rep1&type=pdf
• Universal representation also solves the hard problem of semantic grounding. In other words, the direct relation between abstraction intension and extension directly causes semantic grounding. At the lowest level of abstraction, energy quanta in the environment are detected and transduced by the receptive fields of our sensory neurons. The sensory neurons represent a direct abstraction of the spatiotemporal relations between the energy quanta that strike their receptive surface. Semantic meaning is then grounded transitively, through successively higher levels of abstraction.
• * From: “Facing Up to the Problem of Consciousness”, David Chalmers, published in the Journal of Consciousness Studies 2(3):200–19, 1995. Available online at: http://consc.net/papers/facing.html
• We have to get rid of the dependence on an observer because it is consciousness that creates and performs observation on behalf of the observer. In a very real sense, it is consciousness that creates an observer’s experience. Without consciousness there is no experience, no observation, and no observer.
• The Yoneda Lemma is a theorem in category theory that allows us to create a higher order, fully faithful representation of a category by covariantly embedding that category in a higher order functor space of sets. The functors in the functor space provide a fully faithful higher order representation of the embedded category. Instead of operating over the embedded category itself, we can then operate over the higher order functor space of sets.

The vast generalization of the Yoneda Lemma generalizes this idea in several ways. First, it involves a generalization of category theory in which we eliminate the category theory distinction between the representation of objects and arrows. Instead of using different representations for objects and arrows, we create a single representational primitive that functions as the superposition of state, relation, and process. Instead of representing things indirectly in terms of a collection of states and their relations, they are represented directly in terms of the composition of hierarchy objects, where each hierarchy object represents an arbitrarily high order superposition of states, relations, and processes. In addition, instead of representing composition by reference, we restrict the composition of hierarchy objects to composition by value. We then compose a direct representation from the hierarchical composition of hierarchy objects by value. Next, instead of embedding an indirect representation of a category in a covariant functor space of sets, we embed an abstraction in a higher order covariant space of higher order abstractions. This is done simply by allowing abstractions to compose higher order abstractions; i.e., we allow hierarchy objects to compose other hierarchy objects hierarchically. The net result is that we end up with a collection of higher order abstractions composed from lower order abstractions by value.
This process can then be repeated to create fully faithful representations at arbitrarily high levels of abstraction. Because there is only one primitive of representation, there is no need to make any decisions about how to represent anything. In other words, there is no need to decide whether to represent something as an object or as an arrow. Since all computation is performed using the same computational universal, and that universal is represented by the same representational primitive, there is also no need to make any decisions about which algorithm to use to compute anything. The combination of these techniques removes all representational domain limitations, and it eliminates the need for any a priori knowledge about what is to be represented. The exclusive use of value semantics and direct representation completely avoids incompleteness and inconsistency. It creates a representation that is always complete and consistent. No ontological consistency rules are required, so none have to be created or represented. No omniscient, omnipotent observer needs to ensure the representation remains consistent and complete. The representation is defined in such a way that it is impossible for it to become incomplete or inconsistent. The same thing is true of the direct representation of the universe at large.
• Surprise! The quantum computer that computes and is existence is not an information processor. It does not represent existence in terms of information. Information is a kind of indirect representation. The quantum computer that computes existence does so in terms of direct representation. Direct representation is the logical converse of indirect representation. Direct representation is a superior alternative to the current representation of numbers, mathematics, computation, and information. Direct representation can represent all of existence completely and consistently using a single domain, a single ontology, and a single mathematical process. What's more, that domain, ontology, and process are all represented by the direct representation of energy quanta. By contrast, indirect representation and information can only represent existence incompletely and inconsistently. Using information, we are forced to break the representation of existence up into large numbers of limited size domains, and to use many different representations or languages, many different ontologies, and many different functions and processes. In order to get a complete representation of existence, we then have to find ways to integrate all those domains consistently, translate all the different representations and languages into some common form, and integrate all the ontologies, functions, and processes consistently. It is impossible to do all of that consistently and completely using mathematics; Gödel's Incompleteness theorems prove it. Direct representation is not observer or measurement dependent. It is not subject to Heisenberg uncertainty, undecidability, halting problems, incompleteness, or inconsistency. It avoids all of those limitations by representing numbers and mathematics in terms of direct representation instead of indirect representation. We derive information about physical existence from physical existence. Physical existence is not composed of information or bits.
It is composed of discrete units of energy we call energy quanta. Energy quanta are represented directly by their existence. We can represent energy quanta using information, and we can use energy quanta as a carrier for information, but that does not mean energy quanta and information are the same thing. Information is only an indirect representation of energy quanta. Information about a thing α is not the same thing as the physical existence of α. Some people think the lowest level of physical existence must be composed of information because they think the binary representation of information is the simplest, most fundamental, most general representation possible. They think nothing could be simpler than binary representation, so nature must base its own existence on it. They think the simplest possible representation of a quantum state is binary, with two states; i.e., quantum exists (yes || no). That belief is based on the fallacy of nonexistence. It is based on the logical fallacy of incomplete information. It is only correct in the context of indirect representation. There is more to representation than indirect representation. In direct representation, the simplest possible representation is unary, with one state: quantum exists. In particular, the simplest energy quantum is a single closed quantum loop. That quantum loop is the direct relation between the finite and the infinite. It is the event horizon that separates the finite from the infinite singularity. It is a temporal or anti-temporal field energy quantum. Direct representation is far less complex than the representation of information, yet it can represent and compute the existence of literally everything in the universe. Direct representation uses unary encoding instead of binary encoding. It uses a single invariant ontology to represent all of existence, instead of a large number of special purpose, observer defined, domain limited ontologies. It is observer independent instead of observer dependent.
It is based on value semantics instead of reference semantics. It is complete and consistent over the entire universe. It uses one universal process to represent everything in the universe consistently and completely.
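As a toy illustration of the unary-versus-binary contrast drawn above (this sketch is my addition, not part of the deck; the function names are hypothetical): unary encoding represents a count n with n identical marks of a single symbol type, while binary needs two distinguishable symbol types plus positional meaning.

```python
# Toy contrast between unary and binary encoding. Illustrative only:
# "direct representation" in the deck is a physical claim, not this code.

def unary_encode(n: int) -> str:
    """Represent n as n identical marks: one symbol type, no positional meaning."""
    return "|" * n

def binary_encode(n: int) -> str:
    """Represent n positionally with two symbol types, 0 and 1."""
    return bin(n)[2:]

print(unary_encode(5))   # |||||
print(binary_encode(5))  # 101
```

The unary string grows linearly with n while the binary string grows logarithmically; the deck's argument is that nature pays the unary cost in exchange for having only one state to represent.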
• Physical existence exists independent of any observation of it. For example, the dark side of the moon exists, even though it cannot be observed from Earth. Planets in distant galaxies exist even though they are too distant for our current technology to observe them. A tree that falls in a forest when nobody is there to hear it still makes pressure waves in the atmosphere as it falls. It is true a falling tree does not make a sound if no observer is present, but that is only because sound is a neural abstraction created by an observer's brain when the atmospheric pressure waves created by the falling tree cause vibrations in our eardrums. In turn, those vibrations cause deflections of the hair bundles in our inner ear. The hair cells that compose those hair bundles function like biological strain gauges. The hair cells have different lengths that resonate at particular vibration frequencies. In other words, they are tuned to detect specific frequencies. They can measure motions of atomic dimensions. Mechanical stimulation of the hair cell opens ion channels in the cell's plasma membrane; the current flowing through those channels alters the cell's membrane potential, which in turn modulates the release of synaptic neurotransmitter. That neurotransmitter traverses the synaptic cleft and causes activation of the neurons that synapse on the hair cells. The neurons that process those synaptic inputs then use a specialization of direct representation called universal representation to create the abstract representation we experience as 'sound'. The perception of sound is typical of how the brain perceives, represents, and processes all other senses. The main difference between different senses is that the brain uses different types of sensory receptor cells to detect and transduce different types of energy. In addition, different types of sensory cells synapse on different sets of neurons. 
Thus, different senses are processed by different parts of the brain&apos;s neural network. To create information, an observer must decide or determine how to measure existence. An observer must also decide or determine how to encode the measurement in terms of information. An observer must interpret the meaning of the measurement, and decide or determine how to represent that meaning syntactically. To be of any value, an observer must be able to store and recall any information produced. Ultimately, that means there must be some way to create persistent patterns of energy (diaphora de re) in physical existence to represent the bits of information. For example, bits may be represented by magnetic domains on a hard disk. They may be represented by pits in the reflective optical surface of a DVD or CD. In other words, there must be some way to record and/or transmit the information via some physical data carrier. There must also be some way to retrieve the information from the data carrier. That means there must be some way to access and measure the energy patterns that physically represent the stored bits of information. There can be no information without physical representation. There can be no disembodied or non-physical information. All of these facts make information observer and observation dependent. That is a problem because existence existed long before any observers. That means physical existence cannot be composed of information. It also means information is an anthropocentric form of representation.
• The universe expanded from a singularity in the Big Bang. There couldn't be any observers in the singularity because there is nothing in a singularity to compose the existence of an observer from. A singularity is infinite. There are no differences in a singularity that can distinguish between different states. Thus the singularity is state-less. There can be no information without states. The singularity has no quantum state. There is no space in a singularity. Therefore, an observer would have no space to exist in. Without space and quantum states there can be no fermions, no leptons, no atoms, and no molecules in a singularity. Therefore there are no particles that could compose the existence of an observer. Without an observer, there can be no observation, and without observation there can be no information. A singularity has no quantifiable order or structure capable of encoding or representing meaningful information. It can't even represent state, let alone meaning. Meaningless information is no information at all. It is just energy. Energy and information are not the same thing. If they were, there would be no point in distinguishing the two concepts. Since no observers and no meaningful information could exist at the beginning of the universe, the beginning of the universe could not have been composed of information. If the universe wasn't composed of information at its beginning, why should it be now? II. Independent of that argument, we also know it is impossible for physical existence to be composed of information because special relativity tells us the propagation speed of information is limited by the finite speed of light. Since it takes a finite amount of time for information to travel from any point in space to an observer's location, it is impossible for any observer to have any information about the current quantum state of existence any distance from the observer. 
An observer can only have information about the past quantum state of existence. Even then, the observer's information about the past is temporally inconsistent, because the information that reaches the observer at any given instant in time took an amount of time proportional to the distance between the observer and the observed event to reach the observer. Since different events are located at different distances from an observer, the information an observer receives about each of those events at any observer proper time t represents the quantum state of those events at observer proper time t - dt, where dt is the amount of time it took for the information to travel from the location of the event to the observer's location. That makes it impossible for the quantum state of a universe composed of information to be temporally consistent over any time-like or light-like separation of events in space. Furthermore, different observers are located at different locations, so each observer observes the quantum state of existence at different relative times. That means the quantum state of existence observed from the perspective of different observers is temporally inconsistent. Because of the finite speed of light, we can only observe information about the past, and we can only do that inconsistently, because we can only observe events with temporal consistency as they occurred at some time t in the past if and only if those events occurred at exactly the same distance from us. Even then, we cannot observe the current quantum state of the universe at all. It is impossible to represent the current quantum state of the universe consistently in terms of information because the finite speed of light makes it impossible to get temporally consistent information about it. That is the second reason it is impossible for the universe to be composed of information. III. 
Heisenberg Uncertainty at quantum scales also makes it impossible for the universe to be composed of information. Physical existence must be certain. It can have no uncertainty. Uncertainty causes incompleteness. All of physical existence above the Planck scale is ultimately composed of energy quanta. Every energy quantum exists or it does not. There is no middle ground. There is no uncertainty in physical existence at the fundamental level of the existence or nonexistence of an energy quantum. Uncertainty is observer and observation centric. It is information centric. It would create incompleteness in physical existence if Nature based its representation of existence on observation, measurement, or information, because Heisenberg Uncertainty would make it impossible for Nature to obtain complete information about the quantum state of existence. Physical existence is complete and consistent. Nature cannot use an incomplete and inconsistent representation of physical existence to create a complete and consistent universe. That is the third reason it is impossible for the universe to be composed of information.
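The light-travel-time delay invoked in argument II above can be made concrete with standard values (this sketch is my addition; the function name `lookback_time` is hypothetical): an observer at proper time t receives information about an event at distance d as it was at t - d/c.

```python
# Retarded-time sketch: information from distance d arrives delayed by d/c.
# The distances are standard mean values; the interpretation is the deck's.

C = 299_792_458.0  # speed of light, m/s

def lookback_time(distance_m: float) -> float:
    """Seconds of delay before information from distance_m can reach the observer."""
    return distance_m / C

moon = 3.844e8   # mean Earth-Moon distance, m
sun = 1.496e11   # 1 AU, m

print(f"Moon: {lookback_time(moon):.2f} s")       # ~1.28 s
print(f"Sun:  {lookback_time(sun) / 60:.2f} min")  # ~8.3 min
```

Two events at different distances observed "now" are thus snapshots of different past times, which is exactly the temporal inconsistency the argument describes.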
• Nothing that exists is inconsistent with its own existence, or inconsistent with any part of its own existence, at any size scale, or over any distance in space or time above the Planck scale. Inconsistent systems are unstable. They decay, or decompose until they change into consistent forms. Why should we settle for an incomplete, inconsistent, partial representation of existence when physical existence itself proves it is possible to have a complete and consistent representation? A complete and consistent representation of physical existence obviously exists because existence exists. Without it, existence itself would be incomplete and inconsistent. Instead of resigning ourselves to the impossibility of creating a complete and consistent representation of mathematics and physics, we need to change the way we represent mathematics, physics, and physical laws. We need to see if there is some other representation, and some other kind of mathematics we can base Physics on that is not observer or information dependent, and that is not incomplete and inconsistent.
• The use of information forces us to create symbolic domains and codomains that are different from the contexts that naturally occur in nature. Due to the inability of information to represent things consistently in the universal domain, and due to the fact that general purpose languages tend to be more verbose and require more storage and processing time than more specialized languages to represent equivalent levels and amounts of detail, we must create limited size domains and codomains, and specialized representations. Limited size domains and codomains are necessarily incomplete in the universal domain. If we want to create a complete and consistent representation of existence across the universal domain in terms of information, then we are forced to integrate all the different limited domains and codomains, different languages and representations, and different perspectives used to represent all the information we've amassed about existence. That integration effort is hopelessly complex. It frequently causes inconsistency. In fact, as shown by Gödel's incompleteness theorems, it is impossible to represent all of existence completely and consistently using any fixed formal system at or above the complexity required to represent Peano arithmetic. Fixed formal systems at or above the complexity required to represent Peano arithmetic are all either incomplete or inconsistent. If they are complete, then they are inconsistent. If they are consistent, then they are incomplete. Fixed formal systems at or above the complexity required to represent Peano arithmetic can't even represent themselves completely and consistently, let alone all of physical existence.
• This means the universe and the singularity are infinite, but each instance of existence is finite. Because the singularity is infinite, the universe cannot have a first cause. The universe also has no outside. The universe does not expand into anything. There is nothing outside the universe for it to expand into. All expansion is internal. The expansion of spacetime is driven by the decomposition and conversion of the singularity into virtual energy and virtual dark energy strings. Those strings that are symmetric and form closed loops form energy or dark energy quanta. Those energy and dark energy quanta compose spacetime. More details to follow later. Nothing exists except the universe.
• The universe is a vast quantum computer. That quantum computer is composed of every energy and dark energy quantum in the universe. It computes all quantum state transitions in the universe.
• 0 = {}
1 = {{}}
2 = {{}, {{}}}
3 = {{}, {{}}, {{}, {{}}}}
Etc.
The natural numbers are symbolic names that represent the number of sets in each set of empty sets that composes each natural number. For example, instead of typing {{}, {{}}, {{}, {{}}}}, we can simply type 3. The number 3 is just shorthand for {{}, {{}}, {{}, {{}}}}.
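The construction on this slide is the standard von Neumann encoding of the natural numbers: each number is the set of all smaller numbers. As an aside of mine (not part of the deck), Python's `frozenset` is hashable, so sets can nest and we can build the encoding literally:

```python
# Von Neumann naturals: 0 = {}, and the successor of n is n U {n}.

def von_neumann(n: int) -> frozenset:
    """Return the von Neumann ordinal for n, i.e. {0, 1, ..., n-1} as nested sets."""
    result = frozenset()                          # 0 = {}
    for _ in range(n):
        result = result | frozenset([result])     # successor: n+1 = n U {n}
    return result

# The "shorthand" the slide describes: the numeral n names a set with n elements.
for k in range(4):
    print(k, "encodes a set with", len(von_neumann(k)), "elements")
```

Running this confirms the slide's point: the numeral 3 is just a name for a set containing exactly three sets.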
• The singularity is infinite because it is an undifferentiated scalar field with no differences, and thus no quantum state or boundary. If we want to create a mathematically consistent and complete representation of physical existence, we need to replace the concept of nonexistence with that of infinity and its physical manifestation, the infinite singularity. Existence is not a duality between existence and nonexistence. It is a duality between the infinite and the finite. Since all of finite existence expanded from the infinite singularity in the big bang, the origin of existence must be the infinite singularity. That means we need to move the origin of numbers to infinity. That will make infinity a well founded part of mathematics, and it will make the origin of the numbers consistent with the origin of physical existence.
• It is important to understand that the uniform background referred to here is the uniform background of all of existence, not the zero point virtual quantum energy and dark energy fields that compose spacetime. The singularity is the uniform background that all of existence comes from, including the existence of all time, space, energy, dark energy, and the quantum vacuum. The infinite singularity is an undifferentiated scalar field.
• Infinity must include everything because it is boundless and there can be only one infinity. Infinity is the only thing that includes everything. It is the only thing that every finite thing must be composed in relation to. Note that my definition of infinity is different from Cantor's. Using my definition, infinity is unquantifiable and uncountable. Thus it can have no size. Using my definition, it is not possible to have different size infinities. Cantor's definition of infinity is inconsistent with physical existence because it is ultimately based on the empty set. One of the differences between a relation and a function is that all relations have inverses. On the other hand, functions are not required to have inverses. Some functions have inverses, but some do not.
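The closing remark about relations and functions can be made concrete (this toy example is my addition, not from the deck): a relation, viewed as a set of ordered pairs, always has a converse obtained by swapping each pair, while a function has an inverse *function* only when it is a bijection.

```python
# A relation as a set of ordered pairs. This one is also a function:
# 1 -> 'a', 2 -> 'a', 3 -> 'b' (not injective: 1 and 2 share an image).
relation = {(1, 'a'), (2, 'a'), (3, 'b')}

# The converse always exists for any relation: just swap each pair.
converse = {(y, x) for (x, y) in relation}
print(sorted(converse))  # [('a', 1), ('a', 2), ('b', 3)]

# But the converse maps 'a' to both 1 and 2, so it is a relation that is
# NOT a function: the original function was not injective, hence has no
# inverse function.
firsts = [y for (y, _) in converse]
is_function = len(firsts) == len(set(firsts))
print(is_function)  # False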
• The singularity is an undifferentiated scalar field. That means the singularity is a non-unitary, non-quantized scalar field. The singularity is stateless so it has no countable quantity or amount, making it infinite. On the other hand, energy is quantized so it has state, and it is finite in amount. Thus, energy and dark energy can exist in finite and infinite forms. Energy is finite as virtual energy strings, virtual dark energy strings, and energy and dark energy quanta, but infinite in the form of the singularity.
• The omega black and white hole's gravity compresses energy and dark energy into the grand unified field potential energy in the singularity. As the omega black and white hole's gravitational field collapses, the grand unified field's potential energy undergoes spontaneous symmetry breaking, which causes a phase transition back into energy and dark energy. That energy and dark energy is composed of temporal and anti-temporal field bosons. That is the beginning of time and mirror-time. It is also the cause of the big bang.
• Every state transformation is a difference, so it is finite. Those finite differences that cause invariant patterns to persist are symmetrical. The only way for an invariant pattern of change to persist is if it is symmetrical. Symmetry creates the degrees of freedom that allow invariant energy patterns to persist. Only symmetrical energy patterns survive over time. Everything else decays until it reaches a symmetrical state.
• Symmetry is an invariance under any change in quantum state or any change in the relations between quantum states. Each symmetry creates an invariant degree of freedom. It creates a stable degree of freedom that repeated cycles of change can persist in. Multiple stable degrees of freedom can create multi-dimensional stable energy patterns. Each degree of freedom is a dimension.
• All the conservation laws of physics are caused by underlying symmetries of nature. See Noether's theorem.
================================================================================================================
The conservation of energy is a common feature in many physical theories. From a mathematical point of view it is understood as a consequence of Noether's theorem, which states that every continuous symmetry of a physical theory has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy". The energy conservation law is a consequence of the shift symmetry of time; energy conservation is implied by the empirical fact that the laws of physics do not change with time itself. Philosophically this can be stated as "nothing depends on time per se". In other words, if the physical system is invariant under the continuous symmetry of time translation, then its energy (which is the canonical conjugate quantity to time) is conserved. Conversely, systems which are not invariant under shifts in time (for example, systems with time-dependent potential energy) do not exhibit conservation of energy, unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time invariant again. Since any time-varying system can be embedded within a larger time-invariant system, conservation can always be recovered by a suitable re-definition of what energy is. Conservation of energy for finite systems is valid in such physical theories as special relativity and quantum theory (including QED) in flat space-time.
================================================================================================================
In direct representation, energy conservation is not an empirical observation. It is a logical and mathematical necessity from first principles. In direct representation, all of the energy in the universe comes from the singularity. The singularity is infinite. 
Infinity has no bounds, so it can have no beginning or end; therefore it is impossible to create it or destroy it, so it must be a conserved quantity. Since the singularity is just another kind of energy in direct representation, energy must also be a conserved quantity. Therefore it is impossible to create or destroy any energy. If infinity could be destroyed, then it would have an end, and thus a bound in time, so it could not be infinite. Therefore, only the finite can be created or destroyed. Energy is finite when quantized, but in the form of the singularity it is infinite, so it must be conserved. If energy could be destroyed, the singularity could not be infinite, and the universe would eventually run out of energy and cease to exist. That is ludicrous. Since energy is conserved, and time is a kind of energy, time must be conserved. In other words, existence must be cyclic. It must be possible to recreate time from the singularity in each successive big bang. Each instance of existence exists for a finite amount of time, but over the totality of all instances of existence, time is infinite. In direct representation, time itself is a kind of charge. It is the lowest-level, most powerful form of quantized energy. Every other kind of finite energy is composed in terms of its relation to temporal energy. Thus every other kind of finite energy contains a minimum of one temporal energy quantum. Since nature only represents the current quantum state of existence, everything that exists must be invariant in time. In other words, each thing that exists, exists in a single quantum state at a single point in time. 
A thing's quantum state can be composite, so it can be composed of multiple quantum sub-states, and in the sense of those substates, it can exist in multiple quantum states at the same time, but in the sense of its complete quantum state, it can only exist in one complete quantum state at one time. Since everything is composed in terms of direct representation, nothing that exists can be inconsistent with itself or any part of itself. Since temporal energy is part of everything that exists, it must be invariant with respect to time. In other words, the current time is part of every object's quantum state. From an external perspective, as time progresses, the temporal quantum state of every object is updated to include its own current temporal field component; thus every object is always invariant with respect to the current temporal state of existence.
==============================================================================================================
Relativity
With the discovery of special relativity by Albert Einstein, energy was proposed to be one component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated; see the article on invariant mass). The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. 
In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of a particle or object (including internal kinetic energy in systems) is related to its rest mass or its invariant mass via the famous equation E = mc². Thus, the rule of conservation of energy over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy-momentum relation. In general relativity, conservation of energy-momentum is expressed with the aid of a stress-energy-momentum pseudotensor. The theory of general relativity leaves open the question of whether there is a conservation of energy for the entire universe.
=============================================================================================================
In direct representation, the conservation of energy for the entire universe is a logical necessity of the definition of infinity.
=============================================================================================================
Quantum theory
In quantum mechanics, the energy of a quantum system is described by a self-adjoint (Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, the probability of the measurement result does not change in time over the evolution of the system. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for the energy-momentum tensor operator. 
Note that due to the lack of the (universal) time operator in quantum theory, the uncertainty relations for time and energy are not fundamental in contrast to the position momentum uncertainty principle, and merely hold in specific cases (See Uncertainty principle). Energy at each fixed time can be precisely measured in principle without any problem caused by the time energy uncertainty relations. Thus the conservation of energy in time is a well defined concept even in quantum mechanics.
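For reference (my addition, not part of the deck), the invariant mass discussed in the quoted relativity passage is defined by the standard energy-momentum relation, of which E = mc² is the zero-momentum special case:

```latex
% Standard energy-momentum relation: total energy E, momentum p,
% invariant (rest) mass m, speed of light c.
E^2 = (pc)^2 + (mc^2)^2
% In the rest frame, p = 0, and the relation reduces to the famous special case:
E = mc^2
```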
• http://www.youtube.com/watch?v=GYKyt3C0oT4
This video shows some of the black hole scientific visualization work of Professor Andrew Hamilton at the University of Colorado. This is a general relativistic visualization of a supercomputed magneto-hydrodynamic simulation of a disk and jet around a black hole. The disk and jet were supercomputed by John Hawley at the University of Virginia. The general relativistic rendering was done with the Black Hole Flight Simulator.
Reference: http://jila.colorado.edu/~ajsh/insidebh/intro.html
Note the contra-rotating event horizons, and their hierarchical composition in this simulation. These features are typical characteristics that result from direct representation.
• In other words, a black hole can be caused by compressing the mass of an object so much that all its mass lies inside its Schwarzschild radius. The Schwarzschild radius (sometimes historically referred to as the gravitational radius) is the distance from the center of an object such that, if all the mass of the object were compressed within that sphere, the escape speed from the surface would equal the speed of light. An object smaller than its Schwarzschild radius is called a black hole. The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body. (A rotating black hole operates slightly differently.) Neither light nor particles can escape through this surface from the region inside, hence the name "black hole". It is important to understand that from the standpoint of mathematical topology, spacetime, the black hole event horizon, the interior of a black hole, and its singularity are different topological spaces, each with distinct properties. The same is true in direct representation. Those topologies are gravitationally coupled. In other words, the gravitational field in the spacetime surrounding the black hole affects what is inside the black hole's event horizon.
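The Schwarzschild radius described above follows from the standard formula r_s = 2GM/c². A quick numeric sketch (my addition; the constants are standard CODATA-scale values):

```python
# Schwarzschild radius: compress all of an object's mass inside r_s = 2GM/c^2
# and the escape speed from its surface reaches the speed of light.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0  # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius in metres inside which mass_kg forms a black hole."""
    return 2.0 * G * mass_kg / C**2

M_SUN = 1.989e30    # kg
M_EARTH = 5.972e24  # kg

print(f"Sun:   {schwarzschild_radius(M_SUN) / 1000:.2f} km")    # ~2.95 km
print(f"Earth: {schwarzschild_radius(M_EARTH) * 1000:.2f} mm")  # ~8.87 mm
```

The tiny results make the text's point vivid: the Sun would have to be squeezed into a sphere about 3 km across, and the Earth into one about the size of a marble, before either lay inside its own Schwarzschild radius.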
• A black hole’s gravitational field creates a spacetime vortex that compresses the zero point quantum field virtual energy that composes spacetime itself. It compresses spacetime, matter, and energy. Matter and energy that cross the event horizon exceed the speed of light. From the perspective of an external observer, time runs backwards inside the event horizon of a black hole. As energy crosses the event horizon, it is transformed into dark energy. As matter crosses the event horizon, it is transformed into dark matter. In other words, the black hole converts matter to dark matter, and energy to dark energy. Dark matter and dark energy have the same topological structure and geometry as normal matter and energy, except time runs backwards relative to normal matter and energy. Thus, no structural or geometric changes occur when converting energy into dark energy, or matter into dark matter. From the perspective of energy or matter itself, there is no change as it crosses the event horizon. It simply accelerates past the speed of light. From its perspective, time still runs in the same direction. It is only from the perspective of an external observer that time runs backwards when the speed of light is exceeded. As dark matter falls towards the singularity, it is compressed into denser and denser forms, until it forms dark neutronium. Eventually, the pressure becomes so great that even dark neutronium cannot resist it. The dark neutrons that compose it collapse into a mirror quark gluon bosonic plasma. In other words, the dark fermionic field is compressed into a dark bosonic field. The dark bosonic field continues to be compressed until the dark weak force unifies with the dark electromagnetic force to create the dark electroweak field. Since the dark weak field is part of the zero point quantum field that composes spacetime, that destroys spacetime. Without spacetime, there can be no curvature in spacetime, and thus no mass or gravity. 
The dark strong(dark electroweak(dark temporal)) massless bosonic field is then further compressed until it unifies with the dark strong force, creating the dark electronuclear(dark temporal) field. That is further compressed until it unifies with the dark temporal field; aka the mirror-time field. Eventually, the dimensions of spacetime are compressed below a Planck length, and mirror-time is compressed into the hypertime string field that composes the singularity. The collapse of the spacetime field into singularity inside the event horizon reduces the real component of the grand unified field potential energy in the singularity to absolute zero. That creates an absolute zero real component potential energy sink that drains the 10^121 GeV/m3 real component of the zero point virtual quantum energy field that composes spacetime outside the event horizon. As a result, the zero point quantum energy field flows into the black hole like water going down a drain. That flow creates curvature in the zero point virtual quantum energy field that composes spacetime, energy and matter. That curvature has divergence. That divergence is the black hole’s gravitational field. The gravitational field is a divergence in the entropic field of spacetime. The stronger the gravitational field is, the lower the entropy is. In the limit, entropy reaches zero at the singularity. Massless bosons are massless because they exist beneath spacetime. From an observer’s perspective, that also means multiple bosons can occupy the same space. Thus they obey Bose-Einstein quantum field statistics instead of Fermi-Dirac statistics. Conversely, massive bosons have mass because they exist at or above spacetime. Fermions all have mass because they are composed of the zero point virtual quantum energy field that composes spacetime. They obey the Pauli exclusion principle because the spacetime field has volume. Things that are composed of spacetime cannot occupy the same space. 
Since the singularity is a pure bosonic field, it too is massless. The singularity has no quantum state. That is what makes it infinite. Mass cannot exist without any quantum state. That means the singularity has no mass. A black hole's gravitational field is not caused by the mass of the singularity. It is caused by the difference in energy density between the immense energy density of the zero point quantum field and the absolute zero energy density of the singularity. According to quantum mechanics, that energy density difference is about 10^121 GeV/m3.
• Mirror matter is the same thing as cold dark matter. It is cold and dark because it emits mirror photons instead of normal photons.
• The divergence in the zero point virtual quantum field velocity is primarily caused by a reduction in the velocity of the temporal field as spacetime curvature increases. That is because the temporal field contains about 90% of the energy in the zero point quantum field of spacetime. Of the remaining 10%, about 9% is due to the color field (responsible for the strong force) and about 1% is due to the electromagnetic field. Entropy is reduced wherever energy is present because energy has symmetry, structure, and form; thus it reduces uncertainty. The greater the energy density and the more mass present, the more entropy is reduced. In other words, the entropic field reduction is not unique to gravity. Entropy is reduced to zero in the singularity because the singularity has perfect symmetry. All energy in the singularity is compressed to the same frequency, wavelength, phase, amplitude, and spin, so there is no uncertainty, and thus zero entropy, in the singularity.
• The energy density of the quantum vacuum is calculated as 10^121 GeV/m^3 in quantum mechanics. The vacuum energy density inferred from data obtained from the Voyager spacecraft is less than 10^14 GeV/m^3. The two estimates differ by a factor of about 10^107, i.e. 107 orders of magnitude. However, according to direct representation, the zero point quantum field energy density contribution to the cosmological constant and gravity is precisely zero. In other words, spacetime curvature only represents energy above the zero point. The energy below the zero point composes spacetime itself; it composes that which is curved by any energy imbalance above the zero point. Therefore, the QM calculation for vacuum energy density of 10^121 GeV/m^3 is correct. The problem with computing the vacuum energy density from Voyager is that most of the vacuum energy density exists beneath spacetime, but Voyager travels through spacetime. In other words, the data from Voyager can only be used to infer gravitational field effects, so it can only represent the part of the vacuum energy that represents the imbalance responsible for spacetime curvature.

The QM calculation of vacuum energy density must be correct. Modern computers, semiconductors, and solid state devices are based on quantum mechanics; if QM were wrong, those devices would not work. We cannot disregard a QM calculation simply because it is contrary to our currently weak understanding of gravity. It is more likely that our theory of gravity is incomplete and/or incorrect than that QM is. In fact, DR shows that the current theory of gravity is at fault. It turns out gravity is not even an independent force field. It does not have any of its own energy. Instead, it is composed from the interaction and relations between the temporal, electromagnetic, strong, and weak fields.
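As a sanity check on the figures quoted above, the size of the mismatch between the two density estimates can be computed directly. A minimal sketch; the densities themselves are taken from the text as given, not independently verified:

```python
import math

# Figures as quoted in the text:
qm_density = 1e121       # GeV/m^3 - zero point energy density from quantum mechanics
observed_density = 1e14  # GeV/m^3 - upper bound inferred from Voyager data

# The mismatch is conventionally expressed as a ratio (orders of magnitude),
# not as a subtraction of the two densities:
orders = math.log10(qm_density / observed_density)
print(f"discrepancy: ~{orders:.0f} orders of magnitude")
```

This is why the discrepancy is best stated as "about 10^107 times larger" rather than as a difference of two densities.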
• [1] SM Dutra (2005). Cavity Quantum Electrodynamics. John Wiley & Sons. p. 63. ISBN 0471713473.
[2] MP Hobson, GP Efstathiou & AN Lasenby (2006). General Relativity: An Introduction for Physicists (Reprint ed.). Cambridge University Press.
• Reference: http://einstein.stanford.edu/Media/FD_Measurement-flash.html
Also see: http://www.salem-news.com/articles/may102011/space-experiment-nasa.php
And: http://einstein.stanford.edu/SPACETIME/spacetime4.html
• Temporal energy is a new fundamental interaction. Gravity is not fundamental. Gravity is a curvature in the spacetime and mirror-spacetime fields. Thus gravity is dependent on the composition of the temporal, electromagnetic, color/strong, and weak force fields. It is primarily dependent on the temporal field, and secondarily dependent on the electromagnetic field. The color/strong and weak force fields are range limited, so they only contribute to gravity at very short ranges. The strong interaction is range limited to about 10^-15 meters. The weak interaction is range limited to about 10^-18 meters.
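The short ranges quoted for the strong and weak interactions are consistent with the standard Yukawa estimate, range ≈ ħc/(mc²), where m is the mass of the mediating boson. A minimal sketch using standard physics (not part of the slide's own framework); the mediator masses are the usual pion and W boson values:

```python
# hbar*c ~ 197.327 MeV*fm (standard value); 1 fm = 1e-15 m
HBAR_C_MEV_FM = 197.327

def yukawa_range_m(mediator_mass_mev: float) -> float:
    """Approximate range of a force carried by a massive mediator: r ~ hbar*c / (m*c^2)."""
    return (HBAR_C_MEV_FM / mediator_mass_mev) * 1e-15

# Residual strong force mediated by the pion (~139.6 MeV/c^2):
print(yukawa_range_m(139.6))    # ~1.4e-15 m, consistent with the ~10^-15 m quoted above
# Weak force mediated by the W boson (~80.4 GeV/c^2):
print(yukawa_range_m(80_400.0)) # ~2.5e-18 m, consistent with the ~10^-18 m quoted above
```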
• Mirror spacetime refers to the existence of a hidden mirror sector in the universe. The existence of the hidden mirror sector is required by symmetry. Its laws of physics are similar to those of the universe in most respects, except that it has the opposite temporal symmetry. It is composed of dark energy and dark matter instead of energy and matter.
• The mirror-verse has to exist to conserve symmetry. Symmetry has to be conserved because everything ends up back in the infinite singularity, and the infinite singularity has perfect symmetry. It could not have perfect symmetry if symmetry were not conserved; if symmetry were not conserved, some of it would be lost.

Mirror energy is the same thing as dark energy. Mirror matter is the same thing as cold dark matter. This is also called ‘Alice Matter’.

A good collection of papers on the mirror-verse can be found at: http://people.zeelandnet.nl/smitra/mirror.htm. The fundamental paper on the mirror-verse is: R. Foot, H. Lew, R. R. Volkas (1991). A model with fundamental improper spacetime symmetries. Physics Letters B, 272 (1,2), 67-70.
• The singularity and the universe are infinite. Each instance of existence is finite. White and black holes are 180 degrees out of phase in time over the duration of each cycle of existence.
As white holes collapse, their gravitational field strength decreases and the potential energy in the singularity expands into energy and matter.
As black holes collapse, their gravitational field strength decreases and the potential energy in the singularity expands into dark energy and dark matter.
As white holes expand, dark energy and dark matter are consumed and compressed into potential energy in the singularity.
As black holes expand, energy and matter are consumed and compressed into potential energy in the singularity.
• Merriam Webster’s Collegiate dictionary defines representation as "something that serves as a specimen, example, or instance of something". On the surface, this implies that all representations are indirect, but if you really think about it, indirect representation cannot exist unless something direct represents it. Something that exists, physically exists - whether or not it is represented by something else indirectly. For example, the far side of Earth's moon exists, even though nobody is there to observe it. The same is true of an unobserved grain of sand, or an unobserved molecule or atom. Because things have to be able to exist even if they are not represented indirectly, representation must necessarily include direct representation. There can be no indirect representation without physical existence of the representation. There can be no physical existence without direct representation. Our conception of representation is based on information, and information is indirect and observer centric, so the conception of representation as only being indirect is an observer centric bias. To advance beyond our observer centric bias, we must expand the definition of representation to include direct as well as indirect representation.

Up to this point in history, our species has been developing and using indirect representations almost exclusively. Indirect representation includes written and spoken natural languages, information, logic, mathematics, music, art, video, and all kinds of symbolic representation. All information is a kind of indirect representation. As a kind of indirect representation, information inherits all of the properties of indirect representation. I discovered a second fundamental kind of representation I call Direct Representation. Direct representation is the logical converse of indirect representation. As shown in Figure 1, the representation of physical existence is a kind of direct representation.
Everything that physically exists in the universe is a direct representation of its own existence. Direct representation represents all of physical existence in terms of the direct representation of energy quanta and their direct relations. From the upper ontology of representation, we can see that information and existence are mutually exclusive representations. The sets of things they can represent are logically disjoint because they are derived from indirect representation and direct representation respectively, and indirect representation and direct representation are logical converses. Logical converses have logically converse properties. An observer can use information to represent existence indirectly, but existence can only represent itself directly. Indirect representation can only represent things indirectly. It cannot represent anything directly. Conversely, direct representation can only represent things directly. It cannot represent anything indirectly.

The existence of direct representation and indirect representation implies the existence of their powerset. I call the powerset of direct representation and indirect representation Universal Representation. The representation of thought is a kind of universal representation. Biological neurons represent thought in terms of universal representation. Universal representation allows us to think about things both directly and indirectly. Because universal representation is the powerset of indirect representation and direct representation, it contains the powerset of the properties of direct representation and indirect representation. Its representational power is the Cartesian product of direct representation and indirect representation.
• Objects that are encapsulated have a boundary. They exist within some scope and some context. All massive objects exist in spacetime, and they have an inside and an outside in spacetime. However, not all objects have mass, and not all boundaries need exist in spacetime. Some exist beneath spacetime. They can simply be boundaries between quantum states or groups of quantum states. The boundary is the difference that distinguishes one state from another. In encapsulated representations, the objects and relations that compose an object are part of the object they compose by value. Each object and relation only has one instance, and it can only exist within the direct intension of a single object. An energy quantum is an encapsulated representation. It is the fundamental unit of existence.

By contrast, in an unencapsulated representation, objects can be defined and related by reference. In an unencapsulated representation, the existence and semantics of a relation are not defined as part of the objects it relates. The same relation can be defined once, and it can be used to relate many different objects in many different contexts.
• This table provides an overview of the three fundamental types of representation, their relationships, and some examples of things represented by each type. Direct representation represents the physical existence of everything in the universe. Universal representation is the powerset of direct and indirect representation. Universal representation is the basis for the human neural representation of thought, meaning, and consciousness. Information is a kind of indirect representation. Most human generated representations are of this type. Indirect representation is required for communication. Among other things, it is used for logic, mathematics, writing, speaking, drawing, and computation.
• Universal Representation is the powerset of direct and indirect representation.
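In the set-theoretic sense used on this slide, the powerset of a two-element set such as {direct, indirect} has exactly four members. An illustrative sketch (the element names are mine, chosen only to mirror the slide's terminology):

```python
from itertools import chain, combinations

def powerset(items):
    """Return every subset of `items`, from the empty set up to the full set."""
    s = list(items)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

kinds = ["direct", "indirect"]
subsets = powerset(kinds)
print(len(subsets))  # 4 subsets: {}, {direct}, {indirect}, {direct, indirect}
```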
• Since direct representation cannot represent things that cannot exist, no observer needs to decide whether something computed by the representation describes reality or if it is just an abstract artifact of the mathematics.
• Direct representation is a further extension of Albert Einstein's concept of general covariance.

In theoretical physics, general covariance (also known as diffeomorphism covariance or general invariance) is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations. The essential idea is that coordinates do not exist a priori in nature, but are only artifices used in describing nature, and hence should play no role in the formulation of fundamental physical laws. A physical law expressed in generally covariant fashion takes the same mathematical form in all coordinate systems, up until a specific coordinate system is selected for a particular mathematical solution. Generally covariant systems are usually expressed in terms of tensor fields. The classical (non-quantum) theory of electrodynamics is one theory that has such a formulation. Albert Einstein's theories of special relativity and general relativity are others.

If we step back and look at the bigger picture, we can see that general covariance simplifies the representation of physical laws because it makes the mathematical form of physical laws independent of the observer's choice of coordinate system. In other words, it removes dependencies on the frame of reference from which the observer measures position, velocity, spatial orientation, time, duration, etc., relative to the general form of the mathematical equations that describe physical laws. If we step back even further in history, we can see that general covariance is yet another case of a major scientific advance brought about by decoupling some observer dependent property from the representation of physical law. For example, a major advance in physics also occurred when science shifted from the Ptolemaic view of a geocentric universe, in which the universe orbited around the observer's location (Earth), to the Copernican heliocentric view, in which it orbited around the sun.
This change in viewpoint decoupled the observer's location from the general representation of physical law. In doing so, it created a simpler representation of existence that was more in accord with the observational evidence.

Another way theoretical physicists eliminate observer centric dependencies is to express the laws of physics in terms of natural units of measure. Natural units are units of measure whose magnitudes are normalized so that some related subset of the dimensionally dependent physical constants' values are equal to 1. Those normalized physical constants can then be removed from the mathematical expression of physical laws. This is done because the magnitudes of the standard units of measure are often anthropocentric or geocentric. For example, the duration of the second was chosen to be consistent with the ephemeris second, which was derived from the length of Earth's tropical year. Hence, even though the second is officially defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the Cesium 133 atom, the number of Cesium 133 radiation periods was chosen to be consistent with the ephemeris second, so the magnitude of the second is geocentric. The meter is defined as the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second. Since the second is geocentric, and the meter is defined in terms of its relation to the second, the magnitude of the meter is also geocentric. The kilogram is anthropocentric. It is defined as the mass of the international prototype kilogram: a particular precision machined block of platinum iridium alloy stored under carefully controlled conditions at the International Bureau of Weights and Measures in Sèvres, France.
It is a man made artifact. Looked at in reverse, the dimensional constants are included in the equations of physics to cancel out our observer centric units of measure. Normalizing dimensional constants allows us to express the laws of physics in a way that is independent of the observer centric magnitudes of some of those constants.

The question then naturally arises as to what other observer centric dependencies we should remove from the mathematical representation of physical laws. The shocking answer is that we must eliminate all observer dependencies. Just as coordinates have no a priori existence in nature, there are no a priori observers in nature. In other words, physical existence and its complete and consistent mathematical representation cannot depend on the a priori existence of observers in any way whatsoever, because the universe existed long before there were any observers in it. That means the universe must have some way to represent physical existence and the mathematical relations between cause and effect that is completely independent of all observers and all observation.

Observer Free Mathematics

Most fundamentally, mathematics is a formal language for representing the relations between things. If we want a mathematics that can faithfully represent all of existence, it must be able to represent relations in a way that is not causally dependent on observation. Mathematics must be able to represent the observable and the unobservable, and all relations between them. After all, physical relations existed between objects in existence long before there were any observers to observe them and represent them mentally. That means nature's own representation of physical relations must be independent of observation.
If we want a complete and consistent mathematical representation of the physical existence of the universe, we must represent the universe using the same definition of 'relation' and the same kind of mathematics the universe uses to represent itself. We must remove all observer and observation based dependencies, human cognitive biases, and neural limitations from the representation of relations, numbers, and mathematics. Only by doing so can we create a mathematical domain that includes everything in the universe, including the representation of singularities, the interior of black holes, their event horizons, dark energy, dark matter, the unobservable parts of the universe, and all the quantum energy field relations between them. Only then will we be able to apply the mathematical tools of analysis and the calculus of variations to the representation of everything over the mathematical representation of the entire universe. This means we must eliminate all of the following observer dependencies from the representation of numbers, mathematics, and physical laws.

Nature's own mathematical representation of physical laws cannot depend on an observer, because there were no a priori observers at the beginning of the universe.

Nature's own mathematical representation of physical laws cannot depend on observation, because there were no observers to perform observations.

Nature's own mathematical representation of physical laws cannot depend on measurement, because without observers and observation there is nothing that can measure anything and there is no need for measurements. Measurements are only needed to create information. Without observers there is no need for information. Measurement changes the system it measures and introduces uncertainty, which cannot exist in the observer free representation of physical existence because it would make existence uncertain and inconsistent. There is nothing uncertain or inconsistent about physical existence.
It is only the interaction of an observer with physical existence that introduces uncertainty and inconsistency. Only our information about existence is uncertain and inconsistent. Existence itself is not uncertain or inconsistent; therefore existence cannot and does not measure itself.

The universe's own 'mathematical representation' of physical laws cannot depend on information, because without observers there is no way to sense, measure, encode, store, transmit, or decode the meaning of information. Meaningless information is no information at all. It is only energy. The universe is composed of energy, not information. Without observers there is no meaningful information, and the concept of information is no different than energy. There is no a priori information in the universe. There is only a priori energy. Energy and information are not the same thing.

All of mathematics is based on the representation of information. That means we must find an alternative representation of mathematics, and an alternative representation of the concept of 'number', that is not dependent on the representation of information. We must develop an automorphic representation that provides a complete and consistent one-to-one direct representation of all possible relations, and only those relations, that represent and compose the existence of the physical universe. We must develop a direct mathematical description of energy and the relations between energy fields that is not dependent on observers, observation, measurement, or information. Only then can our mathematics be a one-to-one representation of all of physical existence. Only then can our mathematics be completely isomorphic to nature's own.
• Direct representation is the logical converse of indirect representation. Direct representation provides a natural, observer independent representation of numbers, mathematics, and existence that is mathematically complete and consistent over the universal domain. It provides the mathematical foundation required for a quantum field theory of everything that avoids all limitations imposed by Kurt Gödel's incompleteness theorems.

Direct representation is based on a fundamental extension of the mathematical notion of representation. It provides an objective, observer independent, mathematically complete and consistent alternative to the use of information for representing, viewing, computing, and understanding the relations within and between the constituent parts of the physical world, the mental world, the world of computation, and the Platonic world of mathematics. It provides a way to unify the existence and representation of all four of those worlds.
• Direct representation provides an entirely different way of 'knowing'. It provides a new representational foundation for numbers, mathematics, and computation. In short, it provides a new way to represent, model, compute, and think about physical existence and our relation to it. Direct representation is the logical converse of indirect representation and the representation of information. It is the mathematical foundation for the complete and consistent representation of all of physical existence, being, and the neural representation of perception, thought, meaning, and consciousness.
• Cause 1: Use of indirect representation to represent numbers and mathematics – i.e., the use of reference semantics instead of value semantics.
Cause 2: Basing the definition of numbers on the transfinite recursive composition of empty sets.
• Existence expanded from the infinite alpha singularity in the big bang. Therefore, the origin of existence is infinity, not zero. The new numbers represent phenomena at sub-quantum scales and superluminal speeds. They represent phenomena inside the event horizon of quantum scale white and black hole microsingularities. They represent phenomena in ‘hyperspace’ – the space below time in which quantum leaps occur. They represent strings and string interactions. Currently, there is no representation for these numbers. They are currently hidden ‘inside zero’.

Everything is composed of energy or dark energy. Energy cannot be created or destroyed; it can only change form. That means energy always exists, and that the concepts of nonexistence and nothingness have no existence in nature. The empty set is inconsistent with existence. One result is that numbers are an incomplete and inconsistent representation of existence.

The unsoundness shows up in undecidability, the lack of division by zero, halting problems, difficulty representing emergent phenomena, incompleteness and inconsistency over the universal domain, and the inability to represent the relations between the infinite and the finite mathematically. In physics, it is responsible for our inability to completely and consistently represent energy. It is responsible for our inability to solve many-body problems. It is responsible for our inability to consistently represent what goes on inside black holes, and our inability to consistently represent the singularity.
• In particular, it accounts for the fact that the laws of physics remain consistent throughout the universe. The laws of physics are not consistent everywhere now merely because everything was in light-speed communication during the initial stage of the big bang. Even if everything was in light-speed communication when the entire universe was less than a Planck length in circumference, what kept the laws consistent over the ensuing 14 plus billion years while they weren’t in light-speed communication?

The universe is complete and consistent because it represents everything that exists in terms of a single complete and consistent self-organizing, self-modifying process that operates over an invariant ontology. The invariance of that ontology is caused by the eternal symmetry relations that exist between the infinite and the finite. Those symmetric relations are energy and dark energy quanta. The quantum process that creates, controls, and is existence operates over its own quantum state. It creates, modifies, and extends its own ‘program’ as it changes the current quantum state of existence everywhere in the universe consistently. All quantum state transitions occur ‘beneath’ spacetime inside ‘quantum leaps’. Quantum state transitions do not occur in time or space. They create time, space, and all quantum state transitions and changes in it on an ongoing basis.

Everything that exists is composed of energy (or dark energy). Time itself is a kind of energy. It is the most powerful form of energy after the singularity. It is represented by temporal field bosons. All energy quanta are composed of temporal field bosons and the differences between temporal field bosons. That includes the energy fields that compose spacetime. All of existence is the current quantum state of existence. Nature only represents the present. The past only exists to the extent that some of the energy patterns that compose parts of the present were created in the past and continue to exist as part of the present.
That means time travel into the past is impossible. There is no past to return to. Of course, time travel into the future is also impossible. There is no future to travel to. It hasn’t happened yet. That includes all time and space. When the current quantum state of existence is changed, it changes everywhere inside a quantum leap. From an observer’s perspective, the passage of time is simply a change in the current quantum state of existence.

The ongoing expansion of spacetime is not due to the momentum left over from the big bang 14 plus billion years ago. If it were, expansion would be slowing due to ongoing gravitational attraction. Instead, it is accelerating. It is unrealistic to think that expansion would continue without slowing down through 14 plus billion years of gravitational attraction and innumerable particle collisions since the big bang, unless it were driven by an ongoing energy source. The ongoing expansion of spacetime is powered by the ongoing collapse of the omega black hole’s gravitational field. As that gravitational field collapses, the alpha singularity decompresses, creating differences in itself. Those differences that remain symmetric for at least a Planck time form new temporal and mirror-temporal field bosons. They create more time. The temporal field is the most energetic field in existence. It causes all change in the current quantum state of the universe. That includes the ongoing expansion of spacetime.

The temporal field boson is part of the zero point quantum field virtual energy that composes spacetime. Each temporal field boson has an energy of 9.71738×10^26 eV, or 155.689 MJ. To put that in perspective, it is 1.692 trillion times more energy per temporal field boson than that available via the acceleration of lead nuclei in the Large Hadron Collider when operating at its maximum design capacity.
We just can’t observe or detect that energy because it is part of the zero point virtual quantum field energy that composes spacetime. As such, it is part of the background energy we observe and measure all other energy relative to.
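The unit conversions behind the temporal-field-boson figures above can be checked arithmetically. A sketch that takes the per-quantum energy asserted in the text at face value and uses the standard LHC design figure of about 2.76 TeV per nucleon for lead (208 nucleons):

```python
EV_TO_J = 1.602176634e-19  # exact eV -> joule conversion (SI)

# Per-boson energy asserted on the slide (taken at face value, not verified):
boson_ev = 9.71738e26
boson_mj = boson_ev * EV_TO_J / 1e6
print(boson_mj)  # ~155.69 MJ, matching the 155.689 MJ figure quoted

# LHC design energy for a fully accelerated lead nucleus:
# ~2.76 TeV per nucleon x 208 nucleons (standard design figures)
lhc_pb_ev = 2.76e12 * 208
print(boson_ev / lhc_pb_ev)  # ~1.69e12, i.e. ~1.69 trillion, matching the text
```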
• Unification of QM and general relativity is achieved by describing the causal relation between energy quantization and the geometry of spacetime. In other words, time and space have to have physical existence. It is not possible for gravity to be curvature in spacetime if spacetime has no physical existence. Something must exist in order for it to be curved. Direct representation describes how energy quanta and quantum energy fields compose spacetime.
• The abundances of matter and antimatter are reversed between the universe and the mirror-verse. In the universe, matter is abundant and antimatter is scarce. In the mirror-verse, antimatter is abundant and matter is scarce. The antimatter in the mirror-verse is dark matter. Its arrow of time is reversed from our perspective, so it exists outside our light cone. In other words, it is not directly observable. Its effects can be observed indirectly via the gravitational force exerted by ‘missing mass’.

The antimatter in the mirror-verse cannot annihilate with normal matter because it is separated from normal matter by the singularity. The arrow of time is reversed in the mirror-verse relative to our arrow of time. In essence, that means dark energy flows away from the singularity in the opposite temporal direction in the mirror-verse relative to the direction of energy in the temporal dimension in the universe:

Low potential dark energy ← High potential dark energy ← Singularity → High potential energy → Low potential energy
• Energy (and dark energy) always flows from high potential to low potential. Energy potentials decrease in the direction of the arrow of time. The arrow of time is reversed between the universe and the mirror-verse; equivalently, the direction of the arrow of time is reversed between energy and dark energy. In the universe, there is an asymmetry in the weak force that causes an abundance of matter over antimatter. In the mirror-verse, there is an opposite asymmetry in the mirror weak force that causes an abundance of mirror antimatter over mirror matter.

The matter in the universe and the antimatter in the mirror-verse cannot annihilate each other because they proceed in opposite directions through time. In other words, mirror antimatter cannot go from a relatively low energy state in the mirror-verse to a higher energy state in the singularity to reach matter in the universe. The same is true of matter in the reverse direction. Once formed, matter in the universe and antimatter in the mirror-verse cannot merge again until they both fall into a black hole and meet in the singularity. Of course, by the time they meet in the singularity, they are no longer matter and mirror antimatter. By then, they are both part of the grand unified field potential energy. Thus they can never annihilate.
• The intension of a set defines the set's meaning. In indirect representations, the intension of a concept represents the syntactic definition of that which it represents. The epistemic meaning of a concept can be inferred or interpreted from the representation of the concept’s intension by an intelligent observer, but not by the object itself. For example, a dictionary does not understand the meaning of the words it contains. A computer does not, and cannot, understand the meaning of its data or its program. Indirect intensions represented using information contain syntax but not semantics. That syntax can represent semantics, but it is left up to the observer to interpret the meaning of those semantics. Computers can execute programs that interpret syntactically correct sets of relations, but they don’t really understand the meaning of their interpretation. They simply follow programmed instructions that tell them how to manipulate and transform specific symbol patterns in their input data. Such programs may provide the illusion of intelligence, but it is only an illusion. In reality, their function is limited to syntactic symbol manipulation. They are brittle and inflexible. They frequently fail if the input sequences or input data types do not match those that are pre-programmed. Even those programs that use machine reasoning to interpret logical rules and infer new logical relations are only performing syntactic manipulation and transformation according to the dictates of their program. In reality, such ‘intelligence’ is less impressive than that of most insects. For example, insects can find their way around the natural world far better than most computer programs. Insects can also adapt to changes in their environment better than most computer programs.

In direct representations, Representation = Existence. Direct representations represent things in terms of how they relate to the things that compose them.
The intension of each object that exists is directly composed of the extensions of the set of objects and object relationships that compose it. In turn, the extension of the object encapsulates, surrounds, binds, or forms a bounding surface for the intensional components and relations that compose and form it. The end result is a hierarchy of composition that composes wholes from parts. Real world objects are almost always composed of hierarchies of parts. In addition, those intensional hierarchies are always composed by value. The part-whole hierarchy is ubiquitous in nature. Subatomic particles compose atoms. Atoms compose molecules. Molecules compose stars, planets, and interstellar matter. Stars, planets, and interstellar matter compose galaxies. Galaxies compose galactic clusters. Galactic clusters compose galactic super-clusters. Those energy quanta that are stable enough to persist for some time can interact with each other locally to compose local quantum state networks. Quantum states represent nodes in the network. The links between the network nodes are represented by each energy quantum's relations. Each relation represents one of the quantum state interactions. The result is a nested hierarchy of extensional composition, where each element in the extensional hierarchy is itself composed of an intensional hierarchy. It can be thought of as a branching tree, where each node in the tree (an extension) can itself be composed of another tree. Each element in the hierarchy has both an intension and an extension. Relations relate the intensional components of the hierarchy. The relations are themselves members of the intensional hierarchy of composition. They have their own intensions and extensions. The intensional components represent the values of the intensional relations, and those relations affect the physical distribution and characteristics of the components they represent.
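The part-whole hierarchy described above can be sketched as a composite tree in which each node's intension is the list of parts that compose it, held by value. A minimal illustrative sketch; the `HierarchyNode` name and the physics examples are stand-ins for illustration, not part of the theory's formalism:

```python
class HierarchyNode:
    """A node whose intension is the list of parts that compose it."""
    def __init__(self, name, parts=None):
        self.name = name                # the node's identity (its extension)
        self.parts = list(parts or [])  # the node's intension: components, held by value

    def depth(self):
        """Levels of composition below this node; a fundamental leaf has depth 0."""
        if not self.parts:
            return 0
        return 1 + max(p.depth() for p in self.parts)

# A toy slice of the cosmic part-whole hierarchy from the text:
quark = HierarchyNode("quark")
proton = HierarchyNode("proton", [quark, quark, quark])
atom = HierarchyNode("hydrogen atom", [proton, HierarchyNode("electron")])
molecule = HierarchyNode("H2 molecule", [atom, atom])
```

Note that bigger objects are built only from already-constructed smaller ones, mirroring the claim that nothing can exist before its components do.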
This nested hierarchical pattern of representation repeats at smaller and smaller spatial and temporal scales until the level of the fundamental quanta of energy that everything else is composed of is reached. The most fundamental energy quanta themselves are represented by symmetric differences between the singularity and itself. The differences form an event horizon surrounding the singularity, which we detect as the existence of the energy quanta. Quanta are quantized because they encapsulate the singularity. Quanta either exist fully or not at all. They never exist in a state of partial existence. It is impossible for them to do so because it would have to be possible to completely destroy the infinite singularity (i.e., to completely destroy or eliminate the grand unified field and all energy and dark energy in the universe). It is impossible to destroy the infinite singularity because there is “nothing” to destroy, in the sense that the singularity has no quantum state. The only thing that can be done with the singularity is to make it incomplete by making it partial. However, by making the singularity partial, and thus incomplete and inconsistent, we necessarily make the representation of existence complete and consistent. There is no other logical possibility. There can only be one set of all sets. Either the infinite singularity represents the set of all sets, or finite existence does. Set theory makes any other possibility logically and mathematically impossible. The encapsulation and quantization of the direct representation of existence ensures the global consistency and completeness of the representation of existence throughout the entire universe. It means it is impossible for existence to become inconsistent or incomplete unless it becomes completely nonexistent and turns back into the consistent and complete infinite singularity. Fortunately, the finite speed of light makes that impossible.
No omnipotent watchmaker needs to watch over the universe and enforce the laws of Physics or ensure the consistency of existence. The laws of physics evolve and enforce themselves as a natural consequence of the necessity to conserve energy and the infinite singularity. In direct representations, nothing can exist before the components it is composed of exist, because the components it is composed of form its representation. Existence evolves instant by instant in response to the need to conserve energy. Symmetry causes the conservation of energy. It is the reason matter and antimatter particles are always created in pairs. It is the reason charges of opposite polarity are always created in balanced pairs. It is the reason color charge in the nuclear strong force is created in triples such that the colors always “cancel out”. We see symmetry everywhere throughout the universe. It is a fundamental invariant law of nature. It is the ontology of the representation of existence. Representation is existence. This is true even down to the representation of space and time themselves. In a direct representation, the representation of an object's extension encapsulates, surrounds, and/or encloses or bounds the representation, and thus the existence, of the intensions of the objects and object relationships that the extension of the object being represented is composed of and related to. The encapsulation of the representation explains why objects occupy spacetime, why objects have localized extents, and why they have surfaces. It is why bigger objects are composed of smaller objects and not vice-versa. It is why objects cannot exist before that which composes them exists. It is why cause and effect exist. It is why effects do not precede causes. It is why the arrow of time only moves forward. It is why the future does not precede the past. This leads directly to the hierarchical representation of physical existence we see in the universe at large.
It is the reason all objects are composed of other smaller objects. The extensions of the smaller component objects and the relations between them comprise the intension of the object they compose. We see this pattern of representation repeated throughout Physics at all spatial and temporal scales, both in matter and in energy fields. To understand Physics fully, we need to represent existence the same way existence represents itself. In direct representation, intensional relations exist between the objects that compose the object being represented. Direct representations represent existence as an infinite-order relational hierarchy of infinite-order object composition. The relations manifest themselves in terms of relations between energy and matter, relations between matter and matter, and relations between energy fields. Any of these relations can be higher order. As used below, the term “object” can represent matter or energy, or a relation between them. Objects can be composed of objects that are composed of objects that are composed of objects, to any required degree. Any object can be composed of zero or more objects, each of which may be composed of zero or more objects. The same is true of relations. There can be first order relations between objects: O R O, second order relations between relations between objects: O R(R) O, third order relations: O R(R(R)) O, etc., to any required degree in any existential context. There can be second order objects O(O), third order objects O(O(O)), etc., to any required degree in an existential context. Any combination of any order of relations can relate and compose the intension of any combination of any order of objects in any context. In a direct representation, all these relations occur within the context of their existence. They are all context sensitive.
At the most fundamental level of the representation of existence, both the objects and the relations are composed of the same thing and represented the same way, by differences (i.e., asymmetries) in the infinite singularity. Those differences must always cancel out. That is a consequence of symmetry. It is all a consequence of the need to conserve energy, and thus conserve infinity. Even the zero-point quantum field is constrained to conserve the totality of the infinite singularity.
• The extension of a representation identifies the objects in a set by explicitly listing each object, whereas the intension identifies the objects in a set by defining the set of conditions or relations all members of the set must satisfy to be considered a member of the set. Extensional representations work well for small finite sets in which it is possible to explicitly list every set member. Intensional representations work well for infinite sets and sets that have too many members to explicitly list them all.
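The extensional/intensional distinction above corresponds to two familiar ways of defining a set in code: an explicit listing versus a membership predicate. A minimal sketch:

```python
# Extensional: the set is defined by explicitly listing its members.
small_primes = {2, 3, 5, 7}

# Intensional: the set is defined by a condition every member must satisfy.
def is_even(n):
    return n % 2 == 0

# The intensional form handles sets far too large (or infinite) to list:
def in_evens(n):
    """Membership test for the infinite set of even integers."""
    return is_even(n)

# A finite extension can be enumerated from an intension over a finite domain:
evens_below_10 = {n for n in range(10) if is_even(n)}
```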
• All quanta are composed of the quanta and relations between the quanta that compose them. That means higher-order quanta can be composed of pre-existing quanta. Since relations are also quanta, higher-order relations can also be composed of pre-existing quanta. All quanta in direct representation exist by value. That means they are composed using value semantics.
• Direct representation represents everything in context. Note how the extension provides the context for the intension. The meaning of the extension is represented by its intension. The existence of the extension is represented by the existence of its intension. Thus, the ontology of direct representation represents the relation between meaning and existence.
• This Quantum class interface pseudo code provides one way to represent the direct representation of a quantum in terms of information. This interface is only intended as a skeleton. It does not contain all the language-specific infrastructure methods required to produce a working direct representation quantum system. For example, to use it for a working system, you would need to add constructors, a destructor (if required by the implementation language), a database interface, some lower level helper functions, and application specific logging and I/O methods. The application specific methods should be restricted to those required for observer interaction, like system administration, querying the current state of the model, event logging, debugging, and visual presentation. In particular, application or domain specific functors that change the quantum state of the system should not be pre-programmed. The whole point of direct representation is that it modifies its own structure and defines its own functors as it executes. The user can then study the resulting model's structure, its evolution, and its functors to see how it represents existence.
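Since the slide's pseudo code itself is not reproduced in these notes, the following is one possible Python sketch of such a skeleton, consistent with the description above. The class name, methods, and the composition-order logic are hypothetical stand-ins, not the author's actual interface; observer-facing methods are limited to read-only queries, as the text requires:

```python
class Quantum:
    """Sketch of a direct-representation quantum: its intension is its components."""
    def __init__(self, components=()):
        self.components = tuple(components)  # intensional hierarchy, held by value

    def compose(self, other):
        """Compose a higher-order quantum from two preexisting quanta."""
        return Quantum((self, other))

    def decompose(self):
        """Return the component quanta; empty for a fundamental quantum."""
        return self.components

    # Observer-facing (read-only) query; it does not change quantum state.
    def order(self):
        """Order of composition: 0 for a fundamental quantum."""
        if not self.components:
            return 0
        return 1 + max(c.order() for c in self.components)

# Two fundamental quanta composing a first-order quantum:
a, b = Quantum(), Quantum()
ab = a.compose(b)
```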
• Quantum objects spend their lives in strictly defined quantum states and can change their state and relations only by means of uninterruptible transitions known as quantum leaps. Because of symmetry, the quantum states and relations are naturally hierarchical (degenerate, in quantum terminology). Quantum systems cannot interact with one another directly. Instead, every interaction occurs via an intermediate boson. The various intermediate bosons are mediators of the fundamental interactions (e.g., temporal field bosons mediate the temporal interaction, photons mediate the electromagnetic interaction, gluons mediate the strong interactions, and W± and Z0 bosons mediate the weak interactions). The intermediate bosons themselves are Quantum objects. The partially ordered hierarchical structure of Quantum objects allows us to directly add, subtract, multiply, and divide vectors with mixed dimensions. In turn, that allows us to perform dot products and projections of vectors with mixed dimensions. Any missing components are treated as zero for addition and subtraction, and one for multiplication and division. Causality is automatically enforced because Quantum object differences only exist between preexisting Quantum objects. The system implements causal dynamical triangulation of simplicial complexes. It also performs structural specialization via composition of differences. It performs generalization via decomposition and removal of unstable quanta. All of existence is caused by a process of variation followed by selection. Every state, process, and system in the universe is caused by the same self-organizing, self-modifying recursive process. Variation provides the constituents that undergo selection, and selection provides the constituents that undergo variation. Variation is caused by spontaneous symmetry breaking due to the composition of symmetric differences between quanta, while selection is based on retention of those compositions of differences that are symmetric.
Symmetric compositions are stable. Unstable compositions decompose into lower order forms until a stable configuration is reached. Alternatively, those lower level components may create new stable compositions of quanta that reduce the energy level in the current energy landscape, thereby creating a stable basin of attraction. The Quantum class creates active Quantum objects. Those objects represent energy quanta. The system functions like a self-defining, self-modifying hierarchical statechart, except that both the entire statechart and every event are Quantum objects in their own right. Intermediate virtual bosons are direct representation relations, quanta, and thus hierarchical state machines in their own right, not events. Quantum state transitions, and thus new quantum states, are composed from symmetric differences between preexisting Quantum objects. The entire system evolves over time, creating a quantum structure that represents a dynamic energy landscape that seeks to find local minima in energy levels across all levels of scale.
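The variation-and-selection loop described above can be illustrated with a deliberately toy model: variation proposes random compositions, and selection retains only those whose differences cancel. The numeric encoding of quanta and the symmetry test are hypothetical stand-ins chosen purely for illustration:

```python
import random

def vary(pool):
    """Variation: compose a random pair of quanta into a candidate composition."""
    return (random.choice(pool), random.choice(pool))

def is_symmetric(pair):
    """Toy selection rule: a composition is stable iff its differences cancel."""
    return pair[0] + pair[1] == 0

def evolve(pool, steps=1000, seed=0):
    """Repeated variation followed by selection; only symmetric compositions survive."""
    random.seed(seed)
    stable = []
    for _ in range(steps):
        candidate = vary(pool)
        if is_symmetric(candidate):
            stable.append(candidate)
    return stable

# Quanta encoded as signed integers; +q and -q model a quantum and its mirror partner.
survivors = evolve([-2, -1, 1, 2])
```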
• Here an observer must define the k-tuple. The k-tuple is an ordered list of k fields, each of which contains a member of some domain D.
- The observer must identify the fields in the k-tuple.
- The observer must sequence the fields in the k-tuple.
- The observer must identify the domain of each field.
- The observer must define the possible values in each domain.
- The observer must define the relation and its semantic meaning.
- The observer must select values from each domain for each field in the tuple, or write a program or algorithm that can do so.
- The observer must determine whether the selected values satisfy the relation, or write a program or algorithm for the relation that can be executed to determine whether the relation holds.
The possible values the tuple can hold are the values in the Cartesian product of its domains. Therefore, the possible values the relation can relate consist of the possible values in its tuple. Notice that the definition of a relation in set theory and logic relies on something outside itself (an observer) to define the object or sets of objects a relation holds between. In other words, an observer must define which objects are in the domain of the relation. In addition, an observer must define the relation and its semantic meaning. An observer must define the tuple and decide what sets of inputs the relation relates. An observer must define which kinds of objects from the relation's domain belong to each tuple. An observer must assign a truth value to each k-tuple represented by the relation, representing whether the property the relation stands for holds between the objects in that tuple. An observer can do this directly, via direct inspection and assignment, or indirectly, by defining a predicate function to represent the relation and then computing the value of the function to assign the truth value to each k-tuple. All of these things make relations in IR indirect and observer dependent.
IR relations are inherently dependent on observers, observer participation, and observer centric representation. Instead of only representing the problem domain, IR relations complicate the representation of the problem domain by mixing its representation with the requirement of observability and the need for observer centric representation and participation. By eliminating the constraint that we only represent observables and eliminating the need for observer centric representation and participation, we can simplify the mathematical representation of relations, because then only the problem itself needs to be represented.
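The observer's steps listed above (define the domains, form the Cartesian product of possible tuples, then use a predicate to decide where the relation holds) can be followed literally in code. A minimal sketch with a hypothetical less-than relation as the observer-supplied meaning:

```python
from itertools import product

# The observer defines the domains of a 2-place relation.
X1 = {1, 2, 3}
X2 = {1, 2, 3}

# The observer defines the relation's semantic meaning as a predicate.
def less_than(a, b):
    return a < b

# The possible tuples are the Cartesian product of the domains;
# the relation is the subset of tuples the predicate assigns "true" to.
L = {t for t in product(X1, X2) if less_than(*t)}
```

Every step here is observer-supplied, which is exactly the observer dependence the text objects to.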
• In IR, a relation is a mathematical object in its own right. A k-ary IR relation is a subset of the Cartesian product of k sets.
IR Relation Definition: A relation L over the sets X1, ..., Xk is a subset of their Cartesian product, written L ⊆ X1 × ... × Xk, where × represents the Cartesian product. The sets Xj for j = 1 to k are called the domains of the relation. Each member of set Xj denotes one of the possible values of the j'th term in the k-ary tuple that L is a relation over. An IR relation L is defined over the Cartesian product of the domains of each field Xj in the relation's tuple because the Cartesian product set represents all possible combinations of tuple values that can exist in relation L. It is then up to the observer to ascertain which of those combinations the relation holds true for. In IR, a mathematical relation is defined as an object that has its existence as such within a definite context or setting. It is literally the case that to change this setting is to change the relation that is being defined. The particular type of context that is needed here is formalized as a collection of elements from which the elements of the relation in question are chosen. This larger collection of elementary relations or tuples is constructed by means of the set-theoretic product commonly known as the Cartesian product.
Terminology: A relation L is defined by specifying two mathematical objects as its constituent parts. The first part is called the figure of L, denoted figure(L) or F(L). The second part is called the ground of L, denoted ground(L) or G(L). In the special case of a finitary relation, for concreteness a k-place relation, the concepts of figure and ground are defined as follows:
- The ground of L is a sequence of k nonempty sets, X1, ..., Xk, called the domains of the relation L.
- The figure of L is a subset of the Cartesian product taken over the domains of L, that is: F(L) ⊆ G(L) = X1 × ... × Xk.
Strictly speaking then, the relation L consists of a couple of things, L = (F(L), G(L)).
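The figure/ground couple just defined can be written down directly: the ground is the sequence of domains, and the figure is the subset of their Cartesian product where the relation holds. A minimal sketch:

```python
from itertools import product

def make_relation(domains, predicate):
    """Return a relation as the couple (figure, ground) per the IR definition."""
    ground = tuple(domains)  # G(L): the sequence of domains X1, ..., Xk
    figure = {t for t in product(*domains) if predicate(*t)}  # F(L), a subset of X1 x ... x Xk
    return (figure, ground)

# Equality on {0, 1} as a 2-place relation:
F, G = make_relation([{0, 1}, {0, 1}], lambda a, b: a == b)
```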
• By redefining the mathematical meaning of 'relation' in DR, we give ourselves the ability to create a mathematically complete, consistent, closed, autonomous, self-bootstrapping mathematical quantum field theory description of the universe. That description is not subject to any observer centric bias or to the limitations of observer neural knowledge representation, and it is not subject to the incompleteness imposed by the indirect representation of information, the Heisenberg uncertainty principle, measurement uncertainty, or light-speed limitations on the transmission of information. Yet, at higher levels of direct abstraction, that same representation can represent all observers and every observer's point of view, because it can represent each observer's brain, body, and the computation of all relations between each observer and the observer's environment from that observer's point of view.
• For a direct relation L, there is no separate figure and ground. Instead, L has an intension and an extension. L’s intension represents its definition, meaning, and intensional existence in terms of the objects L is composed of and in terms of the direct relations between those objects. L’s extension represents its extensional existence. The intension of L is represented by an infinite order functional that is the recursive composition of the direct relations that represent the objects and object relations that L is composed of. A functional is a function that can take functions as arguments and return a function or a number as a result. An infinite order functional can take infinite order functions as arguments and return them as results.
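The notion of a functional used above (a function that takes functions as arguments and returns a function or a number) is easy to make concrete. A minimal sketch:

```python
def compose(f, g):
    """A functional: takes two functions and returns their composition f(g(x))."""
    return lambda x: f(g(x))

def evaluate_at_zero(f):
    """A functional: takes a function and returns a number."""
    return f(0)

inc = lambda x: x + 1
double = lambda x: 2 * x

inc_then_double = compose(double, inc)  # x -> 2 * (x + 1)
```

Recursive composition of such functionals, nested to any depth, is the pattern the text appeals to when it speaks of an infinite order functional.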
• This is equivalent to creating a higher order generalization of infinite order category theory in which we eliminate the distinction between objects and morphisms (relations). Category theory represents things in terms of objects and the morphisms between objects. A morphism is a generalization of a relation. Higher order category theory extends this idea by allowing morphisms to relate morphisms. Thus, a 2nd order morphism is a morphism that relates morphisms. This idea can be extended to allow 3rd order morphisms that relate morphisms of morphisms. In the limit, this can be extended to infinite order category theory, in which we allow morphisms of any finite order. In DR, we further generalize infinite order category theory by replacing collections of objects and morphisms with collections of hierarchy objects, where a hierarchy object (HO) is an active executable process that functions as a superposition of an object and a relation. We then create infinite order categories in terms of a collection of infinite order hierarchy objects. This generalization allows us to represent infinite order relations between infinite order objects. Infinite order objects are then composed of infinite order relations. A hierarchy object is the direct representation of particle wave duality in quantum physics.
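The tower of morphisms sketched above can be illustrated with ordinary functions standing in for morphisms: a second-order morphism maps morphisms to morphisms, and a third-order morphism maps those transformers in turn. A minimal sketch (the particular transformers are hypothetical examples):

```python
# First-order morphisms: functions between objects (here, numbers).
square = lambda x: x * x
negate = lambda x: -x

# Second-order morphism: maps a morphism to a morphism.
def twice(f):
    """Send the morphism f to the morphism f∘f."""
    return lambda x: f(f(x))

# Third-order morphism: maps second-order morphisms to second-order morphisms.
def lift(second_order):
    """Send a morphism-transformer to the transformer that applies it twice."""
    return lambda f: second_order(second_order(f))

fourth_power = twice(square)  # x -> (x^2)^2 = x^4
```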
• Now that we have a definition for direct relations, we need to define the direct representational counterpart of the concept of 'number'. To do this, we will define the direct constructible universe K. The direct numbers are important because they and their relations provide a direct representation of energy quanta and their quantum field interactions. In other words, physical existence itself is a kind of natural physical mathematical system. That system's numbers are energy and dark energy quanta. Their relations represent the quantum field interactions and quantum state compositions that compose all of existence.
• K is a generalized infinite order inner hierarchy object theoretic model for direct numbers and direct mathematics. K is the direct representational analog of Gödel's constructible universe L. It is called the K universe because direct representation existed before indirect representation in the universe. The direct constructible universe K can be thought of as being built in stages resembling the von Neumann universe, V. The stages are indexed by direct ordinals. The direct ordinals represent direct numbers. In von Neumann's universe, at a successor stage, one takes Vα+1 to be the set of all subsets of the previous stage, Vα. By contrast, in K one uses only those subsets of the previous stage that are:
- definable by a formula in the formal language of direct representation
- with parameters from the previous stage
- with the composition and decomposition operators operating over the previous stage
By limiting the model to hierarchy objects defined only in terms of what has already been constructed, one ensures the resulting model can only represent that which can exist in the direct constructible universe.
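The restricted-powerset idea behind these stages can be illustrated with a toy model. Here "definable" is approximated by a fixed list of Python predicates standing in for formulas of the formal language, with parameters drawn from the previous stage; this sketches only the construction pattern (successor stages limited to definable subsets), not K itself:

```python
def next_stage(stage, formulas):
    """One successor step: add only the definable subsets of the previous stage."""
    new = set(stage)
    for phi in formulas:
        for param in stage:  # parameters must come from the previous stage
            subset = frozenset(x for x in stage if phi(x, param))
            new.add(subset)
    return new

# Toy "formulas" playing the role of the formal language's definable conditions.
formulas = [
    lambda x, p: x == p,   # the singleton {p}
    lambda x, p: x != p,   # the complement of {p} within the stage
]

stage0 = {0, 1}
stage1 = next_stage(stage0, formulas)
```

Unlike a full powerset step, only subsets picked out by some formula-plus-parameter pair appear at the next stage.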
• The energy functional is the total energy of a certain system, as a function of the system's state. In the energy methods of simulating the dynamics of complex structures, a state of the system is often described as an element of an appropriate function space. To be in this state, the system pays a certain cost in terms of the energy required by the state. This energy is a scalar quantity, a function of the state, hence the term functional. The system tends to develop from a state with higher energy (higher cost) to a state with lower energy, so local minima of this functional are usually related to the stable stationary states. Studying such states is part of optimization, where the terms energy functional or cost functional are often used to describe the objective function. In Hamiltonian systems, the energy functional is given by the Hamiltonian.
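The descent from higher-energy to lower-energy states described above can be illustrated with a toy scalar energy functional minimized by plain gradient descent. The quadratic energy here is a hypothetical example chosen for clarity, not one from the text:

```python
def energy(x):
    """A toy energy functional of the system's state x, with a minimum at x = 3."""
    return (x - 3.0) ** 2 + 1.0

def gradient(x, h=1e-6):
    """Numerical derivative of the energy with respect to the state."""
    return (energy(x + h) - energy(x - h)) / (2 * h)

def descend(x, rate=0.1, steps=200):
    """The system develops from higher-energy states toward lower-energy ones."""
    for _ in range(steps):
        x -= rate * gradient(x)
    return x

x_min = descend(10.0)  # converges toward the stable stationary state near x = 3
```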
• Here energy quanta is used generically to mean “energy quanta and dark energy quanta”.
• In mathematics, a geodesic is a generalization of the notion of a "straight line" to "curved spaces". In the presence of a metric, geodesics are defined to be (locally) the shortest path between points in the space. In the presence of an affine connection, geodesics are defined to be curves whose tangent vectors remain parallel if they are transported along them. Geodesics are of particular importance in general relativity, as they describe the motion of inertial test particles. In relativistic physics, geodesics describe the motion of point particles under the influence of gravity alone. In particular, the path taken by a falling rock, the path of an orbiting satellite, and the shape of a planetary orbit are all geodesics in curved spacetime.
Hyposurfaces: A hyposurface is the direct representational converse of a hypersurface in differential geometry. Whereas a hypersurface is defined as a subspace of a larger dimensional space, a hyposurface is defined as a superspace of a smaller dimensional space. Whereas a hypersurface is a submanifold of n-1 dimensions of some enclosing manifold of n dimensions, a hyposurface is a supermanifold of n+1 dimensions of an n-dimensional submanifold. Each hyposurface adds an orthogonal dimension to its submanifold. Time is the hyposurface of the infinite singularity, so it creates the first dimension from infinity. Therefore, time is orthogonal to infinity. The finite is orthogonal to the infinite. We also know the finite is orthogonal to the infinite because time is a kind of energy. All energy is composed of energy quanta. All energy quanta are relations between a source and a sink. The original and ultimate source and sink is the infinite singularity. The first energy quanta are the temporal and anti-temporal quanta. Those quanta are the relation between the orthogonal symmetric decomposition of the singularity and itself.
Furthermore, from Physics we know that time is orthogonal to the first dimension of space because a Lorentz transformation is equivalent to a rotation around the origin in four dimensions when a fourth imaginary coordinate is introduced as c·t·sqrt(−1). That means a distance in time is equal to sqrt((c(t2 − t1)·sqrt(−1))^2) = i·c·(t2 − t1). i is dimensionless, c is in meters/second, and t is in seconds, so the resulting coordinate has units of meters, even though it expresses time in seconds. That works because the dimension of time is orthogonal to length in space in the same way that the imaginary numbers are orthogonal to the real numbers. A dimension in an orthogonal space is orthogonal to the dimension that follows it and orthogonal to the dimension that precedes it. The dimension that precedes time is the singularity. Therefore, once again, via dimensional analysis of the Lorentz transformation, and the fact that a dimension in an orthogonal space is orthogonal to the dimensions that follow and precede it, time must be the orthogonal decomposition of the singularity. An n+1 dimensional hyposurface is defined by orthogonally composing the products of decomposition of the n-dimensional hypersurface that represents the surface of its submanifold, starting from the infinite singularity. The infinite singularity exists in an infinite topological space. That infinite topological space is unbounded, unquantifiable, and contains no states, no relations, and no processes. It is an undifferentiated scalar field that exists in and of itself. In other words, it is not a field in spacetime. It is just an undifferentiated scalar potential field. That field has an absolute potential that represents the amount of energy in the singularity, but that potential is not quantifiable in DR. It simply represents itself as itself. The concept of 'quantity' is an indirect representation.
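The rotation argument above can be checked numerically: under a standard Lorentz boost, the quantity x² + (ict)² = x² − (ct)² is left unchanged, just as a rotation preserves a sum of squares. A minimal sketch (the sample event and boost velocity are arbitrary):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def boost(x, t, v):
    """Standard Lorentz boost along x with velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

def interval(x, t):
    """x^2 + (i*c*t)^2 = x^2 - (c*t)^2: the invariant squared interval."""
    return x**2 - (c * t) ** 2

x, t = 1.0e3, 2.0e-6          # an event: 1 km away, 2 microseconds later
xp, tp = boost(x, t, 0.6 * c)  # the same event seen from a frame moving at 0.6c
```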
The concept of 'quantity' as an indirect representation of an amount does not exist in DR, but the absolute potential of the singularity itself does exist in DR. The infinite topological space that represents the singularity is associated with two additional finite topological spaces. One of those finite topological spaces represents the shrinking spacetime associated with the collapsing omega white and black holes from the end of the previous instance of existence, and the other represents the expanding spacetime from the current instance of existence we exist in. As the omega white and black holes from the previous instance of existence consume the zero-point virtual energy and dark energy that comprise the spacetime they exist in, they consume their own gravitational field, thereby reducing the gravitational field strength that compresses and contains the omega black and white holes' singularity. As the gravitational field strength is reduced by an amount of gravitational potential energy −h in the topological space that represents shrinking spacetime, the singularity spontaneously decomposes and produces virtual temporal energy open strings of energy h/2 and anti-temporal dark energy open strings of energy −h/2, and their difference h/2 − (−h/2) produces a difference in time +h in the finite topological space that represents expanding spacetime. The resulting 'spontaneous' decomposition of the singularity produces graded virtual temporal and virtual anti-temporal open strings. As the singularity continues to decompose, the potential difference in each of those open string pairs increases until their wavelengths reach an integer multiple of a Planck length, at which time the open string pair can close, forming a temporal field energy quantum and an anti-temporal field dark energy quantum.
The temporal field energy quantum represents the event horizon of a black hole quantum microsingularity, while the anti-temporal field dark energy quantum represents the event horizon of a white hole quantum microsingularity. The white hole quantum microsingularity is a temporal energy source and an anti-temporal dark energy sink, whereas the black hole quantum microsingularity is an anti-temporal dark energy source and a temporal energy sink. In other words, the white and black hole quantum microsingularities function as sources and sinks relative to each other, so they have opposite temporal charges, and they attract each other. In expanding spacetime, their difference is positive and it represents a new quantum of time. The same thing happens in the reverse direction in the shrinking topological space, so −x/2 − x/2 = −x. In other words, time runs backwards inside the event horizons of black and white holes, and spacetime shrinks there. Note that if the net amount of gravitational potential due to gravitational black and white holes is increasing in expanding spacetime (because more spacetime is being produced than consumed), then the virtual zero-point energy and dark energy in the spacetime consumed by the gravitational black and white holes is compressed into the singularity and stored there by the increasing gravitational potential. In essence, the singularity functions as an undifferentiated scalar field capacitor. The singularity's undifferentiated scalar field 'capacitor' is charged when compressed and contained by an increasing gravitational field, and it discharges when decompressed and released by a decreasing gravitational field. The infinite singularity can be decomposed and partitioned until the omega black and white holes collapse and there is no more undifferentiated potential in the singularity to decompose. Thus there can be a very large number of dimensions inside infinity.
Each successive hyposurface and its enclosed hypospace is the cumulative hierarchy composed from the transfinite recursive decomposition of the hypervolumes enclosed by its hypersurfaces. Thus the first dimension is a decomposition of infinity. A two dimensional space is a decomposition of a one dimensional space, etc. Higher dimensional spaces are defined in terms of their relation to the spaces that compose them. Because a hyposurface is defined in terms of the orthogonal decomposition of the interior of its subspace, a hyposurface has no exterior. An observer cannot observe a hyposurface from the outside. Instead, from an observer's perspective inside the hypovolume enclosed by a hyposurface, the interior of a hypovolume looks like space to us. Since our brain represents things in terms of how they relate to each other in four dimensional spacetime, we can observe things in up to four dimensions: dimensions zero, one, two, and three. Dimension zero has no spatial extent so we cannot see it visually, but it is finite. We experience it as existence in time. We can see the effects of the passage of time as the changing state of objects in space. We experience a local change in the rate of time as a local change in the force of gravity. The other three dimensions we can see. However, since we exist spatially in three dimensions, what we see visually is actually a two dimensional retinal projection of the three dimensional space we exist in. Our brain reconstructs an abstraction of the third dimension from the binocular disparity between the positions of corresponding points in the two dimensional images projected on each retinal surface, from the occlusion of background object images by foreground object images, and from the relative motion of shadows in the two dimensional images.
Even if we lose an eye, or are born with only one functioning eye, the brain can use foreground occlusion of images of objects in the background and the relative motion of shadows to reconstruct an abstraction of three dimensional space. The bounding surfaces of objects we observe are the hypersurfaces of the hypospace we exist in. Hypospace is defined from the inside out. In other words, the universe has no exterior. It is impossible to travel outside the universe. The universe is all there is. We are part of the universe. Part of the universe composes us. We cannot escape our own composition. This also explains why all objects are composed in terms of the objects and relations between the objects that compose them. In this context, 'object' refers to every particular thing that exists. It includes everything from the simplest quantum state, and the simplest energy quantum, all the way up to and including the largest structures in the universe. It explains why no object can exist before the existence of the objects it is constructed from. The composition of hyposurfaces and hypospaces also explains why big things are composed from smaller things, and not vice-versa. Hyposurfaces and hypospace provide the mathematical foundation for direct representation and the transfinite recursive orthogonal decomposition of the cumulative hierarchy of direct integers.
• This just says that the hierarchy object that represents K is composed from the transfinite composition of all lower order hierarchy objects. K is a direct representational finite Hilbert space composed from the transfinite recursive composition of hierarchy objects by value.
• The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are required to be complete, a property that stipulates the existence of enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics, physics, and engineering, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory, which forms the mathematical underpinning of the study of thermodynamics. John von Neumann coined the term "Hilbert space" for the abstract concept underlying many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions. Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. 
At a deeper level, perpendicular projection onto a subspace (the analog of "dropping the altitude" of a triangle) plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to a set of coordinate axes (an orthonormal basis), in analogy with Cartesian coordinates in the plane. When that set of axes is countably infinite, this means that the Hilbert space can also usefully be thought of in terms of infinite sequences that are square-summable. Linear operators on a Hilbert space are likewise fairly concrete objects: in good cases, they are simply transformations that stretch the space by different factors in mutually perpendicular directions in a sense that is made precise by the study of their spectrum.
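The two identities named above, the Pythagorean theorem and the parallelogram law, can be checked directly for any inner product space. A minimal sketch in plain Python (the vectors are arbitrary illustrative values):

```python
import math

def dot(u, v):
    """Inner product on R^n — the structure that makes it a Hilbert space."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Length induced by the inner product: ||u|| = sqrt(<u, u>)."""
    return math.sqrt(dot(u, u))

x = [1.0, 2.0, 2.0]
y = [3.0, 0.0, -4.0]

# Parallelogram law: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
lhs = norm([a + b for a, b in zip(x, y)]) ** 2 + norm([a - b for a, b in zip(x, y)]) ** 2
rhs = 2 * norm(x) ** 2 + 2 * norm(y) ** 2
assert math.isclose(lhs, rhs)

# Pythagorean theorem: if <u, v> = 0, then ||u+v||^2 = ||u||^2 + ||v||^2
u = [1.0, 0.0, 0.0]
v = [0.0, 5.0, 0.0]
assert dot(u, v) == 0
assert math.isclose(norm([a + b for a, b in zip(u, v)]) ** 2, norm(u) ** 2 + norm(v) ** 2)
print("parallelogram law and Pythagorean theorem hold")
```

The parallelogram law is in fact what distinguishes norms that come from an inner product; a norm satisfying it always arises from some inner product.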
• The state-less state cannot be created or destroyed. There is no state to create or destroy! Since the singularity is infinite, it also means it cannot have a beginning, or an end. It cannot have a first cause. That means it is logically and mathematically impossible for the universe to have a first cause.
• In general terms, unitary means 'of or relating to a unit', i.e., having the nature of a whole unit. Energy and dark energy quanta are the units of the finite. They are the fundamental units of existence. Those units are instances of existence, not types. They compose each instance of each thing that exists by value. Things that are unitary have an identity. In other words, they exist as individual units. Unitary operators are just automorphisms of Hilbert spaces, i.e., they preserve the structure (in this case, the linear space structure, the inner product, and hence the topology) of the space on which they act. Mathematically, unitary operators acting on an orthonormal Hilbert space preserve the topological structure of the Hilbert space. 
They preserve the relations between its lengths and angles. They preserve its symmetries. Unitarity also means the probability amplitudes that describe the outcome of a scattering process always sum to 1. Things that are unitary can be represented by a unitary matrix. A unitary matrix U is an n x n complex matrix whose inverse equals its conjugate transpose. See http://en.wikipedia.org/wiki/Unitary_matrix for further mathematical details. Things that are stable exist for at least a Planck time. Stability implies the existence of symmetry. Symmetry is an invariance under a change of state. The invariance implies the existence of one or more invariant degrees of freedom. Change, and thus quantum state transitions, can occur and persist in those invariant degrees of freedom. The persistence of those changes allows objects to persist in time. It allows stable patterns of energy and dark energy to persist. Only stable energy patterns can participate in the composition of higher order networks and the composition of higher order energy patterns and higher order forms of energy, dark energy, matter, and dark matter. In short, all stable things that exist are symmetrical. Unstable energy and dark energy patterns decay until they become stable and symmetry is restored.
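The defining property above, that U's inverse equals its conjugate transpose, can be verified numerically. A minimal sketch in plain Python; the specific matrix is one standard parametrization of a 2x2 unitary, chosen for illustration:

```python
import cmath
import math

def mat_mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def conj_transpose(A):
    """Conjugate transpose (the 'dagger') of a 2x2 complex matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

theta, phi = math.pi / 5, math.pi / 7
c, s = math.cos(theta), math.sin(theta)
# A standard parametrization of a 2x2 unitary matrix
U = [[cmath.exp(1j * phi) * c, -s],
     [s, cmath.exp(-1j * phi) * c]]

# Unitarity: U times its conjugate transpose is the identity
I2 = mat_mul(U, conj_transpose(U))
assert all(abs(I2[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))

# Unitary operators preserve lengths, so probability amplitudes still sum to 1
v = [0.6, 0.8j]                                        # |0.6|^2 + |0.8|^2 = 1
Uv = [sum(U[i][k] * v[k] for k in range(2)) for i in range(2)]
assert abs(sum(abs(z) ** 2 for z in Uv) - 1.0) < 1e-12
print("U is unitary and preserves the norm")
```

The second check is the "probability amplitudes sum to 1" statement from the text: applying a unitary operator to a normalized state vector leaves its total squared amplitude equal to 1.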
• There is a maximum of n(n-1)/2 relations between the quantum states that compose a composite energy quantum.
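The n(n-1)/2 count is just the number of unordered pairs among n states, and it is the same formula the later slides apply to the photon (4 quanta, 6 relations) and the double tetrahedron (8 states, 28 relations). A minimal sketch cross-checking it by explicit enumeration:

```python
from itertools import combinations

def max_relations(n):
    """Maximum number of pairwise relations among n quantum states: n(n-1)/2."""
    return n * (n - 1) // 2

# Cross-check the formula against an explicit enumeration of unordered pairs
for n in range(2, 10):
    assert max_relations(n) == len(list(combinations(range(n), 2)))

print(max_relations(4))  # 6 relations among the 4 quanta of a photon
print(max_relations(8))  # 28 relations among the 8 states of a double tetrahedron
```

Subtracting the 12 relations internal to the two original tetrahedra from 28 leaves the 16 new relations the later slide refers to: 28 - 2*6 = 16.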
• Thus the singularity is the ultimate source and sink for all energy in the universe.
• We have to be careful defining potential energy. There are cases in which the potential energy of a system can be increased, yet no additional energy is stored anywhere in the system. For example, this occurs when pumping water from a lower reservoir to a higher reservoir. Water with the same temperature, volume, and pressure in a reservoir at the top of a mountain has no more energy quanta stored in it than the same water in a reservoir at the bottom of a valley. Pumping water up a mountain does not add any energy to the water per se once it stops moving. Nor does it add any additional energy to the gravitational field. In this case, potential energy does not exist in the sense of an increase in energy stored somewhere in a system or substance. The increase in potential energy is not really stored anywhere. What changes in this case is the configuration of the resulting system. The height component of the water's position state is increased relative to the local gravitational field. When water is pumped uphill into a reservoir, the distance that gravity can exert a force over when the water is released is increased. Thus gravity can impart more kinetic energy to the water as it flows downhill than it can if the water existed in two reservoirs at the same elevation. Thus, there are no additional potential energy quanta stored in the water or in the gravitational field when water is pumped uphill. Instead, the configuration of the system is changed, and that configuration change increases the distance an existing force field can operate over. It increases the amount of work that force field can do on the resulting system. Of course, there are also systems in which potential energy is stored. For example, when a battery is charged, additional charge is added to the source. Reducing the number of available degrees of freedom is nature's most powerful mechanism for increasing potential energy. It is how black holes convert all energy into potential energy. 
They keep compressing spacetime, increasing energy density by putting more and more energy into a smaller and smaller volume of spacetime. Then they sequentially reduce the dimensions of the remaining spacetime from 4 dimensions to 3 to 2 to 1 and ultimately to 0. By the time the energy in the system reaches zero dimensions, it has all been compressed into absolute zero entropy bosonic potential grand unified field energy. In other words, entropy is locally reduced inside a black hole. Black holes are nature's energy recyclers. They reduce the dimension of all energy and matter to zero in the singularity, thereby converting all energy into infinite potential, and making it all available to do work starting in the next big bang. Black holes prevent the heat death of the universe. Outside the event horizon of gravitational black holes, entropy always increases, but inside, it decreases. Entropy can decrease inside black holes because the arrow of time is reversed there. Converting matter to energy also works by reducing the number of degrees of freedom. Matter has a relatively large number of degrees of freedom. Energy has a much smaller number of degrees of freedom. When matter is converted into energy, the energy that was locked up in the large number of stable degrees of freedom that represent matter is focused into far fewer degrees of freedom in energy. To a lesser extent, the same thing happens when the energy in an incoherent light beam is focused into a coherent beam by a laser, or even when a weak light is brought to a focus by a lens or mirror in an optical system. An interesting extension of the same concept would be to try to design a temporal field reflector, or a temporal lens. In theory, such a system could achieve energy densities about 1.5 million times as intense as those achievable via focused electromagnetic fields.
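The pumped-storage example above can be made concrete with the standard mechanics: the extra work gravity can do on the released water is E = m·g·Δh, and the ideal outflow speed follows from ½mv² = mgΔh. A minimal sketch, with hypothetical reservoir numbers chosen only for illustration:

```python
import math

g = 9.81          # m/s^2, standard gravity
mass = 1000.0     # kg of water (hypothetical)
height = 50.0     # m of head between the reservoirs (hypothetical)

# Raising the water changes the system's configuration: gravity can now
# act over an extra 50 m of height, so it can do this much additional work.
energy = mass * g * height           # joules, E = m * g * h

# Ideal speed at the bottom, from (1/2) m v^2 = m g h.
# Note the mass cancels: the outflow speed depends only on the height.
speed = math.sqrt(2 * g * height)    # m/s

print(f"recoverable work: {energy:.0f} J")
print(f"ideal outflow speed: {speed:.2f} m/s")
```

That the speed is independent of the mass illustrates the text's point: nothing is added to the water itself; only the configuration (the height over which the field can act) changes.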
• Energy quanta form simplex networks because a simplex describes the minimum geodesic path length, and thus the minimum energy difference between any two points in an n-dimensional space.
• Simplicial geometry describes the smallest volume that can contain every energy quantum that composes the existence of that volume.
• The surface of an object in spacetime is represented by a minimal energy geodesic isosurface in the quantum energy field that composes the object. The surface of the object is a quantum field event horizon. That event horizon is the manifold of stationary points where the net change in the quantum energy field that composes the object is zero. That minimal energy geodesic isosurface can be computed by the Hamilton-Jacobi equation of the quantum energy field.
• The arrow of time points in opposite 'temporal directions' in the universe and mirror-verse, but it is important to understand that those directions are not directions in space. The arrow of time is the direction of causality. It points from high temporal energy density towards lower temporal energy density. It points from the past towards the future. It points in the direction of increasing entropy. Time always flows away from the singularity. It flows away from white hole quantum microsingularities and towards black hole quantum microsingularities. Note that the flow of time has no spatial direction. Time is not a spatial dimension. It is a temporal dimension. The spatial and temporal dimensions are qualitatively and quantitatively different. There is an invariant relation between them because temporal energy composes spacetime. That invariant relation is represented by the constant speed of light at all points in space. It is represented by the Lorentz transform. Also note the gap between the infinite singularity and the start of time and mirror-time. That gap represents the existence of the hypertime field. Physically, it is represented by the existence of virtual energy and virtual dark energy open strings. Those strings are differences in the singularity that are too small to be physically quantized; i.e., the energy difference is too small to form an event horizon and thus too small to form an energy quantum. Thus, sub-Planck scale energy differences represent less than one physical unit of energy. They represent less than one Planck time, and less than one Planck energy. Since energy is quantized, it must take a certain amount of it to form a quantum. Thus, the physical equivalent of zero actually starts with the existence of the first temporal (and mirror temporal) energy quantum. Energy quanta are unitary precisely because of their quantization. 
They are quantized precisely because they have enough energy to form an event horizon; i.e., the difference in the singularity they represent is large enough to form an event horizon. That event horizon is a closed string. It is the first naturally quantifiable unit of difference between the singularity and itself. This implies that a temporal field energy quantum has a circumference of one Planck length, and thus a wavelength of one Planck length. Thus its Schwarzschild radius is the reduced Planck constant ħ. The ‘temporal distance’ between the singularity and the temporal boson is then iħ/2. In other words, the minimum uncertainty (due to lack of quantization) is iħ/2. That explains the origin and physical cause of Heisenberg’s Uncertainty principle. In effect, the uncertainty is due to an error in our conception of the meaning of the number zero. It is caused by mistakenly conceiving the origin of the number system as zero, when in fact, to exist in one-to-one correspondence with physical existence, the true origin must be infinity. In turn, that means we need to introduce a new field of numbers between infinity and plus and minus zero in order to properly represent sub-Planck scale physical phenomena. I call that the hypertime field because it exists in an independent ‘fractional dimension’ beneath time. That dimension is not representable, or accessible using current numbers because it exists ‘inside zero’. If we want to represent it mathematically, then we must introduce an independent number field between the singularity and the beginning of time, whose maximum magnitude is iħ/2 and whose minimum value is ∞. The direct representational plus zero is then equivalent to iħ/2 in the current indirect representation of numbers. This is not simply a notational convenience. It is fundamental. We cannot continue to treat zero as the origin, because it ignores the distinction between the concepts of zero and infinity. 
The current origin of the number system is not in one-to-one correspondence with the origin of existence. Consequently, the current number system is inconsistent and useless for describing sub-Planck scale phenomena or phenomena at superluminal velocities. The hypertime field is physically represented by non-unitary open energy strings. Hypertime is the region between the singularity and the beginning of time; i.e., it is the region between infinity and the beginning of energy quantization. It is non-unitary. It represents a fraction of a Planck unit of time, and less than a Planck unit of energy. If energy quanta represent natural quantitative units (nature's equivalent of a complex integer), then the beginning of time is plus zero, and the beginning of mirror-time is minus zero. The beginning of time marks the beginning of quantization and the origin of 'numbers' as conventionally conceived in indirect representation.
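For reference, the standard quantum mechanical relation that the passage is reinterpreting is the canonical commutator and the Heisenberg bound it implies; the ħ/2 here is the textbook result, independent of the hypertime proposal:

```latex
[\hat{x}, \hat{p}] \;=\; \hat{x}\hat{p} - \hat{p}\hat{x} \;=\; i\hbar
\quad\Longrightarrow\quad
\Delta x \,\Delta p \;\ge\; \frac{\bigl|\langle [\hat{x}, \hat{p}] \rangle\bigr|}{2} \;=\; \frac{\hbar}{2}
```

The imaginary unit in the commutator is where the text's iħ/2 figure comes from; in the standard treatment it is the magnitude ħ/2 that bounds the product of the uncertainties.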
• Strings that are an integer multiple of a Planck length can form closed loop event horizons. Thus, the ground state wavelength of a temporal boson is one Planck length and the circumference of a ground state temporal boson is one Planck length. First, I should mention that there is no such thing as coordinates in physical existence. Coordinates (of all kinds) are artifacts of indirect representation. If we are going to represent things indirectly using coordinates, we need to be careful to establish a correlation between the representation of coordinates and that of direct representation. In special relativity, the Lorentz transformation converts between two different observers' measurements of spatial position and time, where one observer is in constant (unaccelerated) motion with respect to the other. The laws of physics need to be invariant under a Lorentz transformation because the laws of physics cannot depend on any observer's chosen coordinate system or reference frame. This is obviously true because the laws of physics cannot be observer or observation dependent. The universe existed a long time before there were any observers around to observe it. It also means that all observers in inertial reference frames observe the same spacetime interval between two events. This occurs because the speed of light is constant in all inertial frames of reference, regardless of the observer's velocity. From http://en.wikipedia.org/wiki/Minkowski_space: In 1905–6 it was noted by Henri Poincaré that, by using an imaginary time coordinate √−1 ct, the Lorentz transformation can be regarded as a rotation in a four-dimensional Euclidean space with imaginary time being the fourth dimension.[1] This idea was elaborated by Hermann Minkowski,[2] who used it to restate the Maxwell equations in four dimensions, showing directly their invariance under Lorentz transformation. He further reformulated in four dimensions the then-recent theory of special relativity of Einstein. 
From this he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensional space-time continuum. Representing time as an imaginary time component works because the temporal dimension is orthogonal to all spatial dimensions in the same way imaginary numbers are orthogonal to real numbers. While this works in some contexts, it is not entirely correct. It does not work correctly in all contexts. The temporal dimension is not the same as the fourth dimension of space. The dimension of time is qualitatively and quantitatively different from that of space. There are some differences between the temporal and spatial dimensions. Space contains time, but time does not contain space. That means we can represent spacetime using a temporal coordinate plus three spatial coordinates, but we cannot represent time the same way. Time does not have spatial coordinates. Space is composed of time, but time is not composed of space. Another difference is that the spatial coordinates of each point in space are unique to that point (assuming they are represented consistently), but many different spatial positions can have the same temporal coordinate value. If we want to assign a spatial coordinate to time, then in homogenous Minkowski spacetime, a single temporal coordinate would be associated with every point in space at a distance d/c time units from the time of some reference event. If the space is not homogenous, then the resulting temporal distribution will not be a spherical shell, but it will have to account for the index of refraction of each point in spacetime in all paths between the reference event and the measured time. 
Of course, it would also have to account for any differences in the local gravitational field between the reference event and the measured event in spacetime. The temporal field, and thus time, expand from the singularity, but the singularity exists everywhere, so assuming space exists, time expands in all spatial directions from everywhere, following an event. Note that time does not travel through space. Time is the first-order energy component of spacetime. Temporal energy exists everywhere in spacetime. The converse is not always true. Space does not exist everywhere that temporal energy exists. For example, there is a region deep inside a black hole, within 2 Planck lengths of the singularity, where time exists, yet space does not. The same is true at spatial scales below two Planck lengths. There was also a very brief moment at the beginning of the big bang where time existed, but the spatial field from the big bang had not yet formed. Photons do not travel through space (but electrons, positrons, and fermions do). Since photons carry no net charge, there is no force that can propel photons through spacetime at the speed of light. There is simply no net charge in a photon for a force field to act against to propel it through spacetime at the speed of light. Instead, photons are stationary relative to spacetime. Time, and thus spacetime, expand at the speed of light, carrying photons with it. Space can only carry particles with no rest mass with it as it expands. In other words, massive particles do not expand with spacetime. Instead, spacetime expands around, and between, massive particles, and carries photons they emit between them. Photons exist beneath space. They are part of the composition of spacetime. They are part of the zero point virtual quantum field that composes spacetime. It is tempting to say photons travel through time, or through the temporal dimension, but that would be incorrect. 
Photons are composed from differences in the temporal field, so they are stationary relative to time. Photons expand along with the temporal field that composes them. Because the temporal field expands in every spatial dimension, in every direction, a single photon can appear to travel through multiple paths in space. In effect, a single photon can appear at multiple locations in spacetime at the same time, provided those locations are the same distance from the single photon emitter and the intervening space or conductor has homogenous electromagnetic permittivity and permeability, and thus the same index of refraction. It should be possible for multiple observers to simultaneously observe a single photon, such that each observer observes the same photon at a different location in space at the same time. To test this, we need to conduct an experiment where we set up a single photon emitter inside a spherical shell of photomultiplier tubes, or CCD single photon detectors, connected to a coincidence detector. We should see coincidence detections at multiple detectors (at multiple locations) from the same photon. In effect, this allows a single photon to travel through all directions in space at the same time. It can do this because it is not really travelling through space. It is part of space, and the space it is part of is expanding at the speed of light. The expanding spacetime carries whatever photons exist at each point along with it as it expands.
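The Lorentz-invariance of the spacetime interval asserted above can be checked numerically. A minimal sketch in units where c = 1, using a boost along one spatial axis; the event coordinates are arbitrary illustrative values:

```python
import math

def boost(ct, x, beta):
    """Lorentz boost along x with velocity v = beta * c (units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (ct - beta * x), gamma * (x - beta * ct)

def interval2(ct, x):
    """Squared spacetime interval s^2 = (ct)^2 - x^2 (one spatial dimension)."""
    return ct ** 2 - x ** 2

event = (5.0, 3.0)  # (ct, x) in some inertial frame (illustrative numbers)

# Every boosted observer measures the same interval between the origin and the event
for beta in (0.0, 0.3, 0.6, 0.9):
    ct2, x2 = boost(*event, beta)
    assert math.isclose(interval2(ct2, x2), interval2(*event))

print("the spacetime interval is the same in every boosted frame")
```

The invariance holds exactly (up to floating point) because the boost is a hyperbolic rotation that preserves the Minkowski quadratic form, which is the geometric content of the constant speed of light.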
• The temporal and mirror anti-temporal boson event horizons are co-located in the zero point quantum virtual energy field, and they are connected via an Einstein-Rosen bridge. Temporal and mirror anti-temporal bosons cannot annihilate each other because they are separated by the singularity. Energy always flows from higher potential to lower potential. The singularity has higher potential than the temporal and mirror anti-temporal bosons. The temporal and mirror anti-temporal bosons can't merge with each other unless they fall into a gravitational black hole and white hole respectively, and are compressed back into singularity by the black and white hole's gravitational field. However, by the time they reach the singularity, they are no longer time and mirror-time. They are both compressed into the potential energy that forms the grand unified field. Thus, they are transformed into the common infinite state-less state, and they still cannot annihilate. That explains why energy is conserved absolutely. The conservation of energy is even obeyed inside the singularity. In other words, the singularity is a kind of energy.
• We don’t really need full spherical photon detection for this experiment. A couple of different detectors in each spatial axis equidistant from the single photon emitter should be sufficient.
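The coincidence analysis for the proposed detector setup could be sketched as follows. This is only an illustration of the counting logic: the timestamps are made-up stand-ins for photon counter data, and the 1 ns coincidence window is an arbitrary illustrative choice, not a parameter from the text:

```python
# Hypothetical hit timestamps (nanoseconds) from two detectors placed
# equidistant from the single photon emitter; real data would come from
# the photomultiplier tubes or CCD single photon detectors.
detector_a = [10.0, 52.3, 97.8, 130.1]
detector_b = [10.4, 60.0, 97.5, 200.0]

window_ns = 1.0  # coincidence window (illustrative)

# Count pairs of hits, one from each detector, that fall within the window.
# Each such pair is a candidate "same photon seen at two locations" event.
coincidences = sum(
    1 for ta in detector_a for tb in detector_b if abs(ta - tb) <= window_ns
)
print(f"coincidences within {window_ns} ns: {coincidences}")
```

In a real run, the measured coincidence rate would have to be compared against the accidental-coincidence rate expected from two independent detectors before drawing any conclusion.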
• Schematic view. Geometrically, the photon is a 3-simplex, so it forms a tetrahedron. Photons and anti-photons in the universe are right-handed. Photons proceed forward in time. Anti-photons proceed backward in time. Note: forwards in time is away from the singularity; backwards in time is towards the singularity. This is true both in the A-verse and in the B-verse. In the universe, energy travels backwards in time inside black holes. Dark energy travels backwards in time inside white holes. Note that since a photon is composed from four quanta, it has 4(3)/2 = 6 relations. Those relations constitute the three colors and three anti-colors that compose the strong force.
• A photon is a minimal energy (stable) entangled quantum state composed from a neutrino and an anti-neutrino. Left-handedness results in left helicity, in which spin is in the direction opposite the direction of motion (i.e., opposite the direction of the Poynting vector in the A-verse). There are no left-handed photons in the A-verse. Left-handed photons exist in the B-verse. They are a kind of dark energy. For the sake of convention, we can assume a photon moves in the +Z direction in space. Since photons can move in every direction in space, this must be a local coordinate system for each photon. From a global perspective, +Z always points in the direction in space perpendicular to the singularity, and oriented away from it. In spacetime, there is a black hole quantum microsingularity in the center of the photon (not shown here because this is a photon, not spacetime). The direction of time (+Z) lies along the line between the center of the photon's black hole quantum microsingularity and the center of the RGB quantum state vertex. Mirror photons (dark energy) are left-handed. Photons are right-handed. In both cases:
Prefix + = repulsion
Prefix - = attraction
Superscript - = negative charge / normal charge
Superscript + = positive charge / anti-charge
Underscore = dark energy / mirror charge
Note that a photon has four vertices. Its edges are also the 3 colors and 3 anti-colors of the strong force. I suspect the entangled combination of (t-, -g, t+) is a neutrino. I suspect the entangled combination of (t+, -g, t-) is an anti-neutrino. The projection of Time on Anti-Time is orthogonal. The projection of a Photon on an Anti-Photon is orthogonal. The projection of a Neutrino on an Anti-Neutrino is orthogonal. The projection of all colors and anti-colors is orthogonal. Note that the color assignments are arbitrary, except that all orthogonality relations must be preserved. An anti-color relation opposes every color vertex. The specific anti-color is formed from the superposition of the two vertex colors at its ends. That means we can get an equivalent system by swapping the red and green, red and blue, or blue and green vertices. If we swap vertex color assignments, we must also swap the anti-color relations of the other two sides that form the base of the tetrahedron to preserve their orthogonality.
• Since photons can be polarized, gluons and quarks should be too.
• More research needs to be done to establish which relations represent the weak force bosons. In particular, modeling of the quarks and mesons is needed. At this time, I suspect the mesons are represented by two spacetime double pentachorons, and the quarks by three.
• Notice that the combination of a photon and mirror photon tetrahedrons produces a double tetrahedron with eight quantum states and 8(7)/2 = 28 relations. If you subtract the original 12 relations in the photon, mirror photon pair, that leaves 16 new relations. Those 16 relations combine to compose the 8 flavors of gluons plus 8 flavors of dark energy mirror gluons. As we will see soon, the double tetrahedron is one of two forms gluons can exist in. The double tetrahedron form is actually unstable. It is a double pentachoron transient state that composes the weak force W+, W- and Z0, and anti-Z0 bosons. The double pentachoron collapses into a cubic configuration that composes the zero point virtual quantum energy field that composes spacetime. This figure is drawn to scale. It assumes that each temporal field event horizon has a circumference of one Planck length and that each photon has an event horizon with a Schwarzschild radius of one Planck length. Thus each photon has a minimum diameter of two Planck lengths.
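The relation counts quoted here are just pair counting: n fully connected vertices admit n(n−1)/2 unordered relations. A minimal sketch of that arithmetic (the helper name is mine, not from the source):

```python
def pairwise_relations(n):
    """Number of unordered relations (edges) among n fully connected vertices."""
    return n * (n - 1) // 2

# Photon: 4 quanta -> 4(3)/2 = 6 color/anti-color relations.
assert pairwise_relations(4) == 6

# Photon + mirror photon: 8 quantum states -> 8(7)/2 = 28 relations.
assert pairwise_relations(8) == 28

# Subtracting the 12 relations internal to the two tetrahedra leaves 16 new ones.
assert pairwise_relations(8) - 2 * pairwise_relations(4) == 16
```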
• The fact that the pentachoron is self-dual means its permutation cycle repeats with the addition of additional vertices. That means spacetime never expands beyond four dimensions. Note that the pentachoron lattice structure of spacetime is consistent with results predicted by the causal dynamical triangulation theory.
• This animation shows how a photon and a dark energy mirror photon should combine to form a spacetime cubic double pentachoron lattice. The composition should occur due to the attraction of the W+ and W- bosons, (and their mirror verse counterparts). The W+ and W- bosons would then combine to form the Z0 boson and its mirror verse counterpart. The W and Z bosons are unstable, so they decay back into the singularity after the photon / gluons and their mirror photon and mirror gluon counterparts combine to form the spacetime cubic lattice. In this animation, I’ve increased the relative size of the temporal and anti-temporal field microsingularity event horizons to better show how the photons and mirror photons pack into the spacetime double pentachoron cubic lattice. Note that I intentionally removed all the gluon relationships from this animation to avoid clutter. If I had shown them, each quantum state vertex in the spacetime cube would be connected to every other via an intensional quantum state relation.
• Fermions obey Fermi-Dirac particle statistics (no two particles can occupy the same quantum state). They also obey the Pauli exclusion principle, so no two fermions can occupy the same space. In DR, Fermi-Dirac statistics and the Pauli exclusion principle arise because fermions are composed of the spacetime fields, and they exist in spacetime. It is not possible to remove the space from a fermion without destroying it, because fermions are composed from specific configurations of the spacetime field. More research is needed to determine the specific quantum field configuration of different types of fermions.
• As the figure shows, practically every physical object in nature is made of the same basic sub-atomic particles. These combine to form atoms, which in turn combine into molecules. These tiny objects in turn coalesce into stars, interstellar matter, and planets. And it keeps going. Stars and other objects coalesce into galaxies. Galaxies are just stupendously enormous. Our own Milky Way galaxy, for example, contains hundreds of billions of stars. Nevertheless, galaxies also group together, sometimes by the thousands, into clusters. Finally, galactic clusters merge into superclusters, which are the largest structures known. This concentric organization, from particles to superclusters, is universal in a very literal sense: it characterizes the entire known universe.
• If we narrow our focus greatly and look at life on earth, we see an even deeper nesting of structures. Every living thing on earth, from bacteria to whales, is made of the bottom eleven layers, from the singularity to cells. Large organisms like ourselves include more layers, such as tissues, organs, and organ systems. Looking outward, organisms are themselves part of larger wholes. There are local groupings, as when bees form hives and birds form flocks. There are also large regional groupings of members of a species, called populations. Multiple species combine to form ecosystems. Ecosystems form biomes; huge areas with similar climates and life-forms, such as the bands of rainforest or grassland in Africa. At the largest scale, all of the life forms on earth combine into the biosphere, the thin shell around the earth that contains life.
• Multi-layered structures are also common in the human world. Since larger units are composed of smaller units, we have our own cultural compositions. Geographical regions are composed of smaller regions. Organizations have nested structures of divisions and subdivisions. Armies, for example, are divided into a whole ladder of sub-parts, from corps to squads. Books have multiple divisions, from chapters down to individual letters. Books themselves are also parts of larger groupings, such as the subject headings and subheadings used to arrange bookstores and libraries. Everywhere we look, we see large complex wholes composed of simpler smaller parts.
• The multi-dimensionality I am referring to here is in quantum state space, not spacetime dimensions. Spacetime is limited to 3 spatial dimensions plus one temporal dimension.
• Of course, the ultimate parts are the fundamental units of existence. They are energy and dark energy quanta.
• Here is an abstract illustration of emergence: Figure A shows ten dots, scattered randomly. What can we say about them? Well, they're black, round, there are ten of them, and they're in no apparent order. Now look at Figure B. Here, the dots have been arranged into a certain simple shape, a triangle. A new feature has appeared, which was not there before. We can scratch out "no apparent order" and replace it with "combined in the shape of a triangle". The key point is this: the new feature isn't present in the individual dots. Each one is identical, so if you could only see one at a time, you couldn't tell whether it was part of a random arrangement or part of a triangle. The triangle is a pattern that exists at a higher level, resulting from the specific arrangement of dots. It is an emergent property.
• When we move up and down in a system, from level to level, the remarkable thing is that each level seems to have its own logic. New features and rules come into view that are not visible from other levels. The spiral shape of many galaxies isn't present in lower levels, such as stars and atoms. Water and ice are very different, even though they are made of the same molecules. The European Union has its own rules and features, above and beyond those of its member states. In other words, larger systems can be different from their sub-systems in quality as well as quantity.
• As this slide shows, the triangle could form part of an even larger pattern, a hexagon, which can be a part of a larger pattern, and so on, ad infinitum. At each level of composition, different properties may emerge due to the composition of relations at different scales.
• Showing hierarchical nesting as concentric circles has its own limitations. Though it shows clearly that smaller systems are inside larger ones, not below them, it is also an oversimplification. Larger systems don't surround smaller ones like so many layers of an onion. They are the smaller ones, in the sense that they are composed of them. Atoms combine and interact to form molecules; they don't rattle around inside them. Another problem with a concentric image is that it doesn't show the relative numbers of parts and wholes. There is one circle for subatomic particles, and one circle for stars. But that can't be right. If wholes are made of parts, then logic dictates that there are more parts than wholes. Stars are vastly, overwhelmingly outnumbered by subatomic particles. This picture is useful because it shows some of the challenges we face when trying to form accurate representations of our world, mapping its features in a way that is accurate and useful. The world is subtle and complex, much more so than we can reproduce in our heads or on paper. The flower picture illustrates this. It shows the complexity and subtlety of nature's hierarchies by being complex and subtle itself. In fact, it's too complex to be useful on a regular basis. We can't draw thousands of little circles every time we want to show a few levels of a part/whole hierarchy. This image is very complicated. In fact, drawing it slowed my graphics program to a crawl. Even so, it is extremely simplified: it goes from sub-atomic particles to flowers in only five jumps, leaving out several levels. Besides, real flowers are made of billions of cells, of many different types, not seven identical ones. So, in finding good models of nature, our task is to find the proper balance between simplicity, accuracy, and usefulness.
As Einstein once said, "Things should be made as simple as possible, but not simpler". So, it's time to expand our model of the hierarchy of nature, to avoid oversimplification, and to get a better look at some of its subtleties. Take a look at this slide. Here we have a large hexagon, labeled "FLOWER", which I hope it vaguely resembles. Like real flowers, it's made of smaller parts, in this case, smaller hexagons labeled "Cell". These, in turn, are made of parts representing molecules. The molecules are made of "atoms", which, if you look closely, are made of circular "particles". This is a more accurate representation of the relationship between parts and wholes, because it avoids some of the problems with simpler images. The wholes in the picture really are made of the parts, so it doesn't seem as though they contain them like a jar contains cookies. Plus, it's obvious that there are more parts than wholes. It takes 2401 "particles" to make one "flower". Finally, it shows that flowers exist at a "higher" level than particles only in the sense of their different levels of composition.
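The figure of 2401 "particles" per "flower" follows from the drawing's uniform fan-out: seven parts per whole, nested four levels deep (cells, molecules, atoms, particles), so the counts multiply. A quick check (the level names are only those of the drawing):

```python
PARTS_PER_WHOLE = 7  # each hexagon in the drawing is built from 7 identical parts

# flower -> cells -> molecules -> atoms -> particles: 4 levels of composition
levels = ["cell", "molecule", "atom", "particle"]
count = 1
for _ in levels:
    count *= PARTS_PER_WHOLE

assert count == 7 ** 4 == 2401  # "particles" per "flower" in the figure
```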
• Here, we have simplified things by returning to concentric circles, keeping in mind that we are talking about parts and wholes, not containers. This allows us to draw a clear line around higher level systems, like flowers and cells. We have also reduced the numbers of parts, to make the image less complicated. This is an improvement over the simple ladders and circles shown earlier, because it shows exactly how nature's hierarchies are stratified: into many parts and fewer wholes. This doesn't (necessarily) imply other kinds of stratification. It says nothing about literal highness and lowness, dominance and submission, or high or low status. The only relation implied is composition of parts and wholes.
• Notice that this tree diagram contains a subset of the same information in a different form. Here we are focusing on connection instead of inclusion. The trunk is equivalent to the biggest circle, representing the largest system, the flower. The branches, like the smaller circles, represent finer and finer subdivisions. And there are also other options. We don't need to use a graphic diagram at all. We can put the same information in the form of a textual outline. All these forms are extremely useful, which explains why they show up so often. Mathematicians and logicians use circle diagrams, called Venn diagrams or Euler circles, to explore the logic of sets and categories. Tree diagrams are used to represent family histories, tournaments, computer programs, chains of command, and evolutionary histories, among many other things. Outlines relate general ideas to specific ones, forming the backbone of books and lectures. These images come up over and over again because they represent common relationships in our world: whole and part, general and specific, if and then. Generally speaking, they come up any time we need to represent the relationship between the few and the many. One book, for instance, includes many chapters, sections, and sub-sections. One couple produces numerous descendants. A move in chess leads to a cascade of possibilities. A field of contestants is narrowed down to one winner. All of these situations can be described by such diagrams. They are useful across a wide range of situations. So far, we have determined that the kind of stratification we are mainly talking about is that of whole and part. We have seen that the parts are more numerous than the wholes, for the simple and unavoidable reason that there cannot be more wholes than parts (unless each part is simultaneously a component of several wholes, which is impossible in nature).
Returning to the metaphor of higher and lower, we have found a difference between the higher levels and the lower levels in nature's hierarchy: the lower levels are more numerous. Going bottom up, the transition from part to whole is also a transition from many to few. Here we are onto something important: there are differences between nature's higher and lower levels. Relative number is only one of the changes that we see when we move up the scale of nature's forms. These differences offer valuable clues about how nature works, and how its hierarchies formed.
• As we go up the hierarchy of composition, from bottom to top, there are fewer and fewer individuals, because each higher-level individual is composed of multiple lower-level individuals by value. Since the composition is by value, lower-level individuals can only exist as part of a single higher-level individual. For example, there are far more energy quanta in the universe than atoms. There are far more atoms than stars.
• The idea that there are more parts than wholes seems fairly straightforward. There are obviously more atoms in the universe than stars. How could it be any other way? But wait: think about letters and sentences. There are only 26 letters in the English language, but there are a practically infinite number of possible sentences. Does this go against the idea that there are more parts than wholes? No, but it does point to another issue. As we move from part to whole, we find that there are more possible kinds of wholes than parts. This is a consequence of the mathematics of combinations, called combinatorics. There are always more (possible) combinations of parts than parts themselves. Three letters can be arranged in a row of three in six different ways; four letters have 24 arrangements, and five have 120. As the number of parts increases one by one, the number of arrangements of those parts explodes, a phenomenon called combinatorial explosion. The 26 letters of the English alphabet can be rearranged into about 4²⁶ different 26-letter strings. That's about 4.5 × 10¹⁵ words. Even so, it's a low estimate for the number of sentences that can be created with 26 letters. For one thing, we can use the same letters over and over. Plus, sentences can have as many letters as we want them to. If we set no limit on the size of a sentence, then the number of sentences we can construct is infinite. Of course sentences do have practical size limits, so the number of useful sentences we can construct is not infinite, but for our purposes, it might as well be. This means that sentences are far, far more diverse than letters. Letters are still more numerous, but the same letters are repeated over and over again. If you counted all the letters in this presentation, there would be many more letters than sentences. But the sentences take more forms.
There are only 26 letters in English, but most sentences are unique. Combining these differences, it seems that parts tend to be greater in number, and fewer in kind, than wholes. There are more sub-atomic particles than atoms, but many more kinds of atoms. Similarly, molecules are more diverse, and less numerous, than atoms. There are more kinds of animals than animal cells. In general, gross numbers increase as we move down the scale, while the diversity of forms increases as we move up. Let's think about this graphically. We saw before that circle and tree diagrams show up whenever there is a relationship between the many and the few. Now we have identified two such relationships, going in opposite directions. In the first case, we move from many to few in terms of gross numbers of systems as we go from part to whole, moving "upward". But then, we also go from many to few moving "downward", from whole to part, because the number of kinds of things decreases. In the second case, the branching of the tree represents a change from unity to diversity as we move up the scale of systems. In fact, the idea of a tree is really more appropriate here. Representing a few wholes and many parts as a tree often doesn't make sense, because the same branches are repeated over and over again. It's technically accurate, but why bother? The tree makes more sense in the opposite direction, going from a handful of particles, to a hundred or so kinds of atoms, to countless molecules. So, let's differentiate these opposing tendencies by giving them different images, which we will return to often. Since the tree branching from whole to part tends to be redundant, I will use another image. A better one is shown in the next slide. We can think of the decrease in gross numbers of systems from part to whole as a pyramid, which gets smaller the further upward we move.
We will reserve the tree for the increase in types of system as we move up nature&apos;s hierarchy. Combining the two, we get an image that conveys both these trends. As we move upward, systems tend to become fewer in number, and greater in diversity. This image will become very important later on.
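The 6 / 24 / 120 figures above are factorials: each letter added to a row multiplies the number of possible orderings, which is the combinatorial explosion in miniature. A quick verification:

```python
from math import factorial

# Orderings of n distinct letters placed in a row grow factorially.
assert factorial(3) == 6
assert factorial(4) == 24
assert factorial(5) == 120

# Each added part multiplies the count, so growth is faster than exponential.
assert factorial(6) == 6 * factorial(5) == 720
```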
• A pyramid is a better way to illustrate changes in numbers. We can use the width of each rectangle in the pyramid to represent the relative change in numbers between levels. We can think of the decrease in gross numbers of systems from part to whole as a pyramid, which gets smaller the further upward we move.
• We go from many to few moving "downward", from whole to part, because the number of kinds of things decreases. As we move up from parts to wholes, the branching of the tree represents a change from unity to diversity. In fact, the idea of a tree is really more appropriate here. Representing a few wholes and many parts as a tree often doesn't make sense, because the same branches are repeated over and over again. It's technically accurate, but why bother? The tree makes more sense in the opposite direction, going from a handful of particles, to a hundred or so kinds of atoms, to countless molecules.
• Wholes tend to be fewer in number (pyramid) and greater in kind (tree) as we move bottom-up. A word of caution is in order. A greater number of parts than wholes is a logical necessity. There can never be more wholes than parts, at least not at one time. But the increasing diversity of wholes is not absolute. There are always more possible combinations of parts than parts themselves. But many combinations are never formed, so there are not necessarily more actual combinations than parts. Even though atoms are components of stars, there are still more kinds of atoms than stars. This is because the laws of physics only allow a few kinds of stars. If it were not for the constraints of physics, there could be more kinds of star-sized objects than atoms. One can imagine stars shaped like donkeys, smiley faces, or any number of equally amusing shapes. But, alas, the laws of physics don't allow it. So, the increase in diversity from parts to wholes is only a tendency. It is common, because the laws of combinatorics are universal. But there are many other factors shaping things in the real world. That's why, even though there are many noticeable differences between parts and wholes, many of these differences are only tendencies, not absolutes. This doesn't make them any less interesting or important, but we do have to be careful about making sweeping generalizations.
• Wholes are composed of smaller parts. Multicellular organisms are larger than the cells that compose them. Stars are larger than the atoms that compose them. Molecules are larger than the atoms that compose them. Etc.
• The temporal field strength is about 15,569 times as strong as the maximum (10,000 N) strong force field. Thus the temporal charge contains far more energy than all other types of charge. It is the dominant force that directs causality. Specifically, each temporal field boson has a predicted energy of 155,689,562 joules. Of course that is part of the zero point virtual quantum field energy, not energy we can measure.
• If parts tend to be more numerous, and less diverse, than wholes, then everything must be made of just a few basic parts. In other words, the smallest parts are more universal, and larger wholes, since they are more diverse, are more particular. As we move down the scale of things, then, smaller parts become more and more ubiquitous. Thus, every material object in the universe, from atoms to galaxies, is made of just a handful of subatomic particles. The same tendency holds on Earth. Organisms can take many forms, but they are all made of cells. Cells take many forms, but they are all made of organic molecules, which themselves take many forms. And so on. What this means is that if you are studying the laws of fundamental particles, you are studying laws that apply across the universe. If you study molecules, this is less universally applicable, because not everything is made of molecules. Studying cells is a little less universal, and whole organisms even less so. So, as we move up the scale, examining the emergent features of larger systems, we find that they are less universal, and more particular. At the bottom level, everything obeys the rules of fundamental physics. Many things obey the rules of molecular chemistry, some things obey the rules of biology, and only a relatively tiny number of entities follow the rules of economics.
• Another difference between parts and wholes is that parts tend to be simple, obeying relatively precise laws, while wholes tend to be more complex, their behavior more subtle and nuanced. There are only a few kinds of fundamental particles, and two of the same kind behave absolutely identically (as far as we currently know). They are simple and orderly, in the sense that their behavior can be described rather exhaustively with a set of equations. Atoms are a little more complicated, and molecules more complicated still. When you move up as far as cells, the complexity gets mind boggling. A cell is an astonishingly complex system of dozens of different kinds of molecules, interacting in thousands of different ways, all at the same time. And think of the complexity of ecosystems, or economies. One way of thinking about this difference is in terms of how much information it takes to describe systems at different levels. Fill a few pages with data about an electron, giving values for its mass and charge, and equations about how it interacts with other particles, and you have pretty much pinned it down. A carbon atom would take longer to describe well, and you could fill an encyclopedia with information about a cell. However, when we start talking about complexity, things get very tricky, so we have to be careful. First, while everyone has an idea of what complexity is, there is no universally accepted definition. Second, as we have discussed, complexity has a tendency to rise over time, and from part to whole. But the fact that it is a tendency of nature doesn't mean that it is a goal of nature. Nature has no teleology, purpose, or will. Energy flows from sources to sinks any way it can, as fast as it can. As it does so, it tends to create larger and larger concentrations of mass. Eventually those mass concentrations get so dense they form gravitational black holes, which then convert matter back into energy and convert energy back into the singularity.
Finally, larger systems are not always more complex than smaller ones. Think of a person who is a member of a football team. Is the team more complex than that person? That depends. In many ways, it is simpler. The dynamics of a football team are not nearly as complex as all the chemical, biological, and mental processes that go on in a human being. The football team is only more complex if we include the complexity of its members in its description. Complexity, then, is a slippery concept. We will explore some of the subtleties of the idea later on, looking at ways that complex systems differ from simple, orderly ones, and why complexity has increased over time.
• Another difference between things at the bottom of the scale and things at the top is that the bottom-most structures tend to be the oldest. According to the scientific idea of the history of the universe, subatomic particles were the first to form. Atoms came a little later, followed by molecules, stars, and galaxies. On earth, there were atoms before there were cells, and there were single-celled organisms before there were multicellular organisms. In human societies, there were towns before there were countries, and countries before there were international organizations. Here again, we have to be careful. Obviously, the individual cells around today are not necessarily older than multicellular organisms. I am many times older than most of the cells in my body. The forms of the lower level structures are older, not the individual structures themselves. My body is creating new molecules constantly, but many molecules of the same kind were around before there was life on earth. Also, the greater age of lower level structures is only a tendency. It results from the fact that structures in nature often form from the bottom up. Complex things like cells don't just appear all at once. They were built up, step by step, from simpler things, atoms and molecules. But there are other factors at work. For one thing, physical forces work simultaneously at all scales, so they are creating objects of all sizes at the same time. For example, as soon as there were single-celled organisms, there were ecosystems composed of those organisms. Some higher level structures are actually older than lower level ones. For example, multicellular organisms have been around much longer than specific organs, such as lungs and spleens. There was never an "Age of Organs", when wild hearts and livers roamed the earth before combining into whole animals. Some parts were formed in a more top-down process, as organisms grew more internally complex during evolution.
Human organizations often show this kind of top-down differentiation of parts. The United States government has developed most of its internal divisions over time, as the need arose. Still, there is a tendency for the lower level, simpler parts to be older, and it is a tendency with important implications.
• As we move up the hierarchy of systems in nature, there are some real differences between the objects near the bottom and those near the top; between the parts and the wholes. Moving upward, instances tend to get larger and less numerous, while types tend to become more diverse, more complex, and younger. These differences are not absolute (there are many exceptions), but they are still there. I think they have a story to tell. Any theory of everything must describe the cause of those tendencies. These gradients are all a result of the same basic process: the quantum evolution of the universe. The shape of the hierarchies of nature is based on two main factors: the fact that each instance of existence expands from the singularity, and the way it has evolved since. The simplicity, universality, and lack of diversity of the basic particles are a result of the fact that each instance of existence expanded from the infinite singularity. The singularity is very simple. It is the absence of any difference in quantum state. By definition, it is simple, it is universal, and it lacks diversity. The more diverse and complex structures we find when moving up the scale are the result of the natural history of the universe; the way that things interact, compose, and evolve over time.
•  The laws of thermodynamics are utterly central to understanding any realm that is concerned with the availability of energy (or matter and resources in general), from astronomy, to ecology, to economics and politics. If one were asked to give a one-sentence summary of what makes the physical world go round, it would be hard to do better than saying that, deep down, the world is matter and energy dancing to the tune of the laws of thermodynamics. To say that a steaming cup of tea is hotter than a glass of ice water is to say that its molecules are vibrating more frantically. Heat flows up a metal spoon placed in the cup because the molecules in the tea collide with those in the spoon, making them vibrate faster as well. The molecules in the spoon then collide with each other until they are all vibrating faster. That's one way heat flows: through millions of microscopic collisions. This is called conduction. There are two other ways that heat can move from place to place. One is radiation, where electromagnetic waves are emitted from a hot object. When they strike another object, they heat it up by causing its atoms to vibrate faster. Heat radiation is what you feel when you put your hand near a hot light bulb. Unlike other ways of transferring heat, radiation can cross empty space, as is obvious if you stand outside on a sunny summer day and feel the rays of the sun. The other way heat is transported is through convection: a current in a fluid (a gas or liquid). Hot fluids tend to rise, and cold fluids tend to fall, which results in circular convection currents. Convection currents are the driving force behind weather systems, ocean currents, and even the creeping movement of continents.
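The conduction picture above can be sketched numerically. This is a toy finite-difference model of my own, not something from the slides; the function name and parameter values are illustrative:

```python
def conduct(bar, alpha=0.25, steps=500):
    """1-D conduction along a spoon as a finite-difference sketch:
    each interior cell's temperature relaxes toward the average of its
    neighbors -- the macroscopic face of microscopic collisions.
    The two end temperatures are held fixed."""
    bar = list(bar)
    for _ in range(steps):
        new = bar[:]
        for i in range(1, len(bar) - 1):
            new[i] = bar[i] + alpha * (bar[i - 1] - 2 * bar[i] + bar[i + 1])
        bar = new
    return bar

# A spoon with one end in hot tea (100 degrees) and the other in room air (20):
spoon = conduct([100.0] + [20.0] * 9)
print([round(t, 1) for t in spoon])  # a smooth gradient from 100 down to 20
```

After enough steps the cell-by-cell collisions settle into a steady temperature gradient along the spoon, just as the text describes.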
• The first law is often paraphrased with the cliché "there is no such thing as a free lunch". When it comes to energy, the cliché is absolutely true: all the energy in the universe comes from somewhere, and it all goes somewhere. It never just appears or vanishes. The first law of thermodynamics is analogous to Newton's first law of motion. Both are consequences of the principle of conservation of energy. The first law deals with the relation between changes in heat, pressure, volume, work, and energy within a system, whereas Newton's first law is about the relation between energy and changes in the motion of a system. It is important to understand that heat, pressure, volume, and motion are all ultimately different forms of energy. At the lowest level of existence, all types of quantized energy are resolvable into finite symmetric differences in the singularity. When energy composes higher-order energy forms, it creates emergent properties, just like everything else that exists. We view those different properties and call them different types of energy. Energy is consumed to perform work. When energy is consumed it changes form. It does not disappear. Any change in the state of a system is equivalent to a change in the distribution, configuration, or form of energy in the system. In fact, a change in the form of energy in a system is a change in its configuration, and thus a change in the distribution of energy in the system. It is a change in the current quantum state of existence. The actual first law of thermodynamics is described using several different differential equations, depending on what type of system one wants to analyze. I am not going to present the equations here because that level of detail is not required for present purposes, and it would take too much time to go over.
You can consult http://en.wikipedia.org/wiki/First_law_of_thermodynamics or any standard text on thermodynamics if you want more detail. Note that matter is a particular configuration of energy, so energy can be transformed into matter and matter can be converted into energy. Even time and space are particular configurations of energy. Also note that the "total amount of energy" assumes that energy is quantized. This isn't always true. For example, virtual energy is not quantized, so it is not directly measurable. Even so, the total amount of energy in the singularity is a finite constant, even if it can't always be measured or quantized. In the singularity, all energy is in an indistinguishable degenerate state that makes it impossible to quantify, but if it were all converted to energy quanta, the same amount of energy would be present in every instance of the universe. No energy is ever lost or destroyed.
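Although the differential forms are omitted above, the bookkeeping the first law enforces for a simple closed system fits in a few lines. This is a minimal sketch; the function name and the sign convention chosen are mine, not from the slides:

```python
# First law for a closed system, common sign convention:
# Q > 0 is heat added TO the system, W > 0 is work done BY the system.
#   dU = Q - W
def internal_energy_change(heat_added, work_done_by_system):
    """Change in internal energy: energy in minus energy out.
    Nothing appears or vanishes; it only changes form or location."""
    return heat_added - work_done_by_system

# Example: 500 J of heat flows in while the system does 200 J of work.
dU = internal_energy_change(500.0, 200.0)
print(dU)  # 300.0 -- the rest of the input energy stays in the system
```

The point of the sketch is only the ledger: every joule that enters is accounted for either as work done or as a change in internal energy.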
• Imagine that you have two glass bottles full of air. One has been outside in the sun on a hot day, so the air inside is hot. The other has been in a freezer, so its air is cold. If we put the bottles together, there would be a flow of heat from one to the other, as the molecules in each bottle start to collide. This flow will always be in the same direction, from the hot bottle to the cold one. The hot bottle will lose heat and cool down, and the cold one will gain heat and warm up, until they both reach the same temperature. You'll never see the cold bottle get colder and the hot one get hotter. "Of course things don't happen that way!" you may be thinking. But it's not so easy to say why they don't happen that way. Heat could flow from the cold bottle into the hot bottle, thus making it hotter, and the first law would not be violated, because the total energy in both bottles would remain the same. Here's how it could work: the air molecules in the hot bottle are, on average, flying around and vibrating faster than those in the cold bottle. But the hot bottle has a few molecules moving more slowly than the ones in the cold bottle, and the cold bottle has a few moving faster than the average in the hot bottle. It's conceivable that only the fast molecules could move from cold to hot, and only the slow ones from hot to cold. This would reverse the normal heat flow, without violating the first law. So why doesn't it happen? You may have already guessed. It might be conceivable for heat to flow from cold to hot, but it is incredibly unlikely. It's so unlikely, in fact, that you would have to wait for several times the age of the universe, staring hopefully at coupled bottles, before you could expect to see it happen. The second law, then, has a different character than the first. The first law is ironclad, as far as we know. Energy is always conserved, period.
This is a result of a fundamental symmetry in the laws of physics, as we will see. The second law, though, is a result of the laws of probability. It's vastly more likely that heat will flow from hot to cold, so that is practically always what happens.
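The two bottles can be turned into a toy simulation. This is a sketch of my own, not a rigorous kinetic model: random pairs of molecules "collide" and repartition their combined energy at random, which conserves the total exactly (first law) while the averages drift together (second law), simply because equalized arrangements vastly outnumber lopsided ones.

```python
import random

def simulate_heat_flow(steps=200000, n=2000, seed=1):
    """Toy model of the two bottles: on each 'collision' a randomly
    chosen molecule from each bottle repartitions the pair's combined
    kinetic energy. No step is biased toward equalization, yet the
    bottle averages converge, purely by probability."""
    rng = random.Random(seed)
    hot = [10.0] * n   # hot bottle: high average molecular energy
    cold = [1.0] * n   # cold bottle: low average molecular energy
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        pair_total = hot[i] + cold[j]
        share = rng.random()                 # random split on collision
        hot[i], cold[j] = share * pair_total, (1 - share) * pair_total
    return sum(hot) / n, sum(cold) / n, sum(hot) + sum(cold)

avg_hot, avg_cold, total = simulate_heat_flow()
print(avg_hot, avg_cold)   # both settle near 5.5, the common average
```

Run it with any seed: the reverse flow (hot getting hotter) is never forbidden by the rules, it just never happens in practice, which is exactly the character of the second law.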
• Well, so what? What does all this have to do with day-to-day life? Quite a bit, actually. The law that heat never flows from cold to hot is an aspect of the broader fact that orderly, specific states are less likely than random states. To understand the implications of this, let's look at a more general way of stating the second law: entropy, a measure of disorder or disorganization, never decreases in isolated systems. We've seen that heat is a disordered form of energy. Entropy is disorder of any sort, of matter or energy. This way of stating the second law just broadens the earlier one. A hot bottle next to a cold bottle is an orderly, improbable arrangement. There are zillions of possible arrangements of the air molecules in the two bottles. In almost all of them, the temperature is averaged out across the two bottles. The arrangement where the hot air is in one bottle and the cold air is in the other is an unusual one, vastly outnumbered by more likely arrangements. If left alone, this orderly arrangement will degenerate, as the air in the two bottles moves toward a more probable arrangement. But disorder doesn't just take the form of heat. We see disorder increasing all the time up here in our macroscopic world. Abandoned houses, for example, never get more tidy. The wallpaper starts to peel, the boards rot, and the whole place finally falls down. This is true for the same reason that heat flows from hot to cold: probability. There are more ways for a house to be untidy than tidy, and what is most likely is what eventually happens. As we discussed earlier, some systems have properties that are more than the sum of their parts, because the parts are arranged in such a way that the entire system has properties that would be absent if the parts were arranged differently. A dismantled radio will play no music, even if all of its parts are present. This gives us a clue to understanding entropy.
The world is full of systems for which it matters how the parts are arranged. Some arrangements have consequences that others don't. But these significant arrangements are vastly, overwhelmingly outnumbered by insignificant arrangements. In the absence of outside influence, systems are more likely to find themselves in an insignificant state than a significant one. That is why houses don't become more orderly without outside help. When I clean my house, I arrange it more or less the same way every time. But every mess I make is unique.
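The claim that orderly arrangements are "vastly, overwhelmingly outnumbered" can be made quantitative with simple combinatorics. This toy count (my own illustration, assuming distinguishable molecules) compares the number of ways to split the fast molecules between two bottles:

```python
from math import comb

# 100 distinguishable "fast" molecules distributed between two bottles.
N = 100
all_in_one = comb(N, N)       # every fast molecule in bottle A: exactly 1 way
even_split = comb(N, N // 2)  # an even 50/50 split

print(all_in_one)    # 1
print(even_split)    # about 1.01e29 arrangements
```

With just 100 molecules, the even split is favored over the fully ordered arrangement by a factor of about 10^29; with the ~10^22 molecules in a real bottle of air, the imbalance is beyond astronomical. Probability does the rest.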
• Energy has strong conformist tendencies. It constantly seeks to blend in, to minimize its differences with its surroundings, and thus minimize its potential. This is why, even though we can convert energy from one form to another, it gets less useful every time we do so. Some of the potential is lost at each step, as the energy blends in with its surroundings, gaining entropy. Most of the time, potential energy is lost as heat.
• Entropy can actually be seen in a positive light. Decreasing potential energy can be a good thing. The universe began as pure potential energy. We couldn't have existed then, because we would have been instantly vaporized. On a cosmic scale, we are delicate, low-energy creatures. The fact that things have been seeking low-energy states for eons means that most of the violent energy around us has been canceled out, so we don't have to tiptoe around to avoid explosive reactions. The physicist Richard Feynman once pointed out that if the total electrical force between two grains of sand was not mostly canceled out, they would snap together with the force of a speeding locomotive. We shouldn't be too quick to lament the fact that we are surrounded by useless energy. Many of the objects in nature are a result of a stable balance between opposing forces. Galaxies are a balance between gravity and centrifugal force, the force that holds water in a spinning bucket. Stars are a balance between inward gravity and outward pressure. Such balancing of forces can result in elegant, beautiful forms: things like soap bubbles, the six-sided cells of honeycombs, and the mirrored surface of a still pond. Practically everything in nature seeks an equilibrium, a balance of some sort. Balance can be beautiful.
• Not all equilibria are low-energy ones, and not everything in nature settles down. And that is another reason for optimism. You may have realized I have been leaving something out. If everything is becoming disorderly, why do we see so many highly organized, low-entropy structures all around us? The sparrows outside my window can't be explained as a simple balance of forces, nor as a random arrangement of atoms. Come to think of it, we are constantly seeing things get more organized. Ugly construction sites become attractive buildings. We chew up and digest food, and then turn it into amazingly complex cells and tissues. Refrigerators even cause heat to flow from cold to hot. Doesn't all this violate the second law? No. Notice how some of the broad statements I've been using are worded. Entropy never decreases in isolated systems. Heat never flows spontaneously from cold to hot. The universe as a whole is becoming more disorderly. Across the board, entropy always increases. But there's nothing to prevent local decreases of entropy, as long as entropy doesn't decrease globally. Nothing in the second law prevents eddies of organization from forming in the river of increasing disorder. This can even happen in inert substances, if the conditions are right. Water, for example, will spontaneously turn into ice, which is a lower-entropy substance than liquid water (because its water molecules are arranged in specific locations). But when ice forms, it releases heat into its surroundings, increasing the entropy of the environment more than its own entropy is decreased. Ice, however, is a low-energy equilibrium state. Ice in a freezer has the same temperature as the inside of the freezer. But some systems, such as sparrows, cells, and refrigerators, go much farther than this, actively becoming more organized while moving farther away from equilibrium. Things that can do this are called open systems (see Slide 167).
There are two main components to their trick of beating entropy. First, to maintain organization, energy must be spent. Abandoned houses get neater when someone spends energy fixing them up. Increasing organization always requires energy. Any structure that can decrease its entropy must be able to harness a source of useful energy. Cities take in torrents of power and raw materials, plants capture sunlight, and animals eat. The other part of the trick is that such systems must export entropy elsewhere. A sparrow takes in ordered matter and energy, uses it to maintain its body, and then radiates low-grade energy as heat, and excretes excess matter. The bird's local order is compensated by an increase in entropy elsewhere, such as your windshield.
• The reason there can be so much complexity on earth is that we have a wonderful source of energy: the sun. Practically every form of energy on earth can be traced back to solar energy. Coal and petroleum are packets of solar energy captured by ancient plants. Wind and running water are set in motion by heat from the sun, which causes air to circulate, and water to evaporate and fall as rain. The earth can escape the full force of the second law, because the sun will be providing energy for another five billion years or so. But what about entropy? If open systems export entropy, why don't we see heat and disorder building up all around us? What happens (at least in nature) is that disorderly matter is recycled, and heat is radiated harmlessly into space. In many ways, a refrigerator exemplifies the general features of open systems. It cools the air inside it below surrounding temperatures, against the flow of entropy and equilibrium. But to do so, it has to be plugged in. Take away its energy source, and it will quickly settle into an equilibrium with the surrounding room temperature. It also has to export entropy, in the form of heat, to make up for its gain in order. That's why you can't cool the kitchen by opening the refrigerator door. It will always pump more than enough heat out the back to make up for the cool air inside.
• Some open systems aren't just open to energy; they are also open to matter. Both matter and energy flow through. Simple examples of such systems include the little whirlpool that appears when you drain the bathtub, or standing waves on rivers. Such systems can be quite stable. Kayakers know that a particular standing wave can last for years, and they return to especially fun ones. A standing wave is a stable structure in a different sense than, say, a rock. The rock keeps more or less the same set of atoms as long as it exists, but most of the matter in a standing wave is replaced in a matter of seconds. What's stable is a pattern of organization. Matter and energy come and go, but the pattern remains.
• Of course, living things are the most obvious example of such systems. A cow eats to take in matter, from which it extracts energy by breaking down complex molecules. It radiates energy as heat, and excretes waste matter as cow patties. So, for a system like a cow, we need to add to the open-system diagram, to show matter as well as energy flowing through. High-grade matter and energy flow in, and low-grade matter and energy flow out. Now we can introduce some terms scientists sometimes use to describe such processes. Since matter and energy are conserved, these things have to come from somewhere, and then they have to go somewhere when they are done. Let's refer to both together as resources. Where resources come from is called a source, and where waste goes is called a sink. We can trace a cow's resources to several different sources. The energy ultimately comes from sunlight captured by grass, which was combined with matter from the environment (mostly CO2 from the air) to make the grass stems. The cow gets both matter and energy from eating grass. There are also multiple sinks. Some energy is eventually radiated away as heat in the form of infrared light, which will eventually slip into space. Some energy is still bound up in matter that is excreted. Thus, cow patties become a source for decomposing organisms, like fungi and bacteria. The earth as a whole is a system that is open to energy and mostly closed to matter, so the matter will be recycled, while most of the energy will eventually radiate into space. Resources that are entering an open system are referred to as input, while those that are leaving are called output. The flow of matter and energy through the system is referred to as throughput. For many systems, resources build up and are stored for a while. These are called stocks. A simple example is a bathtub. If both the tap and the drain are open, water will flow through the bathtub.
If the input of water is low, hardly any water will build up. But if more water flows through the tap than the drain can keep up with, a stock of water will build up, and eventually overflow. If both are opened just right, you can get an equilibrium where the same amount of water flows in as out. This causes the stock of water to stabilize at a certain level. Water is still flowing through, and is thus constantly replaced, but the level stays the same. This is an example of a steady state, or homeostasis. Homeostasis is very important in stable open systems (otherwise they wouldn't be stable). Living things, for example, maintain all sorts of homeostatic steady states, maintaining stable nutrient levels, pH balance, and body temperature.
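The bathtub steady state can be sketched as a tiny stock-and-flow simulation. This is a minimal model of my own; the assumption that the drain's outflow is proportional to the water level is an illustrative simplification:

```python
def bathtub(inflow=2.0, k=0.1, steps=200, level=0.0):
    """Stock-and-flow toy model: constant inflow from the tap, outflow
    proportional to the current water level. The stock settles at the
    steady-state level inflow / k, where input exactly equals output."""
    for _ in range(steps):
        outflow = k * level
        level += inflow - outflow   # one time step of throughput
    return level

final_level = bathtub()
print(final_level)   # approaches 2.0 / 0.1 = 20.0
```

Water keeps flowing through at the steady state (the throughput never stops), but the level no longer changes, which is the essence of homeostasis.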
• A basic feature of nature is the specific way its elements are arranged. As we saw with hierarchical composition, patterns matter. Two systems with identical elements may have different emergent properties, because those elements are arranged in different patterns. We've seen the importance of patterns in open systems as well. The universe began with a simple pattern in the big bang, which became more complex over time. Matter and energy have pattern: they can be organized or disorganized, and the ways of being disorganized far outnumber the ways of being organized. To speak of a law of nature is almost synonymous with speaking about pattern, because a law of nature is a pattern that we can expect matter and energy to follow in space and time.
• Everything with mass is composed of, and defined in terms of its relation to, the spacetime pentachoron lattice. Since spacetime is composed of energy and dark energy quanta by value, two different spacetime pentachorons cannot occupy the same spacetime. Massless bosons can occupy the same spacetime because they exist beneath spacetime.
• Each instance of existence begins with a big bang in the infinite alpha singularity. The singularity is the absence of dimension. It isn't zero-dimensional, because the physical counterparts of the concepts of number and dimension do not exist yet. The singularity has perfect symmetry. With no differences to discriminate between states, there are no states, so there can be no dimensions and no order or disorder. The singularity still exists, but it all exists in the same featureless, stateless state. Order and disorder are states of existence. They cannot exist in the absence of states.
• Order represents a sequence relation between states along some dimension. With no differences to differentiate between different states, there can be no states, and thus, there can be no relations between states. Obviously, there can be no sequence relations without relations. Hence, there can be no sequence and thus no order. In fact, without relations, even dimensions can’t exist.
• Since disorder is the lack of order, there can be no disorder either. There can be no disorder without order to define it relative to. Just as order is represented by a sequence relation between states, disorder is represented by a mixed up, or disordered sequence. Disorder is a sequence that lacks order. The key point is that there can be no order or disorder without sequence relations, and there can be no sequence relations without differences between states.
• As soon as spontaneous symmetry breaking creates the first difference between the singularity and itself, two virtual states exist, and there exists a non-unitary virtual difference between those two virtual states. Collections of those differences represent a random sequence. They represent disorder. Each difference is a virtual energy / virtual dark energy open string. An open string is unstable. It is transient. It cannot persist through time because it lacks symmetry. It lacks a stable degree of freedom for the energy difference that it represents to persist in. Instead, it interferes with itself and with other strings until it decays back into the singularity, or until it increases in length to a Planck length and forms a closed string loop. The precise mechanisms of hypertime string and loop formation require further investigation.
• A string isn't stable until the energy difference it represents increases to a Planck length in wavelength. At that point, it can form a unitary closed loop with a circumference of one Planck length. Each closed string represents the existence of a stable unit of existence. It represents the existential equivalent of dimension zero. It represents the existential equivalent of a point. It represents the existence of the first temporal and anti-mirror temporal bosons and the existence of the first neutrino. I suspect the combination of a temporal and anti-mirror temporal boson is a sterile neutrino. The opposite combination of an anti-temporal boson and a mirror temporal boson would then represent a sterile anti-neutrino. The combination of a sterile neutrino and a sterile anti-neutrino represents a photon and mirror photon. In this figure, t- refers to temporal charge, while t+ refers to anti-mirror temporal charge. The – and + superscripts represent charge and anti-charge respectively. The underscore represents the fact that the energy quantum it refers to exists in the mirror-verse; i.e., it is dark energy. In these figures, the white lines represent energy and the black lines represent dark energy. The arrows represent the direction of spin. I have chosen clockwise spin to represent negative charge, so counterclockwise spin represents positive charge. +0 refers to the origin of time,