Artificial Intelligence once presented great promise as early researchers and philosophical explorers dared to step into the infinite space of the human mind. One such pioneer was John McCarthy, who received the Turing Award in recognition of the mathematical logic underlying LISP, the language he created for AI development. Today, Dr. Husserl’s work in phenomenology is being substantiated. Today, new potential in Dr. Hartman’s work is being presented, giving dimension to the broadening of static subjectiveness into cognitive temporality. Also today, the continuum hypothesis that Kurt Gödel studied is being challenged as provable, and, last but not least, a contradiction is offered to Alan Turing’s statement that the numbers and functions required for GOFAI are far too complex for any modern machine.
Many software architectures are being classified as Artificial Intelligence, yet no system has truly qualified as a synthetic intelligence. Consequently, all AI developments fall into one of three categories: fuzzy systems, meaning best guess; neural networks, meaning the system is taught; and evolutionary systems, meaning the system learns. Synthetic intelligence best describes what was once referred to as GOFAI, good old-fashioned artificial intelligence.
The goal, once upon a time, was to synthetically emulate the human mind and observe its workings. From that alone, great potential exists for exploration in the field of phenomenological psychology. The key to developing a GOFAI methodology is emulating what is probably the most important part of biological intelligence: the capability to learn, anticipate, predict, and recognize realized and unrealized potential in the evolution of its environment.
The problem with making advancements in GOFAI is, quite frankly, knowing where to start and what to start with. In general, the AI development community contends that GOFAI has failed because current development environments lack the programmatic capacity and because operating systems are inadequate.
Unfortunately, after thirty years of increasing unreliability and low productivity in software development, a new software project can expect a 50% chance of never being deployed. In addition to the unreliability and low productivity of application development, there is the growing unreliability and low productivity of the Internet. The Internet was never planned; its growth was a phenomenon. Everyone knows that information happens, but the issue arises because searches capture unstructured and often non-relevant results.
The future of the Internet is called the Semantic Web. Its goal is to provide a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. It is a collaborative effort led by the W3C with participation from a large number of researchers and industrial partners, and it is based on the Resource Description Framework (RDF). This work is now completing its third year of semantics R&D aimed at evolving a better web experience.
For software developers, surviving the new millennium will require a change in the way software is designed. The modernized structure will need two core and fundamental capabilities: the ability to change, and the ability to disseminate information without structured processes.
Standard software is typically made up of two distinctly different structures, executable code and data. By composing the development architecture of three distinct structures, i.e., interchangeable frameworks, an inference engine, and data, the framework satisfies the ability to change and the engine satisfies information dissemination.
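The three-part separation described above can be sketched in code. This is a minimal illustration only, and every name in it (`Framework`, `InferenceEngine`, the sample rules) is hypothetical, not taken from an existing system:

```python
# Hypothetical sketch of the three-part architecture: an interchangeable
# framework, an inference engine, and data kept as separate structures.

class InferenceEngine:
    """Disseminates information by applying named rules to data."""
    def __init__(self, rules):
        self.rules = rules  # rules are plain callables, kept apart from data

    def infer(self, data):
        return {name: rule(data) for name, rule in self.rules.items()}

class Framework:
    """Interchangeable shell: swap it out without touching engine or data."""
    def __init__(self, engine):
        self.engine = engine

    def run(self, data):
        return self.engine.infer(data)

# Data and rules stay independent of the framework that hosts them.
rules = {"total": lambda d: sum(d), "count": lambda d: len(d)}
app = Framework(InferenceEngine(rules))
print(app.run([1, 2, 3]))  # {'total': 6, 'count': 3}
```

Because the framework holds no logic of its own, replacing it leaves the engine and data untouched, which is the "ability to change" the text calls for.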
In addition to the ability to change and to implement change, frameworks provide two further convincing advantages. The first is that a framework is created in a fraction of the time that Rapid Application Development requires. The second is a significant reduction in cost. To drastically reduce application development time, application programming interfaces (APIs) will allow developers of structured processes to devote greater resources to the algorithms. The concept distributes APIs either as part of a custom framework or as drag-and-drop components deployable into any framework. Application development is not obsolete; rather, the obsolescence is relative to the methodology of structuring software programs.
Axiology is presently “static,” meaning it is based upon subjective, by-eye-and-hand questionnaire propositions. This interaction is referred to as post-cognitive propositioning, and it is not influenced by interactive axiom propositions that suggest asymptotic cognition. Subjectiveness influenced by asymptotic cognition may be interpreted as interactive cognitive propositioning. Many may wish to assert that Axiology is the best method of proof in the whole of the phenomenology of GOFAI. Axiology does in fact inherently fit the psychological phenomenology [bracketing], but it lacks the second phenomenology, the subjectiveness to spatiotemporality. It is apparent that Dr. Husserl had great influence on Dr. Hartman. It is also apparent that Dr. Hartman had the psychological insight to differentiate the Radical in a manner that Dr. Husserl clearly lacked. Although Axiological value results are accurate and the value result is propositioned by subjectiveness, Dr. Hartman’s work was only nearly completed: Axiology as known today lacks the [bracketing] of a full system dynamics that would define spatiotemporality.
I discovered that an expanded means of expressing asymptotic influences would be required to bring clarity to the methodology of building a GOFAI database management system. It was also apparent that I could achieve these expressions without altering the existing core definitions of Asymptotic Analysis or Asymptotic Geometry. Asymptotic analysis is a method of describing “limiting” behavior, and asymptotic geometry is a method of quantifying surfaces virtually permeated within a spatial environment. I refer to the expanded expression set as the Asymptotic Proposition.
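The “limiting” behavior that asymptotic analysis describes can be shown with a minimal numeric example (a standard textbook illustration, not part of the Asymptotic Proposition itself): f(n) = n² + n behaves like g(n) = n² as n grows, because their ratio tends to 1.

```python
# Numeric illustration of asymptotic (limiting) behavior:
# f(n)/g(n) = (n**2 + n) / n**2 = 1 + 1/n, which approaches 1 as n grows.
def ratio(n):
    f = n**2 + n
    g = n**2
    return f / g

for n in (10, 1_000, 1_000_000):
    print(n, ratio(n))
```

At n = 10 the ratio is 1.1; at n = 1,000,000 it is within a millionth of 1, which is what "f is asymptotically g" means.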
Asymptotic Geometry is used for creating spatiotemporalities modeled in the dimension you wish to view, exponentially powered by the number of “visible planes”: the power of three, or “cube,” multiplied by the total view planes you wish to view (a cube has six planes in total). This is why our particular viewpoint of three dimensions becomes most significant: it is the dimension we live in and the only one we can truly conceive to exist. Therefore the dimension is the third dimension raised to the power of the number of visible planes (set to three), with view planes set to six for three-dimensional viewing of the dimension in which the framework will be manifested.
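The surface count implied by these settings is simple arithmetic, and it matches the figure of 162 surfaces used later in the text:

```python
# Arithmetic behind the differentiated spatial environment: the dimension (3)
# raised to the number of visible planes (3), multiplied by the six view
# planes of a cube.
dimension = 3
visible_planes = 3
view_planes = 6  # a cube has six faces

surfaces = dimension ** visible_planes * view_planes
print(surfaces)  # 162
```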
First, it is best to understand standard string theory. Standard string theory is a model used in fundamental physics in which one-dimensional strings make up the fundamental building blocks; strings give properties to zero-dimensional particles. With string theory the possibility emerges of “unifying” the known natural forces (gravitational, electromagnetic, weak nuclear, and strong nuclear) by describing them with the same set of equations.
M-theory holds that the superstring theories may be related by dualities, which allow one superstring theory to be inferred from another. Asymptotic stratum strings are similar to dual string theory in that they unify strings using theoretic dualities, with the exception that the inferences come from asymptotic influences.
GOFAI asserts the need for the expression of asymptotic stratums, or dualities. Stratums are made up of short, long, and super-strings manifested to support the object properties of the spatiotemporalities. Asymptotic stratums assert that a descriptive framework causes an asymptotic declarative framework to exist by duality. The stratums are linked by inference from the spatiotemporality of the descriptive influence.
The Axiological values of systemic, extrinsic, and intrinsic are based upon triples. Therefore, set the Axiological framework width to triples. Next, we prepare the virtually differentiated surfaces for triples.
The spatial environment is now virtually differentiated into 162 surfaces. These 162 surfaces also accommodate the number set applicable to the dimensioning of the framework width of triples. Cognition is defined in 27 spatiotemporalities that transcend the Content Aware Object systematically through the continuum. The defined cognition axiomates within a virtually differentiated spatial environment that is explainable by simple mathematics and geometric construction.
Temporal cognition is established by the simple arithmetic of adding three single-digit numbers for a modal result. In a properly numbered and differentiated spatial environment there are nine unique differentiations for which the modal result is the same. Based upon the subjective influences, the cognition interacts upon axioms by moving the object and its properties through the aligning modal results to the number sets defining the spatiotemporality.
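The modal-result arithmetic can be made concrete with a small enumeration. This is a sketch under my own assumptions (single digits taken as 1 through 9, triples treated as unordered); the function name is illustrative:

```python
# Enumerate unordered triples of single digits sharing a given modal result,
# i.e. triples (a, b, c) with a + b + c == target. Digits assumed 1..9.
from itertools import combinations_with_replacement

def modal_triples(target, digits=range(1, 10)):
    """Unordered single-digit triples whose sum equals `target`."""
    return [t for t in combinations_with_replacement(digits, 3)
            if sum(t) == target]

triples = modal_triples(15)  # 15 is the modal value used in the text
print((5, 5, 5) in triples)  # True
print(len(triples))
```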
Three persistents and nine persistent planes
Stratums allow content propositioned by a “Descriptive,” closed-world framework to be evaluated in open-world architectures, dynamically creating a “Declarative” framework made up of concepts to manipulate the results.
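The closed-world versus open-world distinction drawn here is a standard one in knowledge representation, and a minimal sketch shows the difference; the fact base and function names are illustrative assumptions:

```python
# Evaluating the same query under closed-world and open-world assumptions.
facts = {"sky_is_blue": True}

def closed_world(query):
    # Closed world: anything not asserted is treated as false.
    return facts.get(query, False)

def open_world(query):
    # Open world: anything not asserted is simply unknown (None).
    return facts.get(query, None)

print(closed_world("grass_is_green"))  # False
print(open_world("grass_is_green"))    # None
```

Under the closed-world reading the descriptive framework commits to falsity for missing content; the open-world reading leaves that content available for a declarative framework to resolve later.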
Quantum Gates Cognitive Constellations
- 3 constellations with modal 15 have 1,0,2: 3 prime numbers
- 3 constellations with modal 15 have 1,2,2: 5 prime numbers
- 2 constellations with modal 15 and 1 with modal 20 have 1,2,2: 5 prime numbers
- 3 constellations with modal 15 have 1,2,0: 3 prime numbers
- 2 constellations with modal 15 have 1 prime number
- 2 constellations with modal 15 have 2 prime numbers
- 2 constellations with modal 15 have 3 prime numbers