N=NP Solution
By Christian Kiss,
November 14th, 2015 – January 24th, 2016

Keywords: n=np, claymath, P vs NP Problem, Optimisation, Optimization of data, Networks, triangulation, travelling salesman, hamilton, cryptography, scrambler

Subjects: C00, C3, C6, C8
C - Mathematical and Quantitative Methods > C0 - General
C - Mathematical and Quantitative Methods > C3 - Multiple or Simultaneous Equation Models; Multiple Variables
C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling
C - Mathematical and Quantitative Methods > C8 - Data Collection and Data Estimation Methodology; Computer Programs
1 Introduction: The problem
2 The solution: Self explanatory
3 Formula
4 Implications on Hamilton and Travelling Salesman
5 Not a 3D function over GPUs only
6 Conclusion
7 Sources
8 Appendix: A small Prime numbers oddity + a few Fax pages explaining the solution above

Figure 1 Original detail how to describe the solution
Figure 2 Here the distance in the Hamiltonian problem / Travelling Salesman is optimized. The hierarchy of the value is again self-explanatory. Each number doesn't have to be known in advance.
Figure 3 Example: Simple brute-force approach starting with the highest value or its inverse value, cutting out the previous variable while proceeding with the remaining variables
Figure 4 Highly condensed data, processing only counter values (clocks) instead of classical X,Y,Z lattice systems
Figure 5 Highly condensed data in a simple optimization layout with optimized data and optimized commands, as well as a simple RAID/dual-channel hardware example
Figure 6 Using the clock value / tower height as radius (i.e. clock value + Pi) the data density can be increased. Diverse relations between merge depth, a-b distance, sea-level cut-offs of many circles, volumes/diameters/perimeters, each with their merge-depth cutoff at a search level / sea level, etc.
Figure 7 Variant with interdependency of travelling distance to node/city value (while Figure 6 shows equal node/city values)
Figure 8 Refined concept for keeping the peaks, quantum computing error correction, etc.
Figure 9 It can be dots forming a block
Figure 10 Possible Example: Optimization model implemented in an atomic model simulation. Adding spin and clock lines affecting each other may also simulate decay transitions and thermal shake
Figure 11 Sketch summary of electric transmutation, here 4th refinement, as well as Lagrange superconductor theory
1 Introduction: The problem [1]

Large parts of Cook's paper [2] / the Millennium Challenge problem [3] seem to define the mathematical realm of values, while there were a few revisions to clarify the N vs. NP problem. As I understood the paper, I suspected a clear data optimization problem, with severe implications for code breaking (especially ciphers and scrambler codes), code making, quantum computing error correction, giant-network data administration, up to hedge fund algorithms and scientific institutions with massive data loads, and on and on [4]. The best metaphor for the problem and its solution is therefore a large city of skyscrapers seen from above, or a giant field of different flowers. Each is a data point, or a set of data. How do we find one particular tower in this large city, this sea of skyscrapers? How do we find one particular flower in this broad field of different flowers and grass? How do we find one particular data set, and that as efficiently as possible? This means that during the search we may go each way only once, while finding what we want as soon as possible, and while being able to measure it.

[1] It took me originally about half an hour to think of the solution; reading the problem description, however, was a lot more complicated than the problem itself. I first encountered the problem on November 14th, 2015 via http://www.viralberg.com/?p=1748 ("You'll get $1,000,000 if you can solve this math problem", posted by RYOT on Friday, November 6, 2015), and kept the link with the comment: "codebreakers die usually dont they? maybe noone should solve it then? ;D ;D ;D /// aha.... apparently its about OPTIMISATION of information searching aha....... intels again". The solution was refined many times from November 14th, 2015 to January 15th, 2016.
While Hamilton and/or the Travelling Salesman focus on optimizing the distance between the nodes/cities, assuming that each node/city is equal, in data we do not have this problem. Data has different values; the nodes/cities are not necessarily equal.
2 The solution: Self explanatory

The figure below describes it at a glance. It is so simple that it is actually self-explanatory. In the search process we do not walk the field or the city looking for one particular flower; we burn them all down. We make a simple mirrored counter-city, or a mirrored counter-field of flowers. Mathematically it is simply a counter value – a minus. And the rest is a vector. We move the city skyline and the mirrored counter-city skyline towards each other. The tallest meet first – the highest values meet first – which instantly tells us which the highest values are.

[2] Cook S. (2000): The P vs NP Problem, under http://www.claymath.org/sites/default/files/pvsnp.pdf
[3] Kaye R. / Stewart I. (2015): Minesweeper, http://www.claymath.org/sites/default/files/minesweeper.pdf
[4] Any major process of the modern world creates data in computer systems and networks, especially given the increased connectivity.
Figure 1 Original detail how to describe the solution

Tower 1 and counter-Tower 1, vectored towards each other, burn each other down as they meet, and we know which the highest value is by measuring the time it took until they met. 2 vectored towards -2 becomes 0, while we don't have to know the value 2 in advance. That it took a certain time yields the value – 2 in this case.

It is a valid argument to think that 2 + (-2) = 0 does not seem overly complicated, but let's have a look at the implications. In a large set of data, where we do not know which value is which – which is bigger, which smaller, or by how much – this simple system orders the data automatically from biggest to smallest (with variations), measures the data in the same cycle, and ensures maximum efficiency of the search path. Put simply: we set a sea level in the city, which represents our search value, and merge the mirrored towers down to this search line. This sea-level line – or search-value time – can be set freely.

Example: how many towers are 200 meters tall, and where are they? City and counter-city merge towards each other; the super-towers burn away, then the medium-sized ones, and suddenly, at the sea level of 200 meters, we have found all towers of that size.

If we remember where the roofs were and check a second city, we can find out whether both cities have towers in the same locations, i.e. in the data we process accumulations while keeping the peaks of the data. This may be interesting for any system that measures clutter and has to find significance in it. This matters from scientific telescope data, nuclear science and military radar systems to error correction in quantum computing, since mistakes occur randomly while results repeat.
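The burn-down/sea-level idea above can be sketched in a few lines of code. This is only my reading of the figure, not an implementation from the paper: every tick, both skylines move one unit towards each other, so the merge front passes the tallest towers first. The function name `sea_level_search`, the dict layout and the assumption of positive integer heights are my own choices.

```python
def sea_level_search(towers, sea_level):
    """Merge city and mirrored counter-city one unit per tick.

    towers: dict of location -> height (positive integer data values).
    Returns (order, hits): towers in the order they burn down (tallest
    first, with the tick at which each met its counter-tower), and the
    locations found exactly at the chosen sea level (the search value).
    """
    tallest = max(towers.values())
    order, hits = [], []
    for tick in range(1, tallest + 1):
        height = tallest - tick + 1       # merge front after `tick` ticks
        for loc, h in towers.items():
            if h == height:
                order.append((loc, h, tick))  # the tick measures the value
                if h == sea_level:
                    hits.append(loc)          # tower at the search value
    return order, hits
```

Note that this is functionally equivalent to sorting the heights in descending order and filtering by the search value; the tick count simply plays the role of the measured time.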
3 Formula

Mathematically it can be described as simple addition, but since the time is measured with vectors it gets a bit more complex than that:

-> N1 (value) + <- (-N1) (counter value) = 0 at time t

i.e. a sequence of zeros from the highest value down:

0(tx) = N1 > N2 > N3 > N4 ...

means simply that the result is sequenced by the data set: the highest values meet each other first and become 0 first.

It makes sense to have as few towers burn as possible until one particular tower – the particular data value – is found. So in some data sets it can make sense to invert the data by its contrary value. If a data set has many large values and only a few small ones, you don't want to burn down the whole city over and over until you find the lowest value. You simply invert the data set, so that the tallest towers are the smallest values. That reverses the values but not the Nx order, and saves significant optimization time. We can even process the differences between the towers, and know the value difference between N1 and N3, for instance, all within the same cycle.
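The sequencing and the inversion trick above can be illustrated as follows. This is a hedged sketch; the helper names `burn_order` and `invert` are hypothetical, and the meeting time is modelled as proportional to how far each value sits below the tallest one.

```python
def burn_order(values):
    """N + (-N) = 0 at time t: each value meets its counter value after a
    time set by its height, so zeros appear highest-value-first."""
    tallest = max(values)
    # time until a value and its counter value meet (tallest meets first)
    meet_time = {v: tallest - v + 1 for v in set(values)}
    return sorted(values, key=lambda v: meet_time[v])

def invert(values):
    """Mirror the data so the smallest values become the tallest towers,
    useful when mostly-large data hides a few small values of interest."""
    top = max(values)
    return [top - v for v in values]

data = [2, 9, 4, 7]
assert burn_order(data) == [9, 7, 4, 2]       # highest value becomes 0 first
assert burn_order(invert(data))[0] == 9 - 2   # smallest original meets first
```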
4 Implications on Hamilton and Travelling Salesman

Hamilton and/or the Travelling Salesman problem focus on the distance. It is absolutely no problem to optimize the distance value between the nodes [5] with the same algorithm. The longest distance meets first, the shortest last; the more diverse the distances, the better. We do not even have to know the values. Which is the longest or the shortest way we measure in one cycle, and it orders itself from the highest value first. Using first the shortest ways that contain as many cities as possible is a no-brainer, or invert and take the longest-way combination first (Figure 3). However, if the cities ABCD (Figure 2) have different values, we do not need any travelling path at all: we visit the tallest first by applying the algorithm to the cities, not to the ways/paths between them.

1: xxxxx xxxx xxxxxx xxxxxxxx => big amount of unknown, unsorted data
2: xxxxxxxx xxxxxx xxxxx xxxx => lines vectored against each other: highest meet first
3: 38495293 148392 92321 2442 => enumerator (or super-enumerator atomic clock) counts and gives each its value in the Master File table
[5] Initially I was not sure whether this chapter was necessary: to establish the relation between the equal-node-value approach, which focuses on the distance, and the tower-example algorithm shown here, which focuses on unequal node values / city heights.
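The three numbered steps above can be simulated as a minimal sketch, assuming positive integer values: the enumerator tick at which a data line and its counter line fully cancel is taken as the line's value. `enumerate_values` and the Master-File-table dict are my own illustrative names, not anything defined in the paper.

```python
def enumerate_values(blobs):
    """Super-enumerator sketch: an external counter ticks while each data
    line and its counter line merge; the tick at which a line vanishes
    is recorded as its value in a Master-File-table-like dict.

    blobs: dict of name -> length (positive integers only, or the loop
    below never terminates)."""
    master_file_table = {}
    alive = dict(blobs)
    tick = 0
    while alive:
        tick += 1                     # one enumerator (clock) cycle
        for name in [n for n, length in alive.items() if length == tick]:
            # line and counter line have fully merged at this tick
            master_file_table[name] = tick
            del alive[name]
    return master_file_table
```

Shorter lines finish cancelling earlier, so the table fills smallest-first; inverting the data beforehand reverses that, matching the inversion trick from section 3.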
Figure 2 Here the distance in the Hamiltonian problem / Travelling Salesman is optimized [6]. The hierarchy of the value is again self-explanatory. Each number doesn't have to be known in advance.

[6] Unmodified graph open-sourced from https://en.wikipedia.org/wiki/Travelling_salesman_problem#/media/File:Weighted_K4.svg
Figure 3 Example: Simple brute-force approach starting with the highest value or its inverse value [7], cutting out the previous variable while proceeding with the remaining variables

[7] Unmodified graph open-sourced from https://upload.wikimedia.org/wikipedia/commons/d/d2/Minimum_spanning_tree.svg
5 Not a 3D function over GPUs only

So, great – let's go ahead, make some animations of 3D towers and let a graphics processor handle it? Then you have clearly not understood it. Each tower can be one. Single. Point. It can be one single bit, whether it is atom-sized, a hard-drive cluster, or a bit in a memory chip. It can be one single bit in a radio signal, where the second or third repeat cuts out the clutter.

It can be many different points in one data block, metaphorically forming an irregular rooftop. Your Master File table decides what each rooftop, or each length, means. This simply enables data compression, or a code. You can store the height of the tower like the pits and lands of optical media and form a data set that way; then you have literally, physically defined a length. You can cut the skyscrapers dynamically at the smallest one and process only the relative heights between the towers, i.e. you may store only the relative data – the differences between the data values. If you want to make 3D objects, however, there are limitless ways to alter the shape or color of the tower or data, each meaning something else – which is not an optimal solution in my opinion. This invention/concept can make data storage, RAID systems, memory protocols, robotics, artificial-intelligence learning and so forth. Generally speaking, I would no longer view most scrambler codes to date as reliable, and would handle them as deceit channels from now on – which can reveal what is covered.
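The "store only the relative heights" idea reads like a simple delta encoding against the smallest tower. A hedged sketch of that reading (the function names are mine; the base height must be kept once to reconstruct the absolute values):

```python
def pack_relative(heights):
    """Cut the skyline at the smallest tower and keep only the relative
    heights, storing the base once so the skyline can be rebuilt."""
    base = min(heights)
    return base, [h - base for h in heights]

def unpack_relative(base, deltas):
    """Rebuild the absolute heights from base + relative data."""
    return [base + d for d in deltas]

skyline = [210, 200, 335, 200, 251]
base, deltas = pack_relative(skyline)
assert base == 200 and deltas == [10, 0, 135, 0, 51]
assert unpack_relative(base, deltas) == skyline
```

The deltas need fewer digits (or bits) than the absolute values whenever the data clusters above a common floor, which is the compression the text gestures at.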
Not to be underestimated is the application in triangulation systems, electromagnetic or acoustic. So far these require relatively complicated calculations, while a system based on this optimization finds the highest value (or loudest signal) first, and the second-strongest (or second-loudest) signal second. While all of the clutter (and the quieter signals) can be processed, the strongest ones would be findable without driving massive I.T. into a warzone, for instance.

Using an atomic clock as super-enumerator can make sense: the shortest failure-safe measurable unit defines the speed of the system. In an inverted data set with many large numbers, even the most gigantic number may take only one to a few counting cycles until the counter values meet.
Example: we want data efficiency – so what do we really want to store when we have a super-enumerator? Simply the clock value (the counter value of the shortest reliably measurable time-counting unit). We may not even need X,Y,Z point systems plus the clock unit; depending on the storage model, we may need only the clock unit. Figure 4 beneath explains what such a highly dense, condensed system may look like. If the commands and findability data are stored as negative values, these commands and location markers can be optimized as well. Here the data volumes may be smaller, and to enhance the velocity of commands, these optimized commands and findability data sets may be perfectly feasible as cache data. However, if the negative values are processed this way (as command and location clocks), the negative values of the data set itself have to be stored as well, inside the positive data set.

Here, perhaps, a gap between two equal positive values can denote a negative value. This way negative data is stored while remaining quickly findable in the optimization process. The controller accessing the Master File table handles it intrinsically as a negative value, without complicating the optimization process. Optimizing the command+location layer enhances the time-critical findability of data. The height of the towers/clocks in this layer can easily trigger a sequence of commands in a certain order. Example: first command > second command > third command.
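One possible reading of "a gap between two equal positive values can mean negative values" is to store a negative number as its magnitude written twice, so the controller can decode it without disturbing the optimization over positive values. This is only an illustrative interpretation, not the paper's definition, and it assumes the raw data never contains two equal positive values side by side:

```python
def encode(values):
    """Store a negative value as its magnitude repeated twice (the 'two
    equal positive values' marker); positives pass through unchanged."""
    out = []
    for v in values:
        if v < 0:
            out.extend([-v, -v])   # doubled value marks a negative
        else:
            out.append(v)
    return out

def decode(stored):
    """Invert encode(): two equal adjacent entries decode as one negative."""
    out, i = [], 0
    while i < len(stored):
        if i + 1 < len(stored) and stored[i] == stored[i + 1]:
            out.append(-stored[i])
            i += 2
        else:
            out.append(stored[i])
            i += 1
    return out

data = [5, -3, 8, -1]
assert encode(data) == [5, 3, 3, 8, 1, 1]
assert decode(encode(data)) == data
```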
Figure 4 Highly condensed data, processing only counter values (clocks) instead of classical X,Y,Z lattice systems

Figure 5 Highly condensed data in a simple optimization layout with optimized data and optimized commands, as well as a simple RAID/dual-channel hardware example
Achieving an efficient algorithm and greater data density can be done by counting each clock value not simply as a tower but as a radius, in a possible second stage of the Master File table. Let's look at what additional information we gain then, compared to tower height only.

One clock value used as radius (with Pi) means:
• The clock-value length, i.e. the radius
• The perimeter, i.e. the circle length or half of it
• The volume of the ring / circle / half circle
• The diameter at maximum merge of the green and blue rings / half rings
• Two points a, b moving apart from each other per merging count unit as the green counter-circles merge towards the blue. This yields a strict ratio of two-point distance to merge depth
• Sea levels per circle, i.e. different circles have different cut lengths at a certain search level / sea level
• For a full ring circle, a decreasing distance between the two points (on further merger after maximum diameter), which can take negative values
• The distance between the rings themselves can be processed. This can create data interdependencies at each search level / sea level as well [8]. This is just a Master File table function involving only Pi + clock value, but other, more compressed data sets can process the clock value.

[8] A: I think it would be an incredibly elegant and efficient code. B: Do we beat binary data density and calculation efficiency? C: What other super-efficient algorithms can make one clock value intrinsically data-dense, i.e. a compression code?
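The quantities in the bullet list above follow from elementary circle geometry once one clock value is read as a radius. A hedged sketch of those derived readings (the labels, and the interpretation of the sea-level "cut length" as the chord where a horizontal line crosses the circle, are my own):

```python
import math

def circle_readings(clock_value, sea_level):
    """Derive several quantities from one clock value read as a radius.

    sea_level: height of the horizontal search line above the circle's
    base (0 < sea_level < 2 * clock_value); the cut length is the chord
    where that line crosses the circle."""
    r = clock_value
    d = sea_level - r                      # signed distance of line from centre
    chord = 2 * math.sqrt(r * r - d * d)   # cut length at the search level
    return {
        "radius": r,
        "perimeter": 2 * math.pi * r,
        "area": math.pi * r * r,
        "max_diameter": 2 * r,             # reached at maximum ring merge
        "cut_length": chord,
    }

out = circle_readings(5, 5)                # sea level through the centre
assert out["cut_length"] == 10.0           # chord at the centre = diameter
```

A sea level through the centre cuts the circle along its full diameter, and every other search level yields a distinct, shorter chord, which is the per-circle "different cut lengths" point in the list.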
Figure 6 Using the clock value/ tower height as radius (i.e. clockvalue+Pi) the data density can be
increased. Diverse relations between merge depth, a-b distance, sea level cut offs of many circles,
volumes/diameters/perimeters each with their merge depth cutoff at a search level/ sea level etc
Figure 7 Variant with interdependency of travelling distance to node/city-value (while Figure 6 shows
equal node/city values)
The figures beneath show the evolution of the optimization concept visually, at a glance. It is self-explanatory once the concept has been understood. The chequered symbol represents a simple quantum-computing error-correction pattern. Results repeat; mistakes are random. Still, I believe that quantum-computing superpositions are error-generation machines, where each cycle generates additional errors, only to claim afterwards that one path was faster. This optimization concept/invention can be reduced to error-correction protocols, but that is a waste of potential, as some of the possibilities described above show.

The red lines represent the search value or sea level metaphorically, the purple lines the accumulation spots. These are important for finding significance in data. Multiple data tables can process the same towers with different meanings. This can be used for hidden code or for greater data density.

In all the examples of highly condensed data sets and super-enumerators processing only clock values (or X,Y,Z + clock-value lattices), or using the value as radius, nothing prevents the application of these principles on any existing platform or binary system and any conventional processor, including GPUs as RAID controllers. Some systems may simulate a controller in software or a hard-drive file table, repurposing cache or solid-state devices as the command+location layer, while the data remains on the hard drive, etc.
Figure 8 Refined concept for keeping the peaks, quantum computing error correction etc
Figure 9 It can be dots forming a block
How would a clock unit look simulating an atomic model? The figures below are based on my electric-transmutation theory – Figure 11 being the 4th refinement sketch – as well as on Lagrange superconductor theory, i.e. the idea that superconductors are nothing more than Lagrange points, where nuclear forces offset each other at the right spin, density and temperatures.
Figure 10 Possible Example: Optimization model implemented in an atomic model simulation. Adding
spin and clock lines affecting each other may simulate also decay transitions and thermal shake
Figure 11 Sketch summary of electric transmutation, here 4th refinement, as well as Lagrange
superconductor theory
6 Conclusion
Instead of mathematically unsolvable optimization problems focusing on the path – so unsolvable that they became a millennium quest with one million dollars of prize money – we use a vector to process away unwanted node values/cities, which orders and measures all values in the same cycle. This ensures maximum efficiency in finding exactly the wanted values. A set of approaches enables many more data-storage and processing options than 3D modeling blocks. While this optimization algorithm can be applied to the paths of the existing N vs NP problems (Figures 2 and 3), and is especially useful for large, unknown, unsorted values, it may no longer be necessary to measure the paths at all to achieve efficient data optimization.
7 Sources
Cook S. (2000): The P vs. NP Problem, under http://www.claymath.org/sites/default/files/pvsnp.pdf obtained from http://www.claymath.org/millennium-problems/p-vs-np-problem

Kaye R. / Stewart I. (2015): Minesweeper, http://www.claymath.org/sites/default/files/minesweeper.pdf

Open-sourced graphs (2016): https://upload.wikimedia.org/wikipedia/commons/d/d2/Minimum_spanning_tree.svg and https://en.wikipedia.org/wiki/Travelling_salesman_problem#/media/File:Weighted_K4.svg

Unknown blogger / RYOT (2015): You Will Get $1,000,000 If You Solve This Math Problem, under http://www.viralberg.com/?p=1748 referring to the short 40-second challenge video under https://www.facebook.com/RYOT/videos/1210097135670582/
8 Appendix: A small Prime numbers oddity + a few Fax pages
explaining the solution above
> Sent: Saturday, 16 January 2016, 02:18
> From: "Christian KISS" <Christian.Kiss@gmx.de>
> To: maths.directory@lms.ac.uk
> Subject: Fw: prime numbers riddle: 2-11=-9 2-9=-7 2-7=-5 (!?) 2-5=-3 2-1= 1 2+1= 3 2+5= 7 2+7= 9 2+9=11 2+11=13... x3,x7,x9,x1 2-13=-11... -x3,-x5,-x7,-x9
Christian Kiss BabyAWACS - TRULY Independent.
> Sent: Saturday, 16 January 2016, 01:59
> > prime numbers riddle:
> > 2-13=-11... -x3,-x5,-x7,-x9
> > 2-11=-9
> > 2-9=-7
(!)> > 2-7=-5 (!?)
> > 2-5=-3
***************
> > 2-1= 1
***************
> > 2+1= 3
> > 2+5= 7
> > 2+7= 9
> > 2+9=11
> > 2+11=13... x3,x7,x9,x1
More Related Content

Similar to SSRN-id2718694

Brief Introduction to Error Correction Coding
Brief Introduction to Error Correction CodingBrief Introduction to Error Correction Coding
Brief Introduction to Error Correction CodingBen Miller
 
Applications of mathematics in real life
Applications of mathematics in real lifeApplications of mathematics in real life
Applications of mathematics in real lifeThasneemRazia
 
Quantum computers
Quantum computersQuantum computers
Quantum computerskavya1219
 
Backtracking based integer factorisation, primality testing and square root c...
Backtracking based integer factorisation, primality testing and square root c...Backtracking based integer factorisation, primality testing and square root c...
Backtracking based integer factorisation, primality testing and square root c...csandit
 
A Technical Seminar on Quantum Computers By SAIKIRAN PANJALA
A Technical Seminar on Quantum Computers By SAIKIRAN PANJALAA Technical Seminar on Quantum Computers By SAIKIRAN PANJALA
A Technical Seminar on Quantum Computers By SAIKIRAN PANJALASaikiran Panjala
 
Applications of Matrices
Applications of MatricesApplications of Matrices
Applications of Matricessanthosh kumar
 
Mathematics Research Paper - Mathematics of Computer Networking - Final Draft
Mathematics Research Paper - Mathematics of Computer Networking - Final DraftMathematics Research Paper - Mathematics of Computer Networking - Final Draft
Mathematics Research Paper - Mathematics of Computer Networking - Final DraftAlexanderCominsky
 
Aggregating In Accumulo
Aggregating In AccumuloAggregating In Accumulo
Aggregating In AccumuloBill Slacum
 
20131106 acm geocrowd
20131106 acm geocrowd20131106 acm geocrowd
20131106 acm geocrowdDongpo Deng
 
Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012
Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012
Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012Florent Renucci
 
Real life application of Enginneering mathematics
Real life application of Enginneering mathematicsReal life application of Enginneering mathematics
Real life application of Enginneering mathematicsNasrin Rinky
 
Change Point Analysis
Change Point AnalysisChange Point Analysis
Change Point AnalysisMark Conway
 
Topological Data Analysis of Complex Spatial Systems
Topological Data Analysis of Complex Spatial SystemsTopological Data Analysis of Complex Spatial Systems
Topological Data Analysis of Complex Spatial SystemsMason Porter
 
A robust blind and secure watermarking scheme using positive semi definite ma...
A robust blind and secure watermarking scheme using positive semi definite ma...A robust blind and secure watermarking scheme using positive semi definite ma...
A robust blind and secure watermarking scheme using positive semi definite ma...ijcsit
 
Introduction and Applications of Discrete Mathematics
Introduction and Applications of Discrete MathematicsIntroduction and Applications of Discrete Mathematics
Introduction and Applications of Discrete Mathematicsblaircomp2003
 

Similar to SSRN-id2718694 (20)

1283920.1283924
1283920.12839241283920.1283924
1283920.1283924
 
Brief Introduction to Error Correction Coding
Brief Introduction to Error Correction CodingBrief Introduction to Error Correction Coding
Brief Introduction to Error Correction Coding
 
Applications of mathematics in real life
Applications of mathematics in real lifeApplications of mathematics in real life
Applications of mathematics in real life
 
Quantum computers
Quantum computersQuantum computers
Quantum computers
 
Backtracking based integer factorisation, primality testing and square root c...
Backtracking based integer factorisation, primality testing and square root c...Backtracking based integer factorisation, primality testing and square root c...
Backtracking based integer factorisation, primality testing and square root c...
 
A Technical Seminar on Quantum Computers By SAIKIRAN PANJALA
A Technical Seminar on Quantum Computers By SAIKIRAN PANJALAA Technical Seminar on Quantum Computers By SAIKIRAN PANJALA
A Technical Seminar on Quantum Computers By SAIKIRAN PANJALA
 
Applications of Matrices
Applications of MatricesApplications of Matrices
Applications of Matrices
 
Mathematics Research Paper - Mathematics of Computer Networking - Final Draft
Mathematics Research Paper - Mathematics of Computer Networking - Final DraftMathematics Research Paper - Mathematics of Computer Networking - Final Draft
Mathematics Research Paper - Mathematics of Computer Networking - Final Draft
 
Aggregating In Accumulo
Aggregating In AccumuloAggregating In Accumulo
Aggregating In Accumulo
 
20131106 acm geocrowd
20131106 acm geocrowd20131106 acm geocrowd
20131106 acm geocrowd
 
Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012
Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012
Manifold Blurring Mean Shift algorithms for manifold denoising, report, 2012
 
Real life application of Enginneering mathematics
Real life application of Enginneering mathematicsReal life application of Enginneering mathematics
Real life application of Enginneering mathematics
 
Computer Network Assignment Help
Computer Network Assignment HelpComputer Network Assignment Help
Computer Network Assignment Help
 
CPP Homework Help
CPP Homework HelpCPP Homework Help
CPP Homework Help
 
Simulation notes
Simulation notesSimulation notes
Simulation notes
 
Change Point Analysis
Change Point AnalysisChange Point Analysis
Change Point Analysis
 
Topological Data Analysis of Complex Spatial Systems
Topological Data Analysis of Complex Spatial SystemsTopological Data Analysis of Complex Spatial Systems
Topological Data Analysis of Complex Spatial Systems
 
A robust blind and secure watermarking scheme using positive semi definite ma...
A robust blind and secure watermarking scheme using positive semi definite ma...A robust blind and secure watermarking scheme using positive semi definite ma...
A robust blind and secure watermarking scheme using positive semi definite ma...
 
Introduction and Applications of Discrete Mathematics
Introduction and Applications of Discrete MathematicsIntroduction and Applications of Discrete Mathematics
Introduction and Applications of Discrete Mathematics
 
[PPT]
[PPT][PPT]
[PPT]
 

SSRN-id2718694

  • 1. N=NP Solution By Christian Kiss, November 14th , 2015 – January 24th 2016 Keywords: n=np, claymath, P vs NP Problem, Optimisation, Optimization of data, Networks, triangulation, travelling salesman, hamilton, cryptography, scrambler, Subjects: C00, C3, C6, C8 C - Mathematical and Quantitative Methods > C0 - General C - Mathematical and Quantitative Methods > C3 - Multiple or Simultaneous Equation Models ; Multiple Variables C - Mathematical and Quantitative Methods > C6 - Mathematical Methods ; Programming Models ; Mathematical and Simulation Modeling C - Mathematical and Quantitative Methods > C8 - Data Collection and Data Estimation Methodology ; Computer Programs
  • 2. 1 Introduction: The problem...................................................................................................... 2 2 The solution: Self explanatory................................................................................................. 3 3 Formula.................................................................................................................................... 6 4 Implications on Hamilton and Travelling Salesman................................................................ 7 5 Not a 3D function over GPU s only........................................................................................ 10 6 Conclusion ............................................................................................................................. 21 7 Sources .................................................................................................................................. 21 8 Appendix: A small Prime numbers oddity + a few Fax pages explaining the solution above .................................................................................................................................................. 22 Figure 1 Original detail how to describe the solution................................................................ 4 Figure 2 Here the distance in the Hamiltonian problem/ Travelling Salesman is optimized. The hierarchy of the value is again self explanatory. Each number doesn’t have to be known in advance................................................................................................................................... 8 Figure 3 Example: Simple brute force approach starting with the highest value or its inverse value, cutting out the previous variable while proceeding with the remaining variables .......... 
9 Figure 4 Highly condensed data processing only counter values (clocks) instead classical X,Y,Z lattice systems ............................................................................................................... 12 Figure 5 Highly condensed data in a simple optimization layout with optimized data and optimized command, as well a simple raid/dual-channel hardware example.......................... 13 Figure 6 Using the clock value/ tower height as radius (i.e. clockvalue+Pi) the data density can be increased. Diverse relations between merge depth, a-b distance, sea level cut offs of many circles, volumes/diameters/perimeters each with their merge depth cutoff at a search level/ sea level etc .................................................................................................................... 15 Figure 7 Variant with interdependency of travelling distance to node/city-value (while Figure 6 shows equal node/city values)............................................................................................... 15 Figure 8 Refined concept for keeping the peaks, quantum computing error correction etc.. 17 Figure 9 It can be dots forming a block.................................................................................... 18 Figure 10 Possible Example: Optimization model implemented in an atomic model simulation. Adding spin and clock lines affecting each other may simulate also decay transitions and thermal shake ................................................................................................... 19 Figure 11 Sketch summary of electric transmutation, here 4th refinement, as well as Lagrange superconductor theory.............................................................................................................. 
1 Introduction: The problem [1]

Large parts of Cook's paper [2] / Millennium Challenge problem [3] seem to define the

[1] It took me originally about half an hour to think of the solution; reading the problem description, however, was a lot more complicated than the problem itself. I first met the problem on November 14th, 2015 via http://www.viralberg.com/?p=1748 ("You'll get $1,000,000 if you can solve this math problem. Posted by RYOT on Friday, November 6, 2015"), keeping the link with the comment: "codebreakers die usually dont they? maybe noone should solve it then? ;D ;D ;D /// aha.... apparently its about OPTIMISATION of information searching aha....... intels again". The solution was refined many times from November 14th, 2015 to January 15th, 2016.
  • 3. mathematical realm of values, while there were a few revisions to clarify the P vs. NP problem. As I understood the paper, I suspected a pure data-optimization problem, one with severe implications for code breaking (especially ciphers and scrambler codes), code making, quantum-computing error correction, data administration in giant networks, and on through hedge-fund algorithms and scientific institutions with massive data loads [4].

The best metaphor for the problem and its solution is therefore a large city of skyscrapers seen from above, or a giant field of different flowers. Each is a data point, or a set of data. How do we find one certain tower in this large city, this sea of skyscrapers? How do we find one particular flower in this broad field of different flowers and grass? How do we find one particular data set, and as efficiently as possible? Efficiency means that during the search we should travel each path at most once, find what we want as soon as possible, and be able to measure it along the way. While Hamilton and/or the Travelling Salesman focus on optimizing the distance between the nodes/cities, assuming that each node/city is equal, in data we do not have this problem: data has different values, and the nodes/cities are not necessarily equal.

2 The solution: Self-explanatory

The figure below describes it at a glance. It is so simple that it is actually self-explanatory. In the search process we do not walk the field or the city looking for one particular flower: we burn them all down. We make a simple mirrored counter-city, or a mirrored counter-field of flowers. Mathematically it is simply a counter value, a minus, and the rest is a vector. We move the city skyline and the mirrored counter-city skyline towards each other. The tallest meet first, i.e. the highest values meet first, which instantly tells us which the highest values are.

[2] Cook S. (2000): The P vs NP Problem, under http://www.claymath.org/sites/default/files/pvsnp.pdf
[3] Kaye R. / Stewart I. (2015): Minesweeper, http://www.claymath.org/sites/default/files/minesweeper.pdf
[4] Any major process of the modern world creates data in computer systems and networks, especially since connectivity has increased.

  • 4. Figure 1 Original detail how to describe the solution
  • 5. Tower 1 and counter tower 1, vectored towards each other, burn each other down as they meet, and we know which the highest value is by measuring the time it took until they met. 2 vectored towards -2 becomes 0, and we do not have to know the value 2 in advance: that it took a certain time is what yields the value, 2 in this case. It is fair to object that 2 + (-2) = 0 does not seem overly complicated, but let's have a look at the implications. In a large set of data, where we do not know which value is which, which is bigger or smaller, or by how much, this simple system orders the data automatically from biggest to smallest (with variations possible), measures the data in the same cycle, and ensures maximum efficiency of the search path. Put simply: we set a sea level in the city, which represents our search value, and merge the mirrored towers down to this search line. This sea-level line, or search-value time, can be set freely. Example: how many towers are there that are 200 meters tall, and where are they? City and counter-city merge towards each other; the super-tall towers burn away, then the mediocre-sized ones, and suddenly at the sea level of 200 meters we have found all towers of that size. If we remember where the roofs were and check a second city, we can find out whether both cities have towers in the same locations, i.e. in the data we process accumulations while keeping the peaks of the data. This may be interesting for any system that measures clutter and has to find significance in it. That matters for scientific data from telescopes, nuclear science, military radar systems, and error correction in quantum computing, since mistakes occur randomly while results repeat.
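The burn-down search described above can be sketched in a few lines of Python. This is a minimal illustration only; the function name, the fixed initial separation, and the discrete tick loop are assumptions of the sketch, not part of the paper:

```python
def merge_order(values):
    """Simulate the city / mirrored counter-city merge.

    A tower of height h and its mirrored counter tower close their
    gap by 2 units per tick, so taller towers 'burn down' (meet)
    first.  The tick at which each pair meets both orders the data
    (largest first) and measures it, in the same cycle.
    """
    separation = 2 * max(values)        # initial gap between the two skylines
    remaining = list(values)
    meetings = []                       # (tick, value) pairs, in meeting order
    for tick in range(separation // 2 + 1):
        for v in list(remaining):
            if separation - 2 * v <= 2 * tick:   # the two tops have met
                meetings.append((tick, v))
                remaining.remove(v)
    return [v for _, v in meetings]

print(merge_order([3, 19, 7, 12]))      # tallest meets first: [19, 12, 7, 3]
```

Note that the values are never compared pairwise; the shared clock tick at which each tower disappears is what orders and measures them at once.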
  • 6. 3 Formula

Mathematically it can be described as simple addition, but since we measure the time with vectors it gets a bit more complex than that:

V→ N1 (value) + V← (-N1) (counter value) = 0 at time t

i.e. the sequence of zeros, from the highest value to the first zero: 0(tx) = N1 > N2 > N3 > N4 ..., means simply that the result is sequenced by the data set. The highest values meet each other first and become 0 first. It makes sense to have as few towers burn as possible until one particular tower, i.e. one particular data value, is found. So in some data sets it can make sense to invert the data into its complementary values. If a data set has many large values and a few small ones, you do not want to burn down the whole city over and over until you find the lowest value. You simply invert the data set, so that the tallest towers are the smallest values. That reverses the values but not the Nx order, and saves significant optimization time. We can also process the differences between the towers, and know the value difference between N1 and N3 for instance, all within the same cycle.
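The inversion trick from this section, mirroring the data set so that the smallest values become the tallest towers, can be sketched as follows (a hypothetical helper, assuming non-negative values; the function name is illustrative):

```python
def invert(values):
    """Mirror the data about its maximum: the smallest original value
    becomes the tallest tower, so it now meets first in the merge.
    The values are reversed but the Nx ordering logic is unchanged."""
    top = max(values)
    return [top - v for v in values]

data = [9, 2, 7, 4]
print(invert(data))     # [0, 7, 2, 5]: the original 2 is now the tallest (7)
```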
  • 7. 4 Implications on Hamilton and Travelling Salesman

Hamilton and/or the Travelling Salesman problem focus on the distance. It is absolutely no problem to optimize the distance values between the cities [5] with the same algorithm. The longest distance meets first, the shortest last, and the more diverse the distances the better. We do not even have to know the values: which is the longest or shortest way, we measure in one cycle, and the result orders itself from the highest value down. Using first the shortest ways that contain as many cities as possible is a no-brainer, or inverting and starting with the longest way combination (Figure 3). However, if the cities ABCD (Figure 2) have different values, we do not need any travelling path at all: we visit the tallest first, applying the algorithm to the cities rather than to the ways/paths between them.

1: xxxxx xxxx xxxxxx xxxxxxxx => big amount of unknown, unsorted data
2: xxxxxxxx xxxxxx xxxxx xxxx => lines vectored against each other: highest meet first
3: 38495293 148392 92321 2442 => enumerator (or super-enumerator atomic clock) counts and gives each its value in the Master File table

[5] Initially I was not sure whether this chapter is necessary, to establish the relation between the equal-node-value approach focusing on the distance and the tower example algorithm shown here, which focuses on unequal node values / city heights.
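The three-line example above (unsorted blocks, merge, enumerator) can be sketched as follows. Here a plain sort stands in for the tower / counter-tower merge, and block length plays the role of tower height; the function name and the sample blocks are illustrative assumptions:

```python
def enumerate_blocks(blocks):
    """Order unsorted blocks (highest/longest meet first, as in
    line 2 of the example) and let a counter assign each block its
    measured value for the Master File table (line 3)."""
    ordered = sorted(blocks, key=len, reverse=True)   # highest meet first
    return {block: len(block) for block in ordered}

mft = enumerate_blocks(["xxxxx", "xxxx", "xxxxxx", "xxxxxxxx"])
print(mft)   # {'xxxxxxxx': 8, 'xxxxxx': 6, 'xxxxx': 5, 'xxxx': 4}
```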
  • 8. Figure 2 Here the distance in the Hamiltonian problem [6] / Travelling Salesman is optimized. The hierarchy of the values is again self-explanatory. No number has to be known in advance.

[6] Unmodified graph, open-sourced from https://en.wikipedia.org/wiki/Travelling_salesman_problem#/media/File:Weighted_K4.svg
  • 9. Figure 3 Example: simple brute-force approach starting with the highest value or its inverse value [7], cutting out the previous variable while proceeding with the remaining variables

[7] Unmodified graph, open-sourced from https://upload.wikimedia.org/wikipedia/commons/d/d2/Minimum_spanning_tree.svg
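The approach of Figure 3, starting with the highest value and cutting out the previous variable before proceeding with the rest, can be sketched on path lengths as well. The edge weights below are illustrative assumptions; the actual figure values are not given in the text:

```python
# hypothetical edge weights for a small weighted graph on cities A, B, C, D
edges = {("A", "B"): 20, ("A", "C"): 42, ("A", "D"): 35,
         ("B", "C"): 30, ("B", "D"): 34, ("C", "D"): 12}

def order_edges(edges):
    """The longest distance 'meets first'; cut it out of the set and
    proceed with the remaining variables, as Figure 3 describes."""
    remaining = dict(edges)
    ordered = []
    while remaining:
        longest = max(remaining, key=remaining.get)   # highest value first
        ordered.append((longest, remaining.pop(longest)))
    return ordered

print(order_edges(edges)[0])   # (('A', 'C'), 42): the longest edge is found first
```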
  • 10. 5 Not a 3D function over GPUs only

So, great: let's go ahead, make some animations of 3D towers, and let a graphics processor handle it? Then you have clearly not understood it. Each tower can be one. Single. Point. It can be a single bit, whether atom-sized, a hard-drive cluster, or a bit in a memory chip. It can be a single bit in a radio signal, where the second or third repeat cuts out the clutter. It can be many different points in one data block, metaphorically forming a particular irregular rooftop. Your Master File table decides what each rooftop means, or what each length means. This simply enables data compression, or a code. You can store the height of a tower like the pits and lands of optical media and form a data set from that; then you have literally, physically defined a length. You can cut the skyscrapers dynamically at the smallest one and process only the relative heights between the towers, i.e. you may store only the relative data, the differences between the data values. If you want to make 3D objects, however, there are limitless ways to alter the shape or color of a tower or data item, each meaning something else, which in my opinion is not an optimal solution. This invention/concept can make data storage, raid systems, memory protocols, robotics and artificial-intelligence learning, and so forth. Generally speaking, I would no longer view most scrambler codes to date as reliable, and would handle them as deceit channels from now on, which can reveal what is covered. Not to be underestimated is the application in triangulation systems, electromagnetic or acoustic. These have so far required relatively complicated calculations, while a system based on this optimization finds the highest value (or loudest signal) first. The second strong
  • 11. (or loud) signal comes second. While all of the clutter (and the quieter signals) can be processed, the strongest ones would be findable without driving massive I.T. into a war zone, for instance. Using an atomic clock as super-enumerator can make sense: the shortest fail-safe measurable unit defines the speed of the system. In an inverted data set with many large numbers, the most gigantic number may simply take one to a few counting cycles until the counter values meet. Example: we want data efficiency, so what do we really want to store when we have a super-enumerator? Simply the clock value (the counter value of the shortest reliably measurable time-counting unit). We may not even need X,Y,Z point systems plus the clock unit; depending on the storage model, we may need only the clock unit. Figure 4 below shows how such a highly dense and condensed system may look. If the commands and findability data are stored as negative values, these commands and location markers can be optimized as well. Here the data volumes may be smaller, and to enhance the velocity of commands, these optimized commands and findability data sets may be perfectly feasible as cache data. However, if the negative values are processed this way (as command and location clocks), the negative values of the data set itself have to be stored in the positive data set as well. Here, perhaps, a gap between two equal positive values can denote a negative value. This way negative data is also stored in a quickly findable form in the optimization process. The controller accessing the Master File table handles it intrinsically as a negative value, without complicating the optimization process. Optimizing the command+location layer enhances
  • 12. the time-critical findability of data. The height of the towers/clocks in this layer can easily trigger a sequence of commands in a certain order. Example: first command > second command > third command.

Figure 4 Highly condensed data processing only counter values (clocks) instead of classical X,Y,Z lattice systems
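The relative-height storage described above, cutting the skyline at the smallest tower and keeping only the differences, can be sketched like this; the function names and the sample heights are assumptions made for the illustration:

```python
def to_relative(heights):
    """Cut the skyline at the smallest tower: store one base value
    plus each tower's difference from it, instead of absolute data."""
    base = min(heights)
    return base, [h - base for h in heights]

def from_relative(base, deltas):
    """Rebuild the absolute tower heights from the condensed form."""
    return [base + d for d in deltas]

base, deltas = to_relative([200, 120, 350, 120])
print(base, deltas)                          # 120 [80, 0, 230, 0]
assert from_relative(base, deltas) == [200, 120, 350, 120]
```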
  • 13. Figure 5 Highly condensed data in a simple optimization layout with optimized data and optimized commands, as well as a simple raid/dual-channel hardware example
  • 14. Achieving an efficient algorithm and greater data density can be done by counting each clock value not simply as a tower but as a radius, in a possible second stage of the Master File table. Let's look at what additional information we gain then, compared to tower height only. One clock value used as a radius (with Pi) means:

• The clock value length, i.e. the radius
• The perimeter, i.e. the circle length or half of it
• The volume of the ring / circle / half circle
• The diameter at the maximum merge of the green and blue rings / half rings
• Two points a, b moving apart from each other per merging count unit as the green counter circles merge towards the blue, which gives a strict ratio of two-point distance to merge depth
• Sea levels per circle, i.e. different circles have different cut lengths at a certain search level / sea level
• For a full ring circle, a decreasing distance between the two points (on further merging past the maximum diameter), which can take negative values
• The distance between the rings themselves can be processed, which can make data interdependencies per search level / sea level as well [8]

This is just a Master File table function involving only Pi + the clock value, but other, more compressed data sets can

[8] A: I think it would be an incredibly elegant and efficient code. B: Do we beat binary data density and calculation efficiency? C: What other super-efficient algorithms can make one clock value intrinsically data-dense, i.e. a compression code?
  • 15. process the clock value.

Figure 6 Using the clock value / tower height as a radius (i.e. clock value + Pi), the data density can be increased. Diverse relations exist between merge depth, a-b distance, sea-level cut-offs of many circles, and volumes/diameters/perimeters, each with their merge-depth cut-off at a search level / sea level, etc.

Figure 7 Variant with interdependency of travelling distance to node/city value (while Figure 6 shows equal node/city values)

The figures beneath show the evolution of the optimization concept visually, at a glance. It is self-explanatory once the concept has been understood. The chequered symbol represents a simple quantum-computing error-correction pattern: results repeat, mistakes are random. Still, I believe that quantum-computing superpositions are error-generation machines, where each cycle generates additional errors, only to claim afterwards that one path was faster. This
  • 16. optimization concept/invention can be narrowed to mere error-correction protocols, but that would waste its potential, given the possibilities described above. The red lines metaphorically represent the search value or sea level, the purple lines the accumulation spots. These are important for finding significance in data. Multiple data tables can process the same towers with different meanings; this can be used for hidden code or for greater data density. In all the examples of highly condensed data sets and super-enumerators processing only clock values (or X,Y,Z + clock-value lattices), or using the value as a radius, nothing prevents the application of these principles on any existing platform or binary system and any conventional processor, including GPUs as raid controllers. Some systems may simulate a controller software or hard-drive file table, repurposing cache or solid-state devices as the command+location layer while the data remains on the hard drive, etc.
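The list of quantities derivable from one clock value used as a radius (section 5 above) can be sketched as follows. This is a minimal geometric illustration: the dictionary keys, the disc area standing in for the text's ring "volume", and the sea-level chord formula are all assumptions of this sketch:

```python
import math

def radius_properties(clock_value, sea_level):
    """Derive several quantities from a single clock value used as a
    radius (clock value + Pi), for a circle resting on the ground
    with its centre at height r."""
    r = float(clock_value)
    props = {
        "radius": r,
        "perimeter": 2 * math.pi * r,   # circle length (halve for half circle)
        "area": math.pi * r ** 2,       # disc area, the text's ring 'volume'
        "diameter": 2 * r,              # width at the maximum merge depth
    }
    # chord length where a horizontal search level / sea level cuts the circle
    d = abs(sea_level - r)              # distance of the cut from the centre
    props["chord_at_sea_level"] = 2 * math.sqrt(r * r - d * d) if d <= r else 0.0
    return props

p = radius_properties(10, sea_level=10)
print(p["chord_at_sea_level"])          # 20.0: a cut through the centre
```

Different circles indeed yield different chord lengths at the same sea level, which is what lets the cut-offs carry information per search level.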
  • 17. Figure 8 Refined concept for keeping the peaks, quantum-computing error correction, etc.
  • 18. Figure 9 It can be dots forming a block
  • 19. What would a clock unit look like when simulating an atomic model? The figures below are based on my electric-transmutation theory (Figure 11 shows the 4th refinement sketch) as well as on Lagrange superconductor theory, i.e. the idea that superconductors are nothing more than Lagrange points, where nuclear forces offset each other given the right spin, density, and temperatures.

Figure 10 Possible example: Optimization model implemented in an atomic-model simulation. Adding spin and clock lines affecting each other may also simulate decay transitions and thermal shake
  • 20. Figure 11 Sketch summary of electric transmutation, here the 4th refinement, as well as Lagrange superconductor theory
  • 21. 6 Conclusion

Instead of attacking mathematically intractable optimization problems that focus on the path (so hard that they became a millennium quest with a one-million-dollar prize), we use a vector to process away unwanted node values/cities, which orders and measures all values in the same cycle. This ensures maximum efficiency in finding exactly the desired values. A set of approaches enables many more data-storage and processing options than 3D modeling blocks. While this optimization algorithm can be applied to the paths of the existing P vs. NP problems (Figures 2 and 3), which is especially useful for large, unknown, unsorted values, it may no longer be necessary to measure the paths at all to achieve efficient data optimization.

7 Sources

Cook S. (2000): The P vs. NP Problem, under http://www.claymath.org/sites/default/files/pvsnp.pdf, obtained from http://www.claymath.org/millennium-problems/p-vs-np-problem

Kaye R. / Stewart I. (2015): Minesweeper, http://www.claymath.org/sites/default/files/minesweeper.pdf

Open-sourced graphs (2016): https://upload.wikimedia.org/wikipedia/commons/d/d2/Minimum_spanning_tree.svg and https://en.wikipedia.org/wiki/Travelling_salesman_problem#/media/File:Weighted_K4.svg

Unknown blogger / RYOT (2015): You Will Get $1,000,000 If You Solve This Math Problem, under http://www.viralberg.com/?p=1748, referring to the short 40-second challenge video under https://www.facebook.com/RYOT/videos/1210097135670582/
  • 22. 8 Appendix: A small prime-numbers oddity + a few fax pages explaining the solution above

> Sent: Saturday, 16 January 2016, 02:18
> From: "Christian KISS" <Christian.Kiss@gmx.de>
> To: maths.directory@lms.ac.uk
> Subject: Fw: prime numbers riddle
>
> Christian Kiss BabyAWACS - TRULY Independent.
>
> Sent: Saturday, 16 January 2016, 01:59
>
> prime numbers riddle:
>
> 2-13 = -11 ... -x3, -x5, -x7, -x9
> 2-11 = -9
> 2-9 = -7 (!)
> 2-7 = -5 (!?)
> 2-5 = -3 ***************
> 2-1 = 1 ***************
> 2+1 = 3
> 2+5 = 7
> 2+7 = 9
> 2+9 = 11
> 2+11 = 13 ... x3, x7, x9, x1