This document discusses various probabilistic language models used in natural language processing. It covers n-gram models, such as bigram and trigram models, used in tasks such as speech recognition, and describes how probabilistic language models assign probabilities to strings of text based on counts of word occurrences. It also discusses techniques such as additive smoothing and linear interpolation, which handle word pairs that would otherwise receive zero probability in n-gram models. Finally, it introduces probabilistic context-free grammars, which use rewrite rules with associated probabilities to model language structure.
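As a concrete illustration of the smoothing idea mentioned above, here is a minimal Python sketch of an additively smoothed bigram estimate; the toy corpus and function name are illustrative, not taken from the summarized slides.

```python
from collections import Counter

def bigram_prob(tokens, vocab_size, w1, w2, alpha=1.0):
    """Additively smoothed estimate of P(w2 | w1).

    alpha=1 gives classic add-one (Laplace) smoothing, so word pairs
    never seen in the corpus still receive a small non-zero probability.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * vocab_size)

tokens = "the cat sat on the mat".split()
print(bigram_prob(tokens, 6, "the", "cat"))  # seen pair: 0.25
print(bigram_prob(tokens, 6, "cat", "the"))  # unseen pair, still > 0
```

Linear interpolation would instead combine unigram and bigram estimates with mixture weights that sum to one.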
The document discusses the author's experience and qualifications for a potential job or research opportunity. Specifically, it covers:
1) The author's educational background in physics and mathematics in Russia and experience programming in C/C++ and MATLAB.
2) Their master's research on using MATLAB to model plasma density and current measurements, including developing algorithms and GUI tools.
3) Their plans to soon publish their master's thesis, pass remaining exams, and desire to continue research for a PhD with a focus on continuum mechanics.
pptx - Psuedo Random Generator for Halfspacesbutest
This document summarizes research on constructing pseudorandom generators for halfspaces. The key results are:
1) The researchers developed a pseudorandom generator for halfspaces over arbitrary product distributions on R^n, requiring only that E[x_i^4] is constant. This improves on prior work that only handled the uniform distribution on {-1,1}^n.
2) Their generator can simulate intersections of k halfspaces using a seed of length k log(n), and arbitrary functions of k halfspaces using a seed of length k^2 log(n).
3) The generator exploits a "dichotomy" among halfspaces: each is either a "dictator"-like function depending on only a few variables, or a "regular" halfspace in which no single variable has large influence and which can be analyzed with central-limit-theorem-style arguments. (Standard definitions for the objects involved are sketched below.)
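For readers unfamiliar with the objects involved, the standard definitions (not taken from the slides themselves) can be written as:

```latex
% A halfspace over R^n with weight vector w and threshold \theta:
h(x) = \operatorname{sign}\Bigl(\textstyle\sum_{i=1}^{n} w_i x_i - \theta\Bigr)
% A generator G with seed s \varepsilon-fools a class \mathcal{F} when, for every f \in \mathcal{F},
\bigl|\, \mathbb{E}_s[f(G(s))] - \mathbb{E}_x[f(x)] \,\bigr| \le \varepsilon .
```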
This document summarizes research on minimizing deterministic finite automata (DFAs) in MapReduce frameworks. It discusses two algorithms for DFA minimization - Hopcroft's algorithm and Moore's algorithm - and evaluates their performance on MapReduce. The key findings are that Hopcroft's algorithm outperforms Moore's algorithm in terms of communication cost when the alphabet size is at least 16 and in runtime when the alphabet size is at least 32. Both algorithms are equally sensitive to skewed input data.
The document discusses complexity analysis of algorithms. It defines time complexity as the total time required for an algorithm to execute, and space complexity as the memory space required. Both can be analyzed using asymptotic analysis, which studies how performance changes as the input size grows. Asymptotic notations such as Big-O, Omega, and Theta are used to analyze best-case, worst-case, and average-case time complexity: Big-O denotes an upper bound on running time, Omega a lower bound, and Theta a bound that is tight from both sides. Examples are given of functions and their time complexities in these notations.
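The three notations the summary refers to have standard definitions, which can be stated as:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ \text{s.t.}\ 0 \le f(n) \le c\,g(n) \ \text{for all}\ n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{s.t.}\ 0 \le c\,g(n) \le f(n) \ \text{for all}\ n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

For example, 3n^2 + 5n = Theta(n^2): take c = 4 with n_0 = 5 for the upper bound, and c = 3 for the lower bound.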
String matching algorithms are used to find occurrences of a pattern within a larger string or text. The example shows a text string "A B C A B A A C A B" and a pattern "A B A A", which occurs at a shift of 3. The naive string matching algorithm is described, which, for each candidate shift from 0 up to the difference of the string lengths, compares the pattern against the text character by character to find all valid shifts at which the pattern occurs.
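A minimal Python sketch of that naive scan (variable names are illustrative); run on the example above with the spaces removed, it reports exactly the shift of 3:

```python
def naive_match(text, pattern):
    """Return every shift s at which pattern occurs in text (naive O(M*N) scan)."""
    n, m = len(text), len(pattern)
    shifts = []
    for s in range(n - m + 1):
        if text[s:s + m] == pattern:  # character-by-character comparison
            shifts.append(s)
    return shifts

print(naive_match("ABCABAACAB", "ABAA"))  # [3], matching the shift-3 example
```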
This document discusses and defines four common algorithms for string matching:
1. The naive algorithm compares characters one by one with a time complexity of O(MN).
2. The Knuth-Morris-Pratt (KMP) algorithm uses pattern preprocessing to skip previously checked characters, achieving linear time complexity of O(N+M).
3. The Boyer-Moore (BM) algorithm matches the pattern from right to left and uses preprocessing tables to skip more characters than KMP, achieving sublinear best-case running time of O(N/M).
4. The Rabin-Karp (RK) algorithm uses hashing to find candidate matches among text substrings, with expected time complexity of O(N+M) and a worst case of O(NM) when many hash collisions must be verified.
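To make the rolling-hash idea behind Rabin-Karp concrete, here is a hedged Python sketch; the base and modulus are arbitrary illustrative choices:

```python
def rabin_karp(text, pattern, base=256, mod=1_000_000_007):
    """Rolling-hash search: hash each window, compare strings only on hash hits."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    high = pow(base, m - 1, mod)  # weight of the character leaving the window
    p_hash = w_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        w_hash = (w_hash * base + ord(text[i])) % mod
    shifts = []
    for s in range(n - m + 1):
        if w_hash == p_hash and text[s:s + m] == pattern:  # verify: rule out collisions
            shifts.append(s)
        if s < n - m:  # roll the window one character to the right
            w_hash = ((w_hash - ord(text[s]) * high) * base + ord(text[s + m])) % mod
    return shifts

print(rabin_karp("ABCABAACAB", "ABAA"))  # [3]
```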
The tractability of some combinatorial decision/optimisation problems in the ... (Mickey Boz)
Combinatorial decision and optimisation problems are related to search over exponential amounts of data using the Riemann Hypothesis, with tractable algorithms as the net result. The result, if valid, would be useful in physics, operations research, cryptography, mathematics, engineering, and many other areas.
NP completeness. Classes P and NP are two frequently studied classes of problems in computer science. Class P is the set of all problems that can be solved by a deterministic Turing machine in polynomial time.
This document introduces the concept of NP-completeness. It discusses that while some problems like shortest paths, minimum spanning trees, and bipartite matching have efficient polynomial-time algorithms, others like satisfiability (SAT), the travelling salesman problem (TSP), integer linear programming (ILP), and set cover have only exponential-time algorithms known. It defines the class NP as problems that can be solved by a non-deterministic Turing machine in polynomial time. It states that if any NP-complete problem could be solved in polynomial time, then P would equal NP. Problems are NP-hard if all problems in NP can be reduced to them in polynomial time, and NP-complete if they are both in NP and NP-hard.
Proving Lower Bounds to answer the P versus NP Question (guest383ed6)
1. The document discusses research into proving lower bounds on the complexity of problems in the NP class in order to help answer the P versus NP question.
2. It describes current techniques like diagonalization and combinatorial circuits that are used to prove lower bounds and the limitations of these methods.
3. The researcher aims to conduct an experiment applying diagonalization and circuit techniques simultaneously to problems like the Traveling Salesman Problem to develop a more efficient new technique for determining lower bounds.
This document distinguishes polynomial-time from non-polynomial-time algorithms and defines NP-complete problems. It notes that NP-complete problems include the Satisfiability (SAT), Traveling Salesman, Knapsack, and Clique problems. No polynomial-time algorithms are known for these problems, and they are mapped to one another through polynomial-time reductions. While they can be solved in exponential time, finding a polynomial-time algorithm for any one of them would mean P=NP.
This document discusses P, NP and NP-complete problems. It begins by introducing tractable and intractable problems, and defines problems that can be solved in polynomial time as tractable, while problems that cannot are intractable. It then discusses the classes P and NP, with P containing problems that can be solved deterministically in polynomial time, and NP containing problems that can be solved non-deterministically in polynomial time. The document concludes by defining NP-complete problems as those in NP that are as hard as any other problem in the class, in that any NP problem can be reduced to an NP-complete problem in polynomial time.
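The "solved non-deterministically in polynomial time" view of NP is equivalent to polynomial-time verification of certificates, which a small SAT checker makes concrete. A minimal sketch, assuming a DIMACS-style encoding where the integer k denotes variable k and -k its negation (the representation is illustrative):

```python
def verify_sat(clauses, assignment):
    """Check a certificate (truth assignment) against a CNF formula.

    Runs in time linear in the formula size: this is the polynomial-time
    verification that places SAT in NP.
    clauses: list of clauses, each a list of signed variable indices.
    assignment: dict mapping variable index -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))   # True: satisfying
print(verify_sat(formula, {1: False, 2: True, 3: False}))  # False: first clause fails
```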
International Conference on Monte Carlo techniques
Closing conference of thematic cycle
Paris July 5-8th 2016
Campus les Cordeliers
Slides of Richard Everitt's presentation
This document discusses the complexity of primality testing. It begins by explaining what prime and composite numbers are, and why primality testing is important for applications like public-key cryptography that rely on the assumption that factoring large composite numbers is computationally difficult. It then covers algorithms for primality testing like the Monte Carlo algorithm and discusses their runtime complexities. It shows that while testing if a number is composite can be done in polynomial time, general number factoring is believed to require exponential time, making primality testing an important problem.
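The summary's "Monte Carlo algorithm" is not specified here, but the Miller-Rabin test is a standard Monte Carlo primality test and illustrates the flavour; a minimal sketch:

```python
import random

def miller_rabin(n, rounds=20):
    """Monte Carlo primality test: never rejects a prime; each round catches
    a composite with probability >= 3/4, so the error is <= 4**(-rounds)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d, r = n - 1, 0
    while d % 2 == 0:  # write n - 1 as d * 2**r with d odd
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1) if n > 4 else 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness of compositeness
    return True  # probably prime

print(miller_rabin(101))  # True
print(miller_rabin(561))  # False (a Carmichael number, which fools the Fermat test)
```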
P, NP, NP-Complete, and NP-Hard
Reductionism in Algorithms
NP-Completeness and Cook's Theorem
NP-Complete and NP-Hard Problems
Travelling Salesman Problem (TSP)
Travelling Salesman Problem (TSP) - Approximation Algorithms
PRIMES is in P - (A hope for NP problems in P)
Millennium Problems
Conclusions
The document discusses numerical integration techniques including the trapezoidal rule and Simpson's rule. The trapezoidal rule approximates the area under a curve using trapezoids within subintervals, while Simpson's rule uses quadratic approximations within subintervals. The document provides the formulas for approximating integrals using these rules, including the coefficient patterns and expressions used. Examples are given to demonstrate applying the rules to approximate definite integrals. The end discusses estimating the error involved when using these approximation methods.
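A compact Python sketch of the two composite rules described above (the coefficient patterns 1, 2, ..., 2, 1 and 1, 4, 2, ..., 4, 1 are the standard ones; function names are illustrative):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals: (h/2)*(1, 2, ..., 2, 1)."""
    h = (b - a) / n
    total = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return total * h / 2

def simpson(f, a, b, n):
    """Composite Simpson's rule (n even): (h/3)*(1, 4, 2, 4, ..., 4, 1)."""
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return total * h / 3

# integral of x^2 on [0, 1] is exactly 1/3; Simpson is exact for quadratics
print(trapezoid(lambda x: x * x, 0, 1, 4))  # 0.34375
print(simpson(lambda x: x * x, 0, 1, 4))    # 0.3333...
```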
1) The document discusses the complexity classes P, NP, NP-hard and NP-complete. P refers to problems that can be solved in polynomial time, while NP includes problems that can be verified in polynomial time.
2) NP-hard problems are at least as hard as the hardest problems in NP. NP-complete problems are the hardest problems in NP. If any NP-complete problem could be solved in polynomial time, then P would be equal to NP.
3) Common NP-complete problems discussed include the traveling salesman problem and integer knapsack problem. Reductions are used to show that one problem is at least as hard as another.
The document discusses NP-complete problems and polynomial-time reductions between them. It analyzes permutation problems like Hamiltonian path/cycle and vertex coloring. It also covers subset problems like vertex cover, independent set, and satisfiability. The document proposes algorithms that use a decision box to solve these problems in polynomial time, even though directly finding optimal solutions is NP-complete. It shows how problems can be reduced to each other via the decision box approach.
The document discusses NP-complete problems and polynomial-time reductions between them. It summarizes several permutation and subset problems that are known to be NP-complete, including Hamiltonian path/cycle, vertex cover, and 3-SAT. It then describes polynomial-time algorithms for solving some of these problems exactly using a "decision box" that can determine in polynomial time whether an instance has a solution. For example, it presents an O(n) algorithm for finding a minimum vertex cover using a decision box to iteratively test subset sizes.
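A hedged reconstruction of that decision-box idea for vertex cover, assuming a hypothetical oracle has_cover_of_size(k) that answers the size-k decision question in polynomial time; the brute-force oracle below is a stand-in for illustration only, not a polynomial-time procedure:

```python
from itertools import combinations

def min_vertex_cover_size(n_vertices, has_cover_of_size):
    """Find the minimum vertex cover size with at most n+1 decision-box calls."""
    for k in range(n_vertices + 1):
        if has_cover_of_size(k):
            return k
    return n_vertices  # the whole vertex set always covers every edge

# Toy stand-in oracle: brute force over a triangle graph (minimum cover = 2)
edges = [(0, 1), (1, 2), (0, 2)]
def oracle(k):
    return any(all(u in c or v in c for u, v in edges)
               for c in (set(s) for s in combinations(range(3), k)))

print(min_vertex_cover_size(3, oracle))  # 2
```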
1) NP refers to problems that can be solved by a non-deterministic Turing machine in polynomial time. This includes problems where a potential solution can be verified in polynomial time.
2) Examples of NP-complete problems include the Hamiltonian cycle problem and the traveling salesman problem. These problems are among the hardest problems in NP.
3) It is a major open question whether P=NP, which would mean that NP-complete problems could be solved in polynomial time by a deterministic machine. Most experts believe P≠NP but there is no proof.
The document discusses time and space complexity analysis of algorithms. Time complexity measures the number of steps needed to solve a problem as a function of input size, with common orders being O(log n), O(n), O(n log n), and O(n^2). Space complexity measures memory usage; unlike time, space can be reused. Big-O notation describes asymptotic growth rates for comparing algorithm efficiency, with constant O(1) being best and exponential O(c^n) being worst.
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
This document summarizes a study on evaluating the rate of convergence of the Newton-Raphson method. A computer program was coded in Java to calculate cube roots from 1 to 25 using Newton-Raphson. The lowest rate of convergence was for the cube root of 16, and the highest was for 3. The average rate of convergence was found to be 0.217920. Formulas for estimating the rate of convergence from successive approximations are also presented.
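The study's program was written in Java; the following Python sketch shows the same Newton-Raphson iteration for cube roots and one common way to estimate an empirical order of convergence from successive iterates (the study's own rate formula may differ):

```python
import math

def newton_cbrt(a, x0=1.0, tol=1e-12, max_iter=100):
    """Newton-Raphson for f(x) = x**3 - a; returns the full list of iterates."""
    xs = [x0]
    for _ in range(max_iter):
        x = xs[-1]
        xs.append((2 * x + a / (x * x)) / 3)  # x - f(x)/f'(x), simplified
        if abs(xs[-1] - xs[-2]) < tol:
            break
    return xs

xs = newton_cbrt(16)
print(xs[-1])  # ~2.5198421, the cube root of 16

# Empirical order p from errors e_k = |x_k - r| of three early iterates:
r = 16 ** (1 / 3)
e = [abs(x - r) for x in xs]
p = math.log(e[5] / e[4]) / math.log(e[4] / e[3])
print(p)  # close to 2: Newton's method converges quadratically near the root
```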
In most of the algorithms analyzed until now, we have been studying problems solvable in polynomial time. The polynomial-time class P consists of problems whose algorithms, on inputs of size n, have a worst-case running time of O(n^k) for some constant k. Informally, then, the NP (nondeterministic polynomial time) problems, often loosely described as "non-polynomial", are those for which no O(n^k) algorithm is known for any constant k.
This document discusses solving NP-complete problems using graph embodiment on a quantum computation paradigm. It proposes a method for solving relational database queries by transforming the query problem and results into a labeled directed graph, where the results are derived as the maximum clique of the graph. The document suggests that if this method can be used to solve queries on both classical and quantum computers using graph embodiment, then P could equal NP. However, if it cannot be solved on both paradigms, then the P vs NP problem cannot be resolved with current computation models and new mathematical axioms would be needed.
Similar to Summary of the Roo-Pooh-Tigger Studies
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionality. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with the Long Short-Term Memory (LSTM) algorithm. A recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, was used to train and test the model. The experimental results show that the CNN-LSTM method detects smart grid intrusions much better than other deep learning algorithms used for classification. In addition, the proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
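A minimal Keras sketch of a hybrid CNN-LSTM classifier of the kind the abstract describes; the layer sizes, input shape, and class count are illustrative assumptions, not the paper's architecture:

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps=50, n_features=40, n_classes=2):
    # Conv1D extracts local patterns from each traffic window; the LSTM then
    # models temporal dependencies across the convolved sequence.
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```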
International Conference on NLP, Artificial Intelligence, Machine Learning an... (gerogepatton)
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
6th International Conference on Machine Learning & Applications (CMLA 2024) (ClaraZara1)
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of Machine Learning & Applications.
Literature Review Basics and Understanding Reference Management.pptx (Dr Ramhari Poudyal)
Three-day training on academic research, focusing on analytical tools, at United Technical College, supported by the University Grants Commission, Nepal, 24-26 May 2024.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained strong momentum due to their numerous advantages over fossil fuels, advantages that go beyond sustainability to include financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram, which sets the priorities and requirements of the system. The proposed approach allows installations to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study of a dairy farm support the theoretical work and highlight its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
ACEP Magazine 4th edition, launched on 05.06.2024 (Rahul)
This document provides information about an edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repair, and strengthening. The document highlights ACEP's activities and provides a technical educational article for members.
SUMMARY OF THE ROO-POOH-TIGGER STUDIES
(June 2018)
(Now going into hibernation for some time)
HOBBYIST STUDIES
ENTERTAINMENT
(BEYOND GATE)
(The magic of the Riemann Hypothesis applied to the harmonious interaction of competing and cooperating vanilla finite Harmonic Series, using the mysterious Euler-Mascheroni gamma constant, which allows nondeterminism to be tractably tackled by the traditional von Neumann computer)
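The mathematical identity behind this framing is standard, whatever one makes of its application here: the n-th harmonic number grows like the natural logarithm, with the Euler-Mascheroni constant as the limiting offset:

```latex
H_n = \sum_{k=1}^{n} \frac{1}{k} = \ln n + \gamma + \frac{1}{2n} + O\!\left(\frac{1}{n^2}\right),
\qquad
\gamma = \lim_{n \to \infty} \left( H_n - \ln n \right) \approx 0.57721\,56649 .
```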
These studies have yielded the strange results that P=AP=NP=PSPACE and that it is possible to better, in space requirements, the Arabic positional representation of an integer. These results may not be generally accepted, and the student should not use them in formal or informal training or education programs. The student is advised to follow the standard texts and accepted results.
THREE RESULTS
1. The Roo Number System.
An integer n represented in the Arabic Positional Number System is bloated up to an enormous size using error-control coding over deterministic controlled channels, where the Shannon Limit does not apply. It is then collapsed, via a refereed NP-complete problem, to a succinct size which is less than log(n). This works for all integers beyond a certain basic size.
2. The Pooh-Tigger race.
All types of nondeterministic behaviour lend themselves to simple explanations. Nondeterminism is merely practical deterministic exhaustive search using Pooh-Tigger races, which are based on the Harmonic Series and the mysterious Euler-Mascheroni constant. The search for an element in an exponential amount of data in deterministic polynomial time is the key to the solution. In the case of existential nondeterminism, Tigger will be exactly half a house ahead of Pooh when there is success; in the case of universal nondeterminism, Tigger will never be exactly half a house ahead of Pooh. Both types of exhaustive search take only a deterministic polynomial amount of time. Thus it is possible to simulate, in deterministic polynomial time, an alternating Turing machine for the TQBF problem by Pooh-Tigger races. This allows the conclusion that P=AP=NP=PSPACE.
The Memex gives a rich number of methods that solve many problems using alternating Turing machines in polynomial time; for example, context-sensitive recognition and the simulation of an NDTM in PSPACE. With deterministic polynomial-time Pooh-Tigger races, these alternating Turing machines can be simulated in deterministic polynomial time.
3. The Riemann Hypothesis demystified.
The Riemann Hypothesis lends itself to a simple explanation via the Pooh-Tigger races. A zero of the Riemann zeta function occurs when Tigger is exactly half a house ahead of Pooh in a cake distribution race! A zero corresponds exactly and uniquely to a pair consisting of a composite integer and one of its factors.