This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here, and certain results of R. M. Solovay are reported.
Search for a substring of characters using the theory of non-deterministic fi... (journalBEEI)
The paper proposes an algorithm for searching for a substring of characters in a string. Its principle of operation is based on the theory of non-deterministic finite automata and a vector-character architecture. When repeatedly searching for different substrings in a target string, it provides computational complexity linear in the length of the searched substring, measured in the number of operations on hyperdimensional vectors; none of the existing algorithms has such a low computational complexity. The disadvantage of the proposed algorithm is that existing hardware implementations of computing systems require a large number of machine instructions to perform operations on hyperdimensional vectors, which reduces the gain from the algorithm. Nevertheless, a future hardware implementation that executes operations on hyperdimensional vectors in a single cycle would allow the proposed algorithm to be applied in practice.
The document describes three different searching techniques: linear search, binary search, and DN search. Linear search has a worst case time complexity of O(n) as it searches through each element sequentially. Binary search has a worst case time complexity of O(log n) but requires the list to be sorted. DN search also has a worst case time complexity of O(log n) and can be applied to both sorted and unsorted lists, making it more powerful than the other techniques.
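The trade-off between the first two techniques can be illustrated with standard binary search over a sorted list (a minimal sketch; "DN search" is specific to the summarized document and is not reproduced here):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the search interval each step
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Each iteration discards half of the remaining interval, which is where the O(log n) worst case comes from; linear search, by contrast, may touch all n elements.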
This document discusses using genetic algorithms to cryptanalyze the RSA cryptosystem. It first provides an overview of the RSA algorithm and how it works. It then discusses the Karatsuba algorithm, which can be used to efficiently multiply very large numbers. The document goes on to explain the basic concepts behind genetic algorithms, including representation, selection, crossover, and mutation operators. The authors propose applying genetic algorithms to generate new number pairs p and q that could be used to factor the RSA modulus N and break the cryptosystem. Specifically, they suggest using genetic algorithm operators to generate a new population of p and q values to use in cryptanalyzing RSA.
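The Karatsuba step mentioned above replaces the four sub-multiplications of schoolbook long multiplication with three recursive ones. A minimal sketch (splitting on decimal digits for readability; real implementations split on machine words):

```python
def karatsuba(x, y):
    """Multiply non-negative integers with three recursive multiplies
    instead of four: xy = z2*B^2 + z1*B + z0 with B = 10**m."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # One extra multiply recovers the cross terms: z1 = x1*y0 + x0*y1
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```

The recurrence T(n) = 3T(n/2) + O(n) gives O(n^log2(3)) ≈ O(n^1.585), which is why it pays off for the very large operands RSA works with.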
The Nelder-Mead search algorithm is an optimization technique used to find the minimum or maximum of an objective function by iteratively modifying a set of parameter values. It begins with an initial set of points forming a simplex in parameter space and evaluates the objective function at each point, replacing the worst point with a new point to form a new simplex. This process repeats until the optimal value is found or a stopping criterion is met. The algorithm is simple to implement and can find local optima for problems with multiple parameters, but it depends on the initial starting point and may get stuck in local optima rather than finding the global solution.
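The simplex update described above can be sketched as a compact textbook-style implementation (the reflection/expansion/contraction/shrink coefficients and defaults below are the conventional choices, not taken from the summarized document):

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f over R^n starting from the point x0 (a list of floats)."""
    n = len(x0)
    # Initial simplex: x0 plus n points offset along each coordinate axis.
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # Centroid of all points except the worst.
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        # Reflect the worst point through the centroid.
        refl = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(best) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        elif f(refl) < f(best):
            # Expansion: push further in the promising direction.
            exp = [centroid[i] + gamma * (refl[i] - centroid[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        else:
            # Contraction: pull the worst point toward the centroid.
            con = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:
                # Shrink the whole simplex toward the best point.
                simplex = [best] + [
                    [best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                    for p in simplex[1:]
                ]
    return min(simplex, key=f)
```

On a smooth unimodal function this converges to the local minimum; on multimodal functions the result depends on `x0` and `step`, exactly as the summary warns.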
DLT stands for Direct Linear Transformation. It is an algorithm that estimates the camera matrix P by minimizing the algebraic error between measured image points xi and projected 3D points PXi. Specifically, DLT finds P by solving the equation Ap=0, where A is constructed from point correspondences and p contains the entries of P. This minimizes the sum of squared algebraic distances between the points. For affine cameras, the algebraic and geometric distances are equivalent. DLT provides an initial estimate of P that can be refined using nonlinear optimization techniques.
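The equation Ap = 0 mentioned above is assembled as follows in the standard DLT formulation (a sketch of the common textbook form): each correspondence x_i ↔ X_i, with image point x_i = (x_i, y_i, w_i)^T in homogeneous coordinates, contributes two rows to A:

```latex
\begin{bmatrix}
\mathbf{0}^{\top} & -w_i \mathbf{X}_i^{\top} & y_i \mathbf{X}_i^{\top} \\
w_i \mathbf{X}_i^{\top} & \mathbf{0}^{\top} & -x_i \mathbf{X}_i^{\top}
\end{bmatrix}\mathbf{p} = \mathbf{0},
\qquad
\mathbf{p} = \begin{pmatrix} \mathbf{P}^{1\top} \\ \mathbf{P}^{2\top} \\ \mathbf{P}^{3\top} \end{pmatrix},
```

where P^{i⊤} is the i-th row of P. Since P has 11 degrees of freedom and each correspondence supplies two equations, at least 6 correspondences are needed; p is then taken as the right singular vector of A associated with the smallest singular value.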
This document summarizes a study on evaluating the rate of convergence of the Newton-Raphson method. A computer program was coded in Java to calculate cube roots from 1 to 25 using Newton-Raphson. The lowest rate of convergence was for the cube root of 16, and the highest was for 3. The average rate of convergence was found to be 0.217920. Formulas for estimating the rate of convergence from successive approximations are also presented.
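The cube-root iteration used in such a study follows directly from Newton-Raphson applied to f(x) = x³ − a, giving x_{n+1} = x_n − (x_n³ − a)/(3x_n²). A minimal sketch (the stopping tolerance and initial guess are illustrative choices, not taken from the paper):

```python
def cube_root(a, tol=1e-12, max_iter=100):
    """Approximate a**(1/3) for a > 0 by Newton-Raphson on f(x) = x^3 - a."""
    x = a if a >= 1 else 1.0  # crude but safe positive initial guess
    for _ in range(max_iter):
        x_next = x - (x ** 3 - a) / (3 * x ** 2)  # Newton step
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

Logging successive iterates x_n makes it possible to estimate the rate of convergence empirically, which is what the summarized study does for the cube roots of 1 through 25.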
A Probabilistic Hough Transform for Opportunistic Crowd-Sensing of Moving Tra... (Michiaki Tatsubori)
A paper presentation at SDM 2018, San Diego, CA, USA, May 3, 2018.
Abstract:
Traffic congestion in developing cities like Nairobi, Kenya can be significantly impacted by the presence of Moving Traffic Obstacles (MTOs). These MTOs are events that temporarily exist on the road, moving with or against the direction of traffic at slower speeds. They include two-wheelers, pushcarts, animals, and pedestrians, which have a quite different influence on traffic compared with static obstacles such as potholes and speed bumps. As smartphones and supporting 3G infrastructures are widespread even in developing countries, recent studies enabled frugal traffic-obstacle data collection from smartphones in probe cars. Assuming opportunistic, unevenly distributed, sparse, and error-prone observation of traffic obstacles, we propose an MTO detection algorithm extending an image analysis technique called the Probabilistic Hough Transform to take collective observations as input. Based on our experiences with a small set of real-world data collected in a smartphone-based probe car project with Nairobi City County, we conducted experiments with simulated observation data to evaluate the effectiveness of the algorithm.
Simulating Turing Machines Using Colored Petri Nets with Priority Transitions (idescitation)
In this paper, we present a new way to simulate Turing machines using a specific form of Petri nets, such that the resulting nets are capable of thoroughly describing the behavior of the input Turing machines. We model every element of a Turing machine's tuple (i.e., Q, Γ, b, Σ, δ, q0, F) with an equivalent translation into a Colored Petri net's set of elements with priority transitions, such that the resulting translation is a Petri net that accepts the same language as the original Turing machine. In the second part of the paper we analyze the time complexity of processing a Turing machine's input in the resulting Petri net and show that it is within a polynomial factor of the time complexity in the Turing machine.
The document discusses brute force algorithms and exhaustive search techniques. It provides examples of problems that can be solved using these approaches, such as computing powers and factorials, sorting, string matching, polynomial evaluation, the traveling salesman problem, knapsack problem, and the assignment problem. For each problem, it describes generating all possible solutions and evaluating them to find the best one. Most brute force algorithms have exponential time complexity, evaluating all possible combinations or permutations of the input.
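The exhaustive-search pattern described above — generate every candidate solution, evaluate each, keep the best — can be sketched for the traveling salesman problem (a minimal sketch; fixing the start city cuts the candidate set to (n−1)! tours):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustively find the cheapest round trip over an n x n distance matrix.
    Fixes city 0 as the start and tries all (n-1)! orderings of the rest."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)  # close the cycle back to the start
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour
```

The factorial number of permutations is exactly the exponential blow-up the document attributes to brute-force methods.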
This document discusses numerical integration and solving ordinary differential equations (ODEs) numerically in MATLAB. It describes built-in integration functions such as quad, quadl, and trapz that can integrate functions and data points. It also outlines the basic steps for solving first-order ODEs numerically, including writing the ODE as an initial value problem, defining the derivative function, selecting an ODE solver, and using the solver to generate the solution over a time span. An example demonstrates solving an ODE numerically using ode45 and plotting the results.
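The MATLAB workflow above centers on ode45, an adaptive solver built on Runge-Kutta formulas; the fixed-step classic fourth-order Runge-Kutta scheme underlying it can be sketched in any language (Python here for brevity; step count and test equation are illustrative):

```python
import math

def rk4(f, t0, y0, t_end, n_steps):
    """Classic fixed-step 4th-order Runge-Kutta for the IVP y' = f(t, y), y(t0) = y0."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)  # weighted average of slopes
        t += h
    return y

# dy/dt = -2y, y(0) = 1 has the exact solution y(t) = exp(-2t).
approx = rk4(lambda t, y: -2 * y, 0.0, 1.0, 1.0, 100)
```

The same three ingredients appear in the MATLAB steps the document lists: a derivative function, a time span, and a solver call.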
This document reviews a fuzzy microscopic traffic model that uses fuzzy logic to simulate traffic streams at signalized intersections. The model represents vehicle parameters like position and velocity as fuzzy numbers. It combines aspects of cellular automata models and fuzzy calculus. Compared to traditional cellular automata models, the fuzzy microscopic model requires fewer simulation runs, stores less data, and estimates output distributions in a single run. Future work could explore a stochastic cellular automata model with fuzzy decision rules to analyze more complex traffic situations.
The document discusses different models of computation including Turing machines, lambda calculus, recursive functions, neural networks, and cellular automata. It provides details on each model, such as the components of a Turing machine including its tape, head, finite table of instructions, and state register. The document also discusses Chomsky's hierarchy of formal grammars from unrestricted to regular grammars and their relationships to different models of computation like Turing machines and pushdown automata.
Measuring the performance of algorithms in terms of Big-O, Theta, and Omega notation, with respect to the worst, average, and best cases of asymptotic analysis.
The document discusses maximum likelihood estimation. It begins by explaining that maximum likelihood chooses parameter values that make the observed data most probable given a statistical model. This provides a justification for estimation techniques like least squares regression. The document provides an example of estimating a population proportion from a sample. It then generalizes maximum likelihood to cover a wide range of models and estimation problems. It discusses properties like consistency, efficiency, and how to conduct hypothesis tests based on maximum likelihood. Numerical optimization techniques are often required to find maximum likelihood estimates for complex models.
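The population-proportion example above has a closed-form MLE (successes/n), which makes it a convenient check for the numerical-optimization route the document mentions. A minimal sketch that maximizes the Binomial log-likelihood over a grid of candidate values:

```python
import math

def log_likelihood(p, successes, n):
    """Binomial log-likelihood in p, with the constant binomial coefficient dropped."""
    return successes * math.log(p) + (n - successes) * math.log(1 - p)

def mle_proportion(successes, n, grid=10000):
    """Grid-search MLE for the proportion p; the closed form is successes / n."""
    candidates = [(i + 1) / (grid + 2) for i in range(grid)]  # interior of (0, 1)
    return max(candidates, key=lambda p: log_likelihood(p, successes, n))
```

For richer models with no closed form, the same "maximize the log-likelihood" objective is handed to a proper numerical optimizer instead of a grid.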
This document summarizes a paper that presents two parallel algorithms for batched range searching on a mesh-connected SIMD computer. The first algorithm is average-case efficient and based on cell division. The second algorithm is worst-case efficient and uses a divide-and-conquer approach. Both algorithms were implemented and experimentally evaluated on a MasPar MP-1 computer. The paper also provides background on mesh-connected SIMD computers and describes common operations like sorting that are used in the algorithms.
This work considers the multi-objective optimization problem constrained by a system of bipolar fuzzy relational equations with max-product composition. An integer optimization based technique for order of preference by similarity to the ideal solution is proposed for solving such a problem. Some critical features associated with the feasible domain and optimal solutions of the bipolar max-Tp equation constrained optimization problem are studied. An illustrative example verifying the idea of this paper is included. This is the first attempt to study the bipolar max-T equation constrained multi-objective optimization problems from an integer programming viewpoint.
Graph Traversal Algorithms - Breadth First Search (Amrinder Arora)
The document discusses branch and bound algorithms. It begins with an overview of breadth first search (BFS) and how it can be used to solve problems on infinite mazes or graphs. It then provides pseudocode for implementing BFS using a queue data structure. Finally, it discusses branch and bound as a general technique for solving optimization problems that applies when greedy methods and dynamic programming fail. Branch and bound performs a BFS-like search, but prunes parts of the search tree using lower and upper bounds to avoid exploring all possible solutions.
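The queue-based BFS pseudocode the document describes follows this shape (a minimal sketch computing hop distances from the start node):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search over a dict mapping each node to its neighbour list.
    Returns the distance (in edges) from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()           # FIFO order = explore level by level
        for nb in graph[node]:
            if nb not in dist:           # first visit is the shortest path
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist
```

Branch and bound keeps this same frontier structure but additionally discards queue entries whose bound proves they cannot beat the best solution found so far.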
The document provides an introduction to algorithms presented by Sachin Sharma Bhandari. It defines algorithms, discusses analyzing time and space efficiency, and provides examples of sorting and other problems solved by algorithms. It also reviews data structures like stacks, queues, linked lists, and binary trees that are important for algorithm design.
This document summarizes a paper on the comparison-based complexity of multi-objective optimization. It outlines that the paper presents complexity upper and lower bounds for finding the Pareto front and Pareto set in multi-objective optimization problems. For upper bounds, it shows the complexity is O(Nd·log(1/ε)) for finding the whole Pareto set and O((N+1-d)·log(1/ε)) for finding a single point, where N is the dimension and d is the number of objectives. For lower bounds, it applies a proof technique using packing numbers from the mono-objective case to the multi-objective case to derive tight complexity bounds.
Beginning direct3d gameprogrammingmath02_logarithm_20160324_jintaeks (JinTaek Seo)
This document discusses logarithms, exponentiation, and their relationship. It also covers binary search and quicksort algorithms. Logarithms are the inverse of exponentiation: the logarithm of a number is the exponent to which the base must be raised to produce that number. Binary search has an average time complexity of O(log n), while quicksort has an average time complexity of O(n log n). Quicksort uses a divide-and-conquer approach to partition an array around a pivot element.
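The pivot-partition step of quicksort can be sketched as follows (an out-of-place variant chosen for clarity; production implementations partition in place):

```python
def quicksort(items):
    """Divide and conquer: partition around a pivot, then recurse on each side."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

With balanced partitions the recursion depth is O(log n) and each level does O(n) work, which is where the O(n log n) average comes from.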
This document presents several algorithms for radix sorting integers with no extra space beyond the input. It begins with a simple algorithm that compresses part of the input to gain space for sorting the remainder in linear time, but is not stable. It then presents a more sophisticated stable algorithm that recursively sorts portions of the input, compressing one portion to gain space to radix sort chunks of the remainder, and finally merges the sorted portions. The document also discusses how these techniques can be extended to handle arbitrary word lengths and read-only keys.
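For contrast with the space-free algorithms of that paper, here is plain LSD radix sort, which uses exactly the O(n) auxiliary bucket space the paper works to eliminate (a sketch for non-negative integers):

```python
def radix_sort(nums, base=10):
    """Least-significant-digit radix sort using O(n) auxiliary bucket space.
    Stable: equal digits keep their relative order across passes."""
    if not nums:
        return nums
    max_val = max(nums)
    exp = 1
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // exp) % base].append(x)   # distribute by current digit
        nums = [x for b in buckets for x in b]     # collect in bucket order
        exp *= base
    return nums
```

The compression tricks in the summarized paper serve to recover this bucket space from the input itself rather than allocating it separately.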
How to use probabilistic inference programming for application orchestration ... (Veselin Pizurica)
As companies adopt serverless architectures and move away from monolithic and microservice-based deployments, they realise that the challenge lies not only in rewriting an old application, but also in the shift towards a new way of thinking. We see many serverless architecture patterns today, such as function chaining, function chaining with rollback (for transactions), async HTTP, fan-out, and more. We also have a number of tools on the market that ease application development using serverless, of which Apache OpenWhisk (via action chaining or using function composites) and Amazon Step Functions are some of the more popular. In this talk, we will present a new alternative way of building serverless applications based on the orchestration of typed functions, using the probabilistic inference programming paradigm. Inference-based programming brings together the best of the current modelling approaches: the expressiveness and simplicity of decision trees, the high debuggability of state machines, the scalability and flexibility of flow-based programming, and logic expressions superior to forward-chaining approaches. The talk will include a live demo of how to use probabilistic inference programming for a complex IoT application.
1. Exact inference in Bayesian networks is NP-hard in the worst case, so approximation techniques are needed for large networks.
2. Major approximation techniques include variational methods like mean-field approximation, sampling methods like Markov chain Monte Carlo (MCMC), and bounded cutset conditioning.
3. Variational methods introduce variational parameters to minimize the distance between the approximate and true distributions. Sampling methods draw random samples to estimate probabilities. Bounded cutset conditioning breaks loops by instantiating subsets of variables.
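The sampling idea in point 3 can be illustrated with forward (rejection) sampling on a toy two-node network; all parameter names and values below are hypothetical illustrative choices, not taken from the document:

```python
import random

def sample_estimate(p_rain=0.2, p_wet_given_rain=0.9, p_wet_given_dry=0.1,
                    n=200_000, seed=0):
    """Estimate P(rain | wet) for a toy rain -> wet network by forward
    sampling, discarding samples inconsistent with the evidence wet=True."""
    rng = random.Random(seed)
    rain_and_wet = wet = 0
    for _ in range(n):
        rain = rng.random() < p_rain                       # sample the root
        p_wet = p_wet_given_rain if rain else p_wet_given_dry
        if rng.random() < p_wet:                           # keep only wet=True
            wet += 1
            rain_and_wet += rain
    return rain_and_wet / wet
```

By Bayes' rule the exact answer here is 0.18/0.26 ≈ 0.692, and the sampled estimate converges to it as the sample count grows, which is the trade-off approximate inference accepts to avoid the NP-hard exact computation.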
An introduction to complexity theory covering complexity classes, deterministic classes, big-O notation, proof by mathematical induction, L-space, N-space, characteristic functions of sets, and related topics.
A NEW PARALLEL ALGORITHM FOR COMPUTING MINIMUM SPANNING TREE (ijscmc)
Computing the minimum spanning tree of a graph is one of the fundamental computational problems. In this paper, we present a new parallel algorithm for computing the minimum spanning tree of an undirected weighted graph with n vertices and m edges. This algorithm uses clustering techniques to reduce the number of processors by the fraction 1/f(n) and the parallel work by the fraction O(1/log(f(n))), where f(n) is an arbitrary function. In the case f(n) = 1, the algorithm runs in logarithmic time and uses superlinear work on the EREW PRAM model. In general, the proposed algorithm is the simplest one.
Simulating Turing Machines Using Colored Petri Nets with Priority Transitionsidescitation
In this paper, we present a new way to simulate
Turing machines using a specific form of Petri nets such that
the resulting nets are capable of thoroughly describing
behavior of the input Turing machines. We model every
element of a Turing machine’s tuple (i.e., Q, Γ, b, Σ, δ, q0, F) with
an equivalent translation in Colored Petri net’s set of elements
with priority transitions such that the resulting translation
(is a Petri net that) accepts the same language as the original
Turing machine. In the second part of the paper we analyze
time complexity of Turing machine’s input in the resulting
Petri net and show that it is a polynomial coefficient of time
complexity in the Turing machine.
The document discusses brute force algorithms and exhaustive search techniques. It provides examples of problems that can be solved using these approaches, such as computing powers and factorials, sorting, string matching, polynomial evaluation, the traveling salesman problem, knapsack problem, and the assignment problem. For each problem, it describes generating all possible solutions and evaluating them to find the best one. Most brute force algorithms have exponential time complexity, evaluating all possible combinations or permutations of the input.
This document discusses numerical integration and solving ordinary differential equations (ODEs) numerically in MATLAB. It describes built-in integration functions such as quad, quadl, and trapz that can integrate functions and data points. It also outlines the basic steps for solving first-order ODEs numerically, including writing the ODE as an initial value problem, defining the derivative function, selecting an ODE solver, and using the solver to generate the solution over a time span. An example demonstrates solving an ODE numerically using ode45 and plotting the results.
This document reviews a fuzzy microscopic traffic model that uses fuzzy logic to simulate traffic streams at signalized intersections. The model represents vehicle parameters like position and velocity as fuzzy numbers. It combines aspects of cellular automata models and fuzzy calculus. Compared to traditional cellular automata models, the fuzzy microscopic model requires fewer simulation runs, stores less data, and estimates output distributions in a single run. Future work could explore a stochastic cellular automata model with fuzzy decision rules to analyze more complex traffic situations.
The document discusses different models of computation including Turing machines, lambda calculus, recursive functions, neural networks, and cellular automata. It provides details on each model, such as the components of a Turing machine including its tape, head, finite table of instructions, and state register. The document also discusses Chomsky's hierarchy of formal grammars from unrestricted to regular grammars and their relationships to different models of computation like Turing machines and pushdown automata.
Measuring the performance of algorithm, in terms of Big(O), theta, and Omega notation with respect to their Scientific notation like Worst, Average, Best, cases
The document discusses maximum likelihood estimation. It begins by explaining that maximum likelihood chooses parameter values that make the observed data most probable given a statistical model. This provides a justification for estimation techniques like least squares regression. The document provides an example of estimating a population proportion from a sample. It then generalizes maximum likelihood to cover a wide range of models and estimation problems. It discusses properties like consistency, efficiency, and how to conduct hypothesis tests based on maximum likelihood. Numerical optimization techniques are often required to find maximum likelihood estimates for complex models.
This document summarizes a paper that presents two parallel algorithms for batched range searching on a mesh-connected SIMD computer. The first algorithm is average-case efficient and based on cell division. The second algorithm is worst-case efficient and uses a divide-and-conquer approach. Both algorithms were implemented and experimentally evaluated on a MasPar MP-1 computer. The paper also provides background on mesh-connected SIMD computers and describes common operations like sorting that are used in the algorithms.
This work considers the multi-objective optimization problem constrained by a system of bipolar fuzzy relational equations with max-product composition. An integer optimization based technique for order of preference by similarity to the ideal solution is proposed for solving such a problem. Some critical features associated with the feasible domain and optimal solutions of the bipolar max-Tp equation constrained optimization problem are studied. An illustrative example verifying the idea of this paper is included. This
is the first attempt to study the bipolar max-T equation constrained multi-objective optimization problems
from an integer programming viewpoint.
Graph Traversal Algorithms - Breadth First SearchAmrinder Arora
The document discusses branch and bound algorithms. It begins with an overview of breadth first search (BFS) and how it can be used to solve problems on infinite mazes or graphs. It then provides pseudocode for implementing BFS using a queue data structure. Finally, it discusses branch and bound as a general technique for solving optimization problems that applies when greedy methods and dynamic programming fail. Branch and bound performs a BFS-like search, but prunes parts of the search tree using lower and upper bounds to avoid exploring all possible solutions.
The document provides an introduction to algorithms presented by Sachin Sharma Bhandari. It defines algorithms, discusses analyzing time and space efficiency, and provides examples of sorting and other problems solved by algorithms. It also reviews data structures like stacks, queues, linked lists, and binary trees that are important for algorithm design.
This document summarizes a paper on the comparison-based complexity of multi-objective optimization. It outlines that the paper presents complexity upper and lower bounds for finding the Pareto front and Pareto set in multi-objective optimization problems. For upper bounds, it shows the complexity is O(Nd log(1/e)) for finding the whole Pareto set and O(N+1-d log(1/e)) for finding a single point, where N is the dimension and d is the number of objectives. For lower bounds, it applies a proof technique using packing numbers from the mono-objective case to the multi-objective case to derive tight complexity bounds.
Beginning direct3d gameprogrammingmath02_logarithm_20160324_jintaeksJinTaek Seo
This document discusses logarithms, exponentiation, and their relationship. It also covers binary search and quicksort algorithms. Logarithms are the inverse of exponentiation, where the logarithm of a number is the exponent that the base must be raised to to produce that number. Binary search and quicksort both have average time complexities of O(log n). Quicksort uses a divide and conquer approach to partition an array around a pivot element.
This document presents several algorithms for radix sorting integers with no extra space beyond the input. It begins with a simple algorithm that compresses part of the input to gain space for sorting the remainder in linear time, but is not stable. It then presents a more sophisticated stable algorithm that recursively sorts portions of the input, compressing one portion to gain space to radix sort chunks of the remainder, and finally merges the sorted portions. The document also discusses how these techniques can be extended to handle arbitrary word lengths and read-only keys.
How to use probabilistic inference programming for application orchestration ...Veselin Pizurica
As companies are adopting serverless architectures and moving away from monolithic and microservice-based deployments, they realise that the challenge lies not only in the rewrite of an old application, but also in the shift towards a new way of thinking. We see many serverless architecture patterns today, such as function chaining, function chaining with rollback (for transaction), ASync HTTP, fan-out and more. We also have a number of tools on the market that ease application development using serverless, of which Apache OpenWhisk (via action chaining or using function composites) and Amazon Step Functions are some of the more popular. In this talk, we will present a new alternative way of building serverless applications based on the orchestration of typed functions, using the probabilistic inference programing paradigm. Inference-based programming brings about the best of the current modelling approaches: the expressiveness and simplicity of decision trees, the high debugging capabilities of state machines, the scalability and flexibility of flow based programming and superior logic expressions to forward chaining approaches. The talk will include a live demo of how to use probabilistic inference programming for a complex IoT application.
1. Exact inference in Bayesian networks is NP-hard in the worst case, so approximation techniques are needed for large networks.
2. Major approximation techniques include variational methods like mean-field approximation, sampling methods like Monte Carlo Markov Chain, and bounded cutset conditioning.
3. Variational methods introduce variational parameters to minimize the distance between the approximate and true distributions. Sampling methods draw random samples to estimate probabilities. Bounded cutset conditioning breaks loops by instantiating subsets of variables.
Introduction to complexity theory that solves your assignment problem it contains about complexity class,deterministic class,big- O notation ,proof by mathematical induction, L-Space ,N-Space and characteristics functions of set and so on
A NEW PARALLEL ALGORITHM FOR COMPUTING MINIMUM SPANNING TREEijscmc
Computing the minimum spanning tree of a graph is one of the fundamental computational problems. In this paper, we present a new parallel algorithm for computing the minimum spanning tree of an undirected weighted graph with n vertices and m edges. The algorithm uses clustering techniques to reduce the number of processors by a factor of 1/f(n) and the parallel work by a factor of O(1/log(f(n))), where f(n) is an arbitrary function. In the case f(n) = 1, the algorithm runs in logarithmic time and uses superlinear work on the EREW PRAM model. The proposed algorithm is also notably simple.
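For reference, the problem the parallel algorithm targets can be illustrated with a standard sequential Kruskal sketch; this is not the paper's algorithm, and the graph in the usage note below is an invented example.

```python
def kruskal_mst(n, edges):
    """Kruskal's MST: edges as (weight, u, v); returns total weight and edges used."""
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):        # consider edges in weight order
        ru, rv = find(u), find(v)
        if ru != rv:                      # skip edges that would form a cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen
```

For a 4-vertex cycle with a chord, `kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)])` selects three edges of total weight 6; the parallel approaches in the paper aim at the same output with work divided across processors.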
This document describes a parallel algorithm for batched range searching on coarse-grained multicomputers. The algorithm is based on the range-tree method and solves the d-dimensional batched range searching problem in O(T_s(n log^(d-1) p; p) + T_s(m log^(d-1) p; p) + ((m + n) log^(d-1)(n/p) + m log^(d-1) p · log(n/p) + k)/p) time. It constructs a range tree in parallel by sorting points and building subtrees on each processor. It then partitions query ranges and determines contained points in parallel by traversing the range tree.
This document provides an introduction to algorithms and algorithm analysis. It defines an algorithm as a set of unambiguous instructions to solve a problem in a finite amount of time. The most famous early algorithm is Euclid's algorithm for calculating greatest common divisors. Algorithm analysis involves proving an algorithm's correctness and analyzing its running time and space complexity. Common notations for analyzing complexity include Big-O, which provides upper bounds, Big-Omega, which provides lower bounds, and Big-Theta, which provides tight bounds. The goal of analysis is to determine the most efficient algorithm by evaluating performance as problem size increases.
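As a concrete illustration of the ideas above, here is Euclid's algorithm; the implementation is a routine sketch rather than code from the course notes.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a
```

Each pair of iterations at least halves the smaller argument, so the running time is O(log min(a, b)) divisions; in Big-O terms this upper bound is what makes the algorithm efficient even for very large inputs.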
This document discusses representing computing concepts like Turing machines, programming patterns, and virtual machines using semantic networks and RDF graphs. It describes how instructions, data structures, objects, and software patterns can be modeled as nodes and relationships in a graph. It also introduces RDF as a standardized data model for semantic networks and triplestores for efficiently storing and querying large semantic graphs.
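A minimal sketch of the triple idea, assuming nothing beyond plain Python: facts stored as (subject, predicate, object) tuples with a wildcard query, which is the essence of what an RDF triplestore does at scale. The example facts are invented for illustration.

```python
# A tiny in-memory "triplestore": each fact is a (subject, predicate, object) triple.
triples = {
    ("TuringMachine", "isA", "ModelOfComputation"),
    ("TuringMachine", "has", "Tape"),
    ("Tape", "holds", "Symbols"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]
```

`query(s="TuringMachine")` returns both facts about the Turing machine node, mirroring how a SPARQL pattern with an unbound variable matches every consistent triple in the graph.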
A Numerical Method for the Evaluation of Kolmogorov Complexity, An alternativ...Hector Zenil
We present a novel alternative method (other than using compression algorithms) to approximate the algorithmic complexity of a string by calculating its algorithmic probability and applying Chaitin-Levin's coding theorem.
This document provides an overview of the theory of computation. It discusses the following key points:
1. Computation involves executing programs on computers to perform input-output transformations. Programs are algorithms expressed in programming languages.
2. The theory of computation classifies problems by their computability and complexity. It studies models of computation like Turing machines.
3. The theory has three main components: computability theory, complexity theory, and automata theory/formal languages.
4. Proofs in the theory use techniques like mathematical induction to show properties are true for all cases.
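The Turing-machine model mentioned in point 2 can be sketched in a few lines; the machine below, which inverts a binary string, is an invented example rather than one from the document.

```python
def run_tm(transitions, tape, state="q0", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))          # sparse tape: cell index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, blank)
        state, write, move = transitions[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# A machine that inverts every bit and halts at the first blank cell.
invert = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
```

Running `run_tm(invert, "1011")` produces `"0100"`; richer machines differ only in having more states and transitions, which is why this single model suffices for computability theory.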
Pattern Matching using Computational and Automata TheoryIRJET Journal
This document discusses using finite automata for pattern matching. It begins by defining finite automata and their components, then shows how a finite automaton can represent a pattern by having states correspond to prefixes of the pattern. Example transition tables are given for the pattern "memo" and for detecting the string "barbara". Common pattern matching algorithms such as Knuth-Morris-Pratt and Boyer-Moore are also mentioned. The document concludes that finite automata can effectively be used to match regular expressions and languages.
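A sketch of the prefix-state idea: the standard KMP-style DFA construction, where being in state j means "the last j characters read match the first j characters of the pattern". The helper names are ours, not the document's.

```python
def build_dfa(pattern, alphabet):
    """Build the pattern-matching DFA: dfa[c][j] = next state on character c."""
    m = len(pattern)
    dfa = {c: [0] * m for c in alphabet}
    dfa[pattern[0]][0] = 1
    x = 0                                 # state of the longest proper prefix-suffix
    for j in range(1, m):
        for c in alphabet:
            dfa[c][j] = dfa[c][x]         # mismatch: fall back as the prefix state would
        dfa[pattern[j]][j] = j + 1        # match: advance to the next prefix state
        x = dfa[pattern[j]][x]
    return dfa

def search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    alphabet = set(text) | set(pattern)
    dfa = build_dfa(pattern, alphabet)
    state, m = 0, len(pattern)
    for i, c in enumerate(text):
        state = dfa[c][state]
        if state == m:                    # accepting state: full pattern seen
            return i - m + 1
    return -1
```

Using the document's examples, `search("a memorable memo", "memo")` finds a match at index 2, and `search("babarbara", "barbara")` correctly handles the overlapping prefix at index 2; the scan itself never backs up in the text, which is the key property of the automaton approach.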
IRJET - Application of Linear Algebra in Machine LearningIRJET Journal
This document discusses the application of linear algebra concepts in machine learning. It begins with an introduction to linear algebra and key concepts like vectors, matrices, and linear transformations. It then provides an introduction to machine learning, including the different types of machine learning algorithms like supervised, unsupervised, and reinforcement learning. It discusses how machine learning is closely related to statistics and introduces some common statistical concepts. Finally, it discusses how linear algebra is widely used in machine learning algorithms like linear regression and support vector machines. Linear algebra allows machine learning models to represent data and map it to specific feature spaces.
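As a small illustration of linear algebra inside a learning algorithm, here is one-dimensional least-squares linear regression in closed form; this is a routine sketch, not code from the document.

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b (1-D linear regression)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx                       # intercept passes through the means
    return a, b
```

On the exact line y = 2x + 1 the fit recovers a = 2, b = 1; in the multivariate case the same derivation becomes the normal equations, solved with matrix operations, which is where the linear-algebra machinery of the paper comes in.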
Applied parallel coordinates for logs and network traffic attack analysisUltraUploader
This document summarizes a research paper that explores using parallel coordinate plots (PCC) to visualize large datasets for cybersecurity analysis. PCC allows high-dimensional event data to be visualized in 2D. The paper introduces the mathematical foundations of PCC and describes an open-source software called Picviz that generates PCC images from log and network traffic data. Examples are provided of how PCC revealed security issues in Cray system logs and detected botnet attacks. In particular, PCC can identify patterns indicating events like distributed denial-of-service attacks.
A Predictive Stock Data Analysis with SVM-PCA Model (p. 1)
Divya Joseph and Vinai George Biju
HOV-kNN: A New Algorithm to Nearest Neighbor Search in Dynamic Space (p. 12)
Mohammad Reza Abbasifard, Hassan Naderi and Mohadese Mirjalili
A Survey on Mobile Malware: A War without End (p. 23)
Sonal Mohite and Prof. R. S. Sonar
An Efficient Design Tool to Detect Inconsistencies in UML Design Models (p. 36)
Mythili Thirugnanam and Sumathy Subramaniam
An Integrated Procedure for Resolving Portfolio Optimization Problems using Data Envelopment Analysis, Ant Colony Optimization and Gene Expression Programming (p. 45)
Chih-Ming Hsu
Emerging Technologies: LTE vs. WiMAX (p. 66)
Mohammad Arifin Rahman Khan and Md. Sadiq Iqbal
Introducing E-Maintenance 2.0 (p. 80)
Abdessamad Mouzoune and Saoudi Taibi
Detection of Clones in Digital Images (p. 91)
Minati Mishra and Flt. Lt. Dr. M. C. Adhikary
The Significance of Genetic Algorithms in Search, Evolution, Optimization and Hybridization: A Short Review (p. 103)
This document provides an introduction to computer programming and mathematics through explaining a basic "Hello World" program written in C++. It discusses the structure of the program, including comments, functions, variables, arithmetic operations on integers, and printing output. It explains how to declare integer variables, assign values, perform basic math, and print values. The document aims to give the reader a foundation for writing their own simple mathematical programs.
This document provides an overview of the CS-2251 DESIGN AND ANALYSIS OF ALGORITHMS course. It defines algorithms and discusses algorithm design and analysis processes. It covers different algorithm efficiency measures, specification methods, important problem types, classification techniques, and examples like the Euclid algorithm. Key aspects of sorting, searching, graph, combinatorial, and numerical problems are outlined. The features of efficient algorithms and orders of algorithms are defined.
This paper investigates the assignment problem with crisp, fuzzy, and intuitionistic fuzzy numbers as cost coefficients. In the conventional assignment problem, cost is always certain. This paper develops an approach to solving a mixed intuitionistic fuzzy assignment problem in which costs may be real, fuzzy, or intuitionistic fuzzy numbers. The ranking procedure of Annie Varghese and Sunny Kuriakose [4] is used to transform the mixed intuitionistic fuzzy assignment problem into a crisp one, so that conventional methods may be applied. The method is illustrated by a numerical example; it is simple and easy to understand. Numerical examples show that the intuitionistic fuzzy ranking method offers an effective tool for handling intuitionistic fuzzy assignment problems.
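Once ranking has produced crisp costs, the conventional assignment problem can be solved by any standard method; as an illustration, here is a brute-force sketch over all permutations (fine for small n; the Hungarian algorithm would be used for larger instances). The cost matrix in the usage note is invented, not the paper's example.

```python
from itertools import permutations

def solve_assignment(cost):
    """Assign worker i to task p[i] minimizing total cost (brute force, O(n!))."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))
```

For `cost = [[4, 1, 3], [2, 0, 5], [3, 2, 2]]` the optimum assigns workers to tasks (1, 0, 2) with total cost 5.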
The document describes an assignment given to Md. Mehedi Hasan on applying numerical methods in computer science engineering. The assignment, submitted by five students, includes an index of the numerical methods covered: error analysis, the Newton-Raphson (N-R) method, interpolation, differentiation and max/min problems, curve fitting, and integration.
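As an example of one listed method, here is a generic Newton-Raphson sketch; the function and starting point are illustrative, not taken from the assignment.

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f via the iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:               # converged: update is negligibly small
            break
    return x

# Example: sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Near a simple root the iteration converges quadratically, roughly doubling the number of correct digits per step, which is why N-R features so prominently in numerical-methods courses.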
The document contains details about experiments performed in a Digital Signal Processing practical course. It includes the aims, apparatus required, theory, source code and results for experiments involving MATLAB programs to generate basic signals like impulse, step, ramp and exponential signals; sine and cosine signals; quantization; sampling theorem; linear convolution; autocorrelation; and cross-correlation. Programs were written in MATLAB to perform the various digital signal processing tasks and the output was verified.
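The linear convolution experiment can be illustrated with a direct pure-Python sketch of the defining sum; the original practicals were in MATLAB, so this is our translation of the textbook formula, not their code.

```python
def linear_convolution(x, h):
    """y[n] = sum_k x[k] * h[n-k]; output has length len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj           # each input sample smears through h
    return y
```

For x = [1, 2, 3] and h = [0, 1, 0.5] the result is [0, 1, 2.5, 4, 1.5], matching MATLAB's `conv`; the same definition underlies the autocorrelation and cross-correlation experiments with one sequence time-reversed.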
Investing in AI transformation today
The modern business advantage: Uncovering deep insights with AI
Organizations around the world have come to recognize AI as the transformative technology that enables them to gain real business advantage. AI's ability to organize vast quantities of data allows those who implement it to uncover deep business insights, augment human expertise, drive operational efficiency, transform their products, and better serve their customers.
Last year's Global Risks Report warned of a world that would not easily rebound from continued shocks. As 2024 begins, the 19th edition of the report is set against a backdrop of rapidly accelerating technological change and economic uncertainty, as the world is plagued by a duo of dangerous crises: climate and conflict.

Underlying geopolitical tensions combined with the eruption of active hostilities in multiple regions are contributing to an unstable global order characterized by polarizing narratives, eroding trust and insecurity. At the same time, countries are grappling with the impacts of record-breaking extreme weather, as climate-change adaptation efforts and resources fall short of the type, scale and intensity of climate-related events already taking place. Cost-of-living pressures continue to bite, amidst persistently elevated inflation and interest rates and continued economic uncertainty in much of the world. Despondent headlines are borderless, shared regularly and widely, and a sense of frustration at the status quo is increasingly palpable. Together, this leaves ample room for accelerating risks – like misinformation and disinformation – to propagate in societies that have already been politically and economically weakened in recent years.

Just as natural ecosystems can be pushed to the limit and become something fundamentally new, such systemic shifts are also taking place across other spheres: geostrategic, demographic and technological. This year, we explore the rise of global risks against the backdrop of these "structural forces" as well as the tectonic clashes between them. The next set of global conditions may not necessarily be better or worse than the last, but the transition will not be an easy one.

The report explores the global risk landscape in this phase of transition, with governance systems being stretched beyond their limit. It analyses the most severe perceived risks to economies and societies over two and 10 years, in the context of these influential forces. Could we catapult to a 3°C world as the impacts of climate change intrinsically rewrite the planet? Have we reached the peak of human development for large parts of the global population, given deteriorating debt and geo-economic conditions? Could we face an explosion of criminality and corruption that feeds on more fragile states and more vulnerable populations? Will an "arms race" in experimental technologies present existential threats to humanity?

These transnational risks will become harder to handle as global cooperation erodes. In this year's Global Risks Perception Survey, two-thirds of respondents predict that a multipolar order will dominate in the next 10 years, as middle and great powers set and enforce – but also contest – current rules and norms. The report considers the implications of this fragmented world, where preparedness for global risks is ever more critical but is hindered by lack o
KOSMOS-1 is a multimodal large language model that can perceive and process language as well as visual inputs like images. It was trained on large web-scale datasets containing text, images, and image-caption pairs to align its vision capabilities with its natural language understanding. Experimental results showed that KOSMOS-1 can perform well on tasks involving language, vision, and their combination, including image captioning, visual question answering, and describing images based on text instructions, all without any fine-tuning. The ability to perceive and understand different modalities allows language models to acquire knowledge in new ways and expands their application to areas like robotics and document intelligence.
We present a causal speech enhancement model working on the raw waveform that runs in real time on a laptop CPU. The proposed model is based on an encoder-decoder architecture with skip-connections. It is optimized in both the time and frequency domains, using multiple loss functions. Empirical evidence shows that it is capable of removing various kinds of background noise, including stationary and non-stationary noises, as well as room reverb. Additionally, we suggest a set of data augmentation techniques applied directly to the raw waveform which further improve model performance and its generalization abilities. We perform evaluations on several standard benchmarks, using both objective metrics and human judgements. The proposed model matches the state-of-the-art performance of both causal and non-causal methods while working directly on the raw waveform.
Index Terms: speech enhancement, speech denoising, neural networks, raw waveform
This document discusses IBM's reference architecture for data and AI. It provides guidance on designing systems that use AI and analyze large amounts of data. The reference architecture covers strategies for collecting, storing, processing and analyzing data at large scales using technologies like Apache Spark, Hadoop and containers. It is intended to help organizations build systems that extract insights from data.
1. The document analyzes the opportunities and challenges that artificial intelligence poses for Colombia's economic growth. 2. It notes that AI could accelerate Colombia's growth by roughly one percentage point per year over the next decade if the rate of technology adoption can be increased. 3. However, it also stresses that this requires Colombian companies to become more dynamic in absorbing new technologies and the workforce to develop
Artificial neural networks are the heart of machine learning algorithms and artificial intelligence protocols. Historically, the simplest implementation of an artificial neuron traces back to the classical Rosenblatt "perceptron", but its long-term practical applications may be hindered by the fast scaling up of computational complexity, especially relevant for the training of multilayered perceptron networks. Here we introduce a quantum information-based algorithm implementing the quantum computer version of a perceptron, which shows exponential advantage in encoding resources over alternative realizations. We experimentally test a few-qubit version of this model on an actual small-scale quantum processor, which gives remarkably good answers against the expected results. We show that this quantum model of a perceptron can be used as an elementary nonlinear classifier of simple patterns, as a first step towards practical training of artificial quantum neural networks to be efficiently implemented on near-term quantum processing hardware.
Over the last 20 years, Alzheimer's disease went from being regarded as the paradigm of normal (though premature and accelerated) brain aging to being recognized as a genuine disease, nosologically well defined and with a clear genetic root. The disease today affects more than 20 million people, has enormous consequences for national economies, and constitutes one of the most active research topics in the health field.
This article reviews current knowledge on the subject. This first part analyzes its epidemiology, pathogenesis, and genetics; lists the priority research topics; reviews its relationship with the concept of programmed cell death (apoptosis); and enumerates the elements required for diagnosis.
Keywords: Alzheimer's disease; dementia; genetics; therapeutics.
Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.
There is an increasing interest in exploiting mobile sensing technologies and machine learning techniques for mental health monitoring and intervention. Researchers have effectively used contextual information, such as mobility, communication and mobile phone usage patterns, for quantifying individuals' mood and wellbeing. In this paper, we investigate the effectiveness of neural network models for predicting users' level of stress by using the location information collected by smartphones. We characterize the mobility patterns of individuals using the GPS metrics presented in the literature and employ these metrics as input to the network. We evaluate our approach on the open-source StudentLife dataset. Moreover, we discuss the challenges and trade-offs involved in building machine learning models for digital mental health and highlight potential future work in this direction.
Hypertension is one of the most prevalent diseases among Spanish speakers worldwide. It is gratifying to make this document public and to have been part of the team; hopefully the implementations will serve many (Spanish being among the most spoken languages according to the World Economic Forum). I genuinely hope as many people as possible are cured, and I hope you can make a donation to this group effort. May we share this paper the way we share memes, in the literal sense of the word.
** Refer to Wikipedia if you do not have a dictionary at hand.
To thrive in the 21st century, students need more than traditional academic learning. They must be adept at collaboration, communication and problem-solving, which are some of the skills developed through social and emotional learning (SEL). Coupled with mastery of traditional skills, social and emotional proficiency will equip students to succeed in the swiftly evolving digital economy. In 2015, the World Economic Forum published a report that focused on the pressing issue of the 21st-century skills gap and ways to address it through technology (New Vision for Education: Unlocking the Potential of Technology). In that report, we defined a set of 16 crucial proficiencies for education in the 21st century. Those skills include six “foundational literacies”, such as literacy, numeracy and scientific literacy, and 10 skills that we labelled either “competencies” or “character qualities”. Competencies are the means by which students approach complex challenges; they include collaboration, communication and critical thinking and problem-solving. Character qualities are the ways in which students approach their changing environment; they include curiosity, adaptability and social and cultural awareness (see Exhibit 1).
In our current report, New Vision for Education: Fostering Social and Emotional Learning through Technology, we follow up on our 2015 report by exploring how these competencies and character qualities do more than simply deepen 21st-century skills. Together, they lie at the heart of SEL and are every bit as important as the foundational skills required for traditional academic learning. Although many stakeholders have defined SEL more narrowly, we believe the definition of SEL is evolving. We define SEL broadly to encompass the 10 competencies and character qualities.1 As is the case with traditional academic learning, technology can be invaluable at enabling SEL.
The expression "the future of work" is currently one of the most popular concepts in Google searches. The many recent technological advances are rapidly shifting the boundary between activities performed by humans and those executed by machines, transforming the world of work. A growing number of studies and initiatives are analyzing what these changes mean for our work, our income, our children's future, our companies, and our governments. These analyses are conducted mainly from the perspective of advanced economies, and far less from that of developing and emerging economies. Yet differences in technology diffusion, economic and demographic structures, education levels, and migration patterns significantly affect how these changes may impact developing and emerging countries. This study, The Future of Work: Regional Perspectives, focuses on the likely repercussions of these trends in the developing and emerging economies of Africa; Asia; Eastern Europe, Central Asia, and the Southern and Eastern Mediterranean; and Latin America and the Caribbean. It is a joint effort by the four main regional development banks: the African Development Bank Group, the Asian Development Bank, the Inter-American Development Bank, and the European Bank for Reconstruction and Development. The study highlights the opportunities that changes in the dynamics of work could create in our regions. Technological progress could allow the countries we work with to grow and reach better standards of living faster than in the past.
The document summarizes China's rise as a world power and how this is challenging the international order led by the United States. It explains that after the Cold War there was a period of optimism about the expansion of democracy and globalization, but that the United States is now withdrawing from its global leadership while China grows stronger economically and militarily. It predicts that Asia will be the epicenter of geopolitical shifts as China seeks to expand its influence in the region.
The increasing use of electronic forms of communication presents new opportunities in the study of mental health, including the ability to investigate the manifestations of psychiatric diseases unobtrusively and in the setting of patients' daily lives. A pilot study to explore the possible connections between bipolar affective disorder and mobile phone usage was conducted. In this study, participants were provided a mobile phone to use as their primary phone. This phone was loaded with a custom keyboard that collected metadata consisting of keypress entry time and accelerometer movement. Individual character data, with the exceptions of the backspace key and space bar, were not collected due to privacy concerns. We propose an end-to-end deep architecture based on late fusion, named DeepMood, to model the multi-view metadata for the prediction of mood scores. Experimental results show that 90.31% prediction accuracy on the depression score can be achieved based on session-level mobile phone typing dynamics, where a session typically lasts less than one minute. This demonstrates the feasibility of using mobile phone metadata to infer mood disturbance and severity.
Defining artificial intelligence is no easy matter. Since the mid-20th century, when it was first recognized as a specific field of research, AI has always been envisioned as an evolving boundary rather than a settled research field. Fundamentally, it refers to a programme whose ambitious objective is to understand and reproduce human cognition: creating cognitive processes comparable to those found in human beings. We are therefore naturally dealing with a wide scope here, both in terms of the technical procedures that can be employed and the various disciplines that can be called upon: mathematics, information technology, cognitive sciences, etc. There is a great variety of approaches when it comes to AI: ontological, reinforcement learning, adversarial learning and neural networks, to name just a few. Most of them have been known for decades, and many of the algorithms used today were developed in the '60s and '70s.

Since the 1956 Dartmouth conference, artificial intelligence has alternated between periods of great enthusiasm and disillusionment, impressive progress and frustrating failures. Yet it has relentlessly pushed back the limits of what was thought to be achievable only by human beings. Along the way, AI research has achieved significant successes: outperforming human beings in complex games (chess, Go), understanding natural language, etc. It has also played a critical role in the history of mathematics and information technology. Consider how much of the software that we now take for granted once represented a major breakthrough in AI: chess apps, online translation programmes, etc.
Researchers surveyed 352 AI experts about their predictions for advances in artificial intelligence. The key findings were:
- Researchers predicted that AI will outperform humans in many tasks within the next 10 years, such as translating languages by 2024 and driving a truck by 2027.
- The experts estimated a 50% chance of AI outperforming humans in all tasks within 45 years and automating all human jobs within 120 years. Asian respondents expected these dates to occur much sooner than North American respondents.
- The researchers believed that progress in machine learning has accelerated in recent years. They saw the possibility of an "intelligence explosion" after high-level machine intelligence is achieved but estimated the probability as low, around 10%.
The document provides an overview of Microsoft's AI platform, which includes AI Services, Infrastructure, and Tools. The platform offers a comprehensive set of AI services for rapid development, enterprise-grade infrastructure to run AI workloads at scale, and modern tools for data scientists and developers to create and operationalize AI solutions. It allows building intelligent applications that augment human abilities across various industries.
This document proposes AttnGAN, an Attentional Generative Adversarial Network for fine-grained text-to-image generation. AttnGAN uses an attentional generative network with multiple generators that produce higher resolution images at each stage. It attends to relevant words for different image regions using attention models. AttnGAN also uses a Deep Attentional Multimodal Similarity Model to compute an image-text matching loss for training. Experimental results show AttnGAN significantly outperforms previous methods on benchmark datasets.
The Hamilton Project • Brookings
Seven Facts on Noncognitive Skills from Education to the Labor Market
Introduction
Cognitive skills—that is, math and reading skills that are measured by standardized tests—are generally understood to be of critical importance in the labor market. Most people find it intuitive and indeed unsurprising that cognitive skills, as measured by standardized tests, are important for students' later-life outcomes. For example, earnings tend to be higher for those with higher levels of cognitive skills. What is less well understood—and is the focus of these economic facts—is that noncognitive skills are also integral to educational performance and labor-market outcomes.

Due in large part to research pioneered in economics by Nobel laureate James J. Heckman, there is a robust and growing body of evidence that noncognitive skills function similarly to cognitive skills, strongly improving labor-market outcomes. These noncognitive skills—often referred to in the economics literature as soft skills and elsewhere as social, emotional, and behavioral skills—include qualities like perseverance, conscientiousness, and self-control, as well as social skills and leadership ability (Duckworth and Yeager 2015). The value of these qualities in the labor market has increased over time as the mix of jobs has shifted toward positions requiring noncognitive skills. Evidence suggests that the labor-market payoffs to noncognitive skills have been increasing over time, and the payoffs are particularly strong for individuals who possess both cognitive and noncognitive skills (Deming 2015; Weinberger 2014).

Although we draw a conceptual distinction between noncognitive skills and cognitive skills, it is not possible to disentangle these concepts fully. All noncognitive skills involve cognition, and some portion of performance on cognitive tasks is made possible by noncognitive skills. For the purposes of this document, the term "cognitive skills" encompasses intelligence; the ability to process, learn, think, and reason; and substantive knowledge as reflected in indicators of academic achievement. Since the No Child Left Behind Act of 2001, education policy has focused on accountability policies aimed at improving cognitive skills and closing test score gaps across groups. These policies have been largely successful, particularly for math achievement (Dee and Jacob 2011; Wong, Cook, and Steiner 2009) and among students most exposed to accountability pressure (Neal and Schanzenbach 2010). What has received less attention in policy debates is the importance of noncognitive skills.
Algorithmic Information Theory
G. J. Chaitin
Abstract: This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
Historical introduction
To our knowledge, the first publication of the ideas of algorithmic information theory was the description of R. J. Solomonoff's ideas given in 1962 by M. L. Minsky in his paper, "Problems of formulation for artificial intelligence" [1]:
"Consider a slightly different form of inductive infer-
ence problem. Suppose that we are given a very long
'data' sequence of symbols; the problem is to make a
prediction about the future of the sequence. This is a
problem familiar in discussion concerning 'inductive
probability.' The problem is refreshed a little, perhaps,
by introducing the modern notion of universal computer
and its associated language of instruction formulas. An
instruction sequence will be considered acceptable if it
causes the computer to produce a sequence, perhaps
infinite, that begins with the given finite 'data' sequence.
Each acceptable instruction sequence thus makes a pre-
diction, and Occam's razor would choose the simplest
such sequence and advocate its prediction. (More gener-
ally, one could weight the different predictions by
weights associated with the simplicities of the
instructions.) If the simplicity function is just the length
of the instruction, we are then trying to find a minimal
description, i.e., an optimally efficient encoding of the
data sequence.
"Such an induction method could be of interest only if
one could show some significant invariance with respect
to choice of defining universal machine. There is no such
invariance for a fixed pair of data strings. For one could
design a machine which would yield the entire first string
with a very small input, and the second string only for
some very complex input. On the brighter side, one can
see that in a sense the induced structure on the space of
data strings has some invariance in an 'in the large' or
'almost everywhere' sense. Given two different univer-
sal machines, the induced structures cannot be desper-
ately different. We appeal to the 'translation theorem'
whereby an arbitrary instruction formula for one ma-
chine may be converted into an equivalent instruction
formula for the other machine by the addition of a con-
stant prefix text. This text instructs the second machine
to simulate the behavior of the first machine in operating
on the remainder of the input text. Then for data strings
much larger than this translation text (and its inverse)
the choice between the two machines cannot greatly
affect the induced structure. It would be interesting to
see if these intuitive notions could be profitably formal-
ized.
"Even if this theory can be worked out, it is likely that
it will present overwhelming computational difficulties in
application. The recognition problem for minimal descrip-
tions is, in general, unsolvable, and a practical induction
machine will have to use heuristic methods. [In this
connection it would be interesting to write a program to
play R. Abbott's inductive card game [2].]"
Algorithmic information theory originated in the independent work of Solomonoff (see [1, 3-6]), of A. N. Kolmogorov and P. Martin-Löf (see [7-14]), and of G. J. Chaitin (see [15-26]). Whereas Solomonoff weighted together all the programs for a given result into a probability measure, Kolmogorov and Chaitin concentrated their attention on the size of the smallest program. Recently it has been realized by Chaitin and independently by L. A. Levin that if programs are stipulated to be self-delimiting, these two differing approaches become essentially equivalent. This paper attempts to cast into a unified scheme the recent work in this area by Chaitin [23,24] and by R. M. Solovay [27,28]. The reader may also find it interesting to examine the parallel efforts of Levin (see [29-35]). There has been a substantial amount of other work in this general area, often involving variants of the definitions deemed more suitable for particular applications (see, e.g., [36-47]).
IBM J. RES. DEVELOP.
Algorithmic information theory of finite computations
[23]
• Definitions
Let us start by considering a class of Turing machines
with the following characteristics. Each Turing machine
has three tapes: a program tape, a work tape, and an
output tape. There is a scanning head on each of the
three tapes. The program tape is read-only and each of
its squares contains a 0 or a 1. It may be shifted in only
one direction. The work tape may be shifted in either
direction and may be read and erased, and each of its
squares contains a blank, a 0, or a 1. The work tape is
initially blank. The output tape may be shifted in only
one direction. Its squares are initially blank, may have a
0, a 1, or a comma written on them, and cannot be rewritten. Each Turing machine of this type has a finite number n of states, and is defined by an n × 3 table, which gives the action to be performed and the next state as a function of the current state and the contents of the square of the work tape that is currently being scanned. The first state in this table is by convention the initial state. There are eleven possible actions: halt, shift work tape left/right, write blank/0/1 on work tape, read square of program tape currently being scanned and copy onto square of work tape currently being scanned and then shift program tape, write 0/1/comma on output tape and then shift output tape, and consult oracle.
The oracle is included for the purpose of defining rela-
tive concepts. It enables the Turing machine to choose
between two possible state transitions, depending on
whether or not the binary string currently being scanned
on the work tape is in a certain set, which for now we
shall take to be the null set.
From each Turing machine M of this type we define a probability P, an entropy H, and a complexity I. P(s) is the probability that M eventually halts with the string s written on its output tape if each square of the program tape is filled with a 0 or a 1 by a separate toss of an unbiased coin. By "string" we shall always mean a finite binary string. From the probability P(s) we obtain the entropy H(s) by taking the negative base-two logarithm, i.e., H(s) is -log2 P(s). A string p is said to be a program if when it is written on M's program tape and M starts computing scanning the first bit of p, then M eventually halts after reading all of p and without reading any other squares of the tape. A program p is said to be a minimal program if no other program makes M produce the same output and has a smaller size. And finally the complexity I(s) is defined to be the least n such that for some contents of its program tape M eventually halts with s written on the output tape after reading precisely n squares of the program tape; i.e., I(s) is the size of a minimal program for s. To summarize, P is the probability that M calculates s given a random program, H is -log2 P, and I is the minimum number of bits required to specify an algorithm for M to calculate s.
It is important to note that blanks are not allowed on the program tape, which is imagined to be entirely filled with 0's and 1's. Thus programs are not followed by endmarker blanks. This forces them to be self-delimiting; a program must indicate within itself what size it has. Thus no program can be a prefix of another one, and the programs for M form what is known as a prefix-free set or an instantaneous code. This has two very important effects: It enables a natural probability distribution to be defined on the set of programs, and it makes it possible for programs to be built up from subroutines by concatenation. Both of these desirable features are lost if blanks are used as program endmarkers. This occurs because there is no natural probability distribution on programs with endmarkers; one can, of course, make all programs of the same size equiprobable, but it is also necessary to specify in some arbitrary manner the probability of each particular size. Moreover, if two subroutines with blanks as endmarkers are concatenated, it is necessary to include additional information indicating where the first one ends and the second begins.
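The prefix-free property and the natural probability distribution it induces are easy to illustrate concretely. The sketch below (our own helper functions, not code from the paper) checks that a set of bit-string programs is an instantaneous code and that assigning each program p the probability exp2(-length(p)) yields a total of at most 1, which is Kraft's inequality:

```python
# Sketch: verify the prefix-free (instantaneous code) property and
# Kraft's inequality for a set of bit-string programs.
# Helper names are our own, purely illustrative.

def is_prefix_free(programs):
    """True iff no program is a proper prefix of another."""
    progs = sorted(programs)
    # After lexicographic sorting, any string having another as a
    # prefix appears immediately after it, so adjacent checks suffice.
    return all(not b.startswith(a) for a, b in zip(progs, progs[1:]))

def kraft_sum(programs):
    """Total probability under coin flipping: sum of exp2(-len(p))."""
    return sum(2.0 ** -len(p) for p in programs)

# Valid programs for the example machine M below have the form 0^n 1 s
# with len(s) = n; any such set is automatically prefix-free.
programs = ["1", "010", "011", "00100", "00111"]
assert is_prefix_free(programs)
assert kraft_sum(programs) <= 1.0  # Kraft's inequality
```

Sorting makes the prefix check linear in the number of programs: if any program is a prefix of another, it is a prefix of its immediate successor in lexicographic order.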
Here is an example of a specific Turing machine M of the above type. M counts the number n of 0's up to the first 1 it encounters on its program tape, then transcribes the next n bits of the program tape onto the output tape, and finally halts. So M outputs s iff it finds length(s) 0's followed by a 1 followed by s on its program tape. Thus P(s) = exp2(-2 length(s) - 1), H(s) = 2 length(s) + 1, and I(s) = 2 length(s) + 1. Here exp2(x) is the base-two exponential function 2^x. Clearly this is a very special-purpose computer which embodies a very limited class of algorithms and yields uninteresting functions P, H, and I.
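Machine M is simple enough to simulate directly. The following sketch (illustrative code of our own, not from the paper) decodes programs of the form 0^n 1 s and confirms that the size of a minimal program for s, and hence H(s), is 2 length(s) + 1:

```python
from math import log2

def run_M(program):
    """Simulate the special-purpose machine M on a given tape prefix:
    a valid program is n 0's, then a 1, then n more bits, which are
    copied to the output. Returns (output, bits_read), or None if M
    never halts on this tape contents."""
    n = 0
    while n < len(program) and program[n] == "0":
        n += 1
    if n >= len(program):          # no '1' found: M reads forever
        return None
    body = program[n + 1 : n + 1 + n]
    if len(body) < n:              # tape too short for M to finish
        return None
    return body, 2 * n + 1

def I(s):
    """Size of a minimal program for s on M: I(s) = 2 length(s) + 1."""
    return 2 * len(s) + 1

# P(s) = exp2(-(2 len(s) + 1)), so H(s) = -log2 P(s) = I(s):
s = "10"
out, size = run_M("0" * len(s) + "1" + s)
assert out == s and size == I(s)
assert -log2(2.0 ** -(2 * len(s) + 1)) == I(s)
```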
On the other hand it is easy to see that there are "general-purpose" Turing machines that maximize P and minimize H and I; in fact, consider those universal Turing machines which will simulate an arbitrary Turing machine if a suitable prefix indicating the machine to simulate is added to its programs. Such Turing machines yield essentially the same P, H, and I. We therefore pick, somewhat arbitrarily, a particular one of these, U, and the definitive definition of P, H, and I is given in terms of it. The universal Turing machine U works as follows. If U finds i 0's followed by a 1 on its program tape, it simulates the computation that the ith Turing machine of the above type performs upon reading the remainder of the program tape. By the ith Turing machine we mean the one that comes ith in a list of all possible defining tables in which the tables are ordered by size (i.e., number of states) and lexicographically among those of the same size. With this choice of Turing machine, P, H, and I can be dignified with the following titles: P(s) is the algorithmic probability of s, H(s) is the algorithmic entropy of s, and I(s) is the algorithmic information of s. Following Solomonoff [3], P(s) and H(s) may also be called the a priori probability and entropy of s. I(s) may also be termed the descriptive, program-size, or information-theoretic complexity of s. And since P is maximal and H and I are minimal, the above choice of special-purpose Turing machine shows that P(s) ≥ exp2(-2 length(s) - O(1)), H(s) ≤ 2 length(s) + O(1), and I(s) ≤ 2 length(s) + O(1).
JULY 1977 ALGORITHMIC INFORMATION THEORY
We have defined P(s), H(s), and I(s) for individual strings s. It is also convenient to consider computations which produce finite sequences of strings. These are separated by commas on the output tape. One thus defines the joint probability P(s1,...,sn), the joint entropy H(s1,...,sn), and the joint complexity I(s1,...,sn) of an n-tuple s1,...,sn. Finally one defines the conditional probability P(t1,...,tm|s1,...,sn) of the m-tuple t1,...,tm given the n-tuple s1,...,sn to be the joint probability of the n-tuple and the m-tuple divided by the joint probability of the n-tuple. In particular P(t|s) is defined to be P(s,t)/P(s). And of course the conditional entropy is defined to be the negative base-two logarithm of the conditional probability. Thus by definition H(s,t) = H(s) + H(t|s). Finally, in order to extend the above definitions to tuples whose members may either be strings or natural numbers, we identify the natural number n with its binary representation.
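The definitional identity H(s,t) = H(s) + H(t|s) can be checked numerically from any pair of values standing in for P(s) and P(s,t); the numbers below are made up purely for illustration:

```python
from math import log2

# Hypothetical values standing in for P(s) and P(s,t); any positive
# pair with P(s,t) <= P(s) illustrates the definitional identity.
P_s, P_st = 1 / 4, 1 / 32

H = lambda p: -log2(p)          # entropy = negative base-two logarithm
P_t_given_s = P_st / P_s        # conditional probability P(t|s)

# H(s,t) = H(s) + H(t|s) holds by definition:
assert abs(H(P_st) - (H(P_s) + H(P_t_given_s))) < 1e-12
```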
• Basic relationships
We now review some basic properties of these concepts. The relation

H(s,t) = H(t,s) + O(1)

states that the probability of computing the pair s, t is essentially the same as the probability of computing the pair t, s. This is true because there is a prefix that converts any program for one of these pairs into a program for the other one. The inequality

H(s) ≤ H(s,t) + O(1)

states that the probability of computing s is not less than the probability of computing the pair s, t. This is true because a program for s can be obtained from any program for the pair s, t by adding a fixed prefix to it. The inequality

H(s,t) ≤ H(s) + H(t) + O(1)

states that the probability of computing the pair s, t is not less than the product of the probabilities of computing s and t, and follows from the fact that programs are self-delimiting and can be concatenated. The inequality

O(1) ≤ H(t|s) ≤ H(t) + O(1)

is merely a restatement of the previous two properties. However, in view of the direct relationship between conditional entropy and relative complexity indicated below, this inequality also states that being told something by an oracle cannot make it more difficult to obtain t. The relationship between entropy and complexity is

H(s) = I(s) + O(1),

i.e., the probability of computing s is essentially the same as 1/exp2(the size of a minimal program for s). This implies that a significant fraction of the probability of computing s is contributed by its minimal programs, and that there are few minimal or near-minimal programs for a given result. The relationship between conditional entropy and relative complexity is

H(t|s) = I_s(t) + O(1).

Here I_s(t) denotes the complexity of t relative to a set having a single element which is a minimal program for s. In other words,

I(s,t) = I(s) + I_s(t) + O(1).

This relation states that one obtains what is essentially a minimal program for the pair s, t by concatenating the following two subroutines:
• a minimal program for s
• a minimal program for calculating t using an oracle for the set consisting of a minimal program for s.
• Algorithmic randomness
Consider an arbitrary string s of length n. From the fact that H(n) + H(s|n) = H(n,s) = H(s) + O(1), it is easy to show that H(s) ≤ n + H(n) + O(1), and that less than exp2(n - k + O(1)) of the s of length n satisfy H(s) < n + H(n) - k. It follows that for most s of length n, H(s) is approximately equal to n + H(n). These are the most complex strings of length n, the ones which are most difficult to specify, the ones with highest entropy, and they are said to be the algorithmically random strings of length n. Thus a typical string s of length n will have H(s) close to n + H(n), whereas if s has pattern or can be distinguished in some fashion, then it can be compressed or coded into a program that is considerably smaller. That H(s) is usually n + H(n) can be thought of as follows: In order to specify a typical string s of length n, it is necessary first to specify its size n, which requires H(n) bits, and it is necessary then to specify each of the n bits in s, which requires n more bits and brings the total to n + H(n). In probabilistic terms this can be stated as follows: the sum of the probabilities of all the strings of length n is essentially equal to P(n), and most strings s of length n have probability P(s) essentially equal to P(n)/2^n. On the other hand, one of the strings of length n that is least random and that has most pattern is the string consisting entirely of 0's. It is easy to see that this string has entropy H(n) + O(1) and probability essentially equal to P(n), which is another way of saying that almost all the information in it is in its length. Here is an example in the middle: if p is a minimal program of size n, then it is easy to see that H(p) = n + O(1) and P(p) is essentially equal to exp2(-n). Finally it should be pointed out that since H(s) = H(n) + H(s|n) + O(1) if s is of length n, the above definition of randomness is equivalent to saying that the most random strings of length n have H(s|n) close to n, while the least random ones have H(s|n) close to 0.
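The claim that few strings of length n have entropy much below the maximum rests on a counting argument worth making explicit: there are only exp2(m) - 1 bit strings shorter than m, and each can serve as a program for at most one output, so fewer than exp2(n - k) strings of length n can be compressed by more than about k bits. A sketch of the count (our own illustration, not code from the paper):

```python
# Pigeonhole sketch: fewer than 2^(n-k) strings of length n can have a
# description shorter than n - k bits, because there are only
# 2^(n-k) - 1 candidate descriptions of length < n - k in total.

def count_short_descriptions(m):
    """Number of distinct bit strings of length < m (equals 2^m - 1)."""
    return sum(2 ** i for i in range(m))

n, k = 20, 5
limit = n - k
# Each description names at most one output string, so:
compressible_upper_bound = count_short_descriptions(limit)
assert compressible_upper_bound == 2 ** limit - 1
# ...while there are 2^n strings of length n, so the fraction that can
# be compressed by more than k bits is below 2^-k:
assert compressible_upper_bound / 2 ** n < 2.0 ** -k
```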
Later we shall show that even though most strings are algorithmically random, i.e., have nearly as much entropy as possible, an inherent limitation of formal axiomatic theories is that a lower bound n on the entropy of a specific string can be established only if n is less than the entropy of the axioms of the formal theory. In other words, it is possible to prove that a specific object is of complexity greater than n only if n is less than the complexity of the axioms being employed in the demonstration. These statements may be considered to be an information-theoretic version of Gödel's famous incompleteness theorem.
Now let us turn from finite random strings to infinite
ones, or equivalently, by invoking the correspondence
between a real number and its dyadic expansion, to ran-
dom reals. Consider an infinite string X obtained by flip-
ping an unbiased coin, or equivalently a real x uniformly
distributed in the unit interval. From the preceding considerations and the Borel-Cantelli lemma it is easy to see that with probability one there is a c such that H(X_n) > n - c for all n, where X_n denotes the first n bits of X, that is, the first n bits of the dyadic expansion of x. We take this property to be our definition of an algorithmically random infinite string X or real x.
Algorithmic randomness is a clear-cut property for infinite strings, but in the case of finite strings it is a matter of degree. If a cutoff were to be chosen, however, it would be well to place it at about the point at which H(s) is equal to length(s). Then an infinite random string could be defined to be one for which all initial segments are finite random strings, within a certain tolerance.
Now consider the real number Ω defined as the halting probability of the universal Turing machine U that we used to define P, H, and I; i.e., Ω is the probability that U eventually halts if each square of its program tape is filled with a 0 or a 1 by a separate toss of an unbiased coin. Then it is not difficult to see that Ω is in fact an algorithmically random real, because if one were given the first n bits of the dyadic expansion of Ω, then one could use this to tell whether each program for U of size less than n ever halts or not. In other words, when written in binary the probability of halting Ω is a random or incompressible infinite string. Thus the basic theorem of recursive function theory that the halting problem is unsolvable corresponds in algorithmic information theory to the theorem that the probability of halting is algorithmically random if the program is chosen by coin flipping.
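For the universal machine U, the halting probability Ω can only be approximated from below, by enumerating programs that are found to halt and accumulating exp2(-length(p)) for each. The same enumeration is trivial for the special-purpose machine M introduced earlier, whose halting programs are exactly the strings 0^n 1 s with length(s) = n; the sketch below (our own illustration, not Solovay's construction) shows the monotone approximation, which for M converges to 1:

```python
# Approximating a halting probability from below by enumerating halting
# programs in order of length. For the toy machine M, the halting
# programs are exactly 0^n 1 s with len(s) = n, so those of length
# 2n+1 contribute 2^n * 2^-(2n+1) = 2^-(n+1), and the total is 1.

def omega_lower_bound(max_len):
    """Sum of exp2(-len(p)) over halting programs p, len(p) <= max_len."""
    total = 0.0
    for n in range(max_len):
        if 2 * n + 1 <= max_len:
            total += (2 ** n) * 2.0 ** -(2 * n + 1)
    return total

bounds = [omega_lower_bound(m) for m in (1, 3, 5, 7)]
# The approximations increase monotonically toward M's halting
# probability of 1; for U no such computable convergence rate exists.
assert all(a < b for a, b in zip(bounds, bounds[1:]))
assert 1 - bounds[-1] == 2.0 ** -4
```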
This concludes our review of the most basic facts regarding the probability, entropy, and complexity of finite objects, namely strings and tuples of strings. Before presenting some of Solovay's remarkable results regarding these concepts, and in particular regarding Ω, we would like to review the most important facts which are known regarding the probability, entropy, and complexity of infinite objects, namely recursively enumerable sets of strings.
Algorithmic information theory of infinite
computations [24]
In order to define the probability, entropy, and complex-
ity of r.e, (recursively enumerable) sets of strings it is
necessary to consider unending computations performed
on our standard universal Turing machine U. A compu-
tation is said to produce an r.e. set of strings A if all the
members of A and only members of A are eventually
written on the output tape, each followed by a comma. It
is important that U not be required to halt if A is finite.
The members of the set A may be written in arbitrary
order, and duplications are ignored. A technical point: If
there are only finitely many strings written on the output
tape, and the last one is infinite or is not followed by a
comma, then it is considered to be an "unfinished" string
and is also ignored. Note that since computations may
be endless, it is now possible for a semi-infinite portion
of the program tape to be read.
The definitions of the probability P(A), the entropy
H(A), and the complexity I(A) of an r.e. set of strings A
may now be given. P(A) is the probability that U produces
the output set A if each square of its program tape
is filled with a 0 or a 1 by a separate toss of an unbiased
coin. H(A) is the negative base-two logarithm of P(A).
And I(A) is the size in bits of a minimal program that
produces the output set A, i.e., I(A) is the least n such
that there is a program tape contents that makes U
undertake a computation in the course of which it reads
precisely n squares of the program tape and produces
the set of strings A. In order to define the joint and
conditional probability and entropy we need a mechanism
for encoding two r.e. sets A and B into a single set A join
B. To obtain A join B one prefixes each string in A with
a 0 and each string in B with a 1 and takes the union of
the two resulting sets. Enumerating A join B is equiva-
lent to simultaneously enumerating A and B. So the joint
probability P(A,B) is P(A join B), the joint entropy
H(A,B) is H(A join B), and the joint complexity I(A,B)
is I(A join B). These definitions can obviously be
extended to more than two r.e. sets, but it is unnecessary
to do so here. Lastly, the conditional probability P(B|A)
of B given A is the quotient of P(A,B) divided by P(A),
and the conditional entropy H(B|A) is the negative
base-two logarithm of P(B|A). Thus by definition
H(A,B) = H(A) + H(B|A).
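The join encoding is easy to make concrete; a minimal sketch, with finite sets standing in for r.e. sets and with function names of our own choosing:

```python
def join(A, B):
    # A join B: prefix each string of A with '0' and each string of B
    # with '1', then take the union of the two resulting sets.
    return {"0" + s for s in A} | {"1" + s for s in B}

def split(C):
    # Inverse direction: the first bit of each element of A join B
    # says which of the two sets it came from.
    return ({s[1:] for s in C if s[0] == "0"},
            {s[1:] for s in C if s[0] == "1"})
```

For example, split(join(A, B)) recovers (A, B), which is why enumerating A join B is equivalent to simultaneously enumerating A and B.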
As before, one obtains the following basic inequali-
ties:
H(A,B) = H(B,A) + O(1),
H(A) ≤ H(A,B) + O(1),
H(A,B) ≤ H(A) + H(B) + O(1),
O(1) ≤ H(B|A) ≤ H(B) + O(1),
I(A,B) ≤ I(A) + I(B) + O(1).
In order to demonstrate the third and the fifth of these
relations one imagines two unending computations to be
occurring simultaneously. Then one interleaves the bits
of the two programs in the order in which they are read.
Putting a fixed size prefix in front of this, one obtains a
single program for performing both computations
simultaneously whose size is O(1) plus the sum of the
sizes of the original programs.
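The interleaving step can be sketched as follows. This is a simplified illustration in which the bits strictly alternate; in the actual construction the bits are merged in whatever order the two simulated computations request them:

```python
def interleave(p, q):
    # Merge the bits of programs p and q, alternating while both have
    # bits left; the result has size exactly |p| + |q|.
    out = []
    for i in range(max(len(p), len(q))):
        if i < len(p):
            out.append(p[i])
        if i < len(q):
            out.append(q[i])
    return "".join(out)

def deinterleave(r, len_p, len_q):
    # Inverse: replay the same alternating schedule to split r back
    # into the two original programs.
    p, q = [], []
    for bit in r:
        p_wants = len(p) < len_p and (len(q) >= len_q or len(p) <= len(q))
        (p if p_wants else q).append(bit)
    return "".join(p), "".join(q)
```

Because the merged string has size |p| + |q| and a fixed prefix suffices to drive both simulations, this yields the subadditivity bounds H(A,B) ≤ H(A) + H(B) + O(1) and I(A,B) ≤ I(A) + I(B) + O(1).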
So far things look much as they did for individual
strings. But the relationship between entropy and com-
plexity turns out to be more complicated for r.e. sets
than it was in the case of individual strings. Obviously
the entropy H(A) is always less than or equal to the
complexity I(A), because of the probability contributed
by each minimal program for A:
H(A) ≤ I(A).
But how about bounds on I(A) in terms of H(A)? First
of all, it is easy to see that if A is a singleton set whose
only member is the string s, then H(A) = H(s) + O(1)
and I(A) = I(s) + O(1). Thus the theory of the
algorithmic information of individual strings is contained
in the theory of the algorithmic information of r.e. sets as
the special case of sets having a single element:
For singleton A, I(A) = H(A) + O(1).
There is also a close but not an exact relationship be-
tween H and I in the case of sets consisting of initial
segments of the set of natural numbers (recall we identi-
fy the natural number n with its binary representation).
Let us use the adjective "initial" for any set consisting
of all natural numbers less than a given one:
For initial A, I(A) = H(A) + O(log H(A)).
Moreover, it is possible to show that there are
infinitely many initial sets A for which I(A) > H(A) +
O(log H(A)). This is the greatest known discrepancy
between I and H for r.e. sets. It is demonstrated by
showing that occasionally the number of initial sets A
with H (A) < n is appreciably greater than the number
of initial sets A with I(A) < n. On the other hand, with
the aid of a crucial game-theoretic lemma of D. A. Martin,
Solovay [28] has shown that
I(A) ≤ 3H(A) + O(log H(A)).
These are the best results currently known regarding the
relationship between the entropy and the complexity of
an r.e. set; clearly much remains to be done. Further-
more, what is the relationship between the conditional
entropy and the relative complexity of r.e. sets? And
how many minimal or near-minimal programs for an r.e.
set are there?
We would now like to mention some other results
concerning these concepts. Solovay has shown that:
There are exp₂(n - H(n) + O(1)) singleton sets A with H(A) < n,
There are exp₂(n - H(n) + O(1)) singleton sets A with I(A) < n.
We have extended Solovay's result as follows:
There are exp₂(n - H(Ln) + O(1)) finite sets A with H(A) < n,
There are exp₂(n - H(Ln) + O(log H(Ln))) sets A with I(A) < n,
There are exp₂(n - H'(Ln) + O(log H'(Ln))) sets A with H(A) < n.
Here Ln is the set of natural numbers less than n, and H'
is the entropy relative to the halting problem; if U is
provided with an oracle for the halting problem instead
of one for the null set, then the probability, entropy, and
complexity measures one obtains are P', H', and I' in-
stead of P, H, and I. Two final results:
I'(A, the complement of A) ≤ H(A) + O(1);
the probability that the complement of an r.e. set has
cardinality n is essentially equal to the probability that a
set r.e. in the halting problem has cardinality n.
More advanced results [27]
The previous sections outline the basic features of the
new formalism for algorithmic information theory ob-
tained by stipulating that programs be self-delimiting in-
stead of having endmarker blanks. Error terms in the
basic relations which were logarithmic in the previous
approach [9] are now of the order of unity.
In the previous approach the complexity of n is usually
log₂n + O(1), there is an information-theoretic
characterization of recursive infinite strings [25, 26], and
much is known about complexity oscillations in random
infinite strings [14]. The corresponding properties in the
new approach have been elucidated by Solovay in an
unpublished paper [27]. We present some of his results
here. For related work by Solovay, see the publications
[28,48,49].
• Recursive bounds on H(n)
Following [23, p. 337], let us consider recursive upper
and lower bounds on H(n). Let f be an unbounded
recursive function, and consider the series Σ exp₂(-f(n))
summed over all natural numbers n. If this infinite series
converges, then H(n) < f(n) + O(1) for all n. And if it
diverges, then the inequalities H(n) > f(n) and H(n) <
f(n) each hold for infinitely many n. Thus, for example,
for any ε > 0, H(n) < log n + log log n + (1+ε) log log log n
+ O(1) for all n, and H(n) > log n + log log n + log log log n
for infinitely many n, where all logarithms are base
two. See [50] for the results on convergence used to
prove this.
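The convergence criterion is easy to probe numerically for concrete candidate bounds; a small sketch with examples of our own choosing: f(n) = 2 log₂n gives Σ 1/n², which converges, so H(n) < 2 log₂n + O(1) for all n, while f(n) = log₂n gives the divergent harmonic series, so H(n) > log₂n infinitely often.

```python
import math

def partial_sum(f, N):
    # Partial sum of the Kraft-style series sum_n 2**(-f(n)) that
    # governs recursive upper bounds on H(n).
    return sum(2.0 ** -f(n) for n in range(1, N + 1))

# Convergent case: f(n) = 2*log2(n), i.e., the series sum 1/n**2.
conv = partial_sum(lambda n: 2 * math.log2(n), 10000)

# Divergent case: f(n) = log2(n), i.e., the harmonic series sum 1/n.
div = partial_sum(lambda n: math.log2(n), 10000)
```

The partial sums only suggest the behavior, of course; convergence itself is established analytically, as in [50].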
Solovay has obtained the following results regarding
recursive upper bounds on H, i.e., recursive h such that
H(n) < h(n) for all n. First he shows that there is a
recursive upper bound on H which is almost correct
infinitely often, i.e., |H(n) - h(n)| < c for infinitely
many values of n. In fact, the lim sup of the fraction of
values of i less than n such that |H(i) - h(i)| < c is
greater than 0. However, he also shows that the values
of n for which |H(n) - h(n)| < c must in a certain sense
be quite sparse. In fact, he establishes that if h is any
recursive upper bound on H then there cannot exist a
tolerance c and a recursive function f such that there are
always at least n different natural numbers i less than
f(n) at which h(i) is within c of H(i). It follows that the
lim inf of the fraction of values of i less than n such that
|H(i) - h(i)| < c is zero.
The basic idea behind his construction of h is to
choose f so that Σ exp₂(-f(n)) converges "as slowly" as
possible. As a byproduct he obtains a recursive
convergent series of rational numbers Σaₙ such that if Σbₙ
is any recursive convergent series of rational numbers,
then lim sup aₙ/bₙ is greater than zero.
• Nonrecursive infinite strings with simple initial
segments
At the high-order end of the complexity scale for infinite
strings are the random strings, and the recursive strings
are at the low order end. Is anything else there? More
formally, let X be an infinite binary string, and let Xn be
the first n bits of X. If X is recursive, then we have
H(Xn) = H(n) + O(1). What about the converse, i.e.,
what can be said about X given only that H(Xn) = H(n) +
O(1)? Obviously H(Xn) = H(n,Xn) + O(1) = H(n) +
H(Xn|n) + O(1). So H(Xn) = H(n) + O(1) iff H(Xn|n)
= O(1). Then using a relativized version of the proof in
[37, pp. 525-526], one can show that X is recursive in
the halting problem. Moreover, by using a priority
argument Solovay is actually able to construct a
nonrecursive X that satisfies H(Xn) = H(n) + O(1).
• Equivalent definitions of an algorithmically random
real
Pick a recursive enumeration O₀, O₁, O₂, … of all open
intervals with rational endpoints. A sequence of open
sets V₀, V₁, V₂, … is said to be simultaneously r.e. if
there is a recursive function h such that Vₙ is the union
of those Oᵢ whose index i is of the form h(n,j), for some
natural number j. Consider a real number x in the unit
interval. We say that x has the Solovay randomness
property if the following holds. Let V₀, V₁, V₂, … be any
simultaneously r.e. sequence of open sets such that the
sum of the usual Lebesgue measure of the Vₙ converges.
Then x is in only finitely many of the Vₙ. We say that x
has the Chaitin randomness property if there is a c such
that H(Xn) > n - c for all n, where Xn is the string
consisting of the first n bits of the dyadic expansion of x.
Solovay has shown that these randomness properties are
equivalent to each other, and that they are also equiva-
lent to Martin-Löf's definition [10] of randomness.
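To see what the Solovay property rules out, here is a hypothetical sketch of our own showing that a computable real such as 1/3 fails it: one can cover it by a simultaneously r.e. sequence of intervals whose measures have a convergent sum, yet the real lies in every one of them.

```python
from fractions import Fraction

# A computable real; any real we can approximate effectively would do.
x = Fraction(1, 3)

def V(n):
    # Open interval of width 2**(1 - n) around the n-bit truncation
    # of x; the total measure, sum over n of 2**(1 - n), converges.
    half = Fraction(1, 2 ** n)
    approx = Fraction(int(x * 2 ** n), 2 ** n)
    return (approx - half, approx + half)

# x lies in every V(n), hence in infinitely many of them, so x does
# not have the Solovay randomness property.
hits = [n for n in range(1, 50) if V(n)[0] < x < V(n)[1]]
```

The same cover can be generated by a recursive function of (n, j) as the definition requires, since the endpoints are computable rationals.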
• The entropy of initial segments of algorithmically
random and of Ω-like reals
Consider a random real x. By the definition of
randomness, H(Xn) > n + O(1). On the other hand, for any
infinite string X, random or not, we have H(Xn) ≤ n +
H(n) + O(1). Solovay shows that the above bounds are
each sometimes sharp. More precisely, consider a random
X and a recursive function f such that Σ exp₂(-f(n))
diverges (e.g., f(n) = integer part of log₂n). Then there
are infinitely many natural numbers n such that H(Xn)
≥ n + f(n). And consider an unbounded monotone
increasing recursive function g (e.g., g(n) = integer part
of loglog n). There are infinitely many natural numbers n
such that it is simultaneously the case that H(Xn) ≤ n +
g(n) and H(n) ≥ f(n).
Solovay has obtained much more precise results along
these lines about Ω and a class of reals which he calls
"Ω-like." A real number is said to be an r.e. real if the
set of all rational numbers less than it is an r.e. subset of
the rational numbers. Roughly speaking, an r.e. real x is
Ω-like if for any r.e. real y one can get in an effective
manner a good approximation to y from any good
approximation to x, and the quality of the approximation
to y is at most O(1) binary digits worse than the quality of
the approximation to x. The formal definition of Ω-like is
as follows. The real x is said to dominate the real y if
there is a partial recursive function f and a constant c
with the property that if q is any rational number that is
less than x, then f(q) is defined and is a rational number
that is less than y and satisfies the inequality c|x - q| ≥
|y - f(q)|. And a real number is said to be Ω-like if it is
an r.e. real that dominates all r.e. reals. Solovay proves
that Ω is in fact Ω-like, and that if x and y are Ω-like,
then H(Xn) = H(Yn) + O(1), where Xn and Yn are the
first n bits in the dyadic expansions of x and y. It is an
immediate corollary that if x is Ω-like then H(Xn) =
H(Ωn) + O(1), and that all Ω-like reals are
algorithmically random. Moreover Solovay shows that the
algorithmic probability P(s) of any string s is always an
Ω-like real.
In order to state Solovay's results contrasting the
behavior of H(Ωn) with that of H(Xn) for a typical real
number x, it is necessary to define two extremely slowly
growing monotone functions α and α'. α(n) = min H(j)
(j ≥ n), and α' is defined in the same manner as α except
that H is replaced by H', the algorithmic entropy
relative to the halting problem. It can be shown (see [29,
pp. 90-91]) that α goes to infinity, but more slowly
than any monotone partial recursive function does.
More precisely, if f is an unbounded nondecreasing
partial recursive function, then α(n) is less than f(n) for
almost all n for which f(n) is defined. Similarly α' goes
to infinity, but more slowly than any monotone partial
function recursive in the halting problem does. More
precisely, if f is an unbounded nondecreasing partial
function recursive in the halting problem, then α'(n) is
less than f(n) for almost all n for which f(n) is defined.
In particular, α'(n) is less than α(α(n)) for almost all n.
We can now state Solovay's results. Consider a real
number x uniformly distributed in the unit interval. With
probability one there is a c such that H(Xn) > n + H(n)
- c holds for infinitely many n. And with probability
one, H(Xn) > n + α(n) + O(log α(n)). Whereas if x is
Ω-like, then the following occurs: H(Xn) < n + H(n) -
α(n) + O(log α(n)), and for infinitely many n we have
H(Xn) < n + α'(n) + O(log α'(n)). This shows that the
complexity of initial segments of the dyadic expansions
of Ω-like reals is atypical. It is an open question whether
H(Ωn) - n tends to infinity; Solovay suspects that it does.
Algorithmic information theory and metamathematics
There is something paradoxical about being able to
prove that a specific finite string is random; this is per-
haps captured in the following antinomies from the
writings of M. Gardner [51] and B. Russell [52]. In reading
them one should interpret "dull," "uninteresting," and
"indefinable" to mean "random," and "interesting" and
"definable" to mean "nonrandom."
"[Natural] numbers can of course be interesting in a
variety of ways. The number 30 was interesting to
George Moore when he wrote his famous tribute to 'the
woman of 30,' the age at which he believed a married
woman was most fascinating. To a number theorist 30 is
more likely to be exciting because it is the largest integer
such that all smaller integers with which it has no com-
mon divisor are prime numbers. . . . The question arises:
Are there any uninteresting numbers? We can prove
that there are none by the following simple steps. If
there are dull numbers, we can then divide all numbers
into two sets-interesting and dull. In the set of dull
numbers there will be only one number that is the small-
est. Since it is the smallest uninteresting number it
becomes, ipso facto, an interesting number. [Hence there
are no dull numbers!]" [51].
"Among transfinite ordinals some can be defined,
while others cannot; for the total number of possible
definitions is ℵ₀, while the number of transfinite ordinals
exceeds ℵ₀. Hence there must be indefinable ordinals,
and among these there must be a least. But this is de-
fined as 'the least indefinable ordinal,' which is a contra-
diction" [52].
Here is our incompleteness theorem for formal axiom-
atic theories whose arithmetic consequences are true.
The setup is as follows: The axioms are a finite string,
the rules of inference are an algorithm for enumerating
the theorems given the axioms, and we fix the rules of
inference and vary the axioms. Within such a formal
theory a specific string cannot be proven to be of
entropy more than O(1) greater than the entropy of the
axioms of the theory. Conversely, there are formal theories
whose axioms have entropy n + O(1) in which it is
possible to establish all true propositions of the form
"H(specific string) ≥ n."
Proof Consider the enumeration of the theorems of the
formal axiomatic theory in order of the size of their
proofs. For each natural number k, let s* be the string in
the theorem of the form "H(s) ≥ n" with n greater than
H(axioms) + k which appears first in this enumeration.
On the one hand, if all theorems are true, then H(s*) >
H(axioms) + k. On the other hand, the above prescription
for calculating s* shows that H(s*) ≤ H(axioms) +
H(k) + O(1). It follows that k < H(k) + O(1). However,
this inequality is false for all k ≥ k*, where k* depends
only on the rules of inference. The apparent contradiction
is avoided only if s* does not exist for k = k*,
i.e., only if it is impossible to prove in the formal theory
that a specific string has H greater than H(axioms)
+ k*.
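The prescription for calculating s* is itself a short program, which is the heart of the contradiction; a toy sketch, in which the theorem string format and the function name are hypothetical:

```python
def first_high_entropy_string(theorems, limit):
    # Scan the theorems, given in order of the size of their proofs,
    # for the first one of the form "H(s) >= n" with n > limit, and
    # return the specific string s that it names -- the s* of the text.
    for th in theorems:
        if th.startswith("H(") and ") >= " in th:
            s, n = th[2:].split(") >= ")
            if int(n) > limit:
                return s
    return None  # s* does not exist at this threshold
```

Since this search, together with a description of the axioms and of k, constitutes a program for s* of size H(axioms) + H(k) + O(1), the string s* cannot exist once k is sufficiently large.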
Proof of Converse The set T of all true propositions of
the form "H(s) < k" is r.e. Choose a fixed enumeration
of T without repetitions, and for each natural number n
2. M. Gardner, "An Inductive Card Game," Sci. Amer. 200,
No.6, 160 (1959).
3. R. J. Solomonoff, "A Formal Theory of Inductive Infer-
ence," Info. Control 7, 1, 224 (1964).
4. D. G. Willis, "Computational Complexity and Probability
Constructions," J. ACM 17, 241 (1970).
5. T. M. Cover, "Universal Gambling Schemes and the Com-
plexity Measures of Kolmogorov and Chaitin," Statistics
Department Report 12, Stanford University, CA, October,
1974.
6. R. J. Solomonoff, "Complexity Based Induction Systems:
Comparisons and Convergence Theorems," Report RR-
329, Rockford Research, Cambridge, MA, August, 1976.
7. A. N. Kolmogorov, "On Tables of Random Numbers,"
Sankhya A25, 369 (1963).
8. A. N. Kolmogorov, "Three Approaches to the Quantitative Definition of Information," Prob. Info. Transmission 1, No. 1, 1 (1965).
9. A. N. Kolmogorov, "Logical Basis for Information Theory and Probability Theory," IEEE Trans. Info. Theor. IT-14, 662 (1968).
10. P. Martin-Löf, "The Definition of Random Sequences," Info. Control 9, 602 (1966).
11. P. Martin-Löf, "Algorithms and Randomness," Intl. Stat. Rev. 37, 265 (1969).
12. P. Martin-Löf, "The Literature on von Mises' Kollektivs Revisited," Theoria 35, Part 1, 12 (1969).
13. P. Martin-Löf, "On the Notion of Randomness," Intuitionism and Proof Theory, A. Kino, J. Myhill, and R. E. Vesley, eds., North-Holland Publishing Co., Amsterdam, 1970, p. 73.
14. P. Martin-Löf, "Complexity Oscillations in Infinite Binary Sequences," Z. Wahrscheinlichk. verwand. Geb. 19, 225 (1971).
15. G. J. Chaitin, "On the Length of Programs for Computing
Finite Binary Sequences," J. ACM 13, 547 (1966).
16. G. J. Chaitin, "On the Length of Programs for Computing
Finite Binary Sequences: Statistical Considerations," J.
ACM 16, 145 (1969).
17. G. J. Chaitin, "On the Simplicity and Speed of Programs
for Computing Infinite Sets of Natural Numbers," J. ACM
16,407 (1969).
18. G. J. Chaitin, "On the Difficulty of Computations," IEEE Trans. Info. Theor. IT-16, 5 (1970).
19. G. J. Chaitin, "To a Mathematical Definition of 'Life,'" ACM SICACT News 4, 12 (1970).
20. G. J. Chaitin, "Information-theoretic Limitations of Formal Systems," J. ACM 21, 403 (1974).
21. G. J. Chaitin, "Information-theoretic Computational Complexity," IEEE Trans. Info. Theor. IT-20, 10 (1974).
22. G. J. Chaitin, "Randomness and Mathematical Proof," Sci. Amer. 232, No. 5, 47 (1975). (Also published in the Japanese and Italian editions of Sci. Amer.)
23. G. J. Chaitin, "A Theory of Program Size Formally Identical to Information Theory," J. ACM 22, 329 (1975).
24. G. J. Chaitin, "Algorithmic Entropy of Sets," Comput. & Math. Appls. 2, 233 (1976).
25. G. J. Chaitin, "Information-theoretic Characterizations of
Recursive Infinite Strings," Theoret. Comput. Sci. 2, 45
(1976).
26. G. J. Chaitin, "Program Size, Oracles, and the Jump Oper-
ation," Osaka J. Math., to be published in Vol. 14, No.1,
1977.
27. R. M. Solovay, "Draft of a paper ... on Chaitin's work
. . . done for the most part during the period of Sept. - Dec.
1974," unpublished manuscript, IBM Thomas J. Watson
Research Center, Yorktown Heights, NY, May, 1975.
28. R. M. Solovay, "On Random R. E. Sets," Proceedings of
the Third Latin American Symposium on Mathematical
Logic, Campinas, Brazil, July, 1976. To be published.
29. A. K. Zvonkin and L. A. Levin, "The Complexity of Finite
Objects and the Development of the Concepts of Informa-
tion and Randomness by Means of the Theory of Algo-
rithms," Russ. Math. Surv. 25, No.6, 83 (1970).
30. L. A. Levin, "On the Notion of a Random Sequence," So-
viet Math. Dokl. 14, 1413 (1973).
31. P. Gács, "On the Symmetry of Algorithmic Information," Soviet Math. Dokl. 15, 1477 (1974). "Corrections," Soviet Math. Dokl. 15, No. 6, v (1974).
32. L. A. Levin, "Laws of Information Conservation (Nongrowth) and Aspects of the Foundation of Probability Theory," Prob. Info. Transmission 10, 206 (1974).
33. L. A. Levin, "Uniform Tests of Randomness," Soviet
Math. Dokl. 17, 337 (1976).
34. L. A. Levin, "Various Measures of Complexity for Finite
Objects (Axiomatic Description)," Soviet Math. Dokl. 17,
522 (1976).
35. L. A. Levin, "On the Principle of Conservation of Informa-
tion in Intuitionistic Mathematics," Soviet Math. Dokl. 17,
601 (1976).
36. D. E. Knuth, Seminumerical Algorithms. The Art of Com-
puter Programming, Volume 2, Addison-Wesley Publishing
Co., Inc., Reading, MA, 1969. See Ch. 2, "Random Num-
bers," p. 1.
37. D. W. Loveland, "A Variant of the Kolmogorov Concept
of Complexity," Info. Control 15, 510 (1969).
38. T. L. Fine, Theories of Probability-An Examination of
Foundations, Academic Press, Inc., New York, 1973. See
Ch. V, "Computational Complexity, Random Sequences,
and Probability," p. 118.
39. J. T. Schwartz, On Programming: An Interim Report on
the SETL Project. Installment I: Generalities, Lecture
Notes, Courant Institute of Mathematical Sciences, New
York University, 1973. See Item 1, "On the Sources of
Difficulty in Programming," p. 1, and Item 2, "A Second
General Reflection on Programming," p. 12.
40. T. Kamae, "On Kolmogorov's Complexity and Informa-
tion," Osaka J. Math. 10, 305 (1973).
41. C. P. Schnorr, "Process Complexity and Effective Random Tests," J. Comput. Syst. Sci. 7, 376 (1973).
42. M. E. Hellman, "The Information Theoretic Approach to
Cryptography," Information Systems Laboratory, Center
for Systems Research, Stanford University, April, 1974.
43. W. L. Gewirtz, "Investigations in the Theory of Descrip-
tive Complexity," Courant Computer Science Report 5,
Courant Institute of Mathematical Sciences, New York
University, October, 1974.
44. R. P. Daley, "Minimal-program Complexity of Pseudo-
recursive and Pseudo-random Sequences," Math. Syst.
Theor. 9, 83 (1975).
45. R. P. Daley, "Noncomplex Sequences: Characterizations
and Examples," J. Symbol. Logic 41, 626 (1976).
46. J. Gruska, "Descriptional Complexity (of Languages) - A Short Survey," Mathematical Foundations of Computer Science 1976, A. Mazurkiewicz, ed., Lecture Notes in Computer Science 45, Springer-Verlag, Berlin, 1976, p. 65.
47. J. Ziv, "Coding Theorems for Individual Sequences," un-
dated manuscript, Bell Laboratories, Murray Hill, NJ.
48. R. M. Solovay, "A Model of Set-theory in which Every Set
of Reals is Lebesgue Measurable," Ann. Math. 92, 1
(1970).
49. R. Solovay and V. Strassen, "A Fast Monte-Carlo Test
for Primality," SIAM J. Comput. 6, 84 (1977).
50. G. H. Hardy, A Course of Pure Mathematics, Tenth edi-
tion, Cambridge University Press, London, 1952. See
Section 218, "Logarithmic Tests of Convergence for Series
and Integrals," p. 417.
51. M. Gardner, "A Collection of Tantalizing Fallacies of
Mathematics," Sci. Amer. 198, No.1, 92 (1958).
52. B. Russell, "Mathematical Logic as Based on the Theory of Types," From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, J. van Heijenoort, ed., Harvard University Press, Cambridge, MA, 1967, p. 153; reprinted from Amer. J. Math. 30, 222 (1908).
53. M. Levin, "Mathematical Logic for Computer Scientists,"
MIT Project MAC TR-131, June, 1974, pp. 145, 153.
54. J. von Neumann, Theory of Self-reproducing Automata,
University of Illinois Press, Urbana, 1966; edited and
completed by A. W. Burks.
55. C. H. Bennett, "On the Thermodynamics of Computation,"
undated manuscript, IBM Thomas J. Watson Research
Center, Yorktown Heights, NY.
56. C. H. Bennett, "Logical Reversibility of Computation,"
IBM J. Res. Develop. 17, 525 (1973).
Received February 2, 1977; revised March 9, 1977
The author is located at the IBM Thomas J. Watson
Research Center, Yorktown Heights, New York 10598.