This article introduces a general notion of physical bit information that is compatible with the basics of quantum mechanics and incorporates the Shannon entropy as a special case. This notion of physical information leads to the Binary Data Matrix model (BDM), which predicts the basic results of quantum mechanics, general relativity, and black hole thermodynamics. The compatibility of the model with the holographic principle, information conservation, and Landauer's principle is investigated. After deriving the "Bit Information principle" as a consequence of the BDM, the fundamental equations of Planck, de Broglie, and Bekenstein, as well as mass-energy equivalence, are derived.
Sensing Method for Two-Target Detection in Time-Constrained Vector Poisson Ch... (sipij)
We consider an experimental design problem in which there are two Poisson sources with two
possible and known rates, and one counter. Through a switch, the counter can observe
the sources individually or the counts can be combined so that the counter observes the
sum of the two. The sensor scheduling problem is to determine an optimal proportion of
the available time to be allocated toward individual and joint sensing, under a total time
constraint. Two different metrics are used for optimization: mutual information between
the sources and the observed counts, and probability of detection for the associated
source detection problem. Our results, which are primarily computational, indicate
similar but not identical results under the two cost functions.
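To make the first metric concrete: the mutual information between a binary rate hypothesis and a Poisson count grows with observation time. A minimal sketch (the two rate values and the uniform prior are illustrative assumptions, not taken from the paper):

```python
import math

def poisson_pmf(k, mu):
    # log-space evaluation avoids float overflow for large k
    if mu == 0.0:
        return 1.0 if k == 0 else 0.0
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

def mutual_information(rates, t, prior=0.5, kmax=200):
    """I(X; N) in bits, where X picks one of the two `rates` with equal prior
    and N ~ Poisson(rate * t) is the count observed over time t."""
    mi = 0.0
    for k in range(kmax):
        p_k = sum(prior * poisson_pmf(k, r * t) for r in rates)
        if p_k == 0.0:
            continue
        for r in rates:
            p_kx = poisson_pmf(k, r * t)
            if p_kx > 0.0:
                mi += prior * p_kx * math.log2(p_kx / p_k)
    return mi
```

With rates 1.0 and 3.0 the information is zero at t = 0 and approaches the one-bit entropy of the hypothesis as t grows; sweeping the time split among the individual sources and the sum channel against such curves is the scheduling question the paper studies.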
Quantum information as the information of infinite series (Vasil Penchev)
Quantum information, as introduced by quantum mechanics, is equivalent to a generalization of classical information from finite to infinite series or collections. The quantity of information is the quantity of choices, measured in units of elementary choice. The qubit can be interpreted as a generalization of the bit: a choice among a continuum of alternatives. The axiom of choice is necessary for quantum information. After measurement, the coherent state is transformed into a well-ordered series of results in time. The quantity of quantum information is the ordinal corresponding to the infinite series in question.
The document discusses calculating and plotting the allocation of entropy for bipartite and tripartite quantum systems. It provides tables of entropy calculations for a bipartite system of |0> and |1> states, and checks that the results satisfy the subadditivity inequality. It also outlines the methodology to perform similar calculations and plots for other systems to visualize the convex cones of entropy allocation.
Application for Logical Expression Processing (csandit)
Processing of logical expressions, especially conversion from conjunctive normal form (CNF) to disjunctive normal form (DNF), is a very common problem in many aspects of information retrieval and processing. There are existing solutions for symbolic logical calculation, but none of them offers CNF-to-DNF conversion. A new application for this purpose is presented in this paper.
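The conversion itself distributes conjunctions over disjunctions. A minimal sketch (the clause encoding as sets of string literals, with '~' marking negation, is an assumption, not the application's interface):

```python
from itertools import product

def cnf_to_dnf(cnf):
    """CNF as a list of clauses, each a set of literals such as {'a', '~b'};
    returns a DNF as a list of terms (sets of literals)."""
    def negate(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    terms = []
    for combo in product(*cnf):                  # one literal from each clause
        term = set(combo)
        if any(negate(lit) in term for lit in term):
            continue                             # drop contradictory terms
        if term not in terms:
            terms.append(term)
    # absorption: a term strictly containing another term is redundant
    return [t for t in terms if not any(o < t for o in terms)]
```

For example, (a or b) and (a or c) reduces to a or (b and c), and a clause set containing both a and ~a yields the empty (unsatisfiable) DNF.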
Measuring Social Complexity and the Emergence of Cooperation from Entropic Pr... (IJEAB)
This document presents a theoretical framework for measuring social complexity based on the relationship between complexity, entropy, and evolutionary dynamics. It uses entropy principles to study the emergence of cooperation in social systems. The collapse of the Rapa Nui civilization is used as a case study. The framework defines social complexity using information-theoretic entropy measures and relates higher entropy production and cooperation to greater sustainability. Cooperation emerges when average fitness exceeds local fitness.
Accelerating materials property predictions using machine learning (Ghanshyam Pilania)
The materials discovery process can be significantly expedited and simplified if we can learn effectively from available knowledge and data. In the present contribution, we show that efficient and accurate prediction of a diverse set of properties of material systems is possible by employing machine (or statistical) learning
methods trained on quantum mechanical computations in combination with the notions of chemical similarity. Using a family of one-dimensional chain systems, we present a general formalism that allows us to discover decision rules that establish a mapping between easily accessible attributes of a system and its properties. It is shown that fingerprints based on either chemo-structural (compositional and configurational information) or the electronic charge density distribution can be used to make ultra-fast, yet accurate, property predictions. Harnessing such learning paradigms extends recent efforts to systematically explore and mine vast chemical spaces, and can significantly accelerate the discovery of new application-specific materials.
PCA is a technique to reduce the dimensionality of multivariate data while retaining essential information. It works by transforming the data to a new coordinate system such that the greatest variance by any projection of the data lies on the first coordinate, called the first principal component. Subsequent components account for remaining variance while being orthogonal to previous components. PCA is performed by computing the eigenvalues and eigenvectors of the covariance matrix of the data, with the principal components being the eigenvectors. This allows visualization and interpretation of high-dimensional data in lower dimensions.
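For two variables the whole procedure fits in a few lines. A minimal sketch in pure Python (the 2-D closed form for the eigendecomposition is an illustrative simplification; real data would use a linear-algebra library):

```python
import math

def pca_2d(points):
    """PCA for 2-D data via eigendecomposition of the covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    # eigenvalues of [[sxx, sxy], [sxy, syy]] from the characteristic polynomial
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(tr * tr / 4.0 - det)
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc   # variances along the components
    # unit eigenvector for l1: the first principal component
    if abs(sxy) > 1e-12:
        v = (l1 - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (l1, l2), (v[0] / norm, v[1] / norm)
```

Points lying exactly on the line y = 2x give a second eigenvalue of zero and a first component pointing along that line, matching the description above: all variance on the first coordinate, the rest orthogonal.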
The mathematical and philosophical concept of vector (George Mpantes)
What lies behind the physical phenomena of velocity and force? The mathematical concept of the vector. This is a new concept: a force has direction, sense, and magnitude, and we accept the physical principle that the forces exerted on a body add according to the parallelogram rule. This is Newton's first axiom. Newton essentially requires that force be a "vectorial" quantity without stating it explicitly, while Galileo applies the principle of the independence of forces.
Informational Nature of Dark Matter and Dark Energy and the Cosmological Cons... (OlivierDenis15)
In this article, realistic quantitative estimates of dark matter and dark energy, considered as informational phenomena, have been computed, thereby explaining certain anomalies and effects within the universe. Moreover, by the same conceptual approach, the discrepancy of the cosmological constant problem has been reduced by almost 120 orders of magnitude in the prediction of the vacuum energy from a quantum point of view. We argue that dark matter is an informational field with finite and quantifiable negative mass, distinct from the conventional matter fields of quantum field theory and associated with the number of bits of information in the observable universe, while dark energy is negative energy, calculated as the energy associated with dark matter. Since dark energy is vacuum energy, it emerges from dark matter as the collective potential of all particles with their individual zero-point energy via Landauer's principle.
This document describes rsnSieve, a MATLAB application that automatically selects resting state networks from fMRI data. It uses independent component analysis to decompose fMRI signals, then applies several processing steps, such as Pearson's index evaluation, k-means clustering, and power spectral analysis, to identify and select the components corresponding to resting state networks. The selected networks are then used for further analysis.
Abstract: Complex dynamical reaction networks consisting of many components that interact and produce each other are difficult to understand, especially when new component types may appear and present component types may vanish completely. Inspired by Fontana and Buss (Bull. Math. Biol., 56, 1-64), we outline a theory to deal with such systems. The theory consists of two parts. The first part introduces the concept of a chemical organisation as a closed and self-maintaining set of components. This concept allows one to map a complex (reaction) network to its set of organisations, providing a new view of the system's structure. The second part connects dynamics with the set of organisations, which allows one to map a movement of the system in state space to a movement in the set of organisations. The relevance of our theory is underlined by a theorem which says that, given a differential equation describing the chemical dynamics of the network, every stationary state is an instance of an organisation. For demonstration, the theory is applied to a small model of HIV-immune system interaction by Wodarz and Nowak (Proc. Natl. Acad. Sci. USA, 96, 14464-14469) and to a large model of the sugar metabolism of E. coli by Puchalka and Kierzek (Biophys. J., 86, 1357-1372). In both cases organisations were uncovered which could be related to functions.
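The closure half of the organisation concept has a simple operational reading: keep adding the products of every reaction whose reactants are already present until nothing changes. A minimal sketch (the encoding of reactions as (reactants, products) set pairs is an assumption, and the self-maintenance flux condition from the second half of the definition is omitted):

```python
def closure(species, reactions):
    """Smallest closed superset of `species`: add the products of every
    reaction whose reactants are all present, until a fixed point."""
    s = set(species)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            if reactants <= s and not products <= s:
                s |= products
                changed = True
    return s
```

For the toy network a + b -> c, c -> d, the set {a, b} closes to {a, b, c, d}, while {a} alone is already closed.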
1. The document discusses the history and concepts of set theory, including how it was founded by Georg Cantor and how work by Zermelo and Fraenkel led to the commonly used ZFC set of axioms.
2. Various concepts in set theory are defined, such as empty sets, singleton sets, finite and infinite sets, unions, intersections, differences, and subsets.
3. The document also discusses applications of set theory and related fields like fuzzy logic, rough set theory, and how fuzzy set theory has been applied in rock engineering characterization.
The document summarizes early neural network models from the 1940s including McCulloch and Pitts' threshold model neuron and Hebb's learning rule. It then describes Rosenblatt's perceptron model from 1958 which consists of input neurons connected to output neurons that learn to classify patterns through a training algorithm. The perceptron convergence theorem guarantees the algorithm will find a solution if the classes are linearly separable in the feature space. However, the perceptron has limitations as many real problems are not linearly separable.
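The training loop described above is short enough to state directly. A minimal sketch of Rosenblatt's update rule on a linearly separable toy problem (the AND function is an illustrative choice):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Rosenblatt's rule: on each mistake, w <- w + lr*(target - output)*x."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if out != target:
                errors += 1
                for i in range(n):
                    w[i] += lr * (target - out) * x[i]
                b += lr * (target - out)
        if errors == 0:   # convergence theorem: guaranteed if separable
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

On AND the algorithm converges in a handful of epochs; on XOR, the classic non-separable case, it never does, which is exactly the limitation noted above.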
REPRESENTATION OF UNCERTAIN DATA USING POSSIBILISTIC NETWORK MODELS (cscpconf)
Uncertainty is pervasive in real-world environments: vagueness is associated with the difficulty of making sharp distinctions, and ambiguity with situations in which the choice among several precise alternatives cannot be perfectly resolved. Analysis of large collections of uncertain data is a primary task in real-world applications, because data are incomplete, inaccurate, and noisy. Uncertain data can be represented in various forms, such as data stream models, linkage models, and graphical models, which offer a simple, natural way to process the data and produce optimized results through query processing. In this paper, we propose that the uncertain data model can be represented as a possibilistic data model, and vice versa, using data models such as the possibilistic linkage model, data streams, and possibilistic graphs. The paper presents the representation and processing of the possibilistic linkage model through possible worlds using a product-based operator.
The document discusses Einstein's field equations and Heisenberg's uncertainty principle. It begins by providing background on Einstein's field equations, which relate the geometry of spacetime to the distribution of mass and energy within it. It then discusses some key mathematical aspects of the field equations, including their nonlinear partial differential form. Finally, it notes that the field equations can be consolidated with Heisenberg's uncertainty principle to provide a unified description of gravity and quantum mechanics.
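For reference, the two relations being consolidated can be written in their standard forms (these are the textbook statements, not equations quoted from the document):

```latex
% Einstein's field equations: spacetime geometry determined by mass-energy content
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}

% Heisenberg's uncertainty principle for position and momentum
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

The left-hand side of the field equations is the nonlinear geometric part (Einstein tensor plus cosmological term); the right-hand side is the stress-energy tensor, which is why the equations form a system of nonlinear partial differential equations.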
The document discusses clustering documents using a multi-viewpoint similarity measure. It begins with an introduction to document clustering and common similarity measures like cosine similarity. It then proposes a new multi-viewpoint similarity measure that calculates similarity between documents based on multiple reference points, rather than just the origin. This allows a more accurate assessment of similarity. The document outlines an optimization algorithm used to cluster documents by maximizing the new similarity measure. It compares the new approach to existing document clustering methods and similarity measures.
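The core idea, measuring similarity relative to many reference points instead of only the origin, can be sketched as follows (the exact averaging set used in the paper differs; here the reference points are passed in explicitly as an illustrative simplification):

```python
import math

def cosine(u, v):
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def mv_similarity(di, dj, references):
    """Average cosine of (di - dh, dj - dh) over reference points dh,
    instead of viewing both documents from the origin alone."""
    sims = [cosine([a - c for a, c in zip(di, dh)],
                   [b - c for b, c in zip(dj, dh)]) for dh in references]
    return sum(sims) / len(sims)
```

With the origin as the single reference point this reduces to plain cosine similarity, which makes the relationship between the two measures easy to see.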
A Rough Set based Fuzzy Inference System for Mining Temporal Medical Databases (ijsc)
This document describes a proposed system for constructing a fuzzy temporal rule-based classifier to mine temporal patterns in medical databases. The system uses fuzzy rough set theory and temporal logic. It performs feature selection using a hybrid genetic algorithm on a diabetes dataset. A fuzzy decision table is constructed from the lower approximations. Rules are then generated by transforming the rule-based classifier into a fuzzy inference system using Allen's temporal algebra. The rules are used to predict patient conditions and the severity of diabetes. Experiments on a diabetic dataset showed the approach can accurately predict disease severity and support clinical decision making.
A ROUGH SET BASED FUZZY INFERENCE SYSTEM FOR MINING TEMPORAL MEDICAL DATABASES (ijsc)
The main objective of this research work is to construct a fuzzy temporal rule-based classifier that uses fuzzy rough sets and temporal logic in order to mine temporal patterns in medical databases. Lower-approximation concepts and a fuzzy decision table with fuzzy features are used to obtain fuzzy decision classes for building the classifier. The goals are pre-processing for feature selection, construction of the classifier, and rule induction based on an incremental rough set approach. The features are selected using a hybrid genetic algorithm. The elementary sets obtained from the lower approximations are then categorized into decision classes. Based on the decision classes, a discernibility vector is constructed to define the temporal consistency degree among the objects. The rule-based classifier is then transformed into a temporal rule-based fuzzy inference system by incorporating Allen's temporal algebra to induce rules. It is proposed to use an incremental rough set to update rule induction in dynamic databases. Ultimately these rules are expressed as rules with range values to perform prediction effectively. The efficiency of the approach is compared with other classifiers in order to assess the accuracy of the fuzzy temporal rule-based classifier. Experiments have been carried out on a diabetic dataset, and the simulation results prove that the proposed temporal rule-based classifier serves as evidence for predicting the severity of the disease and supports precision in clinical decision support.
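Since the rule induction leans on Allen's temporal algebra, it may help to recall what the interval relations look like. A minimal sketch covering a few of the thirteen relations (the relation names are standard; the encoding of intervals as (start, end) pairs is an assumption):

```python
def allen_relation(a, b):
    """Classify interval a against interval b using a subset of
    Allen's thirteen interval relations."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    if s2 < s1 and e1 < e2:
        return "during"
    if s1 == s2 and e1 == e2:
        return "equal"
    return "other"  # remaining relations (starts, finishes, inverses) omitted
```

A temporal rule in such a system is then a condition like "hyperglycemia interval *overlaps* medication interval", evaluated with exactly these predicates.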
On The Correct Formulation Of The Law Of The External Photoelectric Effect (I... (ijifr)
The document proposes a critical analysis of Einstein's formulation of the law of the external photoelectric effect. It argues that Einstein's formula violates logical laws because it relates quantities that characterize different material objects (photon, electron in metal, electron not in metal). The document then presents a new, correct mathematical formulation of the law based on the relationship between relative increments of the photon energy and emitted electron energy. This proportion relationship is argued to satisfy logical identity laws and correctly describe the photoelectric effect process.
IJIFR- Volume 4 Issue 1, September 2016 vikas sharma
The Journal aims to publish quality material that contributes to the accumulation of dynamic knowledge able to revitalize and foster the research and development carried out in different disciplines. IJIFR brings the understanding of various faculties onto one platform, giving access to accurate information and new innovations, provides the most rapid turn-around time possible for reviewing and publishing, and disseminates articles freely for teaching and reference purposes. Its multidisciplinary approach is deliberate, bringing the world's eminent intellectuals onto one platform to illuminate the world of knowledge. The journal follows a blind peer-review system in order to provide a high-quality intellectual platform for researchers across the world, thereby bringing total transparency to its review system. It delivers a platform for genuine, speculative, knowledgeable research with the vision to understand fact-finding experiences that describe significant developments in changing global scenarios. IJIFR provides access not only to ultimate research resources but, through its high-quality professionals, is ambitious to bring a significant transformation to the domain of open-access journals and online publishing. All manuscripts are reviewed fairly based on the intellectual content of the paper, regardless of the gender, race, ethnicity, religion, citizenship, or political values of the author(s). IJIFR provides free access to research information to the international community without financial, legal, or technical barriers. All submitted articles should report original, previously unpublished research results, experimental or theoretical, and must not be under consideration for publication elsewhere. All accepted papers of the journal will be processed for indexing in citation databases that track citation frequency and data for each paper.
Contributions will therefore be welcomed from practitioners, researchers, scholars and professional experts working in private, public and other organizations or industries.
25 16285 a hybrid ijeecs 012 edit septian (IAES IJEECS)
Feature selection aims to choose an optimal subset of features that is necessary and sufficient to improve the generalization performance and running efficiency of the learning algorithm. To obtain the optimal subset in the feature selection process, a hybrid feature selection method based on mutual information and a genetic algorithm is proposed in this paper. To make full use of the advantages of the filter and wrapper models, the algorithm is divided into two phases: the filter phase and the wrapper phase. In the filter phase, the algorithm first uses mutual information to rank the features, providing heuristic information for the subsequent genetic algorithm and accelerating its search. In the wrapper phase, the genetic algorithm is used as the search strategy, with classifier performance and subset dimension as the evaluation criteria, to search for the best subset of features. Experimental results on benchmark datasets show that the proposed algorithm achieves higher classification accuracy and smaller feature dimension, and that its running time is less than that of the genetic algorithm alone.
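The two phases can be sketched compactly. A toy version with an illustrative fitness function (the population size, mutation rate, and seeding scheme are assumptions, not the paper's settings):

```python
import math
import random
from collections import Counter

def mutual_info(xs, ys):
    """I(X; Y) in bits for two discrete sequences of equal length."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def filter_phase(features, labels):
    """Filter phase: rank feature indices by mutual information with labels."""
    scores = [(mutual_info(col, labels), i) for i, col in enumerate(features)]
    return [i for _, i in sorted(scores, reverse=True)]

def wrapper_phase(ranking, fitness, pop=12, gens=30, seed=0):
    """Wrapper phase: toy GA over feature subsets, seeded from the MI ranking."""
    rng = random.Random(seed)
    n = len(ranking)
    # heuristic seeding: prefixes of the MI ranking, padded with random masks
    population = [{ranking[j] for j in range(k + 1)}
                  for k in range(min(pop // 2, n))]
    while len(population) < pop:
        population.append({i for i in range(n) if rng.random() < 0.5})
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            child = {i for i in range(n)
                     if (i in a if rng.random() < 0.5 else i in b)}
            if rng.random() < 0.2:          # mutation: flip one feature bit
                child ^= {rng.randrange(n)}
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

The fitness function is where classifier accuracy and subset dimension would be combined; in the sketch any callable scoring a set of feature indices will do.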
Calculation of the Minimum Computational Complexity Based on Information Entropy (ijcsa)
In order to find the limiting speed of solving a specific problem using a computer, this essay provides a method based on information entropy. The relationship between the minimum computational complexity and the change in information entropy is illustrated, and a few examples serve as evidence of this connection. Some basic rules for modeling problems are also established. Finally, the nature of solving problems with computer programs is examined to support this theory, and a redefinition of information entropy in this field is proposed. This could develop into a new field of science.
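A classic instance of this connection is the comparison-sorting lower bound: distinguishing the n! possible input orderings requires at least log2(n!) yes/no comparisons, since each comparison yields at most one bit of information. A minimal sketch:

```python
import math

def sort_comparison_lower_bound(n):
    """Entropy argument: identifying one of n! orderings needs log2(n!)
    bits, and each binary comparison supplies at most 1 bit."""
    return math.ceil(math.log2(math.factorial(n)))
```

By Stirling's approximation this bound grows as n log n, which is why comparison sorts such as merge sort are asymptotically optimal.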
This document presents an algorithm for interactively learning monotone Boolean functions. The algorithm is based on Hansel's lemma, which states that algorithms based on finding maximal upper zeros and minimal lower units are optimal for learning monotone Boolean functions. The algorithm allows decreasing the number of queries needed to learn non-monotone functions that can be represented as combinations of monotone functions. The effectiveness of the approach is demonstrated through computational experiments in engineering and medical applications.
This document presents an algorithm for interactively learning monotone Boolean functions from examples. The algorithm is based on Hansel's lemma, by which the optimal worst-case number of membership queries for a monotone Boolean function of n variables is C(n, ⌊n/2⌋) + C(n, ⌊n/2⌋+1). The algorithm learns the target function by finding its maximal upper zeros and minimal lower units, representing the borders of the negative and positive patterns, respectively. The algorithm is optimal in the sense that it minimizes the maximum number of queries needed for any monotone Boolean function.
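The border representation can be illustrated with a simple query-saving learner: visiting vectors in order of increasing weight, any vector already above a known minimal true vector needs no query. This is only an illustrative sketch, not the paper's algorithm (Hansel's chain-based scheme saves substantially more queries):

```python
from itertools import product

def learn_monotone(n, oracle):
    """Recover a monotone Boolean function via membership queries by
    collecting its minimal true vectors (the 'minimal lower units')."""
    minimal_units = []
    for v in sorted(product((0, 1), repeat=n), key=sum):
        if any(all(m[i] <= v[i] for i in range(n)) for m in minimal_units):
            continue                 # already implied true: no query needed
        if oracle(v):
            minimal_units.append(v)  # visited in weight order, so minimal
    return lambda v: any(all(m[i] <= v[i] for i in range(n))
                         for m in minimal_units)
```

Because vectors are visited in weight order, a true vector not implied by earlier units can have no smaller true vector below it, so the collected set is exactly the lower border of the positive region.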
The philosophy of fuzzy logic was formed by introducing the membership degree of a linguistic value or variable in place of the bivalent membership of 0 or 1. The membership degree is obtained by mapping the variable onto the graphical shape of a fuzzy number. Because of their simplicity and convenience, triangular fuzzy numbers (TFN) are widely used in many kinds of fuzzy analysis problems. This paper suggests a simple method using statistical data and a frequency chart for constructing a non-isosceles TFN when direct rating is used to evaluate a variable on a predefined scale. In this method, the relation between assessment uncertainty and statistical parameters such as the mean value and the standard deviation is established in a way that yields an exclusive triangular number for each set of data. The proposed method, guided by the graphical shape of the frequency chart, distributes the standard deviation around the mean value and forms the TFN with a membership degree of 1 at the mean value. In the last section of the paper, a modification of the proposed method is presented through a practical case study.
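A symmetric baseline of the construction can be sketched as follows (the paper's method makes the triangle non-isosceles by distributing the standard deviation according to the frequency chart; this simplification spreads it symmetrically):

```python
def make_tfn(data, spread=1.0):
    """Triangular fuzzy number (a, m, b) from the sample mean and standard
    deviation: peak at the mean, feet one spread*sd to each side."""
    n = len(data)
    m = sum(data) / n
    sd = (sum((x - m) ** 2 for x in data) / (n - 1)) ** 0.5
    return (m - spread * sd, m, m + spread * sd)

def membership(tfn, x):
    """Piecewise-linear membership degree of x in the TFN (a, m, b)."""
    a, m, b = tfn
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)
```

The mean value gets membership degree 1 and the degree falls linearly to 0 at the feet, which is the behaviour the paper's non-isosceles variant refines.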
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING: FINDING ALL THE POTENTIAL MI... (IJDKP)
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a σ value, a hyper-parameter which can be manually defined and manipulated to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is a challenging task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of size σ, there is only one minimum. This bound is useful not only for the number of solutions to look for by numerical means; it also allows us to propose a new numerical approach "per block". This technique decreases the number of particles by approximating some groups of particles with weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics, and other applications.
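The potential whose minima are sought can be written down directly in one dimension. A minimal sketch up to the additive constant E (the two-cluster test data are illustrative):

```python
import math

def qc_potential(x, points, sigma):
    """1-D quantum-clustering potential, up to the constant E:
    V(x) = sum(d_i**2 * g_i) / (2*sigma**2 * sum(g_i)) - 1/2,
    where d_i = x - x_i and g_i = exp(-d_i**2 / (2*sigma**2))."""
    weights = [math.exp(-((x - p) ** 2) / (2.0 * sigma ** 2)) for p in points]
    psi = sum(weights)                       # the Parzen-window wave function
    num = sum(((x - p) ** 2) * w for p, w in zip(points, weights))
    return num / (2.0 * sigma ** 2 * psi) - 0.5
```

Consistent with the single-minimum bound in the abstract, points packed within a σ-sized region produce one basin, while well-separated groups produce one minimum near each group's center.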
Call for Papers - 5th International Conference on Cloud, Big Data and IoT (CB... (ijistjournal)
5th International Conference on Cloud, Big Data and IoT (CBIoT 2024) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Cloud, Big Data and IoT. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the area of Cloud, Big Data and IoT.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in Cloud, Big Data and IoT.
PERFORMANCE ANALYSIS OF PARALLEL IMPLEMENTATION OF ADVANCED ENCRYPTION STANDA... (ijistjournal)
Cryptography is the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication. Most cryptographic algorithms function more efficiently when implemented in hardware than in software running on a single processor. However, systems that use hardware implementations have significant drawbacks: they are unable to respond to flaws discovered in the implemented algorithm or to changes in standards. As an alternative, it is possible to implement cryptographic algorithms in software running on multiple processors. However, most cryptographic algorithms like DES (Data Encryption Standard) or 3DES have drawbacks when implemented in software: DES is no longer secure as computers get more powerful, while 3DES is relatively sluggish in software. AES (Advanced Encryption Standard), which is rapidly being adopted worldwide, provides a better combination of performance and network security than DES or 3DES by being computationally more efficient than these earlier standards. Furthermore, by supporting large key sizes of 128, 192, and 256 bits, AES offers higher security against brute-force attacks.
In this paper, AES has been implemented on a single processor. The result has then been compared with parallel implementations of AES, varying parameters such as key size, number of rounds, and extended key size, showing how a parallel implementation of AES offers better performance while remaining flexible.
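The structural reason a block cipher parallelises in ECB-style operation is that the blocks are independent, so they can be encrypted concurrently. A toy sketch with a stand-in block function (NOT real AES, and real deployments avoid plain ECB for security reasons; chained modes like CBC do not parallelise on encryption):

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16  # AES block size in bytes

def toy_block_cipher(block, key):
    """Stand-in for one AES block encryption (a simple XOR, illustration only)."""
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt_parallel(data, key, workers=4):
    """ECB-style processing: blocks are independent, so they map across workers."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        out = pool.map(lambda blk: toy_block_cipher(blk, key), blocks)
    return b"".join(out)
```

Since the XOR stand-in is an involution, applying the function twice recovers the plaintext, which makes the sketch easy to check.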
Similar to A PHYSICAL THEORY OF INFORMATION VS. A MATHEMATICAL THEORY OF COMMUNICATION
PCA is a technique to reduce the dimensionality of multivariate data while retaining essential information. It works by transforming the data to a new coordinate system such that the greatest variance by any projection of the data lies on the first coordinate, called the first principal component. Subsequent components account for remaining variance while being orthogonal to previous components. PCA is performed by computing the eigenvalues and eigenvectors of the covariance matrix of the data, with the principal components being the eigenvectors. This allows visualization and interpretation of high-dimensional data in lower dimensions.
The mathematical and philosophical concept of vector (George Mpantes)
What lies behind the physical phenomena of velocity and force? The mathematical concept of the vector. This is a new concept: a force has direction, sense, and magnitude, and we accept the physical principle that the forces exerted on a body add according to the parallelogram rule. This is Newton's first axiom. Newton effectively requires that force be a "vectorial" quantity without stating so explicitly, and Galileo applies the principle of the independence of forces.
Informational Nature of Dark Matter and Dark Energy and the Cosmological Cons... (OlivierDenis15)
In this article, realistic quantitative estimation of dark matter and dark energy considered as informational phenomena
have been computed, thereby explaining certain anomalies and effects within the universe. Moreover, by the same conceptual
approach, the cosmological constant problem has been reduced by almost 120 orders of magnitude in the prediction of the
vacuum energy from a quantum point of view. We argue that dark matter is an informational field with finite and quantifiable
negative mass, distinct from the conventional fields of matter of quantum field theory and associated with the number of bits of
information in the observable universe, while dark energy is negative energy, calculated as the energy associated with dark
matter. Since dark energy is vacuum energy, it emerges from dark matter as a collective potential of all particles with their
individual zero-point energy via Landauer's principle
This document describes rsnSieve, a MATLAB application that automatically selects resting state networks from fMRI data. It uses independent component analysis to decompose fMRI signals, then applies several processing steps, such as Pearson's index evaluation, k-means clustering, and power spectral analysis, to identify and select the components corresponding to resting state networks. The selected networks are then used for further analysis.
Abstract: Complex dynamical reaction networks consisting of many components that interact and produce each other are difficult to understand, especially when new component types may appear and present component types may vanish completely. Inspired by Fontana and Buss (Bull. Math. Biol., 56, 1–64) we outline a theory to deal with such systems. The theory consists of two parts. The first part introduces the concept of a chemical organisation as a closed and self-maintaining set of components. This concept allows one to map a complex (reaction) network to the set of organisations, providing a new view on the system's structure. The second part connects dynamics with the set of organisations, which allows one to map a movement of the system in state space to a movement in the set of organisations. The relevance of our theory is underlined by a theorem which says that, given a differential equation describing the chemical dynamics of the network, every stationary state is an instance of an organisation. For demonstration, the theory is applied to a small model of HIV-immune system interaction by Wodarz and Nowak (Proc. Natl. Acad. Sci. USA, 96, 14464–14469) and to a large model of the sugar metabolism of E. coli by Puchalka and Kierzek (Biophys. J., 86, 1357–1372). In both cases organisations were uncovered which could be related to functions.
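The closure half of that definition can be illustrated with a fixed-point iteration over a toy reaction network (the network below is our own illustration, not one of the cited models):

```python
# Closure sketch: starting from a species set, repeatedly add the products of
# every reaction whose reactants are all already present, until nothing changes.
def closure(species, reactions):
    closed = set(species)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            if set(reactants) <= closed and not set(products) <= closed:
                closed |= set(products)
                changed = True
    return closed

# Hypothetical toy network: a + b -> c, c -> d
reactions = [({"a", "b"}, {"c"}), ({"c"}, {"d"})]
org = closure({"a", "b"}, reactions)     # {'a', 'b', 'c', 'd'}
```

Starting from {a, b} the closure pulls in c and then d; starting from {a} alone, no reaction fires and the set is already closed.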
1. The document discusses the history and concepts of set theory, including how it was founded by Georg Cantor and how work by Zermelo and Fraenkel led to the commonly used ZFC set of axioms.
2. Various concepts in set theory are defined, such as empty sets, singleton sets, finite and infinite sets, unions, intersections, differences, and subsets.
3. The document also discusses applications of set theory and related fields like fuzzy logic, rough set theory, and how fuzzy set theory has been applied in rock engineering characterization.
The document summarizes early neural network models from the 1940s including McCulloch and Pitts' threshold model neuron and Hebb's learning rule. It then describes Rosenblatt's perceptron model from 1958 which consists of input neurons connected to output neurons that learn to classify patterns through a training algorithm. The perceptron convergence theorem guarantees the algorithm will find a solution if the classes are linearly separable in the feature space. However, the perceptron has limitations as many real problems are not linearly separable.
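Rosenblatt's training rule is compact enough to sketch directly; the AND function below is a stock linearly separable example, not drawn from the summarized document:

```python
import numpy as np

# Perceptron sketch: on each misclassified example, nudge the weights toward
# the target. The convergence theorem guarantees this stops on separable data.
def train_perceptron(X, y, lr=0.1, epochs=50):
    w = np.zeros(X.shape[1] + 1)         # weights, with w[0] as the bias
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if w[0] + xi @ w[1:] > 0 else 0
            w[1:] += lr * (target - pred) * xi
            w[0] += lr * (target - pred)
            errors += int(pred != target)
        if errors == 0:                  # converged
            break
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])               # logical AND: linearly separable
w = train_perceptron(X, y)
```

On a non-separable target such as XOR the same loop would simply exhaust its epochs, which is the limitation the summary notes.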
REPRESENTATION OF UNCERTAIN DATA USING POSSIBILISTIC NETWORK MODELS (cscpconf)
Uncertainty is pervasive in real-world environments. It arises from vagueness, associated with the difficulty of making sharp distinctions, and from ambiguity, associated with situations in which the choice among several precise alternatives cannot be perfectly resolved. Analysis of large collections of uncertain data is a primary task in real-world applications, because data are incomplete, inaccurate, and inefficient to process. Uncertain data can be represented in various forms, such as data stream models, linkage models, and graphical models, which offer a simple, natural way to process the data and produce optimized results through query processing. In this paper, we propose that an uncertain data model can be represented as a possibilistic data model, and vice versa, using data models such as the possibilistic linkage model, data streams, and possibilistic graphs. The paper presents the representation and processing of the possibilistic linkage model through possible worlds with the use of a product-based operator.
The document discusses Einstein's field equations and Heisenberg's uncertainty principle. It begins by providing background on Einstein's field equations, which relate the geometry of spacetime to the distribution of mass and energy within it. It then discusses some key mathematical aspects of the field equations, including their nonlinear partial differential form. Finally, it notes that the field equations can be consolidated with Heisenberg's uncertainty principle to provide a unified description of gravity and quantum mechanics.
The document discusses clustering documents using a multi-viewpoint similarity measure. It begins with an introduction to document clustering and common similarity measures like cosine similarity. It then proposes a new multi-viewpoint similarity measure that calculates similarity between documents based on multiple reference points, rather than just the origin. This allows a more accurate assessment of similarity. The document outlines an optimization algorithm used to cluster documents by maximizing the new similarity measure. It compares the new approach to existing document clustering methods and similarity measures.
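One minimal way to formalize the multi-reference-point idea (our own sketch, not necessarily the paper's exact measure) is to average cosine similarities computed relative to several viewpoints rather than only the origin:

```python
import math

def cosine(u, v):
    # standard cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multi_viewpoint_sim(a, b, viewpoints):
    # view the pair (a, b) from each reference point h and average the cosines
    sims = [cosine([ai - hi for ai, hi in zip(a, h)],
                   [bi - hi for bi, hi in zip(b, h)]) for h in viewpoints]
    return sum(sims) / len(sims)

a, b = [1.0, 0.2], [0.9, 0.3]
origin_only = cosine(a, b)
mv = multi_viewpoint_sim(a, b, viewpoints=[[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

The origin-only cosine is the special case of a single viewpoint at the zero vector; additional viewpoints add the "seen from elsewhere" angles the abstract describes.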
A Rough Set based Fuzzy Inference System for Mining Temporal Medical Databases (ijsc)
This document describes a proposed system for constructing a fuzzy temporal rule-based classifier to mine temporal patterns in medical databases. The system uses fuzzy rough set theory and temporal logic. It performs feature selection using a hybrid genetic algorithm on a diabetes dataset. A fuzzy decision table is constructed from the lower approximations. Rules are then generated by transforming the rule-based classifier into a fuzzy inference system using Allen's temporal algebra. The rules are used to predict patient conditions and the severity of diabetes. Experiments on a diabetic dataset showed the approach can accurately predict disease severity and support clinical decision making.
A ROUGH SET BASED FUZZY INFERENCE SYSTEM FOR MINING TEMPORAL MEDICAL DATABASES (ijsc)
The main objective of this research work is to construct a fuzzy temporal rule-based classifier that uses fuzzy rough sets and temporal logic to mine temporal patterns in medical databases. Lower-approximation concepts and a fuzzy decision table built from the fuzzy features are used to obtain fuzzy decision classes for building the classifier. The goals are pre-processing for feature selection, construction of the classifier, and rule induction based on an incremental rough set approach. Features are selected using a hybrid genetic algorithm, and the elementary sets obtained from the lower approximations are categorized into the decision classes. Based on the decision classes, a discernibility vector is constructed to define the temporal consistency degree among the objects. The rule-based classifier is then transformed into a temporal rule-based fuzzy inference system by incorporating Allen's temporal algebra to induce rules. It is proposed to use incremental rough sets to update rule induction in dynamic databases. Ultimately, these rules are categorized as rules with range values to perform prediction effectively. The efficiency of the approach is compared with other classifiers to assess the accuracy of the fuzzy temporal rule-based classifier. Experiments carried out on a diabetic dataset show that the proposed temporal rule-based classifier can predict the severity of the disease and support precise clinical decision making.
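Allen's temporal algebra, used above to induce rules, classifies how two intervals relate; a minimal sketch covering the seven base relations (the other six are their inverses, obtained by swapping arguments):

```python
def allen_relation(a, b):
    """Return the Allen relation of interval a = (start, end) to interval b.
    Covers the seven base relations; the remaining six are reported as the
    inverse of the relation obtained by swapping the two intervals."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    if s1 == s2 and e1 < e2:
        return "starts"
    if s2 < s1 and e1 < e2:
        return "during"
    if s2 < s1 and e1 == e2:
        return "finishes"
    return "inverse-of-" + allen_relation(b, a)
```

For proper intervals (start < end) the swapped call always lands on one of the seven explicit cases, so the recursion is at most one level deep.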
On The Correct Formulation Of The Law Of The External Photoelectric Effect (I... (ijifr)
The document proposes a critical analysis of Einstein's formulation of the law of the external photoelectric effect. It argues that Einstein's formula violates logical laws because it relates quantities that characterize different material objects (photon, electron in metal, electron not in metal). The document then presents a new, correct mathematical formulation of the law based on the relationship between relative increments of the photon energy and emitted electron energy. This proportion relationship is argued to satisfy logical identity laws and correctly describe the photoelectric effect process.
IJIFR - Volume 4 Issue 1, September 2016 (vikas sharma)
The Journal aims to publish quality material that contributes to the accumulation of dynamic knowledge able to revitalize and foster the research and development carried out in different disciplines. IJIFR brings the understanding of various faculties onto one platform, giving access to accurate information and new innovations, providing the most rapid turn-around time possible for reviewing and publishing, and disseminating articles freely for teaching and reference purposes. Its multidisciplinary approach is designed to bring eminent intellectuals worldwide onto one platform to illuminate the world of knowledge. The journal follows a blind peer-review system in order to provide a high-quality intellectual platform for researchers across the world, bringing total transparency to its review process. Through its high-quality professionals, IJIFR aims to bring a significant transformation to the domain of open-access journals and online publishing. All manuscripts are reviewed fairly, based on the intellectual content of the paper, regardless of the gender, race, ethnicity, religion, citizenship, or political values of the author(s). IJIFR provides free access to research information to the international community without financial, legal, or technical barriers. All submitted articles should report original, previously unpublished research results, experimental or theoretical, and must not be under consideration for publication elsewhere. All accepted papers will be processed for indexing into citation databases that track citation frequency for each paper.
Contributions will therefore be welcomed from practitioners, researchers, scholars and professional experts working in private, public and other organizations or industries.
25 16285 a hybrid ijeecs 012 edit septian (IAESIJEECS)
Feature selection aims to choose an optimal subset of features that is necessary and sufficient to improve the generalization performance and running efficiency of the learning algorithm. To obtain this optimal subset, a hybrid feature selection method based on mutual information and a genetic algorithm is proposed in this paper. To make full use of the advantages of the filter and wrapper models, the algorithm is divided into two phases: a filter phase and a wrapper phase. In the filter phase, the algorithm uses mutual information to rank the features, providing heuristic information that accelerates the search of the subsequent genetic algorithm. In the wrapper phase, the genetic algorithm serves as the search strategy, with classifier performance and subset dimension as the evaluation criteria, to search for the best subset of features. Experimental results on benchmark datasets show that the proposed algorithm achieves higher classification accuracy and a smaller feature dimension, and runs in less time than the genetic algorithm alone.
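The filter phase's ranking criterion can be sketched with plain Python; the toy features below are our own illustration of why mutual information separates informative from uninformative features:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum over observed pairs of p(x,y) * log2[ p(x,y) / (p(x) p(y)) ]
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

labels      = [0, 0, 1, 1, 0, 1, 0, 1]
informative = list(labels)               # duplicates the label: maximal MI
noisy       = [0, 1, 0, 1, 0, 1, 0, 1]  # alternating pattern, unrelated to label
scores = {"informative": mutual_information(informative, labels),
          "noisy":       mutual_information(noisy, labels)}
```

Sorting features by these scores yields the heuristic ordering that seeds the genetic-algorithm search in the wrapper phase.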
Calculation of the Minimum Computational Complexity Based on Information Entropy (ijcsa)
In order to find the limiting speed of solving a specific problem with a computer, this essay provides a method based on information entropy. The relationship between the minimum computational complexity and the change in information entropy is illustrated, and a few examples serve as evidence of this connection. Meanwhile, some basic rules for modeling problems are established. Finally, the nature of solving problems with computer programs is examined in support of this theory, and a redefinition of information entropy in this field is proposed. This will develop a new field of science.
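A concrete instance of that kind of connection: identifying one item among n equally likely candidates carries H = log2(n) bits of entropy, and binary search attains exactly that many yes/no queries (our own worked example, not one from the essay):

```python
import math

# Count the comparisons binary search makes in the worst case (target last).
# Each comparison is a yes/no query that removes at most one bit of entropy,
# so no comparison-based scheme can beat ceil(log2 n) queries.
def comparisons_for_binary_search(n):
    queries = 0
    lo, hi, target = 0, n - 1, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if target > mid:
            lo = mid + 1
        else:
            hi = mid
    return queries

n = 1024
entropy_bits = math.log2(n)              # 10 bits of initial uncertainty
```

For n = 1024 the search uses exactly 10 comparisons, matching the 10 bits of entropy it must remove.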
This document presents an algorithm for interactively learning monotone Boolean functions. The algorithm is based on Hansel's lemma, which states that algorithms based on finding maximal upper zeros and minimal lower units are optimal for learning monotone Boolean functions. The algorithm allows decreasing the number of queries needed to learn non-monotone functions that can be represented as combinations of monotone functions. The effectiveness of the approach is demonstrated through computational experiments in engineering and medical applications.
This document presents an algorithm for interactively learning monotone Boolean functions from examples. The algorithm is based on Hansel's lemma, which states that the optimal number of queries needed to learn a monotone Boolean function of n variables is O(n). The algorithm learns the target function by finding its maximal upper zeros and minimal lower units, representing the borders of the negative and positive patterns, respectively. The algorithm is optimal in the sense that it minimizes the maximum number of queries needed for any monotone Boolean function.
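For context, the objects involved can be enumerated by brute force for small n; the function below is our own example, and this is not the paper's query-efficient algorithm:

```python
from itertools import product

# f is monotone iff f(x) <= f(y) whenever x <= y componentwise.
def is_monotone(f, n):
    points = list(product([0, 1], repeat=n))
    return all(f(x) <= f(y)
               for x in points for y in points
               if all(xi <= yi for xi, yi in zip(x, y)))

# Minimal lower units: vectors where f is 1 and no smaller vector maps to 1.
# (Maximal upper zeros are defined dually.)
def minimal_lower_units(f, n):
    units = [x for x in product([0, 1], repeat=n) if f(x) == 1]
    return [u for u in units
            if not any(v != u and all(vi <= ui for vi, ui in zip(v, u))
                       for v in units)]

f = lambda x: int(x[0] and x[1] or x[2])     # monotone: x1*x2 + x3
```

The minimal lower units of x1*x2 + x3 are (1,1,0) and (0,0,1): exactly the border of the positive patterns the summary mentions.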
The philosophy of fuzzy logic was formed by introducing a membership degree for a linguistic value or variable instead of a bivalent membership of 0 or 1. The membership degree is obtained by mapping the variable onto the graphical shape of fuzzy numbers. Because of their simplicity and convenience, triangular fuzzy numbers (TFN) are widely used in many kinds of fuzzy analysis problems. This paper suggests a simple method using statistical data and a frequency chart for constructing non-isosceles TFNs when direct rating is used to evaluate a variable on a predefined scale. In this method, the relationship between assessment uncertainties and statistical parameters such as the mean value and the standard deviation is established in a way that yields a distinct triangular number for each set of data. With regard to the graphical shape of the frequency chart, the proposed method distributes the standard deviation around the mean value and forms the TFN with a membership degree of 1 at the mean value. In the last section of the paper, a modification of the proposed method is presented through a practical case study.
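A hedged sketch of such a construction (the function names and the two-standard-deviation spread are our own choices, not necessarily the paper's exact formula):

```python
import statistics

# Build a triangular fuzzy number (a, m, b) from rating data: membership 1 at
# the mean, feet spread around it; clipping to the rating scale makes the
# triangle non-isosceles when the data sit near one end of the scale.
def tfn_from_data(data, scale=(0, 10)):
    m, s = statistics.mean(data), statistics.pstdev(data)
    a = max(scale[0], m - 2 * s)         # left foot
    b = min(scale[1], m + 2 * s)         # right foot
    return (a, m, b)

def membership(x, tfn):
    a, m, b = tfn
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

ratings = [7, 8, 9, 9, 10, 8, 9, 10]     # hypothetical direct ratings on 0-10
tfn = tfn_from_data(ratings)
```

Because these ratings cluster near the top of the scale, the right foot is clipped at 10 and the resulting triangle is asymmetric, which is the non-isosceles case the paper targets.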
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING: FINDING ALL THE POTENTIAL MI... (IJDKP)
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a σ value, a hyper-parameter which can be manually defined and manipulated to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an outstanding task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of size σ, there is only one minimum. This bound is not only useful for knowing how many solutions to look for by numerical means; it also allows us to propose a new "per block" numerical approach. This technique decreases the number of particles by approximating some groups of particles with weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics, and other applications.
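The single-extremum behaviour can be checked numerically with a Parzen-window sketch (our own minimal illustration: we count the modes of the Gaussian sum on a grid rather than solving the exponential polynomial):

```python
import numpy as np

# Replace each data point with a Gaussian of width sigma and evaluate the sum.
def parzen(x, points, sigma):
    d2 = np.sum((points - x) ** 2, axis=1)
    return float(np.sum(np.exp(-d2 / (2 * sigma ** 2))))

sigma = 1.0
rng = np.random.default_rng(1)
points = rng.uniform(0.0, sigma, size=(20, 2))   # all inside one sigma x sigma square

grid = np.linspace(-0.5, 1.5, 41)
density = np.array([[parzen(np.array([x, y]), points, sigma) for y in grid]
                    for x in grid])

# Count strict local maxima over 3x3 neighbourhoods away from the grid border.
def count_interior_maxima(z):
    c = 0
    for i in range(1, z.shape[0] - 1):
        for j in range(1, z.shape[1] - 1):
            patch = z[i - 1:i + 2, j - 1:j + 2].copy()
            patch[1, 1] = -np.inf
            if z[i, j] > patch.max():
                c += 1
    return c
```

With all 20 points confined to a σ-sized square, the Gaussian sum has a single mode, consistent with the one-minimum bound stated in the abstract.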
Call for Papers - 5th International Conference on Cloud, Big Data and IoT (CB... (ijistjournal)
5th International Conference on Cloud, Big Data and IoT (CBIoT 2024) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Cloud, Big Data and IoT. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the area of Cloud, Big Data and IoT.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in Cloud, Big Data and IoT.
PERFORMANCE ANALYSIS OF PARALLEL IMPLEMENTATION OF ADVANCED ENCRYPTION STANDA... (ijistjournal)
Cryptography is the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication. Most cryptographic algorithms function more efficiently when implemented in hardware than in software running on a single processor. However, systems that use hardware implementations have significant drawbacks: they are unable to respond to flaws discovered in the implemented algorithm or to changes in standards. As an alternative, it is possible to implement cryptographic algorithms in software running on multiple processors. However, most cryptographic algorithms, like DES (Data Encryption Standard) or 3DES, have drawbacks when implemented in software: DES is no longer secure as computers get more powerful, while 3DES is relatively sluggish in software. AES (Advanced Encryption Standard), which is rapidly being adopted worldwide, provides a better combination of performance and enhanced network security than DES or 3DES by being computationally more efficient than these earlier standards. Furthermore, by supporting large key sizes of 128, 192, and 256 bits, AES offers higher security against brute-force attacks.
In this paper, AES has been implemented on a single processor. The result is then compared with parallel implementations of AES while varying parameters such as key size, number of rounds, and extended key size, showing how a parallel implementation of AES offers better performance while remaining flexible enough for cryptographic use.
Submit Your Research Articles - International Journal of Information Sciences... (ijistjournal)
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
INFORMATION THEORY BASED ANALYSIS FOR UNDERSTANDING THE REGULATION OF HLA GEN... (ijistjournal)
Considering information entropy (IE), the regulation of HLA surface expression (SE) is modeled as an information propagation channel with a certain amount of distortion. HLA gene SE is treated as a sink regulated by the inducible transcription factors (TFs), which act as the source. In previous work, using a fixed bin size, the IEs of source and receiver were computed, and the mutual information characterized the dependence of HLA gene SE on certain TFs in different cell types of the hematopoietic system under leukemia. Although information theory has recently been used for biological knowledge generation and rules exist in specific biomedical domains, no such attempt had been made for gene expression regulation, so no such rule was available. In this work, IE calculation with varying bin size, taking the number of bins to be approximately half the sample size of an attribute, also confirms the previous inferences.
Call for Research Articles - 5th International Conference on Artificial Intel... (ijistjournal)
5th International Conference on Artificial Intelligence and Machine Learning (CAIML 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of Artificial Intelligence and Machine Learning. The conference welcomes significant contributions to all major fields of Artificial Intelligence and Machine Learning, in both theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Computer Science, Engineering and Applications.
Online Paper Submission - International Journal of Information Sciences and T... (ijistjournal)
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S... (ijistjournal)
System identification from experimental data plays a vital role in model-based controller design. Deriving a process model from first principles is often difficult due to its complexity, so the first stage in the development of any control and monitoring system is the identification and modeling of the system. Each model is developed within the context of a specific control problem, which warrants a general system identification framework able to adapt and emphasize different properties based on the control objective and the nature of the system's behavior. System identification has therefore been a valuable tool for identifying a model of the system from input and output data for controller design. The present work is concerned with the identification of transfer function models using statistical model identification, the process reaction curve method, ARX models, and a genetic algorithm, and with modeling using neural networks and fuzzy logic, for interacting and non-interacting tank processes. The identification technique and modeling used are subject to parameter change and disturbance. The proposed methods are used to identify mathematical and intelligent models of interacting and non-interacting processes from real-time experimental data.
Call for Research Articles - 4th International Conference on NLP & Data Minin... (ijistjournal)
4th International Conference on NLP & Data Mining (NLDM 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Natural Language Computing and Data Mining.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Research Article Submission - International Journal of Information Sciences a... (ijistjournal)
Call for Papers - International Journal of Information Sciences and Technique... (ijistjournal)
Implementation of Radon Transformation for Electrical Impedance Tomography (EIT) (ijistjournal)
Radon transformation is generally used to construct an optical image (like a CT image) from projection data in biomedical imaging. In this paper, the concept of the Radon transformation is implemented to reconstruct an electrical impedance tomographic image (conductivity or resistivity distribution) of a circular subject. A parallel resistance model of the subject is proposed for Electrical Impedance Tomography (EIT) or Magnetic Induction Tomography (MIT). A circular subject with embedded circular objects is segmented into equal-width slices from different angles. For each angle, the conductance and conductivity of each slice are calculated and stored in an array. A back-projection method is used to generate a two-dimensional image from the one-dimensional projections: the inverse Radon transformation is applied to the calculated conductance and conductivity to reconstruct two-dimensional images, which are compared to the target image. During image reconstruction, different filters are used, and the resulting images are compared with each other and with the target image.
Online Paper Submission - 6th International Conference on Machine Learning & ... (ijistjournal)
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
BER Performance of MPSK and MQAM in 2x2 Alamouti MIMO Systems (ijistjournal)
Alamouti published the error performance of the 2x2 space-time transmit diversity scheme using BPSK. One of the key techniques employed for correcting such errors is quadrature amplitude modulation (QAM), because of its efficiency in power and bandwidth. In this paper we explore the error performance of the 2x2 MIMO system using the Alamouti space-time codes for higher-order PSK and M-ary QAM. MATLAB was used to simulate the system, assuming a slow-fading Rayleigh channel and additive white Gaussian noise. The simulated performance curves were compared and evaluated against theoretical curves obtained using the BER tool in MATLAB by setting parameters for the random generators. The results show that the technique does find a place in reducing the error rates of QAM systems with higher-order modulation schemes. The model can be used not only for adaptive modulation criteria but also as a platform to design other modulation systems.
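As a simplified, hedged sketch of this kind of BER validation (single-antenna BPSK over AWGN rather than the full 2x2 Alamouti system), a simulated error rate can be checked against the theoretical erfc curve:

```python
import math
import numpy as np

# Theoretical BPSK bit-error rate over AWGN: BER = 0.5 * erfc(sqrt(Eb/N0)).
ebn0_db = 4.0
ebn0 = 10 ** (ebn0_db / 10)
theory = 0.5 * math.erfc(math.sqrt(ebn0))

# Monte-Carlo simulation of the same channel.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=200_000)
symbols = 2.0 * bits - 1.0                              # map {0,1} -> {-1,+1}
noise = rng.normal(scale=math.sqrt(1 / (2 * ebn0)), size=bits.size)
decided = (symbols + noise > 0).astype(int)             # hard decision at 0
simulated = float(np.mean(decided != bits))
```

Replacing the AWGN line with a Rayleigh fading coefficient and adding the Alamouti combining step would extend this toward the paper's actual setting.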
International Journal of Information Sciences and Techniques (IJIST)ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
BRAIN TUMOR MRIIMAGE CLASSIFICATION WITH FEATURE SELECTION AND EXTRACTION USI...ijistjournal
Feature extraction is a method of capturing visual content of an image. The feature extraction is the process to represent raw image in its reduced form to facilitate decision making such as pattern classification. We have tried to address the problem of classification MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. The objective of this paper is to present a novel method of feature selection and extraction. This approach combines the Intensity, Texture, shape based features and classifies the tumor as white matter, Gray matter, CSF, abnormal and normal area. The experiment is performed on 140 tumor contained brain MR images from the Internet Brain Segmentation Repository. The proposed technique has been carried out over a larger database as compare to any previous work and is more robust and effective. PCA and Linear Discriminant Analysis (LDA) were applied on the training sets. The Support Vector Machine (SVM) classifier served as a comparison of nonlinear techniques Vs linear ones. PCA and LDA methods are used to reduce the number of features used. The feature selection using the proposed technique is more beneficial as it analyses the data according to grouping class variable and gives reduced feature set with high classification accuracy.
Research Article Submission - International Journal of Information Sciences a...ijistjournal
The International Journal of Information Science & Techniques (IJIST) focuses on information systems science and technology coercing multitude applications of information systems in business administration, social science, biosciences, and humanities education, library sciences management, depiction of data and structural illustration, big data analytics, information economics in real engineering and scientific problems.
This journal provides a forum that impacts the development of engineering, education, technology management, information theories and application validation. It also acts as a path to exchange novel and innovative ideas about Information systems science and technology.
A MEDIAN BASED DIRECTIONAL CASCADED WITH MASK FILTER FOR REMOVAL OF RVINijistjournal
In this paper A Median Based Directional Cascaded with Mask (MBDCM) filter has been proposed, which is based on three different sized cascaded filtering windows. The differences between the current pixel and its neighbors aligned with four main directions are considered for impulse detection. A direction index is used for each edge aligned with a given direction. Minimum of these four direction indexes is used for impulse detection under each masking window. Depending on the minimum direction indexes among these three windows new value to substitute the noisy pixel is calculated. Extensive simulations showed that the MBDCM filter provides good performances of suppressing impulses from both gray level and colored benchmarked images corrupted with low noise level as well as for highly dense impulses. MBDCM filter gives better results than MDWCMM filter in suppressing impulses from highly corrupted digital images.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I...amsjournal
The Fourth Industrial Revolution is transforming industries, including healthcare, by integrating digital,
physical, and biological technologies. This study examines the integration of 4.0 technologies into
healthcare, identifying success factors and challenges through interviews with 70 stakeholders from 33
countries. Healthcare is evolving significantly, with varied objectives across nations aiming to improve
population health. The study explores stakeholders' perceptions on critical success factors, identifying
challenges such as insufficiently trained personnel, organizational silos, and structural barriers to data
exchange. Facilitators for integration include cost reduction initiatives and interoperability policies.
Technologies like IoT, Big Data, AI, Machine Learning, and robotics enhance diagnostics, treatment
precision, and real-time monitoring, reducing errors and optimizing resource utilization. Automation
improves employee satisfaction and patient care, while Blockchain and telemedicine drive cost reductions.
Successful integration requires skilled professionals and supportive policies, promising efficient resource
use, lower error rates, and accelerated processes, leading to optimized global healthcare outcomes.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
International Journal of Information Sciences and Techniques (IJIST) Vol.13, No.3, May 2023
DOI: 10.5121/ijist.2023.13301
A PHYSICAL THEORY OF INFORMATION VS.
A MATHEMATICAL THEORY OF COMMUNICATION
Manouchehr Amiri
Department of Urology, Tandis Hospital, Tehran, Iran
ABSTRACT
This article introduces a general notion of physical bit information that is compatible with the basics of
quantum mechanics and incorporates the Shannon entropy as a special case. This notion of physical
information leads to the Binary data matrix model (BDM), which predicts the basic results of quantum
mechanics, general relativity, and black hole thermodynamics. The compatibility of the model with the holographic, information-conservation, and Landauer principles is investigated. After deriving the "Bit Information principle" as a consequence of the BDM, the fundamental equations of Planck, De Broglie, and Bekenstein, as well as mass-energy equivalence, are derived.
KEYWORDS
Physical theory of information, Binary data matrix model, Shannon information theory, Bit information
principle
1. INTRODUCTION
Information, in a broad sense, implies a collection of data about unmeasurable concepts or measurable quantities. The usual notion of measurable information in physics invokes the subject of Shannon entropy and information. Claude Shannon, in his seminal paper [1], developed a mathematical theory of signal transmission [2]. He set aside the semantic aspects of communication and the related information theory. According to his theory, information refers to the opportunity to reduce uncertainty and equals the entropy of the communicated message. He took the idea of entropy from the second law of thermodynamics [2], [3] and concluded that the information of a message could be measured by its predictability: the less predictable a message is, the more information it carries [2], [3]. It is clear that Shannon's definition of information was not unique and was merely fitted to his engineering requirements [2], [3]. In this notion of information, the source, channel, and receiver of data are the crucial components of communication engineering. Shannon entropy (information) is concerned only with the statistical properties of a given system, independent of the meaning and semantic content of the system's states [5]. As he emphasized in his seminal article, the meaning of a communication and its information content are irrelevant to the engineering problem [1]. Subsequently, some critiques emerged around the Shannon notion of physical and biological information [3]. The notion of information independent of its meaning is the main criticism raised by MacKay and others [3], [4]. There have since been attempts to add a semantic dimension to a formal theory of information, particularly to Shannon's theory [5]-[7]. Shannon's theory is not concerned with the individual message but rather with the averages of source messages [8]. Although physical information is basically related to physically measurable quantities, the current notion of physical information has remained the same as the definition introduced by Shannon and seems insufficient for physical systems. This has been pointed out in recent works of Bruckner and Zeilinger [9]. Their main reason for this claim is the measurement problem in quantum mechanics. In other words, there is no definite real
value for observables before measurement, in the quantum-physical sense, and there is no reality independent of the observer or measurement [9]. In quantum information theory, Von Neumann entropy is a candidate for replacing Shannon entropy to measure quantum information [9]. However, recent trends toward deterministic quantum mechanics and its local versions [10]-[12] motivate an investigation of a fundamental version of information that carries the physical meaning of the system and is compatible with quantum mechanics. The main purpose of this article is to introduce a new version of physical information and its broad consequences. In this version of information theory, which I call the "Physical Theory of Information", a bit of information reflects a real physical quantity in phase space, and it is not concerned with the observer-dependent reality of physical parameters. This approach leads to a new attitude in the theory of information in which each bit of information has a specific meaning and implies a physical value. Just as the mathematical numbers in the governing equations of the laws of physics have a physical meaning, and show the relationship between physical variables, the bits of information in physics must also carry a physical meaning. Some authors have shown the shortcomings of Shannon information content in the field of climatology and its related data. They showed that Shannon information may fail to apply to certain kinds of useful information yielded by a measurement or observation [18]. Introducing the new information model and developing this physical approach, substituting physical bit information for the Shannon notion, could result in more applications of information and entropy in a wide range of physics fields.
1.1 Binary Matrix Model
Does information have anything to do with physics? The response of J. A. Wheeler was "It from bit," meaning that the entire universe is constituted from "bits." Generally, information is defined as "an answer to a specific question" [3]. Therefore, if we restrict answers to "Yes" or "No", the information can be represented by a set of 0s and 1s, i.e., a binary array. The formalism of information can be generalized by starting from this more detailed notion of information, i.e., arrays of binary data 0 and 1 for each physical parameter of the particles in a system. Any variable (physical parameter) $x_\nu$, when attributed to a subject or object (particles), carries the information that reflects the "quantity" of that variable; $\nu$ refers to the specific parameter and varies between 1 and $d$ (the number of degrees of freedom). If we divide the range of this variable into a large number of infinitesimal intervals, then the value of the variable is represented by a binary column matrix with a single 1 entry at the interval in which the value of $x_\nu$ lies, as depicted in Fig. 1.
$$x_\nu:\qquad \begin{bmatrix} 0\\ 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{bmatrix} \leftarrow x_{\nu 0}$$

Fig. 1. The column matrix's entries are 0 except for the entry that corresponds to the value $x_{\nu 0}$ of parameter $x_\nu$, which is 1.

The unique entry 1 corresponds to the specific interval that contains the quantity $x_{\nu 0}$ of variable $x_\nu$:

$$x_{\nu 0} - \delta x_\nu < x_\nu < x_{\nu 0} + \delta x_\nu$$
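As an illustration (a minimal sketch, not part of the paper, with an assumed range and bin count), the one-hot column matrix of Fig. 1 can be built by discretizing the range of a variable:

```python
import numpy as np

def column_matrix(x, x_min, x_max, n_bins):
    """One-hot binary column for a value x on a discretized range (Fig. 1)."""
    edges = np.linspace(x_min, x_max, n_bins + 1)
    col = np.zeros(n_bins, dtype=int)
    # index of the interval containing x; clip keeps boundary values in range
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    col[idx] = 1
    return col

col = column_matrix(x=0.37, x_min=0.0, x_max=1.0, n_bins=10)
print(col)          # a single 1 at the interval containing 0.37
print(col.sum())    # 1
```

Displacing the value `x` moves the single 1 entry along the column, exactly as described in the text.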
Other values can be represented by displacing the 1 entry along the range of variable $x_\nu$ in the column matrix. For a set of different variables there are similar column matrices. By
consideration of a system of $N$ identical particles ($N$ being a large number), for each particle there is a binary column matrix that specifies the $x_\nu$ parameter of that particle. Assembling all these column matrices results in a binary data matrix $D_\nu$ (Fig. 2). As an example, the column matrix specified by the red box determines the value of $x_\nu$ for the second particle (the particle assigned number 2). The rows of $D_\nu$ are binary arrays $e^{*\nu}(x_{\nu 0})$, which are defined at any point $x_{\nu 0}$ and return the information of the particles at that point, depicted by the blue box in Fig. 2:

$$e^{*\nu}(x_{\nu 0}) = (0\,1\,1\,0\,\dots\,1\,0) \tag{1}$$

Fig. 2. The structure of matrix $D_\nu$. The red box contains a unique "1" at the specific value $x_{\nu 0}$, which is the parameter value of the particle with label 2. The blue box represents the particles whose parameter value is $x_{\nu 0}$; any "1" entry in this array corresponds to a particle with physical value $x_{\nu 0}$.
Any particle with the parameter value $x_{\nu 0}$ is represented by an entry 1 in this array. The sum of all 1 entries in $e^{*\nu}(x_{\nu 0})$ gives the total number of particles whose parameter value is $x_{\nu 0}$ simultaneously. In phase space there are $d$ different phase-space parameters, i.e., degrees of freedom (including spatial coordinates and linear and angular momentum), which lead to different $D_\nu$. For a comprehensive binary information data system, we embed all the $D_\nu$s in a columnar matrix $D$:

$$D = \begin{bmatrix} D_1\\ D_2\\ \vdots\\ D_d \end{bmatrix} \tag{2}$$
where $d$ is the number of degrees of freedom. For any point $(x_{10}, x_{20}, \dots, x_{d0})$ in phase space, there corresponds a set of $e^{*\nu}(x_{\nu 0})$ that can be gathered in a matrix $P$:

$$P = \begin{bmatrix} [e^{*1}]\\ [e^{*2}]\\ \vdots\\ [e^{*d}] \end{bmatrix} \tag{3}$$

where the $[e^{*\nu}]$ are binary row matrices defined at the point $(x_{10}, x_{20}, \dots, x_{d0})$.
Theorem 1: The entries of the matrix $G = \|g^{\nu\mu}\| = PP^T$ give the number of particles with the same values of $x_{\nu 0}$ and $x_{\mu 0}$, and $f^{\nu\mu} = \frac{1}{N} g^{\nu\mu}$ represents the joint probabilities.
Proof: With the above definitions, it is easily proved that the inner products

$$g^{\nu\mu} = [e^{*\nu}][e^{*\mu}]^T \tag{4}$$

are the numbers of particles with the values $x_{\nu 0}$ and $x_{\mu 0}$ simultaneously. Obviously $G$ is symmetric. Dividing $g^{\nu\mu}$ by the total particle number $N$ yields the joint probabilities $f^{\nu\mu}$:

$$f^{\nu\mu} = \frac{1}{N} g^{\nu\mu} \tag{5}$$
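Theorem 1 can be sketched numerically. The following is an illustrative example (not from the paper) with randomly binned particles; the particle count `N`, degrees of freedom `d`, and the chosen phase-space point are all assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, n_bins = 1000, 3, 8        # particles, degrees of freedom, intervals per parameter

# bin index of each particle for each of the d phase-space parameters
bins = rng.integers(0, n_bins, size=(d, N))

# P stacks the d binary row arrays e*nu at a chosen phase-space point
# (here: interval 0 of every parameter); entry 1 marks a particle at that value
point = np.zeros(d, dtype=int)
P = (bins == point[:, None]).astype(int)   # shape (d, N)

G = P @ P.T        # g^{nu mu}: number of particles sharing both values (Eq. 4)
F = G / N          # f^{nu mu}: joint probabilities (Eq. 5)

print(np.allclose(G, G.T))   # True: G is symmetric, as stated in the proof
```

The diagonal entries of `G` count the particles at each single value, and the off-diagonal entries count coincidences, which is exactly the content of Eq. (4).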
The matrix $D$ contains all the physical information of the system's constituents at any time. For a dynamical system, $D$ evolves over time while the total number of "1" bits is preserved. This number, which reflects the information measure of the system, is the product of the total number of particles and the number of degrees of freedom:

$$\mathbb{I} = Nd \tag{6}$$

which is a constant. This formalism of physical bit information is called the BDM (Binary Data Matrix) model and entails three immediate consequences:

a) The invariance of $\mathbb{I}$ is compatible with the information conservation principle.

b) If the system's temperature is denoted by $T$, then the average energy per particle, due to the equipartition principle, reads:

$$\epsilon = \frac{d}{2} kT \tag{7}$$

Since any particle contains $d$ bits, the energy per bit is obtained by dividing $\epsilon$ by $d$: $\varepsilon \cong \frac{1}{2} kT$. This is compatible with Landauer's principle, i.e., $\varepsilon = kT \log 2$.

c) All the information of the system is represented by a two-dimensional binary matrix $D$. This is compatible with the holographic principle.

These are physical principles that correlate pure physical concepts with bit information and could not be justified in the context of Shannon information theory.
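As a numeric aside (an illustrative sketch with assumed values, not part of the paper), the per-bit energy of consequence (b) can be put next to the Landauer bound; the temperature and degrees of freedom below are arbitrary choices:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # an assumed room temperature, K
d = 6                # an assumed number of degrees of freedom

eps_particle = d / 2 * k_B * T     # equipartition energy per particle, Eq. (7)
eps_bit = eps_particle / d         # energy per bit: (1/2) k T
landauer = k_B * T * math.log(2)   # Landauer's bound per erased bit

# the two scales agree up to a constant factor 1/(2 ln 2), about 0.72
print(eps_bit / landauer)
```

The ratio is independent of $T$ and $d$, which is why the text can speak of compatibility between the BDM per-bit energy and Landauer's principle.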
2. CONSEQUENCES OF BDM
If we consider the $[e^{*\nu}]$ as the dual basis vectors in the parameter space (phase space), the inner product (4) gives the dual metric tensor of this space. The author in his previous work [13] proved that the evolution of this metric tensor over time obeys the quantum Liouville equation and leads to the equation (assuming natural units, i.e., $\hbar = 1$) [13]:

$$\langle E \rangle = \frac{1}{2} \frac{\partial}{\partial \tau} \log g \tag{8}$$

where $g$ stands for the determinant of $\|g^{\nu\mu}\|$ and $\langle E \rangle$ for the energy per system constituent, i.e., per particle. On the other hand, for spatial displacement, we obtain the corresponding equation [13]:

$$p_i = -\frac{1}{2} \frac{\partial}{\partial x_i} \log g \tag{9}$$
The term $-\frac{1}{2}\log g$ plays the role of Hamilton's principal function $F$ in classical mechanics [15]:

$$E = H = -\frac{\partial F}{\partial \tau}; \qquad p_i = \frac{\partial F}{\partial x_i} \tag{10}$$
2.1. Entropy and Information
In classical statistics, for jointly normal random variables $x_1, x_2, \dots, x_n$, the entropy is calculated as [14]:

$$\mathcal{H} = \frac{1}{2}\log \Delta + k \tag{11}$$

where $\Delta$ is the determinant of the covariance matrix $(\sigma_{ij})$, i.e., $\Delta = \det \sigma_{ij}$, and $k$ is a constant, $k = \log 2\pi e$. We show that in the limit $\sigma \to 0$ ($\sigma$ being the variance of the random variables), as can be realized in the case of black holes, the Fisher metric [14] can be replaced with the metric of the BDM model.
Theorem 2: In the limit as $\sigma \to 0$, which can be realized in the case of black holes [16], the inverse of the Fisher metric determinant and the determinant of the BDM metric are identical.

Proof: In the case of a black hole, all the physical variables of its constituents are confined to constant infinitesimal intervals $\delta x_i$ around the singularity point. If we fix the center of mass of the black hole at the origin of the spatial coordinates, the expected values of the position and momentum of the constituents are near zero, and the corresponding BDM metric is concentrated over these intervals, with negligible values outside $\delta x_i$ and mean values near zero. Thus, with the means of all random variables vanishing ($\bar{x} = 0$), the correlation (covariance) matrix element $\Re_{ij}$ reads:

$$\Re_{ij} = \sigma_{ij} = \langle g_{ij}\, \delta x_i\, \delta x_j \rangle = g_{ij}\, \delta x_i\, \delta x_j \tag{12}$$
The determinant of the $\sigma_{ij}$ matrix (denoted by $\Delta$) can be calculated as:

$$\Delta = \sum_{ijk\dots} \varepsilon_{ijk\dots}\, \sigma_{i1} \sigma_{j2} \sigma_{k3} \dots \tag{13}$$

Substituting $g_{ij}\,\delta x_i\,\delta x_j$ for $\sigma_{ij}$ gives:

$$\Delta = g \prod_i (\delta x_i)^2 \tag{14}$$

Taking the logarithm of both sides results in:

$$\log \Delta = \log g + 2 \sum_i \log \delta x_i = \log g + C \tag{15}$$
On the other hand, the definition of the Fisher information metric $ℊ_{ij}$ results in the equivalence of the inverse of the covariance matrix and the Fisher metric tensor:

$$ℊ_{ij} = (\sigma_{ij})^{-1} = \sigma^{ij} \tag{16}$$

If the determinant of $ℊ_{ij}$ is denoted by $ℊ$, then by equations (15) and (16) we have:

$$ℊ = \Delta^{-1}; \qquad \log ℊ^{-1} = \log g + C' \tag{17}$$

and the theorem is proved.
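The factorization in Eq. (14) can be checked numerically. This is an illustrative sketch (not from the paper) with a randomly generated positive-definite stand-in metric and assumed interval sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n))
g_ij = A @ A.T + n * np.eye(n)        # a symmetric positive-definite stand-in metric
dx = rng.uniform(0.01, 0.1, size=n)   # the infinitesimal intervals delta x_i

sigma = g_ij * np.outer(dx, dx)       # sigma_ij = g_ij dx_i dx_j, Eq. (12)

delta = np.linalg.det(sigma)                     # Delta
factored = np.linalg.det(g_ij) * np.prod(dx**2)  # g * prod (dx_i)^2, Eq. (14)

print(np.isclose(delta, factored))   # True
```

The check works because $\sigma = D\,g\,D$ with $D = \mathrm{diag}(\delta x_i)$, so $\det\sigma = \det g \cdot \prod_i(\delta x_i)^2$, which is exactly Eq. (14) and hence Eq. (15) after taking logarithms.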
Therefore, equation (11) for the entropy can be replaced by:

$$\mathcal{H} = \frac{1}{2}\log ℊ^{-1} + C = \frac{1}{2}\log g + C'' \tag{18}$$

With respect to [16], the Euclidean action $\mathcal{H}_B$ of a black hole is equivalent to its entropy, i.e., the Bekenstein-Hawking entropy:

$$S_{BH} = \mathcal{H}_B$$

Due to the equivalence of the action and Hamilton's principal function, and equations (10) and (18), we have:

$$F = -\frac{1}{2}\log g = \mathcal{H}_B = S_{BH}$$

This equation reveals the equality of the Shannon entropy of a black hole and the entropy derived by the BDM, because it has been proved that $S_{BH}$ can be derived from Shannon information entropy. This shows that the BDM-derived information and the Shannon information (entropy) are equivalent in the limit $\sigma \to 0$.
The equivalence of the entropy $\mathcal{H}$ and $\frac{1}{2}\log g$ results in the "Bit Information Principle".
2.2. Bit Information Principle
Equation (18) implies that the entropy $\mathcal{H}$ is equivalent to $\frac{1}{2}\log g$. Since, by definition, the entropy is equivalent to the measure of information, we conclude an important result, which I call the "Bit Information principle": $\frac{1}{2}\log \Delta$ is a measure of information in the limit $\sigma \to 0$, and its spatial and temporal densities give the expected energy and momentum per particle.

To establish this principle, first consider equation (18). It implies the entropic nature of $\frac{1}{2}\log g$ in the limit $\sigma \to 0$. Assuming the equivalence of entropy and information, if we denote information by $\mathbb{I}$, we get:

$$\mathbb{I} = \frac{1}{2}\log g \tag{19}$$
With respect to equations (8) and (9), we infer that the temporal and spatial derivatives of $\frac{1}{2}\log g$ give $\langle E \rangle$ and $p_i$ as the expected energy and momentum of the system's constituents. After returning to MKS units by multiplying the equations by $\hbar$ (the reduced Planck constant), we get:

$$\langle E \rangle = \hbar \frac{\partial}{\partial \tau} \mathbb{I}; \qquad p_i = -\hbar \frac{\partial}{\partial x_i} \mathbb{I} \tag{20}$$

In the BDM model, the information $\mathbb{I}$ is the number of bits. Therefore, equation (20) reveals the relation between the density of bits over spatial and temporal intervals and the momentum and energy per constituent of the system. In brief notation:

$$\text{energy} \sim \frac{\text{bits}}{\text{second}}, \qquad \text{momentum} \sim \frac{\text{bits}}{\text{unit length}}, \qquad \text{angular momentum} \sim \frac{\text{bits}}{\text{unit angle}}$$
Any system with negligible variance of its constituents' parameters exhibits a set of identical bits. For example, a monochromatic electromagnetic or acoustic wave possesses a set of identical full wavelengths as bits.
2.3. Outcomes of Bit Information Principle
This principle justifies the Planck and De Broglie equations. When all bits are identical, $\sigma \to 0$ and the required condition for the Bit Information Principle is met. The situation of identical bits can be realized in a monochromatic light (electromagnetic wave) beam, because every full wavelength (pulse) should be considered as a bit of information. For the temporal density of these pulses, the number of bits (full wavelengths) per unit time is:

$$f = \frac{1}{T} \tag{21}$$

where $T$ is the period of the monochromatic wave. Then, due to (20), we have:

$$\langle E \rangle = \hbar \frac{1}{T} = \hbar f \tag{22}$$

This is the Planck formula for the energy of the light wave's constituents, which are called photons. For the spatial density of these pulses, the number of full wavelengths (bits) per unit length is:

$$n = \frac{1}{\lambda} \tag{23}$$

Then, with respect to (20) and ignoring the sign, we obtain:

$$p = \frac{\hbar}{\lambda} \tag{24}$$
This is the De Broglie relation between the wavelength and momentum of a particle. The same relations are also valid for phonons as quanta of mechanical waves. For a black hole, the conditions for the Bit Information Principle are also met, because all the mass and its constituents are confined to an infinitesimal interval of space and momentum, and consequently their variances tend to zero, $\sigma \to 0$. What is the amount of mass that represents a "bit" of a black hole? Theoretically, the mass of the smallest possible black hole, respecting the limitations imposed by the Schwarzschild radius, is the Planck mass:

$$m_p = \sqrt{\frac{\hbar c}{G}} \tag{25}$$

Then, for a macroscopic body like a black hole, the number of contained bits is derived by dividing its mass by the Planck mass:

$$n = \frac{m}{m_p} \tag{26}$$
According to the Bit Information Principle, energy is equivalent to bit density over time. The Planck time is the smallest time interval and serves as the universal time unit:

$$t_p = \sqrt{\frac{G\hbar}{c^5}} \tag{27}$$

Therefore, the expected energy is obtained from the density of the $n$ bits in (26) over the Planck time (27), multiplied by $\hbar$:

$$E = \hbar \frac{n}{t_p} = \hbar \frac{m}{m_p} \frac{1}{t_p} = \hbar m \sqrt{\frac{G}{\hbar c}} \sqrt{\frac{c^5}{G\hbar}} = mc^2 \tag{28}$$
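Equation (28) can be reproduced numerically. The sketch below (an illustration, not part of the paper) uses standard CODATA-style values for the constants and an arbitrary 1 kg test mass:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

m_p = math.sqrt(hbar * c / G)      # Planck mass, Eq. (25)
t_p = math.sqrt(G * hbar / c**5)   # Planck time, Eq. (27)

m = 1.0                  # an arbitrary 1 kg test mass
n_bits = m / m_p         # number of bits, Eq. (26)
E = hbar * n_bits / t_p  # bit density over the Planck time, times hbar, Eq. (28)

print(E / (m * c**2))    # 1.0 up to floating-point rounding
```

The agreement is exact by construction, since the algebra in Eq. (28) cancels $G$ and $\hbar$ identically.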
This is Einstein's mass-energy relation.

The Bit Information Principle is also valid for linear and angular pseudo-momentum in crystals. The ordered arrangement of atoms, ions, and molecules in a crystalline material provides the required conditions for the principle. Each atomic plane in a crystal contains atoms that are confined to a small interval of space and momentum. Hence, these atoms can be considered as bits over the interval between the atomic planes. For the bit density along spatial coordinates in a crystal lattice, the total density of a plane of atoms along the axis perpendicular to that plane is proportional to $\frac{1}{d}$, where $d$ is the distance between atomic planes in the crystal. The magnitude of the corresponding reciprocal lattice base vector is also $\frac{1}{d}$:

$$|G| = \frac{1}{d} \tag{29}$$

The vector with this magnitude, perpendicular to the atomic plane, is called the crystal momentum or pseudo-momentum. This momentum appears only in the interactions of the atomic lattice with an incident photon or particle wave. With respect to the bit density principle of the previous section, this pseudo-momentum is equal to the density of bits (atoms) over the interval $d$:

$$\mathcal{P} = \frac{\text{bits}}{\text{unit length}} = \frac{n}{d} \tag{30}$$

The related momentum is the product of this density and $\hbar$, i.e.,

$$\mathcal{P} = \hbar \frac{n}{d} \tag{31}$$

Then the pseudo-momentum per atom reads:

$$\mathcal{P} = \hbar \frac{1}{d} = \hbar G \tag{32}$$
This is the main relation for crystal momentum, here derived from the bit information principle. Curiously, a similar relation should govern the angular momentum. The suggested relation is as follows:
$L_\theta = \dfrac{\hbar}{\theta}$  (33)
where $\theta$ is the angular period of bits over the whole $2\pi$ as unit angle. For crystals with $N$-fold rotational symmetry, it has been verified that the difference of pseudo-angular momentum between photons incident on and diffracted from the crystal obeys the relation [17]:
$\Delta m \hbar = \sigma\hbar + NP\hbar$  (34)
And for Rayleigh scattering with $\sigma_i$ and $\sigma_s$ as the incident and scattered photon helicities, we have [17]:
$\sigma_i - \sigma_s = NP$  (35)
where $P$ denotes an integer and $N$ determines the particle (bit) density per unit angle $2\pi$ (recall the definition of $N$-fold rotational symmetry). Thus, equation (33) in the BDM context is compatible with the pseudo-angular-momentum relations (34) and (35).
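The selection rule (35) can be checked combinatorially: for photon helicities $\sigma = \pm 1$, a transition is allowed only when the helicity difference is a multiple of $N$. A minimal sketch (the function name `allowed` is ours, not from [17]):

```python
# Which Rayleigh transitions satisfy sigma_i - sigma_s = N*P for integer P (eq. 35)?
def allowed(sigma_i: int, sigma_s: int, N: int) -> bool:
    # An integer P exists iff the helicity difference is divisible by N
    return (sigma_i - sigma_s) % N == 0

for N in (2, 3, 4, 6):
    pairs = [(si, ss) for si in (1, -1) for ss in (1, -1) if allowed(si, ss, N)]
    print(N, pairs)
# For N = 2 every transition is allowed (helicity may flip);
# for N = 3, 4, 6 only the helicity-conserving pairs survive.
```

This reproduces the qualitative pattern of [17]: a helicity mismatch can only be absorbed by the lattice in units of $N$.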
The Bekenstein bound is another consequence of the bit information principle. According to this bound, the maximum information confined in a region of radius $R$ and energy $E$ is:
$I \le \dfrac{2\pi R E}{\hbar c \ln 2}$  (36)
To calculate this bound, we use the example of a black hole, because the information within a black hole is bounded by its horizon, past which light rays cannot escape; the information is therefore confined within the horizon radius. In this case the information bits, as wavelengths of light rays, are spanned over the time interval in which light travels from the center of the black hole (the singularity) to the horizon at radius $R$:
$\Delta t = \dfrac{R}{c}$  (37)
Hence, in the language of the bit information principle, the maximum bit density $I_M$ over this time interval is proportional to the energy $E$ up to a coefficient $\hbar$:
$E = \hbar\,\dfrac{I_M}{\Delta t} = \hbar I_M \dfrac{c}{R} \;\Rightarrow\; I_M = \dfrac{RE}{\hbar c}$  (38)
Because of possible information leakage from the black hole, the true information $I$ confined within this radius is less than $I_M$:
$I \le I_M = \dfrac{RE}{\hbar c}$  (39)
which is consistent with (36), since $RE/\hbar c$ lies below the Bekenstein bound by the constant factor $2\pi/\ln 2 \approx 9.06$.
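As a sanity check that (39) indeed sits inside the bound (36), the sketch below evaluates both for a Schwarzschild black hole of one solar mass (an illustrative choice, with CODATA SI constants):

```python
import math

hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

M = 1.989e30             # one solar mass (kg), an illustrative choice
R = 2 * G * M / c**2     # Schwarzschild radius, m
E = M * c**2             # rest energy, J

I_M = R * E / (hbar * c)                                 # eqs. (38)-(39)
I_bek = 2 * math.pi * R * E / (hbar * c * math.log(2))   # Bekenstein bound, eq. (36)

# I_M stays below the Bekenstein bound by the factor 2*pi/ln(2)
print(I_M <= I_bek, I_bek / I_M)
```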
3. CONCLUSION
By introducing a physically based theory of information, this article demonstrates the soundness and benefits of a new approach to the concept of information in physics. This notion of information reduces to Shannon information as a special case. The BDM approach not only endows information with a physical meaning, but is also compatible with well-known principles such as Landauer’s principle, the conservation of information, and the holographic principle. After deriving the crucial “bit information principle”, the theory reproduces diverse results such as the De Broglie, Planck, and Bekenstein equations and the mass-energy equivalence.
REFERENCES
[1] Shannon, Claude Elwood. "A mathematical theory of communication." ACM SIGMOBILE mobile
computing and communications review 5.1 (2001): 3-55.
[2] Griffin, Emory A. A first look at communication theory. McGraw-Hill, 2003.
[3] Logan, Robert K. "What is information?: Why is it relativistic and what is its relationship to
materiality, meaning and organization." Information 3.1 (2012): 68-91.
[4] MacKay, D. M. "Proceedings of the information theory symposium." (1950).
[5] Lombardi, Olimpia, Federico Holik, and Leonardo Vanni. "What is Shannon?" Synthese 193.7
(2016): 1983-2012.
[6] Barwise, Jon, and Jerry Seligman. Information flow: the logic of distributed systems. Cambridge
University Press, 1997.
[7] Dretske, Fred. "Knowledge and the Flow of Information." (1981).
[8] Timpson, Christopher Gordon. "Quantum information theory and the foundations of quantum
mechanics." arXiv preprint quant-ph/0412063 (2004).
[9] Brukner, Časlav, and Anton Zeilinger. "Conceptual inadequacy of the Shannon information in
quantum measurements." Physical Review A 63.2 (2001): 022113.
[10] Lindgren, Jussi, and Jukka Liukkonen. "The Heisenberg Uncertainty Principle as an Endogenous
Equilibrium Property of Stochastic Optimal Control Systems in Quantum
Mechanics." Symmetry 12.9 (2020): 1533.
[11] Haag, Rudolf. Local quantum physics: Fields, particles, algebras. Springer Science & Business
Media, 2012.
[12] ’t Hooft, Gerard. "Deterministic quantum mechanics: the mathematical equations." arXiv preprint
arXiv:2005.06374 (2020).
[13] Amiri, M. Binary Data Matrix Theory. Preprints.org 2016, 2016120049.
https://doi.org/10.20944/preprints201612.0049.v2.
[14] Papoulis, Athanasios, and S. Unnikrishna Pillai. Probability, random variables, and stochastic
processes. Tata McGraw-Hill Education, 2002.
[15] Goldstein, Herbert. Classical Mechanics. Pearson Education India, 1965.
[16] Castro, Carlos. "The Euclidean gravitational action as black hole entropy, singularities, and
spacetime voids." Journal of Mathematical Physics 49.4 (2008): 042501.
[17] Tatsumi, Yuki, Tomoaki Kaneko, and Riichiro Saito. "Conservation law of angular momentum in
helicity-dependent Raman and Rayleigh scattering." Physical Review B 97.19 (2018): 195444.
[18] Petty, Grant W. "On some shortcomings of Shannon entropy as a measure of information content in
indirect measurements of continuous variables." Journal of Atmospheric and Oceanic Technology
35.5 (2018): 1011-1021.