The consensus operator provides a method for combining possibly conflicting beliefs within the Dempster-Shafer belief theory, and represents an alternative to the traditional Dempster's rule. In everyday discourse, observers express dogmatic beliefs when they hold a strong and rigid opinion about a subject of interest. Such beliefs can be expressed and formalised within the Dempster-Shafer belief theory. This paper describes how the consensus operator can be applied to dogmatic conflicting opinions, i.e. when the degree of conflict is very high. It overcomes shortcomings of Dempster's rule and of other operators that have been proposed for combining possibly conflicting beliefs.
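To make the role of conflict concrete, here is a minimal sketch (my own illustration, not code from the paper) of Dempster's rule on a small frame. With two nearly dogmatic, conflicting opinions, almost all of the joint mass falls on the empty set, and normalization forces the combined belief onto an outcome that both sources considered very unlikely, which is exactly the kind of shortcoming the consensus operator is meant to avoid.

```python
# Minimal sketch (not from the paper): Dempster's rule over a small frame,
# illustrating its counterintuitive behaviour under high conflict.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments, given as dicts
    mapping frozensets (focal elements) to mass values."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # joint mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}, conflict

# Zadeh-style example: two dogmatic, almost fully conflicting opinions.
A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
m1 = {A: 0.99, C: 0.01}
m2 = {B: 0.99, C: 0.01}
result, k = dempster_combine(m1, m2)
print(result, k)  # all mass ends up on C, although both sources doubted it
```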
Possibility Theory versus Probability Theory in Fuzzy Measure Theory - IJERA Editor
The purpose of this paper is to compare probability theory with possibility theory, and to use this comparison in comparing probability theory with fuzzy set theory. The best way of comparing probabilistic and possibilistic conceptualizations of uncertainty is to examine the two theories from a broader perspective. Such a perspective is offered by evidence theory, within which probability theory and possibility theory are recognized as special branches. While the various characteristics of possibility theory within the broader framework of evidence theory are expounded in this paper, their probabilistic counterparts are also introduced to facilitate the discussion.
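As a minimal sketch of that shared framework (my own illustration, numbers assumed): belief and plausibility functions computed from a basic probability assignment behave as necessity and possibility measures when the focal elements are nested, and as an ordinary probability measure when they are singletons.

```python
# Minimal sketch (assumptions mine, not the paper's): evidence theory as a
# common roof over probability and possibility theory.
def belief(m, s):
    """Total mass of focal elements contained in s."""
    return sum(w for focal, w in m.items() if focal <= s)

def plausibility(m, s):
    """Total mass of focal elements intersecting s."""
    return sum(w for focal, w in m.items() if focal & s)

# Nested (consonant) focal elements {a} < {a,b} < {a,b,c}: plausibility then
# acts as a possibility measure and belief as the dual necessity measure.
m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
print(plausibility(m, frozenset("b")))  # 0.5
print(belief(m, frozenset("ab")))       # 0.8
```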
This document discusses various techniques for predicting the best answer to a question using a Yahoo Answers dataset. It pre-processes the data, extracts features using TF-IDF and LSA, and evaluates models using cosine similarity, longest answer length, Jaccard similarity, and LSA. Accuracy is highest using longest answer length. LSA and Gensim perform better at selecting related answers and their accuracy improves when considering multiple best answers. Future work includes expanding the analysis to more categories and optimizing the models.
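A minimal sketch of the similarity baseline (toy data and scikit-learn assumed; this is not the authors' code): candidate answers are ranked by the TF-IDF cosine similarity between each answer and the question.

```python
# Minimal sketch (toy data assumed): pick the answer most similar to the
# question under TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "how do wavelets help denoise images"
answers = [
    "wavelet thresholding removes noise from images",
    "the longest answer is often picked as best",
    "genetic algorithms optimise routing problems",
]

vec = TfidfVectorizer()
matrix = vec.fit_transform([question] + answers)  # row 0 is the question
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
print(answers[scores.argmax()])  # highest-similarity candidate
```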
Probabilistic Quantifier Logic for General Intelligence: An Indefinite Proba... - Matthew Ikle
The document discusses an approach to probabilistic quantifier logic called indefinite probabilities. It uses third-order probabilities to assign truth values to expressions with unbound variables, extending standard quantifier logic. For universal quantifiers, it calculates the probability that the true envelope of distributions is contained within an interval representing 'essentially 1'. For existential quantifiers, it calculates the probability that the true envelope is not contained within an interval representing 'essentially 0'. It also explains how this approach can handle fuzzy quantifiers using proxy confidence levels.
Truth maintenance systems (TMS) are used to represent beliefs and their justifications in a knowledge base and maintain consistency when new information is added or inferred facts are retracted. A TMS models inferences as a dependency network and uses an algorithm to track which inferences need to be retracted if assumptions change. It differs from traditional logic which treats all facts equally. Nonmonotonic logics also use TMS to represent default reasoning and maintain beliefs when assumptions are defeated. Fuzzy logic uses similar concepts to handle imprecise values and measurements in systems like control applications.
The document outlines the key objectives and concepts related to hypotheses in research. It defines a hypothesis as a tentative explanation for a phenomenon that can be tested. The objectives are to understand what a hypothesis is, its characteristics and purposes, how to formulate different types of hypotheses including null and alternative hypotheses, and appreciate their importance in research. The document also discusses the components and various definitions of a hypothesis provided by authors, and provides examples of universal, existential, null, and alternative hypotheses.
This document outlines techniques for representing uncertainty in expert systems, including Bayesian reasoning and certainty factors theory. It discusses sources of uncertain knowledge, probabilistic reasoning using Bayes' rule, and an example of computing posterior probabilities of hypotheses given observed evidence. Certainty factors theory is presented as an alternative to Bayesian reasoning that uses numerical factors between -1 and 1 to represent degrees of belief.
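A minimal worked sketch (illustrative numbers assumed): the posterior probability of a hypothesis under one piece of evidence via Bayes' rule, followed by the MYCIN-style rule for combining two positive certainty factors from the interval [-1, 1].

```python
# Minimal sketch (numbers assumed): Bayesian updating and certainty factors.
def posterior(prior, likelihood, likelihood_not):
    """P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    evidence = likelihood * prior + likelihood_not * (1 - prior)
    return likelihood * prior / evidence

print(posterior(prior=0.1, likelihood=0.8, likelihood_not=0.2))  # ~0.308

def combine_cf(cf1, cf2):
    """Combine two positive certainty factors (MYCIN-style)."""
    return cf1 + cf2 * (1 - cf1)

print(combine_cf(0.6, 0.5))  # 0.8
```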
This document discusses the concepts of fuzzy logic and fuzzy reasoning. It explains that fuzzy logic provides an alternative way to handle uncertainty compared to traditional binary logic. Natural language contains many vague concepts that cannot be easily translated to binary logic without losing meaning. Fuzzy logic allows for reasoning with imprecise data in a similar way to how humans reason. The document provides a brief history of fuzzy logic, tracing ideas back to ancient Greek philosophers and more recent developments in the 20th century that led to modern fuzzy set theory and fuzzy logic.
The document summarizes a class discussion on management theory from Chang Jung Christian University's PhD program. The discussion focused on the papers "What Theory is Not" by Robert I. Sutton and Barry M. Staw, and comments on it by Paul J. DiMaggio. It also discussed the paper "The Case for Qualitative Research" by Gareth Morgan and Linda Smircich. The class was led by Professor Lee Yuan-te.
Mining Frequent Item set Using Genetic Algorithm - ijsrd.com
By applying rule mining algorithms such as the Apriori algorithm, frequent itemsets are generated from large data sets, but it takes a great deal of computing time to find all frequent itemsets. We can solve this problem much more efficiently by using a Genetic Algorithm (GA): a GA performs a global search, and its time complexity is lower compared to other algorithms. Genetic Algorithms (GAs) are adaptive heuristic search and optimization methods for solving both constrained and unconstrained problems, based on the evolutionary ideas of natural selection and genetics. The main aim of this work is to find all the frequent itemsets from given data sets using a genetic algorithm and to compare the results generated by the GA with other algorithms. Population size, number of generations, crossover probability, and mutation probability are the GA parameters that affect the quality of the result and the computation time.
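As a minimal sketch of the approach (toy transactions, encoding and parameters all assumed, not the paper's implementation): itemsets are encoded as bit strings, fitness is an itemset's support, and selection, one-point crossover and mutation drive the search toward frequent itemsets.

```python
# Minimal GA sketch (assumptions mine): mining a high-support itemset.
import random

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk", "butter"}]
items = sorted({i for t in transactions for i in t})

def decode(bits):
    return {items[i] for i, b in enumerate(bits) if b}

def fitness(bits):
    s = decode(bits)
    return sum(s <= t for t in transactions) if s else 0  # support count

def evolve(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in items] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(items))    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return decode(max(pop, key=fitness))

print(evolve())  # a frequent itemset, e.g. {'bread'}
```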
Design and Simulation of 4-bit DAC Decoder Using Custom Designer - ijsrd.com
Digital-to-Analog Converters (DACs) are among the more complex mixed-signal structures, having digital inputs and an analog output, and they are found in almost all analog and mixed-signal designs today. There are different types of DACs currently on the market. The goal of this paper is to implement a 4-bit resistor-string D/A converter using 4-bit AND gates and a 4-to-16 decoder with the help of four inverters. Binary (digital) 4-bit coded data is input to the converter, which converts each 4-bit binary code into the corresponding level of a "staircase" output voltage.
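As an idealised numerical illustration (reference voltage assumed, not taken from the paper), each 4-bit input code selects one tap of a 16-step resistor-string divider, producing one "staircase" level per code:

```python
# Minimal sketch (ideal DAC, VREF assumed): 4-bit code -> staircase voltage.
VREF = 5.0
for code in range(16):
    vout = VREF * code / 16  # tap voltage selected by the 4-to-16 decoder
    print(f"{code:04b} -> {vout:.3f} V")
```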
Performance Analysis of Small Sized Engine with Supercharger using Gasoline -... - ijsrd.com
This paper determines whether the mechanical action of a supercharger improves engine performance. Full power is needed only for accelerating and hill-climbing; during the remainder of the time, the excess weight of the engine and other parts must be carried at a loss of efficiency. A smaller engine can therefore be used advantageously when equipped with a supercharger, the supercharger being engaged only when excess power is required, since a supercharger is capable of generating more power from a given engine capacity. In addition, blending ethanol with the gasoline helps control some of the affected parameters and decreases exhaust emissions. Experiments were carried out at a low speed of 1500 rpm for different performance characteristics, and curves and tables are given to show the results of comparative tests with and without a supercharger.
An authentication framework for wireless sensor networks using Signature Base... - ijsrd.com
Authentication in Wireless Sensor Networks (WSNs) is a challenging process, and providing authentication for the nodes in a WSN is a vital issue for secure group communication among WSNs. A massive group of tiny sensor nodes forms a WSN, and these are placed in open, unattended environments; for this reason, nodes in a WSN can face unusual threats. WSNs are more vulnerable to active and passive attacks than wired networks due to their broadcast nature, limited resources and unrestrained environments, so security will be a significant factor in their complete implementation. In this proposal, a new approach is introduced to achieve secure authentication among nodes in WSNs.
Segment Combination based Approach for Energy-Aware Multipath Communication ... - ijsrd.com
Underwater acoustic communication is a technique for sending and receiving messages below water, most commonly using hydrophones. Underwater communication is difficult due to factors like multipath propagation, time variations of the channel, small available bandwidth and strong signal attenuation, especially over long ranges. Data rates are low compared to terrestrial communication, since underwater communication uses acoustic waves instead of electromagnetic waves. Data is collected from the sensor nodes and transferred to the destination, and the same source information can be sent through multiple paths to the same destination. As a result, the packet bit error rate is high, and the power and energy consumed in transferring data are high; bandwidth and energy are wasted, and the packet bit error rate is the serious problem in the existing system. This can be overcome by using segment combination with the Hamming code technique, and the packet bit error rate can be further reduced by increasing the number of paths. The number of paths is increased based on calculating the cost: a least-cost algorithm is used, the minimum-cost path is chosen, and the data is transferred to the destination.
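A minimal sketch of the path-selection step (toy topology assumed; the abstract does not name the exact algorithm, so Dijkstra's algorithm stands in for the least-cost computation):

```python
# Minimal sketch (assumed graph): minimum-cost route selection.
import heapq

def least_cost_path(graph, src, dst):
    """graph: {node: [(neighbour, link_cost), ...]}."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in graph.get(node, []):
            heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
    return float("inf"), []

graph = {"S": [("A", 2), ("B", 5)], "A": [("D", 6)], "B": [("D", 1)]}
print(least_cost_path(graph, "S", "D"))  # (6, ['S', 'B', 'D'])
```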
Review on variants of Security aware AODV - ijsrd.com
A mobile ad-hoc network (MANET) is very sensitive to security issues due to challenging characteristics such as decentralization, dynamically changing topology, and neighbor-based routing. All existing MANET protocols simply trust their neighbors and make routes through them; because of this neighbor-based routing, the network is easily disturbed by malicious nodes or intruders. Trust calculation is a challenging task due to computational complexity constraints in MANETs. In this paper we present variants of trust-based security protocols for an on-demand distance vector routing protocol, and also propose a new mechanism for network coding with RSA-based encryption and decryption in AODV, which improves the security level of a MANET within an acceptable overhead limit.
Survey of Modified Routing Protocols for Mobile Ad-hoc Network - ijsrd.com
In the last few years, extensive research work has been done in the field of routing protocols for ad-hoc networks. Various routing protocols have been evaluated under different network conditions using different performance metrics, and a lot of research has been done on how to modify a standard routing protocol in an ad-hoc network to improve its performance. The hop count is not the only metric that yields an efficient routing path: various modified protocols use other parameters along with the hop count to select the best routing path to the destination. In the standard Ad-hoc On-demand Distance Vector (AODV) routing protocol, only the hop count is used for selecting the routing path. In this paper we study variants of the AODV protocol with modified routing metrics.
The demand for portable electronic devices that offer more functions and better performance at lower cost and smaller size has increased rapidly. Designing complex SoCs is a challenge, especially at the 90-nanometer technology node, where new problems crop up, power efficiency being the biggest of them. We cannot design different operating circuits for each mode; it is better to have a technique that requires minimal circuit changes and, in all modes, saves power that would otherwise be wasted. This paper provides some guidelines on how a low-power design using the UPF approach can be introduced for a design.
This document summarizes the state of the art in cloud security techniques. It begins by introducing cloud computing and its benefits. It then surveys existing literature on cloud security, summarizing 5 papers on topics like policy reconciliation and attribute-based encryption. It describes several techniques for cloud security like attribute-based encryption and fuzzy identity-based encryption. Finally, it discusses future work on developing a novel sequential rule mining algorithm for market basket data.
Design & Development of Articulated Inspection ARM for in House Inspection in... - ijsrd.com
Magnetic confinement of plasma can be achieved using a tokamak, a confinement concept that has also been pursued to develop a fusion reactor. The reactor environment is highly hostile for human intervention due to radiation from activated atoms, so remote operation is necessary for any kind of maintenance work. An attempt was made to arrive at a concept for using an articulated inspection arm to check the components for damage or needed replacement during shutdown of the reactor machine. A proof of concept toward the design and development of an articulated inspection arm for the torus vessel was carried out: a robot with a multi-link joint, a camera mounted on the end point, and an arm having three degrees of freedom was attempted. The concept was materialised by developing an articulated inspection robotic arm.
A Review on Image Denoising using Wavelet Transform - ijsrd.com
This document discusses image denoising using wavelet transforms. It begins with an introduction to wavelet transforms and their advantages over Fourier transforms for denoising non-stationary signals like images. It then describes the basic steps of image denoising using wavelets: decomposing the noisy image into wavelet coefficients, modifying the coefficients using thresholding, and reconstructing the denoised image. Thresholding techniques like hard and soft thresholding are explained. The document concludes that wavelet-based denoising is computationally efficient and can effectively remove noise from images.
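A minimal sketch of the two thresholding rules (threshold value assumed): hard thresholding zeroes small coefficients and keeps the rest unchanged, while soft thresholding additionally shrinks the surviving coefficients toward zero.

```python
# Minimal sketch (threshold assumed): thresholding of wavelet detail
# coefficients, the core step of wavelet denoising.
import numpy as np

def hard_threshold(coeffs, t):
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

detail = np.array([-3.1, -0.4, 0.2, 1.5, 4.0])
print(hard_threshold(detail, 1.0))  # [-3.1  0.   0.   1.5  4. ]
print(soft_threshold(detail, 1.0))  # [-2.1 -0.   0.   0.5  3. ]
```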
Analysis and Detection of Image Forgery Methodologies - ijsrd.com
"Forgery" is a subjective word. An image can become a forgery based upon the context in which it is used. An image altered for fun or someone who has taken a bad photo, but has been altered to improve its appearance cannot be considered a forgery even though it has been altered from its original capture. The other side of forgery are those who perpetuate a forgery for gain and prestige. They create an image in which to dupe the recipient into believing the image is real and from this they are able to gain payment and fame. Detecting these types of forgeries has become serious problem at present. To determine whether a digital image is original or doctored is a big challenge. To find the marks of tampering in a digital image is a challenging task. Now these marks of tampering can be done by various operations such as rotation, scaling, JPEG compression, Gaussian noise etc. called as attacks. There are various methods proposed in this field in recent years to detect above mentioned attacks. This paper provides a detailed analysis of different approaches and methodologies used to detect image forgery. It is also analysed that block-based features methods are robust to Gaussian noise and JPEG compression and the key point-based feature methods are robust to rotation and scaling.
The Optimizing Multiple Travelling Salesman Problem Using Genetic Algorithm - ijsrd.com
The traveling salesman problem (TSP) is built on the idea of a single salesperson traveling in a continuous trip, visiting all n cities exactly once and returning to the starting point. The multiple traveling salesman problem (mTSP) is a complex combinatorial optimization problem that generalizes the well-known TSP by allowing more than one salesman to be used in the solution. In this paper the mTSP is studied and solved with a GA in the form of the vehicle scheduling problem, and the existing and new models are compared both mathematically and experimentally. This work presents a chromosome methodology for setting up the mTSP to be solved using a GA.
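One common encoding, shown here as a minimal sketch (the paper's own chromosome design may differ), is the two-part chromosome: a permutation of the cities plus a second part stating how many cities each salesman visits.

```python
# Minimal sketch (assumed two-part encoding) of an mTSP chromosome decode.
def decode(city_perm, counts):
    """Split one permutation of cities into per-salesman tours."""
    tours, start = [], 0
    for n in counts:
        tours.append(city_perm[start:start + n])
        start += n
    return tours

chromosome = (["C1", "C4", "C2", "C5", "C3"], [2, 3])  # two salesmen
print(decode(*chromosome))  # [['C1', 'C4'], ['C2', 'C5', 'C3']]
```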
Optimization Approach for Capacitated Vehicle Routing Problem Using Genetic A... - ijsrd.com
The Vehicle Routing Problem (VRP) is a combinatorial optimization problem that deals with a fleet of vehicles serving n customers from a central depot. Each customer has a certain demand that must be satisfied, every vehicle has the same capacity (a homogeneous fleet), and each customer is served by exactly one vehicle. In this paper, a Genetic Algorithm (GA) is used to obtain optimized vehicle routes of minimum distance for the Capacitated Vehicle Routing Problem (CVRP). The outcomes of the GA achieve better optimization and give good performance; further, the GA is enhanced to minimize the number of vehicles.
SDH (Synchronous Digital Hierarchy) & Its Architecture - ijsrd.com
SDH (Synchronous Digital Hierarchy) concerns the transfer of large amounts of data over a single optical fiber, and this document gives information about the structure and architecture of SDH.
This document discusses the Dempster-Shafer theory of evidence combination and provides examples of applying the Dempster-Shafer combination rule. It begins by defining key concepts in Dempster-Shafer theory including frames of discernment, basic probability assignments (bpa), belief, and plausibility. It then gives a simplified medical diagnosis problem involving jaundice as an example frame of discernment. Three examples are provided showing how different pieces of evidence would be represented by assigning basic probability values to subsets of the frame of discernment.
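A minimal sketch in the same spirit (hypotheses and mass values assumed, not the document's numbers): a bpa over a small diagnostic frame, with some mass left on the whole frame to express ignorance, and the resulting belief-plausibility interval for a set of hypotheses.

```python
# Minimal sketch (masses assumed): [belief, plausibility] for a hypothesis
# set under a basic probability assignment on a jaundice-style frame.
frame = {"hepatitis", "cirrhosis", "gallstones"}
m = {frozenset({"hepatitis"}): 0.4,
     frozenset({"hepatitis", "cirrhosis"}): 0.3,
     frozenset(frame): 0.3}  # residual mass expresses ignorance

H = frozenset({"hepatitis", "cirrhosis"})
bel = sum(w for f, w in m.items() if f <= H)  # evidence implying H
pl = sum(w for f, w in m.items() if f & H)    # evidence compatible with H
print(bel, pl)  # 0.7 1.0 -> belief interval [0.7, 1.0]
```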
The document discusses using a hybrid approach combining predictability and the Dempster-Shafer method of modeling randomness and fuzziness. It provides motivation for considering external factors beyond what is in training data. Basic definitions of randomness and fuzziness are given. The Dempster-Shafer theory allows combining evidence from independent sources and handling contradictions. Several datasets are tested using a modified Dempster-Shafer approach to potentially increase predictive performance over solely using historic data.
The document discusses hypotheses, including defining a hypothesis, the characteristics of a good hypothesis, and the steps involved in hypothesis testing. It notes that a hypothesis is a tentative explanation that can be tested, and outlines criteria for constructing strong hypotheses, such as being clear, precise, and testable. The document also covers the sources of hypotheses, approaches to testing hypotheses such as classical and Bayesian statistics, and the types of errors that can occur in hypothesis testing, namely Type I and Type II errors.
On Severity, the Weight of Evidence, and the Relationship Between the Two - jemille6
Margherita Harris, Visiting Fellow in the Department of Philosophy, Logic and Scientific Method at the London School of Economics and Political Science.
ABSTRACT: According to the severe tester, one is justified in declaring to have evidence in support of a hypothesis just in case the hypothesis in question has passed a severe test, one that it would be very unlikely to pass so well if the hypothesis were false. Deborah Mayo (2018) calls this the strong severity principle. The Bayesian, however, can declare to have evidence for a hypothesis despite not having done anything to test it severely. The core reason for this has to do with the (infamous) likelihood principle, whose violation is not an option for anyone who subscribes to the Bayesian paradigm. Although the Bayesian is largely unmoved by the incompatibility between the strong severity principle and the likelihood principle, I will argue that the Bayesian's never-ending quest to account for yet another notion, one that is often attributed to Keynes (1921) and that is usually referred to as the weight of evidence, betrays the Bayesian's confidence in the likelihood principle after all. Indeed, I will argue that the weight of evidence and severity may be thought of as two (very different) sides of the same coin: they are two unrelated notions, but what brings them together is the fact that they both make trouble for the likelihood principle, a principle at the core of Bayesian inference. I will relate this conclusion to current debates on how best to conceptualise uncertainty, by the IPCC in particular. I will argue that failure to fully grasp the limitations of an epistemology that envisions the role of probability to be that of quantifying the degree of belief to assign to a hypothesis given the available evidence can be (and has been) detrimental to an adequate communication of uncertainty.
Is there any a novel best theory for uncertainty? - Andino Maseleno
The document discusses several uncertainty modeling techniques including fuzzy logic, Dempster-Shafer theory, neural networks, and genetic algorithms. It provides background on each technique, how they were developed and applied to problems. Fuzzy logic allows for imprecise variables rather than binary true/false. Dempster-Shafer theory generalizes probability to assign beliefs to sets rather than individual outcomes. Neural networks learn from examples via weighted connections. Genetic algorithms use evolutionary principles to optimize solutions. These soft computing methods complement each other and help model real-world uncertainty.
Subjective Probabilistic Knowledge Grading and Comprehension - Waqas Tariq
Probabilistic comprehension and modeling is one of the newest areas in information extraction and text linguistics. Although much of the research in linguistics and information extraction is probabilistic, its importance faded in the 1980s, simply because input language is noisy, ambiguous, and segmented. Probability theory is certainly normative for solving problems related to uncertainty; perhaps human language processing is simply a non-optimal, non-rational process. A subjective probabilistic approach addresses this problem through scenarios, evidence, and hypotheses.
The document defines and discusses hypotheses in research contexts. It states that a hypothesis is a formal, testable statement of the expected relationship between independent and dependent variables. The document outlines several definitions of a hypothesis provided by various authors and discusses the key characteristics of a good hypothesis. It also differentiates between types of hypotheses such as universal, existential, null, alternate, non-directional, directional, and research hypotheses. The purpose, components, and process of hypothesis making and testing are described.
This document is highly useful for learners of research methodology. A number of statistical terms are defined, with examples, to make them simple for learners.
1. A hypothesis is a proposed explanation for a phenomenon that can be tested. It represents a relationship between two or more variables that can be measured.
2. There are different types of hypotheses, including simple, complex, directional, and null hypotheses. A good hypothesis is clear, testable, parsimonious, and related to existing theories.
3. Formulating a research hypothesis can be challenging due to a lack of theoretical knowledge or awareness of research techniques. Hypotheses must be stated in a way that allows for testing, such as using an if-then structure.
Hypothesis testing involves stating a null hypothesis (H0) and an alternative hypothesis (H1). H0 assumes there is no effect or relationship in the population; H1 states there is an effect. A study is conducted and statistics are used to determine whether the data supports rejecting H0 in favor of H1. The p-value indicates the probability of obtaining results as extreme as the observed data, or more extreme, if H0 is true. If p ≤ the predetermined significance level (e.g. α = 0.05), H0 is rejected in favor of H1; otherwise, H0 is retained but not proven true. A Type I error occurs when a true H0 is rejected, and a Type II error occurs when a false H0 is retained.
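A minimal worked example (synthetic sample and SciPy assumed): a two-sided one-sample t-test of H0: μ = 5.0, rejecting H0 when p ≤ α = 0.05.

```python
# Minimal sketch (synthetic data assumed): one-sample t-test decision rule.
from scipy import stats

sample = [5.1, 4.9, 5.6, 5.8, 5.3, 5.7, 5.4, 5.2]
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)  # H0: mu = 5.0

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("reject H0" if p_value <= alpha else "retain H0")
```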
This document discusses hypotheses, providing definitions, types, and characteristics. It defines a hypothesis as a provisional explanation for a phenomenon that can be tested. Hypotheses guide research and can be universal, existential, null, alternate, directional, or non-directional. They specify variables, populations, and relationships to be empirically tested. The document also outlines the process of developing a hypothesis from selecting a topic to writing a simple, testable statement.
This document discusses hypotheses, including their characteristics, criteria for construction, testing approaches, and types of errors. It defines a hypothesis as a tentative explanation for behaviors or events that can be scientifically tested. Key points include:
- Hypotheses must be clear, precise, testable and specify the relationship between variables.
- The null hypothesis is what is being tested, while the alternative is what may be accepted if the null is rejected.
- Tests can be two-tailed, testing in both directions, or one-tailed, testing in one specified direction.
- Type I error occurs when a true null hypothesis is rejected, while Type II error is accepting a false null hypothesis.
This document provides an introduction to Dempster-Shafer theory and artificial intelligence. It discusses how Dempster-Shafer theory accounts for ignorance and uncertainty differently than probability theory: the belief committed to a proposition and to its negation need not sum to one, with the remainder representing ignorance. It also explains how Dempster-Shafer theory represents beliefs as intervals bounded by belief and plausibility, and how conflicting evidence is handled. Several examples are provided to illustrate key concepts like belief, plausibility, and the effects of conflicting evidence.
This document provides an overview of knowledge representation using predicate logic. It begins by defining knowledge and distinguishing it from data, beliefs, and hypotheses. It then discusses knowledge representation and some of its issues. The document introduces predicate logic as a formal method for knowledge representation and provides examples of representing statements in predicate logic. It discusses issues that can arise in translation, such as ambiguity and choosing representations. Finally, it covers resolution, a technique for theorem proving using knowledge represented in predicate logic.
This document discusses hypothesis, its definition, characteristics, and the steps of hypothesis testing. A hypothesis is an educated guess or claim about a population that is tentatively accepted to guide further investigation. Important characteristics of a hypothesis include being clear, testable, limited in scope, consistent with known facts, and able to be tested within a reasonable time. The steps of hypothesis testing are specifying the null hypothesis, significance level, computing the p-value, and comparing it to the significance level to determine whether to reject the null hypothesis. Limitations of hypothesis testing include only providing evidence for or against the null hypothesis and the ability to detect even small differences as sample sizes increase.
This document defines and discusses hypotheses. It begins by defining a hypothesis as an assumption or educated guess that is intended to be tested. It then provides definitions of hypotheses from several scholars and discusses their key characteristics. Hypotheses are categorized as null or alternative, directional or non-directional, and deductive or inductive. The document outlines guidelines for forming a hypothesis and testing it, including relating it to the research problem, making it clear and testable, and falsifiable. In summary, this document provides an overview of what a hypothesis is and best practices for developing and testing one.
1. Statistical tests are used in fisheries science to test hypotheses and make quantitative decisions about fisheries processes. Common statistical tests include correlation tests, comparison of means tests, regression analyses, and hypothesis tests.
2. The appropriate statistical test to use depends on the research design, data distribution, and variable type. Parametric tests are used for normally distributed data, while non-parametric tests are used when assumptions are not met.
3. Accuracy of statistical tests relies on quality survey data. Both fishery-dependent and fishery-independent data are important, though confounding factors must be considered with dependent data. Proper study design and use of statistics allows prediction of fish production.
This document summarizes Lightbrown's classifications of research on second language learning and discusses theories, hypotheses, and the evaluation of theories. It outlines three types of research - descriptive studies, experimental pedagogical studies, and hypothesis-testing studies. Theories aim to interpret and unify facts through illumination, while hypotheses are uncertain suggestions tested through data. Theories are evaluated based on correspondence norms like explanatory power, coherence norms like simplicity, and pragmatic norms like elegance and practicality.
1) The document discusses different interpretations of probability, as it relates to causal modeling in the social sciences, including both generic and single-case causal claims.
2) It argues that objective Bayesianism, which treats probabilities as degrees of rational belief that incorporate empirical constraints, provides the best interpretation for probabilistic causal modeling.
3) Under an objective Bayesian view, probabilities can be used to evaluate hypotheses in both generic and single cases, allowing for rational decision making in areas like hypothesis testing and policy decisions.
The document discusses interpreting probability in causal modelling approaches in the social sciences. It argues that objective Bayesianism provides the best interpretation of probability for causal modelling by treating probabilities as frequency-driven epistemic probabilities that make use of empirical constraints. This allows probabilities to be applied to both generic and single-case causal claims and helps guide decisions through rational degrees of belief informed by evidence and experience.
Similar to Analysis on Data Fusion Techniques for Combining Conflicting Beliefs
Due to the availability of the internet and the evolution of embedded devices, the Internet of Things can contribute usefully to the energy domain. The Internet of Things (IoT) will deliver a smarter grid, enabling more information and connectivity throughout the infrastructure and into homes. Through the IoT, consumers, manufacturers and utility providers will find new ways to manage devices and ultimately conserve resources and save money by using smart meters, home gateways, smart plugs and connected appliances. In the future smart home, various devices will be able to measure and share their energy consumption and actively participate in house-wide or building-wide energy management systems. This paper discusses the different approaches being taken worldwide to connect the smart grid. Full system solutions can be developed by combining hardware and software to address some of the challenges in building a smarter and more connected grid.
A Survey Report on: Security & Challenges in Internet of Things - ijsrd.com
In the era of computing technology, Internet of Things (IoT) devices are now popular in each and every domain, such as e-governance, e-health, e-home, e-commerce, and e-trafficking, and the IoT is spreading from small to large applications in all fields, such as smart cities, smart grids and smart transportation. On one side, the IoT provides facilities and services for society; on the other hand, IoT security is a crucial issue. IoT security is the area concerned with securing the connected devices and networks in the IoT. The IoT is a vast area, with usability, performance, security, and reliability as its major challenges. The growth of the IoT is increasing exponentially, driven by market pressures, which proportionally increases the security threats involved. The relationship between security and the billions of devices connecting to the Internet cannot be described with existing mathematical methods. In this paper, we explore the opportunities possible in the IoT together with the security threats and challenges associated with it.
In today's emerging world of the Internet, everything is supposed to be connected with the help of billions of smart devices, and connecting all the devices used in our day-to-day life makes life easier and trouble-free. We live in a world of smart phones, smart cars, smart gadgets, smart homes and smart cities. Different institutes and researchers are working to create a smart world for us, but the real question we need to emphasise is how to make dumb devices talk across uncommon hardware and communication technologies, and what kind of mechanism to use, with various protocols and less human interaction. The purpose is to identify the key application areas of the IoT and a platform on which devices with different mechanisms and protocols can communicate within an integrated architecture.
Study on Issues in Managing and Protecting Data of IOT - ijsrd.com
This paper discusses a variety of issues in preserving and managing data produced by the IoT. Every second, large amounts of data are added or updated in IoT databases across heterogeneous environments. Each phase of processing IoT data is exacting: storing, querying, indexing, transaction management and failure handling. We also address the problem of data integration and protection, as data need to fit a single layout and travel securely as they arrive in the pool from diversified sources in different structures. Finally, we present a standardized pathway to manage and to defend data in a consistent manner.
Interactive Technologies for Improving Quality of Education to Build Collabor...ijsrd.com
Today, with advancements in Information and Communication Technology (ICT), the way education is delivered is seeing a paradigm shift from dull classroom lectures to interactive applications such as 2-D and 3-D learning content, animations, live videos, response systems, interactive panels, educational games, virtual laboratories and collaborative research (data gathering and analysis). Engineering is producing ever more innovative solutions and products to improve education delivery. Academic institutes that were once hesitant to use such technology are now looking forward to these innovations, adopting the new methods as they realise their vast benefits: better comprehensibility, improved learning efficiency of students, access to vast knowledge resources, geographical reach, quick feedback, accountability and quality research. This paper focuses on how engineering can leverage the latest technology to build a collaborative learning environment that can then be integrated with the national e-learning grid.
Internet of Things - Paradigm Shift of Future Internet Application for Specia...ijsrd.com
More than 15% of the world's population lives with a disability, including children below the age of 10. Due to the lack of independent support services, specially abled people rely heavily on others for their basic needs, which excludes them from being financially and socially active. The Internet of Things (IoT) can provide a support system, a better quality of life and participation in routine, day-to-day activities. To that end, this paper introduces future solutions for current problems, considers the daunting challenges as future research, and gives a glimpse of the IoT for specially abled persons.
A Study of the Adverse Effects of IoT on Student's Lifeijsrd.com
The Internet of Things (IoT) is a powerful invention, and when used in a positive direction the internet can prove very productive. Nowadays, however, through social networking sites such as Facebook, WhatsApp, Twitter and Hike, the internet is producing adverse effects on students' lives, especially for students studying at the college level. As is rightly said, anything with positive effects also has some negative effects on the other hand. In this article, we discuss some adverse effects of IoT on students' lives.
Pedagogy for Effective use of ICT in English Language Learningijsrd.com
The use of information and communications technology (ICT) in education is a relatively new phenomenon that has been a focus of educational researchers for more than two decades. Educators and researchers examine the challenges of using ICT and devise new ways to integrate it into the curriculum. However, some barriers prevent teachers from using ICT in the classroom and from developing supporting materials with it. The purpose of this study is to examine high school English teachers' perceptions of the factors discouraging them from using ICT in the classroom.
In recent years the use of private vehicles has made urban traffic more and more crowded; as a result, traffic has become one of the major problems in big cities all over the world. Traffic jams and accidents cause a huge waste of time, higher fuel consumption and more pollution, and time is a very important parameter in daily life. The main problem people face is real-time routing. Our solution, Virtual Eye, provides current updates on the real-time situation along a specific route. This paper presents a smart traffic navigation system based on the Internet of Things, featuring low cost, high compatibility and easy upgrades, intended to replace traditional traffic management systems; the proposed system can improve road traffic tremendously.
Ontological Model of Educational Programs in Computer Science (Bachelor and M...ijsrd.com
This work illustrates an ontological model of educational programs in computer science for bachelor and master degrees in Computer Science, and for the master educational program "Computer science as second competence" of the Tempus project PROMIS.
Understanding IoT Management for Smart Refrigeratorijsrd.com
1) The document discusses a proposed design for an intelligent refrigerator that leverages sensor technology and wireless communication to identify food items and order more through an internet connection when supplies are low.
2) Key aspects of the proposal include using RFID to uniquely identify each food item, storing item and usage data in an XML database, monitoring usage patterns to determine reordering needs, and executing orders through an online retailer using stored payment details.
3) Security and privacy concerns with such an internet-connected refrigerator are discussed, such as potential hacking of personal information or unauthorized device control. The proposal aims to minimize human interaction for household management; a hypothetical sketch of the reordering idea follows below.
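A hypothetical sketch of the reordering loop implied by the proposal; the item names, thresholds, and the order_online helper are illustrative assumptions, not details from the document:

```python
# Illustrative only: in the proposal, inventory counts would come from
# RFID scans and usage patterns from the XML database.
inventory = {"milk": 1, "eggs": 12}
reorder_threshold = {"milk": 2, "eggs": 6}
order_quantity = {"milk": 4, "eggs": 12}

def order_online(item, qty):
    # Stand-in for placing an order with the online retailer
    # using stored payment details.
    print(f"ordering {qty} x {item}")

def check_and_reorder():
    for item, count in inventory.items():
        if count <= reorder_threshold[item]:
            order_online(item, order_quantity[item])

check_and_reorder()   # milk is at/below its threshold, so it is reordered
```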
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...ijsrd.com
Double wishbone designs allow the engineer to control the motion of the wheel carefully throughout suspension travel. A 3-D model of the lower wishbone arm is prepared in CAD software for modal and stress analysis, and the forces and moments are used as boundary conditions for the finite element model of the arm. With these boundary conditions, static analysis is carried out; then, making the load a function of time, quasi-static analysis of the wishbone arm is performed. A finite-element-based optimization is used to optimize the design of the lower wishbone arm, applying topology optimization and material optimization techniques.
A Review: Microwave Energy for materials processingijsrd.com
Microwave energy is one of the fastest-growing techniques for material processing. This paper reviews microwave technologies used for material processing and their industrial applications. Advantages of microwave energy for processing materials include rapid heating, high heating efficiency, heating uniformity and clean energy. These characteristics have made microwave heating popular for applications ranging from low to high temperature, and in recent years this novel technique has been successfully utilized for processing metallic materials: many researchers have reported the use of microwave energy for sintering, joining and cladding of metals. The aim of this paper is to show the use of microwave energy not only for non-metallic but also for metallic materials. The ability to process metals with microwaves could assist in manufacturing the high-performance metal parts desired in many industries, for example the automotive and aeronautical industries.
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logsijsrd.com
With the exponential growth of the World Wide Web, there is so much information that it has become hard to find data according to one's needs. Web usage mining is the part of web mining that deals with the automatic discovery of user navigation patterns from web logs. This paper presents an overview of web mining and of navigation-pattern discovery using classification and clustering algorithms for web usage mining. Web usage mining comprises three important tasks, namely data preprocessing, pattern discovery and pattern analysis based on the discovered patterns; the paper also contains a comparative study of web mining techniques.
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEMijsrd.com
The application of the FACTS controller called the Static Synchronous Compensator (STATCOM) to improve the performance of a power grid with wind farms is investigated. The essential feature of the STATCOM is its ability to rapidly absorb or inject reactive power into the power grid, so voltage regulation of the grid is achieved with the STATCOM FACTS device. Moreover, the STATCOM controller restores the stability of a power system containing a wind farm after a severe disturbance such as a fault or a variation in the wind farm's mechanical power. A dynamic model of the power system with the wind farm controlled by the proposed STATCOM is developed. To validate the capability of the STATCOM FACTS controller, the studied power system is simulated and subjected to different severe disturbances. The results prove the effectiveness of the proposed STATCOM controller in rapidly damping power system oscillations and restoring power system stability.
Making model of dual axis solar tracking with Maximum Power Point Trackingijsrd.com
Nowadays solar harvesting is increasingly popular, and as its popularity grows, material quality and solar tracking methods improve. Several factors affect a solar system; the major influences on a solar cell are the intensity of the source radiation and the storage techniques. The materials used in solar cell manufacturing limit the efficiency of the cell, which makes considerable improvements in cell performance difficult and restricts the efficiency of the overall collection process. Therefore, the most attainable method of improving solar power collection is to increase the mean intensity of radiation received from the source. The proposed tracking system controls the elevation and orientation angles of the solar panels so that they always remain perpendicular to the sunlight. The measured variables of our automatic system were compared with those of a fixed-angle PV system; in the experiment, the voltage generated by the proposed tracking system was overall about 28.11% higher than that of the fixed-angle PV system. There are three major approaches for maximizing power extraction in medium and large scale systems: sun tracking, maximum power point (MPP) tracking, or both.
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...ijsrd.com
This document summarizes a review paper on performance and emission testing of a 4-stroke diesel engine using ethanol-diesel blends at different pressures. The paper reviews several previous studies that tested blends of 5-30% ethanol mixed with diesel fuel. The studies found that a 10-20% ethanol blend can improve brake thermal efficiency compared to pure diesel, while also reducing emissions such as NOx and smoke. Higher ethanol blends required advancing the injection timing to allow the engine to run. Ethanol-diesel blends were found to have lower density, viscosity and pour point, and a higher flash point, than pure diesel. Overall, ethanol shows potential as a renewable fuel to improve engine performance and reduce emissions when blended with diesel.
Study and Review on Various Current Comparatorsijsrd.com
This paper presents a study and review of various current comparators. It also describes a low-voltage current comparator using a flipped voltage follower (FVF) to operate from a single supply voltage. This circuit has a short propagation delay and occupies a small chip area compared to other current comparators. The results were obtained using the PSpice simulator for 0.18 μm CMOS technology, and a comparison with its non-FVF counterpart demonstrates its effectiveness, simplicity, compactness and low power consumption.
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...ijsrd.com
Power dissipation is a challenging problem for today's system-on-chip design and test. This paper presents a novel architecture that generates test patterns with reduced switching activity, giving low test power and low hardware overhead. The proposed LP-TPG (test pattern generator) structure consists of a modified low power linear feedback shift register (LP-LFSR), an m-bit counter, a gray counter, a NOR-gate structure and an XOR array. The seed generated by the LP-LFSR is XORed with the data generated by the gray code generator; the resulting sequence is a single input changing (SIC) sequence, which reduces switching activity and hence power dissipation (a minimal sketch of this idea follows below). The proposed architecture is simulated using ModelSim and synthesized using Xilinx ISE 9.2; the Xilinx ChipScope tool is used to test the logic running on the FPGA.
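The single-input-changing property follows because consecutive Gray codes differ in exactly one bit, and XOR with a fixed seed preserves that distance. A minimal sketch of the idea (Python; the 4-bit width and seed value are illustrative, and in the real architecture the seed would come from the LP-LFSR):

```python
def gray(i):
    """i-th Gray code; consecutive codes differ in exactly one bit."""
    return i ^ (i >> 1)

def sic_patterns(seed, width=4):
    """Single input changing (SIC) test sequence: XOR a held LFSR seed
    with successive Gray codes. Each pattern differs from the previous
    one in a single bit, which keeps switching activity low."""
    return [seed ^ gray(i) for i in range(2 ** width)]

for p in sic_patterns(seed=0b1011):
    print(f"{p:04b}")
```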
Defending Reactive Jammers in WSN using a Trigger Identification Service.ijsrd.com
In the last decade, the greatest threat to wireless sensor networks has been the reactive jamming attack, because it is difficult to detect and defend against and causes mass destruction of legitimate sensor communications. A new scheme to deactivate reactive jammer nodes efficiently, by identifying all the trigger nodes whose transmissions invoke the jammers, has been proposed and developed. Many existing reactive-jamming defense schemes can benefit from this identification mechanism, and the trigger identification can also work as an application-layer service. In this paper, we provide several optimization problems that yield a complete trigger-identification service framework for unreliable wireless sensor networks, and we also provide an improved algorithm with regard to two sophisticated jamming models, in order to enhance robustness in various network scenarios.
Introduction - e-waste: definition - sources of e-waste - hazardous substances in e-waste - effects of e-waste on environment and human health - need for e-waste management - e-waste handling rules - waste minimization techniques for managing e-waste - recycling of e-waste - disposal and treatment methods of e-waste - mechanism of extraction of precious metals from leaching solution - global scenario of e-waste - e-waste in India - case studies.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on lifetime achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights the activities of ACEP and provides a technical educational article for members.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focusing on analytical tools at United Technical College, supported by the University Grants Commission, Nepal, 24-26 May 2024.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper functioning of integrated circuits (ICs) in a hostile electromagnetic environment has been a serious concern throughout the decades of revolution in electronics, from discrete devices to today's integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confront design issues such as susceptibility to electromagnetic interference (EMI): electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in automotive applications. In this paper, the authors review (non-exhaustively) research concerned with the investigation of EMI in ICs and its prediction using various modelling methodologies and measurement setups.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Analysis on Data Fusion Techniques for Combining Conflicting Beliefs
IJSRD - International Journal for Scientific Research & Development | Vol. 1, Issue 3, 2013 | ISSN (online): 2321-0613
Ms. Zarna Parmar (M. E. Student) and Prof. Vrushank Shah (Assistant Professor)
Department of Electronics & Communication Engineering, L. J. Institute of Engineering & Technology, Ahmadabad
Abstract-- The consensus operator provides a method for combining possibly conflicting beliefs within the Dempster-Shafer belief theory, and represents an alternative to the traditional Dempster's rule. In everyday discourse dogmatic beliefs are expressed by observers when they have a strong and rigid opinion about a subject of interest. Such beliefs can be expressed and formalised within the Dempster-Shafer belief theory. This paper describes how the consensus operator can be applied to dogmatic conflicting opinions, i.e. when the degree of conflict is very high. It overcomes shortcomings of Dempster's rule and other operators that have been proposed for combining possibly conflicting beliefs.
Keywords: Dempster's rule, belief, conflict, consensus operator, subjective logic
I. INTRODUCTION
Ever since the publication of Shafer's book A Mathematical Theory of Evidence [1], there has been continuous controversy around the so-called Dempster's rule. The purpose of Dempster's rule is to combine two conflicting beliefs into a single belief that reflects the two conflicting beliefs in a fair and equal way. Dempster's rule has been criticised mainly because highly conflicting beliefs tend to produce counterintuitive results. The problem with Dempster's rule is due to its normalisation, which redistributes conflicting belief masses to non-conflicting beliefs, and thereby tends to eliminate any conflicting characteristics in the resulting belief mass distribution.
II. DEMPSTER’S RULE
Dempster's rule allows one to combine evidence from different sources and arrive at a degree of belief (represented by a belief function) that takes into account all the available evidence. The theory was first developed by Arthur P. Dempster and Glenn Shafer.

Dempster-Shafer theory is a generalization of the Bayesian theory of subjective probability; whereas the latter requires probabilities for each question of interest, belief functions base degrees of belief (or confidence, or trust) for one question on the probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related. Put another way, it is a way of representing epistemic plausibility, but it can yield answers that contradict those arrived at using probability theory.

Often used as a method of sensor fusion, Dempster-Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data.

In this formalism a degree of belief (also referred to as a mass) is represented as a belief function rather than a Bayesian probability distribution. Probability values are assigned to sets of possibilities rather than single events: their appeal rests on the fact that they naturally encode evidence in favor of propositions. Dempster-Shafer theory assigns its masses to all of the non-empty subsets of the entities that compose a system.
A. Belief and Plausibility
Shafer's framework allows beliefs about propositions to be represented as intervals, bounded by two values, belief (or support) and plausibility:

belief ≤ plausibility.

Belief in a hypothesis is constituted by the sum of the masses of all sets enclosed by it (i.e. the sum of the masses of all subsets of the hypothesis). It is the amount of belief that directly supports a given hypothesis at least in part, forming a lower bound. Belief (usually denoted Bel) measures the strength of the evidence in favor of a set of propositions. It ranges from 0 (indicating no evidence) to 1 (denoting certainty). Plausibility is 1 minus the sum of the masses of all sets whose intersection with the hypothesis is empty. It is an upper bound on the possibility that the hypothesis could be true, i.e. it "could possibly be the true state of the system" up to that value, because there is only so much evidence that contradicts that hypothesis. Plausibility (denoted by Pl) is defined to be Pl(s) = 1 − Bel(¬s). It also ranges from 0 to 1 and measures the extent to which evidence in favor of ¬s leaves room for belief in s.

For example, suppose we have a belief of 0.5 and a plausibility of 0.8 for a proposition, say "the cat in the box is dead." This means that we have evidence that allows us to state strongly that the proposition is true with a confidence of 0.5. However, the evidence contrary to that hypothesis (i.e. "the cat is alive") only has a confidence of 0.2. The remaining mass of 0.3 (the gap between the 0.5 supporting evidence on the one hand, and the 0.2 contrary evidence on the other) is "indeterminate," meaning that the cat could either be dead or alive. This interval represents the level of uncertainty based on the evidence in the system.

The null hypothesis is set to zero by definition (it corresponds to "no solution"). The orthogonal hypotheses "Alive" and "Dead" have masses of 0.2 and 0.5, respectively. This could correspond to "Live/Dead Cat Detector" signals, which have respective reliabilities of 0.2 and 0.5. Finally, the all-encompassing "Either" hypothesis (which simply acknowledges there is a cat in the box) picks up the slack so that the sum of the masses is 1. The belief for the "Alive" and "Dead" hypotheses matches their corresponding masses because they have no subsets; belief for "Either" consists of the sum of all three masses (Either, Alive, and Dead) because "Alive" and "Dead" are each subsets of "Either". The "Alive" plausibility is 1 − m(Dead) and the "Dead" plausibility is 1 − m(Alive). Finally, the "Either" plausibility sums m(Alive) + m(Dead) + m(Either). The universal hypothesis ("Either") will always have 100% belief and plausibility; it acts as a checksum of sorts.
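To make the arithmetic of this example concrete, here is a minimal sketch (Python; the frozenset representation and helper names are ours, not the paper's) that computes belief and plausibility directly from the mass assignment above:

```python
def bel(m, A):
    """Belief: total mass of all subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Plausibility: total mass of all sets intersecting A."""
    return sum(v for B, v in m.items() if B & A)

# Mass assignment from the cat example, focal sets as frozensets.
m = {frozenset({"Alive"}): 0.2,
     frozenset({"Dead"}): 0.5,
     frozenset({"Alive", "Dead"}): 0.3}   # "Either"

alive = frozenset({"Alive"})
print(bel(m, alive), pl(m, alive))    # 0.2 0.5  (= 1 - m(Dead))
either = frozenset({"Alive", "Dead"})
print(bel(m, either), pl(m, either))  # 1.0 1.0  (the checksum)
```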
B. Combining Beliefs
Beliefs from different sources can be combined with various fusion operators to model specific situations of belief fusion, e.g. with Dempster's rule of combination, which combines belief constraints that are dictated by independent belief sources, such as in the case of combining hints or combining preferences. Note that the probability masses from propositions that contradict each other can be used to obtain a measure of conflict between the independent belief sources. Other situations can be modeled with different fusion operators, such as cumulative fusion of beliefs from independent sources, which can be modeled with the cumulative fusion operator.

Dempster's rule of combination is sometimes interpreted as an approximate generalisation of Bayes' rule. In this interpretation the priors and conditionals need not be specified, unlike traditional Bayesian methods, which often use a symmetry (minimax error) argument to assign prior probabilities to random variables (e.g. assigning 0.5 to binary values for which no information is available about which is more likely). However, any information contained in the missing priors and conditionals is not used in Dempster's rule of combination unless it can be obtained indirectly, and arguably is then available for calculation using Bayes' equations.

Dempster-Shafer theory allows one to specify a degree of ignorance in this situation instead of being forced to supply prior probabilities that add to unity. This sort of situation, and whether there is a real distinction between risk and ignorance, has been extensively discussed by statisticians and economists.
C. Formal Definition
Let X be the universal set: the set representing all possible states of a system under consideration. The power set $2^X$ is the set of all subsets of X, including the empty set $\emptyset$. For example, if

$X = \{a, b\}$, then $2^X = \{\emptyset, \{a\}, \{b\}, X\}$.

The elements of the power set can be taken to represent propositions concerning the actual state of the system, by containing all and only the states in which the proposition is true.

The theory of evidence assigns a belief mass to each element of the power set. Formally, a function $m : 2^X \to [0, 1]$ is called a basic belief assignment (BBA) when it has two properties. First, the mass of the empty set is zero:

$m(\emptyset) = 0.$

Second, the masses of the remaining members of the power set add up to a total of 1:

$\sum_{A \in 2^X} m(A) = 1.$

The mass m(A) of A, a given member of the power set, expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs to A but to no particular subset of A. The value of m(A) pertains only to the set A and makes no additional claims about any subsets of A, each of which has, by definition, its own mass.

From the mass assignments, the upper and lower bounds of a probability interval can be defined. This interval contains the precise probability of a set of interest (in the classical sense), and is bounded by two non-additive continuous measures called belief (or support) and plausibility:

$\mathrm{bel}(A) \le P(A) \le \mathrm{pl}(A).$

The belief bel(A) for a set A is defined as the sum of all the masses of subsets of the set of interest:

$\mathrm{bel}(A) = \sum_{B \subseteq A} m(B).$

The plausibility pl(A) is the sum of all the masses of the sets B that intersect the set of interest A:

$\mathrm{pl}(A) = \sum_{B :\, B \cap A \neq \emptyset} m(B).$

The two measures are related to each other as follows:

$\mathrm{pl}(A) = 1 - \mathrm{bel}(\bar{A}).$

And conversely, for finite A, given the belief measure bel(B) for all subsets B of A, we can find the masses m(A) with the following inverse function:

$m(A) = \sum_{B \subseteq A} (-1)^{|A - B|}\, \mathrm{bel}(B),$

where |A − B| is the difference of the cardinalities of the two sets.

It follows from the last two equations that, for a finite set X, one need know only one of the three (mass, belief, or plausibility) to deduce the other two, though one may need to know the values for many sets in order to calculate one of the other values for a particular set. In the case of an infinite X, there can be well-defined belief and plausibility functions but no well-defined mass function.
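The claim that any one of the three determines the other two can be illustrated with a short sketch (Python; the helper names are illustrative) that recovers a mass from the belief measure via the inverse function above:

```python
from itertools import chain, combinations

def subsets(A):
    """All subsets of frozenset A, from the empty set up to A itself."""
    s = list(A)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

def bel(m, A):
    """Belief: total mass of all subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def mass_from_belief(bel_fn, A):
    """Inverse function: m(A) = sum over B in A of (-1)^|A-B| * bel(B)."""
    return sum((-1) ** len(A - B) * bel_fn(B) for B in subsets(A))

# Check on the cat example: recover m("Either") from belief alone.
m = {frozenset({"Alive"}): 0.2, frozenset({"Dead"}): 0.5,
     frozenset({"Alive", "Dead"}): 0.3}
either = frozenset({"Alive", "Dead"})
print(mass_from_belief(lambda B: bel(m, B), either))  # 0.3
```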
III. DEMPSTER'S RULE OF COMBINATION
The problem we now face is how to combine two independent sets of probability mass assignments in specific situations. In case different sources express their beliefs over the frame in terms of belief constraints, such as in the case of giving hints or expressing preferences, then Dempster's rule of combination is the appropriate fusion operator. This rule derives common shared belief between multiple sources and ignores all the conflicting (non-shared) belief through a normalization factor. Use of that rule in situations other than that of combining belief constraints has come under serious criticism, such as in the case of fusing separate belief estimates from multiple sources that are to be integrated in a cumulative manner, and not as constraints. Cumulative fusion means that all probability masses from the different sources are reflected in the derived belief, so no probability mass is ignored.

Specifically, the combination (called the joint mass) is calculated from the two sets of masses m1 and m2 in the following manner:

$m_{1,2}(\emptyset) = 0, \qquad m_{1,2}(A) = \frac{1}{1 - K} \sum_{B \cap C = A \neq \emptyset} m_1(B)\, m_2(C),$

where

$K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$

is a measure of the amount of conflict between the two mass sets.
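As a concrete illustration, a minimal sketch (Python; representation choices such as dicts keyed by frozensets are ours) of the joint mass calculation, including the conflict term K:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: intersect focal elements, collect the
    conflicting (empty-intersection) mass K, and renormalise by 1 - K."""
    joint = {}
    K = 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            joint[A] = joint.get(A, 0.0) + mB * mC
        else:
            K += mB * mC   # conflicting mass, redistributed by normalisation
    if K == 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {A: v / (1.0 - K) for A, v in joint.items()}
```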
IV. THE CONSENSUS OPERATOR
A. The Opinion Space
The consensus operator is not defined for general frames of discernment, but only for binary frames of discernment. If the original frame of discernment is larger than binary, it is possible to derive a binary frame of discernment containing any element A and its complement Ā through simple or normal coarsening. After normal coarsening, the relative atomicity of A is equal to the relative cardinality of A in the original frame of discernment. An opinion basically consists of a bba on a (coarsened) binary frame with an additional relative atomicity parameter that enables the computation of the probability expectation value (or pignistic belief) of an opinion.
1) Opinion
Let θ be a (coarsened) binary frame of discernment containing the sets A and Ā, and let $m^X$ be the (coarsened) bba on θ held by X. Let $b_A^x = m^X(A)$, $d_A^x = m^X(\bar{A})$ and $u_A^x = m^X(\theta)$ be called the belief, disbelief and uncertainty components respectively, and let $a_A^x$ represent the relative atomicity of A. Then X's opinion about A, denoted by $\omega_A^x$, is the ordered tuple

$\omega_A^x = (b_A^x,\, d_A^x,\, u_A^x,\, a_A^x).$

The belief, disbelief and uncertainty components of an opinion represent exactly the same as a bba, so the following equality holds:

$b_A + d_A + u_A = 1, \qquad A \in 2^\theta.$
Opinions have an equivalent representation as beta probability density functions (pdfs), denoted by Beta(α, β), through the following bijective mapping:

$\alpha = \frac{2 b_A}{u_A} + 2 a_A, \qquad \beta = \frac{2 d_A}{u_A} + 2 (1 - a_A).$

This means, for example, that an opinion with $u_A = 1$ and $a_A = 0.5$, which maps to Beta(1, 1), is equivalent to a uniform pdf. It also means that a dogmatic opinion with $u_A = 0$, which maps to $\mathrm{Beta}(b_A N, d_A N)$ where $N \to \infty$, is equivalent to a spike pdf with infinitesimal width and infinite height. Dogmatic opinions can thus be interpreted as being based on an infinite amount of evidence.
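A small sketch (Python) of this mapping; the closed form is our reconstruction of the bijection, chosen to be consistent with the two checks the text itself gives (the uniform case and the dogmatic limit):

```python
def opinion_to_beta(b, d, u, a):
    """Map an opinion (b, d, u, a) to Beta(alpha, beta) parameters.
    Assumes u > 0; a dogmatic opinion (u = 0) is the limit u -> 0,
    where alpha ~ b*N and beta ~ d*N with N = 2/u -> infinity."""
    alpha = 2 * b / u + 2 * a
    beta = 2 * d / u + 2 * (1 - a)
    return alpha, beta

# The totally uncertain opinion maps to the uniform pdf Beta(1, 1):
print(opinion_to_beta(b=0.0, d=0.0, u=1.0, a=0.5))  # (1.0, 1.0)
```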
2) The Consensus Operator
The consensus operator defined below is derived from the combination of two beta pdfs. More precisely, the two input opinions are mapped to beta pdfs and combined, and the resulting beta pdf is mapped back to the opinion space again. The consensus operator can thus be interpreted as the statistical combination of two beta pdfs.
Let $\omega_A^x = (b_A^x, d_A^x, u_A^x, a_A^x)$ and $\omega_A^y = (b_A^y, d_A^y, u_A^y, a_A^y)$ be opinions respectively held by agents X and Y about the same element A, and let

$k = u_A^x + u_A^y - u_A^x u_A^y.$

When $u_A^x, u_A^y \to 0$, the relative dogmatism between $\omega_A^x$ and $\omega_A^y$ is defined by $\gamma_A^{x/y}$ so that $\gamma_A^{x/y} = u_A^y / u_A^x$. Let $\omega_A^{x,y} = (b_A^{x,y}, d_A^{x,y}, u_A^{x,y}, a_A^{x,y})$ be the opinion such that:

for $k \neq 0$:

$b_A^{x,y} = \frac{b_A^x u_A^y + b_A^y u_A^x}{k}, \qquad d_A^{x,y} = \frac{d_A^x u_A^y + d_A^y u_A^x}{k}, \qquad u_A^{x,y} = \frac{u_A^x u_A^y}{k},$

$a_A^{x,y} = \frac{a_A^x u_A^y + a_A^y u_A^x - (a_A^x + a_A^y)\, u_A^x u_A^y}{u_A^x + u_A^y - 2 u_A^x u_A^y};$

for $k = 0$, with $\gamma = \gamma_A^{x/y}$:

$b_A^{x,y} = \frac{\gamma\, b_A^x + b_A^y}{\gamma + 1}, \qquad d_A^{x,y} = \frac{\gamma\, d_A^x + d_A^y}{\gamma + 1}, \qquad u_A^{x,y} = 0, \qquad a_A^{x,y} = \frac{\gamma\, a_A^x + a_A^y}{\gamma + 1}.$

Then $\omega_A^{x,y}$ is called the consensus opinion between $\omega_A^x$ and $\omega_A^y$, representing an imaginary agent [X, Y]'s opinion about A, as if that agent represented both X and Y.

The consensus operator is commutative and non-idempotent, and it is associative except in the case k = 0, where associativity requires the relative dogmatism between the opinions to be preserved. In the case of two totally uncertain opinions (i.e. u = 1), it is required that the observers agree on the relative atomicity, so that the consensus relative atomicity can, for example, be defined as $a_A^{x,y} = a_A^x$.
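The operator translates directly into code. A minimal sketch (Python; the tuple representation and the gamma default are our choices, and since the formulas above are our reconstruction the sketch should be read the same way):

```python
def consensus(wx, wy, gamma=1.0):
    """Consensus opinion of wx = (b, d, u, a) and wy, following the
    definitions above. gamma is the relative dogmatism u_y/u_x,
    used only in the dogmatic case k = 0."""
    bx, dx, ux, ax = wx
    by, dy, uy, ay = wy
    k = ux + uy - ux * uy
    if k != 0.0:
        b = (bx * uy + by * ux) / k
        d = (dx * uy + dy * ux) / k
        u = (ux * uy) / k
        denom = ux + uy - 2 * ux * uy
        # Two totally uncertain opinions (u = 1): the observers must agree
        # on the relative atomicity, so take a = ax by convention.
        a = ax if denom == 0.0 else \
            (ax * uy + ay * ux - (ax + ay) * ux * uy) / denom
    else:  # both opinions dogmatic (u = 0): limit via relative dogmatism
        b = (gamma * bx + by) / (gamma + 1)
        d = (gamma * dx + dy) / (gamma + 1)
        u = 0.0
        a = (gamma * ax + ay) / (gamma + 1)
    return (b, d, u, a)
```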
V. COMPARISON OF COMBINATION RULES
In this section, we present several examples in order to compare the performance of the different rules described in the previous sections.
A. Example 1: One Dogmatic Belief
Let m1 and m2 represent two distinct pieces of evidence about the states in θ = {P1, P2}. In this example, we suppose that we want to combine a dogmatic belief function m1 with a non-dogmatic one, m2. Table 1 presents these two bba's together with the results obtained by the previously presented operators.
B. Example 2: Zadeh's Example
Suppose that we have a murder case with three suspects, Peter, Paul and Mary, and two witnesses M1 and M2 who give highly conflicting testimonies. Table 2 gives the witnesses' belief masses in Zadeh's example and the resulting belief masses after applying Dempster's rule, the non-normalised rule and the consensus operator.
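Although Table 2 is not reproduced here, Zadeh's example is commonly stated with the masses below; a short sketch (Python; the numbers come from the standard formulation of the example, not from the paper's table) shows the counterintuitive outcome of Dempster's rule:

```python
# Witness belief masses in the standard formulation of Zadeh's example:
m1 = {"Peter": 0.99, "Paul": 0.01}   # witness M1
m2 = {"Mary": 0.99, "Paul": 0.01}    # witness M2

# With singleton focal elements, only identical suspects intersect.
agreement = sum(m1[s] * m2.get(s, 0.0) for s in m1)   # 0.0001
K = 1.0 - agreement                                    # conflict = 0.9999
combined = {s: m1[s] * m2[s] / agreement for s in m1 if s in m2}
print(combined)   # {'Paul': 1.0}: full belief in the suspect that
                  # both witnesses considered highly unlikely
```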
C. Example 3: Zadeh's Example Modified
By introducing a small amount of uncertainty into the witnesses' testimonies, we get the output as shown below.
VI. CONCLUSION
In this paper we have focused on the problem of combining highly conflicting and dogmatic beliefs within belief function theory. The opinion metric described here provides a simple and compact notation for beliefs in the Shaferian belief model. We have presented an alternative to Dempster's rule which is consistent with probabilistic and statistical analysis, and which seems more suitable than Dempster's rule and its non-normalised version for combining highly conflicting beliefs as well as harmonious beliefs. The fact that a binary focused frame of discernment must be derived in order to apply the consensus operator puts no restriction on its applicability: the resulting beliefs for each event can still be compared and can form the basis for decision making.
REFERENCES
[1] G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, 1976.
[2] A. Jøsang. A Logic for Uncertain Probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(3):279-311, June 2001.
[3] A. Jøsang and V. A. Bondi. Legal Reasoning with Subjective Logic. Artificial Intelligence and Law, 8(4):289-315, winter 2000.
[4] A. Jøsang. An Algebra for Assessing Trust in Certification Chains. In J. Kochmar, editor, Proceedings of the Network and Distributed Systems Security Symposium (NDSS'99). The Internet Society, 1999.
[5] Ph. Smets and R. Kennes. The transferable belief model. Artificial Intelligence, 66:191-234, 1994.